Exploring Complexity in the Past: The Hohokam Water Management Simulation

by
John Todd Murphy
A Dissertation Submitted to the Faculty of the
Department of Anthropology
In Partial Fulfillment of the Requirements
For the Degree of
Doctor of Philosophy
In the Graduate College
The University of Arizona
2009
THE UNIVERSITY OF ARIZONA
GRADUATE COLLEGE
As members of the Dissertation Committee, we certify that we have read the dissertation prepared by John Todd Murphy, entitled “Exploring Complexity in the Past:
The Hohokam Water Management Simulation,” and recommend that it be accepted
as fulfilling the dissertation requirement for the Degree of Doctor of Philosophy.
Date: April 27th, 2009
J. Stephen Lansing
Date: April 27th, 2009
Paul R. Fish
Date: April 27th, 2009
Suzanne Fish
Date: April 27th, 2009
Steven L. Kuhn
Date: April 27th, 2009
Mark S. Aldenderfer
Final approval and acceptance of this dissertation is contingent upon the candidate’s
submission of the final copies of the dissertation to the Graduate College.
I hereby certify that I have read this dissertation prepared under my direction and
recommend that it be accepted as fulfilling the dissertation requirement.
Date: April 27th, 2009
J. Stephen Lansing
Statement by Author
This dissertation has been submitted in partial fulfillment of requirements for an
advanced degree at The University of Arizona and is deposited in the University
Library to be made available to borrowers under rules of the Library.
Brief quotations from this dissertation are allowable without special permission,
provided that accurate acknowledgment of source is made. Requests for permission
for extended quotation from or reproduction of this manuscript in whole or in part
may be granted by the head of the major department or the Dean of the Graduate
College when in his or her judgment the proposed use of the material is in the interests
of scholarship. In all other instances, however, permission must be obtained from the
author.
Signed:
John Todd Murphy
Acknowledgements
This research was funded by a grant from the McDonnell Foundation and sponsored by
the Global Institute of Sustainability (formerly the Center for Environmental Studies),
Arizona State University.
Profound thanks are due to Charles Redman and Ann Kinzig, who provided the
initial direction for the simulation project and gave me the opportunity to pursue it,
and whose advice and guidance helped me keep it moving forward even during difficult
times. The members of my dissertation committee are also due immense thanks for
enduring a seemingly endless sequence of meetings and drafts while this work came
together. Steve Lansing’s heroic ability to read, thoughtfully comment upon, and give
helpful direction on drafts that were scarcely half-formed was especially laudable. The
dissertation would never have been finished without his kindness and his help.
In addition to my committee members, many other researchers contributed their
time and expertise on this project, including David Abbott, John Ravesloot, Glen
Rice, Scott Ingram, Kevin Lansey, Marco Janssen, Marty Anderies, Peggy Nelson,
Ben Nelson, Glenn Stuart, Robert C. Hunt, and Richard Felger. Jeff Dean provided
me with reconstructed streamflow data in an electronic form. Special thanks are
due to Jerry Howard, not only for his direct involvement and assistance but also
because his pioneering work on hydraulic analyses of the canal systems made this
work possible. While I have always hoped to repay the community of Hohokam
archaeologists by giving them a useful simulation, I can only fall far short of repaying
my debt to them for their kindness, support, and assistance. There are too many
others to thank here; all errors, of course, are my own.
Thanks are also given to the ASU Center for Social Dynamics and Complexity,
especially Michael Barton and William Griffin, and to Peggy Nelson, for providing me
with lab space during my writing phase. Peter McCartney, Wayne Porter, and Robin
Schroeder provided valuable conceptual and technical support above and beyond the
call of duty during the construction of the simulation.
I am especially grateful for the kindness and feedback I’ve received from my University of Arizona colleagues Luke Premo (whose thoughts were a source of much
inspiration for many parts of this work), Brandon Gabler, John Pepper, Mary Ellen
Morbeck, Jon Scholnick, and Jenny Jandt, and from my friend and colleague at ASU,
Shade Shutters.
For her support and patience I thank my wife, Kerry Lynn Sagebiel.
Dedication
To Kerry.
Table of Contents
List of Figures
List of Tables
Abstract

Chapter 1. The Complex Past
  1.1. Archaeology, Complexity, and Coupled Human and Natural Systems
    1.1.1. Fundamental Challenges of Apprehending Complex Systems
    1.1.2. Variations on a Theme of Complexity
  1.2. Problem Domain: The Hohokam
    1.2.1. Why Study the Hohokam from a Complex Systems Perspective
    1.2.2. Objectives: Multiple levels of knowledge
  1.3. Simulation Models in Archaeology
    1.3.1. Varieties of Archaeological Simulations
    1.3.2. Applicability for Exploring Complex Systems
    1.3.3. General Assessment of Complex Systems Simulations in Archaeology
  1.4. A Flexible Modeling Framework
    1.4.1. The Modeling Framework: An Introduction
    1.4.2. Epistemological and Theoretical Concerns
  1.5. Organization of the Discussion

Chapter 2. The Hohokam as a Complex Coupled Human/Natural System
  2.1. The Outlines of the Hohokam Trajectory
    2.1.1. Environmental background
    2.1.2. Hohokam Irrigation and Subsistence
    2.1.3. The Hohokam Trajectory in greater detail
    2.1.4. Difficulties in our Understanding of the Hohokam trajectory
  2.2. Other Approaches to understanding the Hohokam
    2.2.1. Regional Approaches
    2.2.2. Studies of Canals and Rivers: Paleohydraulics and Geomorphology
    2.2.3. Studies of Agriculture and subsistence
  2.3. The Hohokam Social System
  2.4. Coupled Human and Natural Systems: Approaches to Complexity
  2.5. A Modeling Approach

Chapter 3. Archaeological Models and Inferences: Toward a new toolkit for exploring the archaeological record
  3.1. Models and Science in the 21st century: A Fractured Toolkit
    3.1.1. Logical Positivism
    3.1.2. Postpositivism
  3.2. Logical Bases for Modeling
    3.2.1. Abduction: One Operation or Many?
  3.3. Using Models as Part of Larger Scientific Practice
    3.3.1. Models as Deductive Engines
    3.3.2. Intrinsic Generality of Models
    3.3.3. Stochastic Models
    3.3.4. Experiments
    3.3.5. Models in the context of a process of scientific inquiry
  3.4. Modeling for Exploring Archaeological Complex systems
    3.4.1. A catalog of goals
    3.4.2. Implications of the collective goals
  3.5. An Exploratory Archaeological Complex Systems Modeling Toolkit
    3.5.1. Operations of exploratory modeling in complex systems archaeology
    3.5.2. Characteristics of a toolkit for modeling archaeological complex systems
  3.6. Discussion
  3.7. Conclusion

Chapter 4. The ABCM Modeling Framework
  4.1. The ABCM Toolkit: A database-centered approach
  4.2. The ABCM framework in detail
    4.2.1. Code Models and the ‘Cartoon’
    4.2.2. Configurations
    4.2.3. Histories and Narratives
    4.2.4. Parameters
    4.2.5. Probes and Probe Sets
    4.2.6. Larger organization of these units
  4.3. Completing the argument: Analyses and Summaries
  4.4. Conclusion: Databases as Platforms for Models, Theory, and Voice

Chapter 5. The Hohokam Water Management Simulation: An ABCM Example
  5.1. The Structure of the HWM Simulation
  5.2. HWM Simulation Components
    5.2.1. The Landscape
    5.2.2. Rivers
    5.2.3. Streamflow
    5.2.4. Headgates
    5.2.5. Canal Systems and Canals
    5.2.6. Rain
    5.2.7. Fields and Soil
    5.2.8. Plants and Crops
    5.2.9. Agents and Actors within the larger HWM system
  5.3. Conclusion: Model Variations, Questions, and the Exploratory Approach

Chapter 6. Dimensions of Exploration in the Hohokam Water Management Simulation
  6.1. Life on Landscape 3: The HWM Simulation in action
    6.1.1. Landscape 3: Topography
    6.1.2. Rivers, Headgates, and Stream Flow
    6.1.3. Canal Systems and Water Flow
    6.1.4. Fields, Plants, and Plant Growth
    6.1.5. Chemicals: Evaporation, Soil Processes, and Plant Growth
    6.1.6. Agents
    6.1.7. Special features of the ABCM for Agent-Based Modeling
    6.1.8. The need for Agent-Based Modeling in the HWM
    6.1.9. Examples: Agents on Landscape 3
    6.1.10. Conclusions: Agent-Based Modeling and exploratory archaeology
  6.2. Building argument chains
  6.3. Conclusion: Dimensions of Exploration outward from the Middle Ground

Chapter 7. Summary and Prospectus
  7.1. Three intellectual issues: glancing backwards and looking ahead
    7.1.1. How ‘New’ is a Model-Based Archaeology?
    7.1.2. Running the tape again
    7.1.3. Models, Experiments, and Scientific Research
  7.2. The ABCM Framework
    7.2.1. Status and Prospects
  7.3. The HWM Simulation: Status, Prospects, Possibilities
  7.4. Conclusion

Appendix A. A Guide to the Attached Files

References
List of Figures
Figure 3.1. Categories of Inference
Figure 4.1. Data structure underlying an ABCM Simulation Run
Figure 4.2. Construction of an ABCM Simulation Argument
Figure 6.1. The ‘central basin’ of Landscape 3
Figure 6.2. Landscape 3 with north and south mountains
Figure 6.3. Landscape 3 with north and south mountains in 3D
Figure 6.4. Landscape 3 with Default River and headgates
Figure 6.5. Efficiency curves of two headgates
Figure 6.6. Absolute flow from two gates of varying efficiency
Figure 6.7. Salt River seasonal flow
Figure 6.8. Gila River seasonal flow
Figure 6.9. Headgate efficiency demo using Salt seasonal flow
Figure 6.10. Headgate efficiency demo using Gila seasonal flow
Figure 6.11. Salt River headgate efficiency over a range of values
Figure 6.12. Gila River headgate efficiency over a range of values
Figure 6.13. Headgate efficiency on the Salt River using reconstructed data and seasonal variation
Figure 6.14. Landscape 3 Central Basin with canals and fields
Figure 6.15. Landscape 3 flow shortage demonstration
Figure 6.16. The flow shortage demonstration with physical constraints
Figure 6.17. Flow shortage demonstrated on a steeper landscape
Figure 6.18. Plant yield with constant rainfall at optimum amount
Figure 6.19. Map view of Plant Demo 1
Figure 6.20. Yield and water for Plant Demo 2
Figure 6.21. Yield and water for Plant Demo 3
Figure 6.22. Lapse rate demonstration map view
Figure 6.23. Lapse rate demonstration results
Figure 6.24. Chemical flush demonstration
Figure 6.25. Agent Demo 1
Figure 6.26. Agent Demo 2
Figure 6.27. The HWM Interactive Window
Figure 6.28. The HWM Simulation Menu
Figure 6.29. HWM Run Simulation Screen
Figure 6.30. Field Manager Analysis graph
Figure 6.31. Canal Manager Analysis graph
List of Tables
Table 5.1. Hierarchy of software for the HWM Simulation
Table 6.1. Phoenix average mean daily temperature, by month, and mean percent of annual daylight hours
Abstract
The Hohokam Water Management Simulation (HWM) is a computer simulation for
exploring the operation of the Hohokam irrigation systems in southern Arizona. The
simulation takes a middle road between two common kinds of archaeological simulation: large-scale, detailed landscape and environmental reconstructions and highly
abstract hypothesis-testing simulations. Given the apparent absence in the Hohokam
context of a central authority, the specific aim of the HWM is to approach the Hohokam as a complex system, using principles such as resilience, robustness, and self-organization. The Hohokam case is reviewed, and general questions concerning how
the irrigation systems operated are shown to subsume multiple crosscutting and unresolved issues. Existing proposals about the relevant aspects of Hohokam society
and of its larger long-term trajectory are based on widely varying short- and long-term processes that invoke different elements, draw different boundaries, and operate
at different spatial and temporal scales, and many rely on information that is only
incompletely available. A framework for approaching problems of this kind is put forward. A definition of modeling is offered that specifies its epistemological foundations,
permissible patterns of inference, and its role in our larger scientific process. Invoking
Logical Positivism, a syntactic rather than semantic view of modeling is proposed:
modeling is the construction of sets of assertions about the world and deductions that
can be drawn from them. This permits a general model structure to be offered that
admits hypothetical or provisional assertions and the flexible interchange of model
components of varying scope and resolution. Novel goals for archaeological inquiry
follow from this flexible approach; these move from specific reconstruction to a search
for more universal and general dynamics. A software toolkit that embodies these
principles is introduced: the Assertion-Based Computer Modeling toolkit (ABCM),
which integrates simulation with the logical architecture of a relational database, and
further provides an easy means for linking models of natural and social processes
(including agent-based modeling). The application of this to the Hohokam context
is described, and an extended example is presented that demonstrates the flexibility,
utility and challenges of the approach. An attached file provides sample output.
Chapter 1
The Complex Past
The modern city of Phoenix, Arizona, is one of the fastest growing urban areas in
the United States, and its greater metropolitan area is now home to more than 3
million people (Redman and Kinzig 2004). But barely 150 years ago, the area in
which this sprawling urban center is now located was by comparison empty. The
climate is not welcoming; rainfall is scarce (but occasionally, if briefly, severe) and
temperatures soar during a summer that can last from April to October. But the
decision to found a city here was in no way arbitrary, for the landscape was far from
pristine: the Hohokam had lived there for centuries, disappearing only a short time
before the valley would first be seen by Europeans. To survive in the harsh climate
they constructed large-scale irrigation works to divert water from the Salt and Gila
Rivers onto their extensive fields. Although the people themselves are now gone, what
they left behind provided the bases for the modern city. The landscape contained,
in the form of the large trunk canals still easily visible, an inarguable demonstration
that large-scale agriculture could be undertaken there successfully; moreover, the
same canal systems that demonstrated this were, in some cases, still in reasonable
condition, and could be used either as actual channels or as guides to where channels
could be put (Redman and Kinzig 2004, Turney 1929b).
The connections between the modern residents of Phoenix and the Hohokam who
lived there before us run deep; not only do we exist in the wake of their changes to
the landscape, but we face today many of the same issues that shaped their lives.
We also create our own effects, and just as the Hohokam lived through time with
the outcomes of their centuries-long efforts, we live in a world that we are rapidly reshaping. On small and large scales the impacts of these changes are often unforeseen.
When humans occupy a place in the landscape they necessarily interact with their
environment, and when that occupation endures over long time scales this interaction
becomes complicated. Among other interconnections, the control of resources provides a direct connection between the environment and the structure of power within
the society; thus the structure within which the society manipulates the environment
is shaped by the opportunities that can be crafted from it, while its actions in this
regard impact the environment and change those opportunities. General principles
of such extended interaction, if they can be found, are of great interest to those who
study past societies as well as those who study the same processes in the modern
world, and hence the study of human-environmental interactions has become a topic
of increasing research interest and activity.
This dissertation presents a new modeling approach that can be used to address
these questions. It has two complementary purposes. First, it presents an approach
that is rooted in the demands of exploring the possibilities of archaeologically attested
complex coupled human and natural systems. The approach asks new questions and
offers new paths to securing their answers, and these are treated fully here in the
abstract before being applied to a concrete case. The secondary purpose of this
dissertation is to explore the specific Hohokam example; while it is of paramount
importance that the model be grounded in a concrete example, the modeling approach
should be more general, so the specifics of the Hohokam context are illustrative but
not determinant, and in any case the initial results in the specific case are only the
first steps, for it is assumed that the exploratory modeling approach is open-ended
and will continue to grow.
1.1 Archaeology, Complexity, and Coupled Human and Natural Systems
Archaeology offers a distinctive window into the dynamics of coupled human and
natural systems: it allows a time depth no other approach can provide. If we wish
to understand the way a society can interact with its environment, how the people
and the landscape act and react upon each other through centuries or longer, archaeology is the only option; historical records are too spotty, usually fail to address the
details of interest, and more often than not are too short. Moreover, the processes we
would wish to study are complicated: they occur at a range of scales and at different
paces, involve many different elements of several different kinds, and may be found
in evidence of several varieties. Only archaeology covers such a broad swath for the
durations needed. Despite this, the challenges are large.
1.1.1 Fundamental Challenges of Apprehending Complex Systems
Suppose we put aside the limitations of an incomplete archaeological record and
instead assume that we had a life span and vantage point from which we could obtain
what might be considered a full record of the occupation of a specified region for a
given time frame, possibly even millennia. We could see all the comings and goings of
every person, and watch, from any point of view, all of a population’s activities. The
landscape would change, and the population grow and decline. Despite this record,
the deep principles of organization that we seek might still elude us; we would need
to move from the particulars of history to the generals of science.
To do this we would need to address certain fundamental issues. The three simplest are:
• What are the elements that are involved in these issues? Possibilities might
include vegetation types, soil quality, settlements, fields, etc. We can refer to
this issue as composition.
• On what scales do these principles operate? A better term for this might be
resolution, as it suggests that the principles might be operating across small
distances or across entire regions, and may be related to micro or macro effects,
but also that fine details may matter or they may not. Resolution can be either
temporal or spatial, and it would be important to know which principles were
operating on a time scale of days or on centuries.
• What is the scope of the dynamics that are driving these changes? This suggests that some dynamics of interest might involve elements that are outside the
boundaries of the observed area, perhaps even dynamics taking place on continental or global scales. As with resolution, scope has a temporal component:
the observer may have arrived at the tail end of a process of interest, or left
during the incipient stages, or been present only during the middle of a much
longer process.
It should be obvious that these issues crosscut. Resolution, for example, can be
spatial or temporal in the usual sense, but can also be related to composition, in the
sense that what may be a unit for one level of analysis may have to be considered in
terms of its components for another: perhaps individuals matter, or only households,
or villages.
In addition to these issues, another one can be offered, which, for want of a better
term, I will call coherence. This refers to the idea that the dynamics that might be of
interest may or may not be in play at all. This covers a range of possibilities. There
may be interference: if one hopes, for example, to explore the dynamics of urban
formation, but soon after one begins observations the area under study is struck by
a meteorite, the observations are unlikely to reveal the hoped-for patterns. But fire
from above is not the only kind of problem; it is also possible for several different
kinds of processes to crosscut each other, and teasing out one from the other may
be impossible. History provides few controlled experiments; gravity ‘works’ all the
time, but studying it in a tornado is (literally) chasing the wind. Finally, of course,
dynamics that are hypothesized may simply not exist.
But the most frustrating aspect of the exercise might be the fact that, despite
the extensive record, we would still be limited to only the history that happened; we
could not know if it was typical.
Computerized simulation modeling offers a chance to break away from history, or
rather to replay versions of it repeatedly and observe the patterns of results obtained.
The effect of this is being able to ‘rewind the tape and run it again’ (to paraphrase
Lansing’s (2003) summation of Gould’s original discussion). The issue that can be
pursued is much deeper than the other issues, for it allows us to get at a more profound
question: how much of what took place was merely happenstance, and how much was
determined by an inexorable logic we hoped to glean? Phrased another way, how
likely was a given outcome: was it the product of a few key variables falling within a
narrow range, or was it (or at least its broad outlines) likely to happen in more or less
the same form regardless of some of the details? This is a key question that permeates
our understanding in all historical fields (consider the Annales school of history); in
technical parlance, the question is one of understanding the basins of attraction that
exist within the suite of possible histories. As a placeholder for reference in the rest
of this discussion I will refer to the entire issue simply by the label chance.
However, simulation models raise their own problems. The issue is the change
in direction, from observing what goes on in the real world, which must be ‘right’ in
an absolute sense even if our perception of it is incomplete or incorrect, to working
from our recreations of that reality. The first issue is directly related to the issue of
chance; the parameter space of any given simulation is the set of possible combinations of initial
conditions. For simulations of coupled human and natural systems, this space is almost
always huge, for the addition of any new variable multiplies the simulation’s overall
parameter space by the number of possible values for that variable.
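To make the combinatorics concrete, a minimal sketch in Python follows; the parameter names and candidate values are hypothetical illustrations rather than actual HWM settings, but the multiplicative growth they demonstrate is general.

    # Illustration of multiplicative growth of a simulation parameter space.
    # The parameters and their candidate values below are invented for this example.
    from itertools import product

    parameters = {
        "annual_streamflow_factor": [0.5, 0.75, 1.0, 1.25, 1.5],  # 5 values
        "initial_population":       [500, 1000, 2000, 4000],      # 4 values
        "crop_water_tolerance":     ["low", "medium", "high"],    # 3 values
    }

    # The number of distinct initial conditions is the product of the counts:
    # 5 * 4 * 3 = 60 runs for an exhaustive sweep.
    total = 1
    for values in parameters.values():
        total *= len(values)
    print("configurations required for a full sweep:", total)

    # Adding one more variable with ten possible values multiplies the space
    # by ten (60 becomes 600); a full sweep enumerates every combination.
    for combination in product(*parameters.values()):
        pass  # each combination would define one simulation run
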
A second issue is completeness, which may be seen as the complement to composition: whereas reality always includes everything, a simulation may fail to include the
elements that are actually relevant to the dynamics to be discovered. It is important
to note that this is not necessarily a failure to include detail, nor always a simple
omission; rather, it may be a failure to recognize the units that are most relevant,
perhaps because they are comprised of other components and do not always conform
to our expectations of what a unit will be.
A final issue with simulations is validity, by which is meant the general conformity
of the simulation to reality. This is important, but several issues are immediately
raised by it. First, ‘general conformity’ is a dodge, because a simulation is never
identical to reality, and hence it must be judged by some more flexible criteria of
similarity. A second issue is that the model may be asked to represent components
that we believe to have existed in the past but are not archaeologically attested, and
hence those components of the model can’t be validated. A third issue, however, is
that the model may be asked to expand beyond the trajectory that actually occurred
and consider alternative histories; in this situation, the concept of validation is much
more slippery.
Here I present the results of an archaeological simulation model of the Hohokam, focusing on the structure of the model and the ways it overcomes these challenges.
The specific instance of this simulation effort is used as a springboard for a deeper
epistemological inquiry into the uses, strengths and weaknesses of the archaeological
uses of simulation models. Crosscutting these challenges, however, are challenges
that arise from the nature of the object of study, and the potential for the long-term
trajectory of human societies to exhibit nonlinear, ’complex’ behavior.
1.1.2 Variations on a Theme of Complexity
Recently there has been a recognition that the trajectories of certain kinds of systems, including ecosystems and coupled human and natural systems, can be nonlinear, so
that responses to particular perturbations can be varied and unpredictable. This is a
hallmark of what are termed “complex systems” (per Beekman and Baden 2005; see
also Bentley 2003 for a more historical discussion of complex systems in archaeology).
This nonlinearity, especially in the face of apparently simple systems, is surprising
but is intuitively in keeping with our perceptions of the vicissitudes of living systems.
The importance of this realization to those who manage ecosystems or work with the
management of valuable resources cannot be overstated, for it suggests that changes
that appear at first to be small or limited may lead to unintended, and in some cases
disastrous, consequences.
Recent work on systems ecology has offered progress that pushes forward in a
slightly different direction. Previously ecosystems were considered to move through
simple stages, leading to a point of ‘maturity’; however, frameworks such as Holling’s
(2001) ‘resilience’ suggest that a given system (and not only ecosystems: economic and social systems can be included as well) may itself have properties that
allow it to respond to challenges and perturbations in specific ways, and that what
was perceived as a progression to maturity may actually be a cycle that allows the
system to be, at different times, either more specialized or more varied. Applied to
human social systems, this suggests that there may be different ways to perceive the
processes of functional integration and ‘maturity’ of a human society through time.
These approaches have led to a new way in which human and natural systems can
be explored. Descriptive dynamics of how the system responds to different stressors,
at different points in its trajectory, can be developed into general rules. Resilience is a
good example: at certain times during a long-term cycle a system may be generically
resilient to an array of stressors, and may respond by embarking on trajectories that
leave it attuned to a given stressor but more vulnerable to others.
But there is an additional level to which research in coupled human and natural
systems can be taken. A point of departure can be the recent book by Wagner
(2005) entitled Robustness and Evolvability in Living Systems. In it he examines a
number of living systems, repeatedly ascending and descending a ladder of biological
organization that begins with DNA and rises to the level of the organism. He addresses
two main questions: is the system able to deal with problems of specific (and common)
kinds, and is the system able to adapt to new conditions? He finds that not only
are the systems able to deal with problems well, but this ability is related to the
ability to adapt: the two kinds of capability are actually complementary aspects of
the system; each balances the other and provides the components from which the
other can act.
Wagner is particularly interested in how natural selection may have driven the
systems to arrive at these points. DNA, for example, could be built from molecules
other than the four bases (G, A, T, C) we see in nature; but the other possibilities fail in
specific ways, and thus would be selected against in competition with GATC-based
organisms. Similarly, the coding system for amino acids is a balance of scope and
redundancy; each amino acid can be coded for by one of several codons. Other
systems are possible, but, Wagner shows, would be more prone to problems and less
able to change. The common element among the systems he explores is a neutral
space, which provides a region within which change can occur without immediate
consequences. This characteristic provides for both the robustness and evolvability
of the system simultaneously.
The conclusion is that for these systems, nature has hit upon not only workable
but elegant solutions that balance a number of competing factors: the systems have
found a ‘sweet spot’ in the universe.
Wagner gives two reasons for this. The first is that systems that are not robust
do not endure, and cannot be observed; “fragile systems are fleeting” (Wagner 2005:
309). The second, however, is the effect of natural selection, which quickly forces
systems to find the most robust configurations. This has an important implication:
Wagner’s ladder extends only up to the level of the organism. Supra-organismic
entities, such as ecosystems, may be composed of living things, but they do not
undergo natural selection (group selection is an occasional exception). While Wagner
discusses some man-made systems (such as phone switching systems), he declines to
address biological systems above the level of the organism, where natural selection
cannot be invoked.
What, then, of ecosystems or coupled human and natural systems? Are they
outside the purview of nature’s elegant solutions?
Possibly not. Kauffman is the champion of the generic view that, sometimes,
the universe offers ‘order for free’ (Kauffman 1995). He discusses many systems that
arrive at complex yet stable solutions through the actions of their internal mechanics.
Another view is provided by Lovelock (1990), whose ‘Daisyworld’ model gives an
example in which the organisms shape the environment (see also Lansing, Kremer and
Smuts (1998)), rather than simply being shaped by natural selection. Perhaps coupled
human and natural systems can act in the same ways.
This possibility is supported by an elegant example, given by Lansing’s (1991)
research on the island of Bali. Among rice farming villages in the island’s upland
forests, a system of dams is used to allocate water. The logic of water flow would
seem to require that water be used as asynchronously as possible, because this would
prevent water shortages in any given area; moreover, water is easier to obtain at
the upstream ends of the system, for it (of course) becomes scarcer as it is used,
so the downstream farmers must make do with what is not used by their upstream
counterparts. However, this system has a catch, in the form of pests that prey on
the crops as they are planted. If the fields are planted asynchronously, the pests are
able to move from field to field and gain strength; eventually they would begin to
wipe out the crops altogether. The solution to this problem is synchronous planting:
large swaths of fields lie fallow, depriving the pests of food, until they are all planted
simultaneously; the crop is harvested before the pests can gain enough strength to
wipe it out entirely.
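The tradeoff can be sketched in a few lines of Python. What follows is a toy illustration only; the growth, decay, and planting fractions are invented, and this is not Lansing’s model, but it shows why pest pressure stays low when fields are planted and fallowed in step and grows without bound when plantings are staggered.

    # Toy contrast between synchronous and staggered planting schedules.
    # Pests multiply whenever some fields are planted and starve back when all
    # fields are fallow; every number here is invented for illustration.
    def pest_exposure(schedule, steps=12, growth=2.0, decay=0.5):
        pests = 1.0
        exposure = 0.0
        for t in range(steps):
            planted = schedule[t % len(schedule)]   # fraction of fields planted now
            pests *= growth if planted > 0 else decay
            exposure += pests * planted             # damage falls on planted fields
        return exposure

    synchronous = [1, 0, 0, 0]               # everyone plants together, then fallows
    staggered = [0.25, 0.25, 0.25, 0.25]     # some fields are always planted

    print("synchronous exposure:", pest_exposure(synchronous))
    print("staggered exposure:  ", pest_exposure(staggered))
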
The system carries a cost, in that water allocation must be organized. This is
a problem of cooperation, and its solution lies in the organization of communities
and their interrelationships. Lansing was able to show that this solution carried
implications that permeated the rice-farming village society it supported: its influence
could be seen in its effect on religion, kinship, identity, and genetics.[1] The result is
that the Balinese system is self-organizing; it finds the ‘sweet spot,’ and claims its
share of the available ‘order for free.’

[1] An interesting and important inversion is that Lansing claims that the genetic relationships among these villagers are driven by the establishment of kinship ties through marriage rules, which are themselves driven by the need to cooperate; thus, people become genetically related because they cooperate, and the contrary position, as sociobiologists would argue, that people cooperate because they are genetically related, is only secondary: a contributing factor to the dynamics only after it is an outcome of the more central elements.

Lansing has recently (Steve Lansing, pers. comm.) revisited this, and asked an
important question: is the Bali case unique? There is an argument to be made that
it is not. One piece of supporting evidence comes from a parallel example discovered
by Curran (Lisa M. Curran, pers. comm.). Her research in the rainforests of Borneo
has revealed a remarkable story. Across the forest, trees delay the deposition of their
seeds for years, until cued by an El Niño event; in El Niño years the trees all drop
their seeds within a single 6-week span, and thus effectively simultaneously. The
reason is that there exist pigs who feed on the seeds, and in this lies the parallel with
the Bali case; for just as the synchronous planting in Bali deprives the pests of the
ability to gain strength, so the synchronous mast events in the forests deny the pigs
the opportunity to grow in numbers.
There is an abstract way in which these two cases represent the same dynamic;
both suggest that individuals may, by virtue of a shared regimen of predation, find
coordination, and thus a version of cooperation, to be advantageous. Could other
such principles be at work in driving the florescence of coupled human and natural
systems? Put another way, how much of our daily existence might be attributable
to the unexpected dynamics of complex systems? Lansing suggests that there are
many such systems (consider, for example, Skyrms 1996), and, moreover, that their
invisibility may be due to the fact that they are so successful that they infuse our lives
to such a degree that they become unremarkable. If any of this proves true, the
implications are important, for if the universe has ‘sweet spots’ it behooves us to find
them and, whenever possible, to live within them.
1.2 Problem Domain: The Hohokam

1.2.1 Why Study the Hohokam from a Complex Systems Perspective
The Hohokam context offers an excellent test case for a study of a coupled human and
natural system from a complex systems perspective. One reason is that there exists
an extensive archaeological record, which provides a rich source for ideas and also
for supporting or disconfirmatory data; however, a second and complementary reason
is that despite the rich record there are still (at least) two big mysteries about the
Hohokam, mysteries that are echoes of important arguments in anthropology, and for
which systems-level explanations might productively be explored. These two issues
are the operation of a large-scale canal system (the ‘irrigation society’) and the overall
trajectory of florescence and eventual decline (a societal ‘collapse’).
When a group of people must rely on irrigation, the irrigation system necessarily
ties their actions together across space. There is a logic that irrigation imposes.
Some of this is simple: the group that controls water at the head of the canal system
can take what they need, while those downstream must deal with what remains;
moreover, pollution that enters the upstream water supply affects the water that is
available downstream. But water movement has more complicated dynamics, and
what happens in one part of the system can affect the others in ways that are difficult
to predict. Under some conditions, changes at the tail end of one branch of one
canal can affect the way water flows through the entire remainder of the system. A
given volume of water can only be in one place at a time, so if water is limited then
care must be taken to move it around as it is needed, and this must account for the
challenges of the dynamics of flow. The need for water, and the timing of that need,
can vary depending on the demand that various crops have for it, and how well each
kind of crop can withstand a shortfall, and for how long (which may vary at different
points in the crop’s life cycle, and with different weather conditions). These kinds of
challenges permeate the operation of a canal system, and even affect its design and
construction: Howard (1993b) has argued that changing a canal system’s structure
must be done within serious constraints, so that, for example, merely extending a
canal cannot be done without modifying the entire upstream conduit that delivers
water to it.
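The simplest piece of this logic, that upstream intakes see the water before downstream intakes do, can be put in a short sketch; the Python below uses invented demands and supply and is not the flow model developed later in this dissertation, but it shows how a shortfall concentrates at the tail end of a system.

    # Toy head-to-tail allocation along a single canal: each intake takes what it
    # needs, if available, and passes the remainder downstream. Values are invented.
    def allocate_downstream(supply, demands):
        remaining = supply
        deliveries = []
        for demand in demands:                # ordered from head (upstream) to tail
            delivered = min(demand, remaining)
            deliveries.append(delivered)
            remaining -= delivered
        return deliveries, remaining

    demands = [30, 25, 25, 20]                # four intakes, upstream to downstream
    deliveries, unused = allocate_downstream(supply=70, demands=demands)
    print(deliveries)   # [30, 25, 15, 0]: the shortfall lands on the tail-enders
    print(unused)       # 0
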
Especially in an arid environment, water is life. I earlier alluded to the connection
between the allocation of resources and the power structure of a society, and the nexus
of this is the suite of decisions that must be made concerning the management of the
irrigation works. Even if the decisions to be made are only those of who gets water
and when, the result is a complex game that must involve alliances and cooperation,
so that mutual benefits can be exploited; the forms of alliance can involve kinship
or community identities that make coordinated action possible, and that govern how
decisions are made and who gets to make them. But there are even more elements
to consider. Risk must be weighed, and storage must be assessed against the possibility of drought next year; the strengths of alliances and any collective storage can
factor into these calculations. And the decisions must include participation in labor (construction, maintenance, cleaning, and repair of the canals), all demanding
pursuits.
Tang (1992) has noted that the conflicts that can arise in a water management
context can be intense:
Water allocation is a major source of conflict in irrigation. When the
amount of water is not sufficient to satisfy everyone’s cultivation needs
simultaneously, farmers face the prospect of decreases in crop yields or
even losses of entire crops. It is not uncommon to see this conflict develop
into bloodshed or even murder [Maass and Anderson 1986] (Tang 1992).
One consequence of this is that institutions tend to arise to adjudicate and prescribe
water allocation issues. This is one point of entry into the broader issue of how
water management is related to the structure of power in a given society, and this is
a venerable problem in Anthropology; a thorough review of this debate is given in
Scarborough (2003). Wittfogel is credited with first pushing forward the idea that the
problems of water management impel the rise of a central state with strong authority
and power; Wittfogel termed it despotic.[2] This principle has a certain simplicity:
what other than a state can encompass the scope of a large irrigation system, and
can both coerce and organize the concerted action required to construct and manage
it? And having achieved control of the irrigation works, the state finds that the works
themselves become a source of power as well, for a state that controls water is a
powerful entity indeed. The implication that large-scale irrigation both compels and
fosters the rise of centralized states is straightforward.

[2] Wittfogel may have been preceded in elucidating these principles by Chi, and some of his thoughts resemble ideas proposed by Marx; see Scarborough’s discussion for details; see also Mitchell (1973).

This idea has been denounced by some while being championed by others. Scarborough (2003) traces the debate (especially through the contrasting positions of R. McC. Adams and William Sanders, luminaries in Mesoamerican archaeology) through
a long period during which contestants took extreme positions, and concludes that
Wittfogel’s case was lost because it was overstated. The defeat has caused the more
extreme versions of the hydraulic hypothesis to be dismissed explicitly, but Scarborough argues that the basic ideas continue to hold influence in the formulations of
many of today’s researchers.
More recent research provides further counterarguments to Wittfogel’s central
thesis. Tang (1992) shows that, while it is true that institutions do arise to manage
water, centralized state control is in no way a natural solution to the problems that
water management presents. Smaller-scale, local control can be more effective on
its own, or may operate more efficiently within a larger state, which if it exists may
leave many smaller-scale issues to these lower-level entities. Tang explores these
issues within a common-property framework, essentially reducing the decisions that
must be made within a water management context to a set of economic games (like
the ‘prisoner’s dilemma’; see Sigmund 1993) where each player’s choices can impact
the benefits that accrue to the other players. Ultimately Tang concludes that it is
inadequate to attempt top-down central control of an irrigation system, but it is
also inadequate to assume that local actors will spontaneously self-organize; rather,
a nested collection of institutions is usually needed to ensure that the system is
managed effectively, but the nature, scale and scope of these institutions, and their
interconnection with other societal institutions, can vary widely.
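The flavor of such games can be given with a minimal sketch; the payoff numbers below are invented for illustration (they come neither from Tang nor from any Hohokam data), but they show how each irrigator’s choice changes the payoff the other receives, and why mutual shirking can emerge even though mutual contribution would leave both better off.

    # Toy two-player 'canal maintenance' game in the prisoner's-dilemma mold.
    # Payoffs are (row player, column player); all numbers are invented.
    payoffs = {
        ("contribute", "contribute"): (3, 3),   # canal maintained, costs shared
        ("contribute", "shirk"):      (0, 5),   # the contributor bears the cost alone
        ("shirk",      "contribute"): (5, 0),
        ("shirk",      "shirk"):      (1, 1),   # the canal silts up; both lose
    }

    def best_response(opponent_choice):
        """Return the choice that maximizes my payoff given the other's choice."""
        return max(("contribute", "shirk"),
                   key=lambda mine: payoffs[(mine, opponent_choice)][0])

    for other in ("contribute", "shirk"):
        print("if the other irrigator will", other, "my best response is", best_response(other))
    # Shirking is the best response either way, even though (3, 3) beats (1, 1).
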
One implication of this is that comparisons among irrigation societies can be
problematic. Scarborough has recently undertaken an ambitious study that considers
several archaeologically attested societies, including the Hohokam, in terms of their
water management strategies. He notes that water management systems exist within
a framework of power, and proposes that these be considered heterarchical organizations (see Crumley 1995), in which divisions of power are not hierarchical but consist
of horizontal interactions among different elements. Crosscutting this axis are different ‘planes’ of power. In Scarborough’s framework, these may represent different
domains (such as religion vs. water management) and/or they may have different
geographic extents. The implication he draws is that in some cases there may be
centralization, in which these different planes are aligned, and that this occurs in
hegemonic states; however, such alignment is not necessary to the management of an irrigation
system, and in many cases the exercise of power at the smaller scale required
by the irrigation system is antagonistic to the processes of centralization at a larger
scale, and hence the occurrence of an irrigation-based large-scale state is unlikely.
Yoffee (2005) broaches these issues from a slightly different perspective, but one
that may be useful here. His purpose is to explode the typological thinking that
has recently led to the archaeological category of ‘Archaic States.’ His method is to
dispose of top-down thinking about the state and instead focus on the roles individuals
played in the ancient societies under study. Power inheres in the creation of identity
and the suites of choice available to individuals occupying certain roles; it is suffused
throughout gender, community, identity, labor, and economy. His point is that there
is not a simple entity that could be called the ‘Archaic State’; all archaic states may
be different, even though there is still a role for social evolution in our understanding
of the rise and fall of civilizations. For our purposes, we can ask how power might
have existed across the Hohokam landscape, and if the picture is as complicated as
Yoffee suggests, how we might reach for an understanding of it.
In light of the complications and challenges of managing a large canal system,
the persistence of the Hohokam way of life is notable, which leads to the second big
question. The continued occupation itself would imply that the phenomenon could be
treated as a system, for it is difficult to imagine human presence over such a long time
in a given place without the characteristics of a systemic solution for the provision of
life’s essentials. In the Hohokam case, the persistence is on a scale we are unused to
in our modern society: a millennium is a track record to be envied.
But there is slightly more to the issue of Hohokam persistence. The Hohokam
trajectory can be divided into distinct periods; each period is characterized by differences in settlement organization, demography (as clearly as it can be inferred),
and other aspects of the Hohokam florescence that might easily be related to system
dynamics at large spatial and temporal scales. We may, for example, ask if the large-scale changes we observe are akin to the phase changes posited by frameworks like
Holling’s resilience (see Hegmon et al. 2008).
An alternative to this is offered by Howard (1993b), who notes that canal systems
of the kind built by the Hohokam are difficult to expand; the implication is that the
large extent of the canal system must have been built in a single, initial effort, which
then must have been replicated at intervals marking the end of the useful lifespans
of the canals. Scarborough (2003) proposes a different approach; he offers a framework wherein societies can be classified according to what he calls ‘economic logic,’
of which there are three kinds: societies in which labor is highly specialized and routinized (“labor-tasking”), those in which novel (and frequently changing) technology
is used to exploit the landscape (“technotasking”), and those in which a flexible mix
of strategies is used (“multitasking”). Those that are “technotasking” tend to expand rapidly and engage in overexploitation, and thus risk racing toward a point at
which new technology will not come to the rescue. He contends that this describes the
Hohokam case.
We are also interested in the end of the Hohokam story: why did a system that had
worked for 1,000 years decay and disappear? This question, of course, provokes greater
attention from those in our own society who would like to understand, and avoid, the kind of collapse that the Hohokam system apparently underwent. The arguments
of Howard, Yoffee, Scarborough, and others offer their own versions of cautionary
tales, but their applicability to our modern world is uncertain. Ultimately, too, the
issues of the power structure that operated the irrigation works and the trajectory of
Hohokam society are not two questions, but one.
We can view these issues in the terms given above: resolution, composition, coherence, scope. In all of these the Hohokam context offers challenges. With respect
to scope, for example, we know that the Hohokam trajectory parallels those of other
societies throughout the SW, but we do not know the degree to which it is tied to
them or whether such a connection was only secondary. It is necessary to be more
precise than I have been about the use of the term ‘Hohokam,’ for at times I have
been discussing only a single canal system (especially the large canals around the
Salt River) whereas the Hohokam phenomenon encompasses not only the canal systems along the Salt and Gila rivers but a way of life that extends to the far south in
what is today Tucson; while these areas are marked by common archaeological identifiers, including elements that can be presumed to be either stylistic or functional,
the way in which they might actually have been integrated is unknown (and raises
additional complications; for example, the division into periods is slightly different
for each region). Regarding the issue of coherence, we can presume that something
like a coherent system must have existed because of the integration that we believe
is required in an irrigation system, but we do not know the composition of it, and we
also know that the dynamics we seek to understand occur across a variety of temporal
and spatial scales. With respect to resolution, we can easily see that very large scale
phenomena may be determined by small-scale elements like the relative flexibility of
particular plants in their demand for water, but the composition of the system, which
elements carry these significant impacts, is an open question.
But these questions are only part of the issue, and only return us to the problem
presented by the hypothetical omniscient observer; any understanding of the Hohokam
as a complex system must take a different form than a simple record of what took
place; our understanding must be layered, and may ultimately encompass not only
what actually happened but also how this actual record compares to an array of
possibilities that might have happened, but did not. The kind of knowledge we
seek must be found on several interrelated levels. If, as proposed above, computer
simulation models offer a window to these levels, it is important to understand what
the different levels might be and how they differ or are interrelated.
1.2.2 Objectives: Multiple levels of knowledge
We can consider what a systems-level understanding of the Hohokam context might
entail. We will return to these themes in more detail in the next chapter, but for now
an outline will help introduce the objectives of the modeling solution to be presented.
Below I list four ‘levels of knowledge’: different ways of approaching the questions we
have about the Hohokam, which should be accommodated by the modeling solution.
What was? We would like a modeling solution to be able to deal with the ‘real’ Hohokam past. That is, it ought to be possible to include data that are archaeologically
attested in the model; in general this could be given as a set of constraints, and might
include: the actual structure of the canal system; the actual climate reconstruction;
actual values for landscape, including elevation and soil type; known locations for
settlements; known characteristics of plants; etc. Given such constraints, the model
should be able, under what might be called its most restrictive mode, to reproduce
something approaching the actual Hohokam florescence. In this respect the model
acts as a kind of reconstruction, a repository where the ’real’ Hohokam story (as it is
fleshed out by evidence) can be stored and replayed.
This level is the clearest connection to the firm, empirical basis of the modeling
effort, for it can store either data that have been acquired through empirical research
or data that are provisionally supposed, in lieu of values from research that is pending
(or merely hoped for). It is also, however, incomplete: in later chapters (see Chapter
3) I will address the limitations of the logical bases of modeling to reveal ’what was’,
and show that while models permit us to address questions about the ’real’ past they
do not, and cannot, tell us what actually took place. Nevertheless, the target of
some models is the reconstruction of the real past in greater or lesser detail, and this
differentiates this kind of effort from the other kinds I will discuss here.
What was necessary? A second level allows the model to address what elements are
necessary to achieve something like the Hohokam florescence. Can some possibilities
be eliminated because it can be shown that no combination of such alternatives is possible? For example, some hold that the Hohokam could extract all the sustenance
they needed from their irrigated agriculture, while others hold that connections with
outside regions were necessary to deal with shortfalls. The variables that play roles
in this (demography, productivity, etc.) are myriad, but if the implications of each
31
variable can be quantified, it should be possible to establish under what ranges the
system could be self-sufficient. Other examples of this kind of reasoning pertain to
the characteristics of Hohokam cooperative relationships, and are discussed at greater
length below. As with the preceding category, there is again a point of logic to be
addressed: the actual result of these models is to illustrate what was not necessary,
rather than what was. This applies whether the approach is intended to show that
something was possible given only certain components (implying that the others were
not required) or in the case where two previously held assumptions are shown to be
incompatible, so that at least one must be discarded.
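To illustrate the form of such an argument only (the quantities and the one-line production model below are hypothetical placeholders, not values drawn from the Hohokam evidence or from the simulation described later), a sweep of this kind might look like the following sketch in Python:

    # Illustrative sketch: sweep hypothetical demographic and productivity values
    # and record which combinations would permit self-sufficiency from irrigated
    # agriculture alone. All numbers are placeholders.

    def self_sufficient(population, acres, yield_bu_per_acre,
                        kcal_per_bushel=90_000, kcal_per_person_year=800_000):
        """True if irrigated production alone meets annual caloric demand."""
        produced = acres * yield_bu_per_acre * kcal_per_bushel
        required = population * kcal_per_person_year
        return produced >= required

    viable = [(pop, y)
              for pop in range(5_000, 50_001, 5_000)   # hypothetical populations
              for y in (10, 20, 30, 40)                # hypothetical yields (bu/acre)
              if self_sufficient(pop, acres=19_000, yield_bu_per_acre=y)]

    print(len(viable), "of 40 parameter combinations are self-sufficient")

The result of such a sweep is not a statement about what the Hohokam did, but a map of the combinations under which self-sufficiency would, or would not, have been possible.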
What if...? As mentioned above, the main advantage of a simulation is not the ability
to reconstruct the single history that actually happened, but the ability to run alternative histories. This has been elegantly represented by the metaphor, introduced
by Stephen J. Gould and discussed by Lansing (2003), of what would happen if we
could ‘rewind the tape and run it again.’ This touches on the most fundamental of
questions, not merely fundamental to this research effort but to the domain of human history generally: what part of what happens in the world is chance, and what part would have happened in more or less the same way regardless? The Annales school
of history asks the same question; recent work by Jared Diamond (1999) suggests
that deeper principles shape what has become the history of the world, so that the
dominance of Europe and western capitalism was the product of currents that run so
deep that it was, in effect, inevitable, even if the details of how it actually took place
were incidental.
Simulations can address this by re-running similar histories numerous times, and
charting the distribution of outcomes. In the Hohokam context, we could ask, for example, what if the pattern of wet vs. dry years had been different: what if more dry years had happened early in the Hohokam trajectory? Would it still have been worthwhile for the Hohokam to have invested in the canal system? The crucial question is whether the Hohokam florescence was one very unlikely outcome of many possibilities,
or whether within a fairly broad range of parameters the same general outlines arise.
In technical parlance, the basins of attraction that lead to generic outcomes can be
mapped.
It is also through this kind of approach that the system’s ability to withstand stressors (its resilience, in Holling’s framework, and also its robustness, to return to the term used by Wagner) can be assessed.
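The experimental design, at least, can be sketched very simply (the toy payoff model below is arbitrary and is not the HWM simulation; only the form of the experiment, re-running one model under many stochastic climate sequences and charting the outcomes, is the point):

    import random

    # Illustrative sketch of 'rewinding the tape': the same toy model is re-run
    # under many randomly generated sequences of wet and dry years, and the
    # distribution of outcomes is tallied. The model itself is arbitrary.

    def run_history(p_dry, years=200, seed=None):
        rng = random.Random(seed)
        storage = 0.0
        for year in range(years):
            harvest = 0.4 if rng.random() < p_dry else 1.0   # hypothetical harvest
            storage += harvest - 0.8                         # hypothetical consumption
            if storage < -2.0:                               # sustained shortfall
                return year                                  # the system fails
        return years                                         # survived the whole run

    outcomes = [run_history(p_dry=0.3, seed=i) for i in range(1_000)]
    survivors = sum(1 for o in outcomes if o == 200)
    print(f"{100 * survivors / len(outcomes):.1f}% of alternative histories persist")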
Why? The final hope is to find the most abstract principles that might be used to
explain the Hohokam system. This level is the furthest from the ‘reconstruction,’ for
it is at this level that we are broaching the most general knowledge, the kind that will
be applicable to contexts far removed from the Hohokam. This is the level at which
we can dispense with the actual Hohokam landscape, the actual canal system, the
specific plants they farmed, and the details of the climate reconstruction, and instead
focus on abstract landscapes of generic characteristics, or a generic canal system, or
coarse categories of plants, or other such abstractions. It is at this level, too, that
we can explore whether the Hohokam were exploiting the kind of ‘order for free’ that
Kauffman and Lansing suggest is possible.
One implication of these varying levels is that the actual archaeological record
cannot be the only guide to our understanding. Certainly this is not a call to abandon archaeological field research (although it must be admitted that research on the
Hohokam in the Phoenix area is becoming more difficult every day; in ten years it may
be ‘done’ in the sense that there are no more places to explore that are not already
covered by the modern city). Rather, it suggests that our goals of understanding
must be coupled with the empirical record in a new way. It will not be possible to
arrive at the knowledge we seek through a simple review of the empirical data from
the past exactly as it played out, nor is the reconstruction of that past our only goal.
Instead we must construct models of how that past came to be, which necessarily
means that we address how it might otherwise have been.
In the next section, I will review extant modeling approaches, and assess how well
each can achieve this set of goals. One clear implication of the above discussion is that
‘the Hohokam problem’ is actually multiple problems; questions about the (possible)
system-level behavior of the Hohokam florescence are composed of problems that exist
at a number of other levels below. Whether these always form a neat and reductive
hierarchy is an additional question that cannot be resolved a priori. This means that
any modeling framework that hopes to address these larger issues must be flexible
enough to integrate the subsidiary ones; the modeling framework that this dissertation
presents attempts to achieve this.
1.3
Simulation Models in Archaeology
Models have a long history in archaeology (see Clarke 1972b); however, the term
‘model’ has too many meanings to be useful here (I will revisit it at a more appropriate
point; see Chapter 3). Here it is enough to point out that there is a shift underway in
archaeology; following Hegmon (2003), it can be said that the emphasis in archaeology
is shifting from “models” to “modeling.” What were formerly proposed as models
(such as “optimal foraging strategy” or other models; cf. Winterhalder 2002) are
now being supplanted by ‘bottom-up’ approaches in which components are assembled
and their results evaluated. This is reflected in a wide shift, perhaps epitomized
in a recently published book entitled The Model-Based Archaeology of Socionatural
Systems (Kohler and van der Leeuw 2007), and while the shift this represents can be
considered to be purely conceptual, it is clearly related to technology: in general this
new kind of modeling is done via computer simulation.
1.3.1
Varieties of Archaeological Simulations
Computer simulation modeling in archaeology has a history that goes back further
than most people think (the first example was by Wobst [1974]). But technological change as well as changes in theory and interests have led to several distinct
strands of archaeological modeling, of which I will discuss three: large-scale simulation, hypothesis-testing simulation, and agent-based models. These categories can crosscut and merge, so that it is wrong to consider them entirely separate, and their points of overlap are illuminating, a point to which I will return.
But the division also reflects much of common practice and discussion in the literature: the three approaches are dealt with largely separately and treated differently,
even though in many cases exactly what makes them different is as yet only poorly
mapped.
Large-Scale Simulation One variety of archaeological models is particularly relevant
because it is claimed to be a mechanism to address the very issues that are the objects
of this modeling effort; I will call these efforts, collectively, ‘large-scale simulation.’
Examples include the work of Kohler et al. (see Kohler et al. 1999), Barton et
al. (see http://www.asu.edu/clas/shesc/projects/medland/), Altaweel (2008), and
Christiansen and Altaweel (2006a, 2006b). The term ‘large-scale’ is not intended to
imply that the simulations cover a large spatial extent (although they often do), but
rather that they are ambitious enough to try to encompass a wide array of aspects.
These models are usually integrated with Geographic Information Systems (GIS)
platforms, which allows the simulations to be played out across representations of real
landscapes. Often these are constructed from satellite photographs, digital elevation
models, or other available geodata. Sometimes the landscape data are supplemented
with detailed reconstructions of vegetation, soil type, or climate, and these may be
temporally varied, so that they represent reconstructions through decades or even
centuries. Typically the effort that is required to craft such reconstructed landscapes
is considerable (e.g. Kohler et al. 1999). One result of these models is that
they are easily accessible: the depictions of the past landscapes are given by the GIS
systems, and they are intuitive and compelling.
In addition to copious real-world detail, these simulations have more recently
begun to employ ‘off-the-shelf’ simulations from other disciplines. Thus, a model
of climate developed by climatologists, a model of plant productivity developed by
biologists, and a model of water transport developed by hydrologists may be joined
together and run as a package. The technical issues in doing this are daunting, but
the promise offered by this approach would seem to be exactly what is needed to
address a complicated, changing system: as one component is changed, the others
can react based on ‘real’ dynamics, thus allowing the system to change in the ways
the ‘real’ system would have, and thus the simulation can claim to demonstrate the
way the natural world would react to anthropogenic impacts.
Hypothesis-Testing A second class of current archaeological models is represented by
a pair of examples given by Brantingham (2003) and Premo (2006). Both are agent-based models, which will be discussed as a distinct type of model below, but the
aspect of interest here is that both test prior claims through a specific procedure:
each simulation is created without incorporating assumptions that other claims have
made concerning a particular phenomenon, and thus the simulation is said to form
a ‘null hypothesis’ against which to test these claims. In both cases, the simulation
is highly abstract: Brantingham’s simulated landscape is devoid of all detail except
that which is directly necessary to his hypothesis, while Premo asks his agents to play
out their lives on a grid where resources are spaced geometrically at intervals that
can be varied as an experimental parameter. This abstraction allows the researcher
to ask a single question, and to control for all possible confounding factors, and thus to
arrive at a firm conclusion regarding that question.
Agent-Based Models Agent-based models (sometimes also called ‘Individual-Based
Models’) are models in which heterogeneous collections of agents are endowed with
the ability to perceive and act on their environments. Two basic kinds of agent-based
models can be distinguished: those in which a single agent (or a small number of
very different agents) act on their landscapes, and those in which large populations
of agents interact with each other. The distinction is important because it leads to
two different ways that agent-based models can be interpreted. The key element
that both methods share is that the larger properties of the system are difficult or
impossible to infer from the rules that the individual agents follow. In the case
of a single agent (Brantingham’s model is like this), the important outcome can
be a distribution of something that results from the unpredictable actions of the
single agent; in the case of interacting populations, the significant outcome can be
some global characteristic of the population. Often the outcome of interest in the
latter case is that the global population dynamics appear to be determined not by
the characteristics of the individual agents but by the context in which they are
placed: a global property that seems to transcend the idiosyncrasies of individuals, and thus cannot be studied merely by examining those individuals. This characteristic
of emergent properties is a hallmark of complex systems and is thus of great interest
to those studying systems, such as coupled human and natural systems, where similar
dynamics might apply.
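A minimal sketch of the second kind of agent-based model (a deliberately generic toy, unrelated to the Hohokam material) illustrates the point: each agent follows a trivial local rule, yet the population-level outcome is set by the environment in which the agents are placed rather than by any single agent’s characteristics:

    import random

    # Illustrative sketch of an agent-based model: each agent follows a trivial
    # local rule (harvest, pay a metabolic cost, reproduce when well fed, die when
    # starved), yet the population settles near a level set by the environment's
    # regrowth rate, a quantity that appears in no individual agent's rule.

    class Patch:
        def __init__(self, regrowth=50.0, cap=200.0):
            self.stock, self.regrowth, self.cap = regrowth, regrowth, cap

        def harvest(self, amount):
            taken = min(amount, self.stock)
            self.stock -= taken
            return taken

        def step(self):
            self.stock = min(self.stock + self.regrowth, self.cap)

    class Agent:
        def __init__(self, energy=2.0):
            self.energy = energy

        def step(self, patch):
            self.energy += patch.harvest(1.5)   # perceive and act on the environment
            self.energy -= 1.0                  # metabolic cost

    random.seed(1)
    patch = Patch()
    agents = [Agent() for _ in range(5)]
    for step in range(500):
        random.shuffle(agents)                  # harvesting order varies each step
        for agent in agents:
            agent.step(patch)
        offspring = []
        for agent in agents:
            if agent.energy > 4.0:              # well fed: reproduce, splitting energy
                agent.energy -= 2.0
                offspring.append(Agent())
        agents = [a for a in agents if a.energy > 0.0] + offspring
        patch.step()

    print("population after 500 steps:", len(agents))  # settles near the regrowth rate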
1.3.2
Applicability for Exploring Complex Systems
We can assess each of the kinds of simulation discussed here by considering it in light
of the fundamental challenges of such studies, as enumerated in section 1.1.1, and with
respect to the kinds of knowledge we hope to gain, as outlined in section 1.2.2. Thus
we can consider how each deals with the issues of composition, resolution, scope,
coherence, chance, parameter space, completeness, and validity; and how well each
allows us to deal with questions of all four levels: What was?, What was necessary?,
What if...?, and Why?
Large-scale Simulations Large-scale simulations have the highest claim to validity, in
the sense that they incorporate models that have been thoroughly tested within their
respective subfields. Equally, they can most easily be the repositories of information
that present ‘what was’; reconstruction is part and parcel of the approach. But they
begin to show strain when applied to the other issues. The approach seems to presume
completeness through sheer brute force: that is, the emphasis is on shaping a model
that is so realistic that it can be assumed to include the relevant elements. In some
ways, the challenge is to find these elements, through extensive sensitivity analyses.
This dodges the problem of composition and coherence: the connections among the
submodels are presumed to be such that all of the elements in any presumed system
are present and that they must work together to operate that system.
There is an issue with resolution as well. These models tend to be shoehorned
into GIS frameworks and/or include such a variety of submodels, each of which may
be geared toward a particular resolution, that navigating the multiple resolutions in
which each portion operates is a nontrivial challenge. The technical issues of getting
a submodel that operates on, say, 30 m resolution data to work with another that
operates on 100 m data are not insignificant. (A recent project at Arizona State University attempted to create a framework within which such transformations were automated; while any single transformation is not difficult, the range of variations for all possible transformations is large, and operations that were initially imagined to take only one or two steps proved to take more than twenty. Eventually the programmer left for another position and the project was abandoned.) Some frameworks (Christiansen and Altaweel 2006a, 2006b) do attempt to overcome these difficulties.
Where large-scale models begin to show their greatest strain is with respect to the issues of chance and parameter space. Because the input requirements are so large, the parameter space is immense. In some cases variables that are inarguably irrelevant to the problem at hand are nevertheless incorporated into the model when a particular
submodel is adopted (for example, the SWAT model, the ‘Soil and Water Assessment Tool’ used in some of these archaeological models, includes a variable for ‘Frequency of Street Sweeping’; see Neitsch et al. 2005). It may be possible to control for these variables either by sweeping on them and performing sensitivity analyses or by assuming that their values are irrelevant so long as they are held constant. But in most
of these cases, these variables become merely part of the large background that is
accepted as ‘realistic’ and not addressed as a part of the problem solving operation.
Whatever method is adopted, the variables that are thought to be possibly relevant
are so numerous that sweeping through a parameter space becomes impossible, and it
becomes difficult to address the issue of ‘chance’ that might otherwise be of interest.
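The scale of the difficulty is easy to make concrete (the parameter and level counts below are hypothetical): even a coarse full-factorial sweep grows exponentially with the number of variables, and the replicate runs needed to characterize chance multiply it further.

    # Illustrative sketch: the size of a full-factorial parameter sweep.
    # The counts are hypothetical; the point is only the exponential growth.

    levels_per_parameter = 5      # a coarse sweep: five values per variable
    replicates = 30               # repeated runs per combination, to see 'chance'

    for n_parameters in (5, 10, 20, 50):
        runs = replicates * float(levels_per_parameter) ** n_parameters
        print(f"{n_parameters:>3} parameters: {runs:.2e} model runs")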
Because of these limitations, large-scale simulation models excel at questions of
‘what was’ and can occasionally be used for ‘what was necessary’ (although the removal or selective inclusion of elements is difficult); they are less able to deal with
‘what if’ questions of alternative pasts that might have been, and are more often
focused squarely on what was (e.g. Kohler’s team’s work). There is, indeed, a greater
emphasis than might be wished on ‘tuning’ the model to arrive at the past that is
attested. Finally, the most interesting, abstract questions are very difficult to deal
with, because these simulation frameworks are not designed to work with abstract
situations. The most aggressive (to date) project to deal with this limitation is the
team associated with M. Barton, who attempt to work with a common model in two
different landscapes (see link above); nevertheless, the break from specific landscapes
to generic and abstract ones, which might bolster the case that the principles revealed
are more generally applicable, is not yet demonstrated.
Hypothesis-Testing Models Hypothesis-testing models lie at the other end of the spectrum from large-scale models. They can address many of the issues faced by the
large-scale models by abstracting them away; because they are not tied to real-world
data sets they can generally choose the resolution and scope at which they operate.
They can be explicit about the elements they include, and can bound parameter space
in a way so that chance can be explored. However, they may be so abstract that they have problems with validity: the degree to which the dynamics they represent are congruent with the dynamics observed in the real world can be difficult to assess and demonstrate. It is also true that they generally presume a form of coherence, in
that they often consist of a system that is built and put into action a priori.
As a consequence, hypothesis-testing models excel at the most abstract ‘why’
questions, are able to approach ‘what if’ and ‘what was necessary’ questions, but are
distant from the issue of reconstruction that the large-scale models can address so
well.
Agent-Based Models Agent-based models are difficult to discuss in the framework I’ve
arranged, primarily because the agents are usually created within one of the other two
modeling frameworks and tend to take on the attributes of the framework in which
they live. In general, they can do this because they are flexible enough to overcome
some of the limitations of the other approaches presented here. One aspect of this
tendency is interesting, however, because it reveals something fundamental about
agent-based modeling: that the agents’ repertoires must be defined in the context in
which they are placed. In some cases the environment contains only other agents,
while in others it contains rich detail about a hypothetical landscape. Whatever
the case, agents’ decisions about their actions, and the actions they undertake, are
ultimately rooted in the environment in which they exist. Hence the modeler must
make decisions about how the agents are able to perceive their environments and
what they can do based on those perceptions. This touches on validity when the
agents are intended to represent some real-world phenomenon like a human being
or a household, but is also related to what we have called completeness. It is also
important in relation to one of the key ways we learn from models during the model
construction (see below).
Agent-based modeling is one of the forward prongs of studies in complexity, and it
should not be surprising that it can be used to answer fairly abstract ‘why’ questions
as well as ‘what if’ and ‘what was necessary’; it is only more rarely used in the ‘what
was’ sense, but it can be so used in certain circumstances.
1.3.3
General Assessment of Complex Systems Simulations in Archaeology
It should be noted that some of the criteria on which I have assessed the models
are mutually inconsistent; certainly the levels of reconstruction and abstraction are opposed.
Levins (1966) provides a framework for discussing models, and argues that models
achieve some goals only at the cost of others. It may also already be obvious that
my sympathies lie with the hypothesis-testing approach (possibly integrated with an
agent-based approach). The end goal of science should be general statements that can
be applied to a wide array of contexts, for which individual cases are only examples.
The problem, then, for existing approaches is to be able to move back and forth
between the concrete and detailed information employed by the large-scale models and the abstract hypothesis-testing approach. One of the key elements of this is
the interplay between composition and completeness: finding the units of particular
systems that are relevant to the general principle proposed and/or revealed. This can
involve the construction and decomposition of different units of analysis, at different
resolutions, and with different scopes and scales; one challenge is to do all of this
while being able to incorporate ‘real’ data when appropriate, yet still not become too
tied to a specific instance or context. If this is achieved, it becomes possible to ask
questions at all four levels, bridging the gap between ‘what was’ and ‘why.’
1.4
A Flexible Modeling Framework
In this dissertation I present an alternative to the approaches discussed above, and
suggest that this alternative carries the best of the hypothesis-testing and ‘large-scale’
approaches while overcoming their limitations.
1.4.1
The Modeling Framework: An Introduction
There are two key elements of this modeling framework: a specific technical architecture, and a process for working through modeling problems that is facilitated by this
architecture. The technical framework consists of a database linked with simulation
code; this parallels the idea that data are static but relationships among elements are
dynamic; the detailed discussion of this architecture can be put off for now, but a
brief discussion of the process of modeling within the new architecture can be given
here.
This process consists of the construction of an initial simulation engine that simulates some small collection of dynamic components. Data that are required by these
components can be assembled in the database, and the implications of these components and these data can be explored using the simulation engine. As the limitations
of these are encountered, more can be added or irrelevant ones can be removed. The
process can proceed to simpler or more elaborate approaches, or both in parallel.
The important result of this is that both the more ‘realistic’ simulations embodied
by the large-scale approaches and the more abstract ones achieved by the ’hypothesis
testing’ approaches can be attempted.
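In the most schematic terms (the table and component names below are invented for illustration; this is not the architecture described in Chapter 4), the arrangement couples a store of static assertions with a small, accretively growing set of dynamic components:

    import sqlite3

    # Schematic sketch of the database-plus-engine idea: static data live in a
    # database, dynamic relationships live in simulation components, and
    # components are added or removed as limitations are encountered.
    # Table and component names here are invented for illustration only.

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE assertion (name TEXT, value REAL, source TEXT)")
    db.executemany("INSERT INTO assertion VALUES (?, ?, ?)",
                   [("annual_rainfall_cm", 29.0, "Crown 1990"),
                    ("field_area_ha", 100.0, "hypothetical")])

    def lookup(name):
        return db.execute("SELECT value FROM assertion WHERE name = ?",
                          (name,)).fetchone()[0]

    # Dynamic components: each acts on the current state; more can be appended.
    def rainfall_step(state):
        state["water"] = state.get("water", 0.0) + lookup("annual_rainfall_cm")

    def crop_step(state):
        state["harvest"] = 0.5 * state.pop("water", 0.0) * lookup("field_area_ha")

    components = [rainfall_step, crop_step]

    state = {}
    for year in range(3):
        for component in components:
            component(state)
        print(f"year {year}: harvest index = {state['harvest']:.1f}")

The design choice being illustrated is only the separation of concerns: assertions carry their sources and can be swapped for provisional or hypothetical values, while the dynamic components can be made simpler or more elaborate without disturbing the stored data.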
This accretionary approach to modeling also accommodates a collaborative approach; as I mentioned above, one practical aspect of the Hohokam project is that
the expertise (both insight and knowledge) required to approach the problems of interest is distributed among scholars from an array of backgrounds. The accretionary
approach, especially when combined with the persistent audit trail that the database allows, is especially conducive to such collaboration.
Another important aspect of this framework is that the implementation of the
different ideas that are brought to the modeling process is required, by the physical nature of computer simulation modeling, to be reconciled completely with the
other elements of the model. Modeling can too easily become an exercise in thinking with ‘fictional objects’ (Frigg and Hartmann Spring 2008) which, especially in a collaborative context, leads to unfounded conclusions.
Eventually agents can be added as specific components with dynamic relationships
to the other components in the model. Agents will contain a repertoire of perceptions
they are allowed and actions they are permitted to undertake on the basis of the information they can obtain. In the modeling framework proposed here, the vocabulary
agents can use is defined by the universe of objects that have been included in the
model and the ways these objects can be aggregated, ‘perceived’ by the agents, and
acted upon by them.
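A small sketch conveys the idea (again with invented names): an agent’s possible perceptions and actions are drawn only from the objects the model actually contains, so that nothing an agent ‘knows’ or ‘does’ lies outside the model’s universe:

    # Illustrative sketch (invented names): an agent's repertoire of perceptions
    # and actions is restricted to the objects registered in the model, so agents
    # cannot reason about anything the model does not contain.

    class Field:
        def __init__(self, moisture=0.2):
            self.moisture = moisture

        def irrigate(self):
            self.moisture = min(1.0, self.moisture + 0.5)

    class Model:
        def __init__(self):
            self.objects = {"field": Field()}        # the universe of objects

        def perceive(self, name, attribute):
            return getattr(self.objects[name], attribute)

        def act(self, name, action):
            getattr(self.objects[name], action)()

    class Agent:
        def step(self, model):
            # The agent may only perceive and act through the model's registry.
            if model.perceive("field", "moisture") < 0.5:
                model.act("field", "irrigate")

    model, farmer = Model(), Agent()
    farmer.step(model)
    print("field moisture:", model.perceive("field", "moisture"))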
1.4.2
Epistemological and Theoretical Concerns
In fact, this modeling approach touches on deep epistemological issues that deserve careful thought. The introductory discussion in which an omniscient observer was proposed indicates part of the nature of the problem: perception must be
undertaken in categories, and so it seems that it is not possible to perceive without
a model as a guide; there are echoes in this of debates going back to Plato. What
constitutes a ‘model’ is the subject of considerable discussion among philosophers of
science (Frigg and Hartmann Spring 2008), and in the last century there has been
a sharp divergence in the use of models that traces the course of scientific thought
from Logical Positivism of the beginning of the 20th century to Postmodernism at its
close; this period of time has seen a shift in the idea of what constitutes a scientific
theory, and in the relationship of theory to models, generally described as syntactic
vs. semantic approaches. Some have argued that certain kinds of models, including
agent-based models, abjure a Positivist approach entirely (Perez and Batten 2003; cf.
Lansing 2002). It is argued here that this modeling framework bridges a gap between
the two different kinds of theory, and restores a positivist approach. One example
of this is the use of a database, which harks back to the original purposes of databases,
which were first designed as logical engines based on a syntactic view of theories, but
in practice have been put to uses in keeping with a less rigorous semantic view.
One large epistemological question that modeling raises is, What is the nature of
the process through which we learn about the real world through models? There is
general agreement (Frigg and Hartmann Spring 2008) that part of the learning process is achieved during the construction of the model, before the model is ever used
(or, in a simulation environment, run). The framework proposed here is intended
to maximize this effect; this is done through the (sometimes uncomfortable) requirements that each new model element must meet in order to be integrated into the
modeling environment, for this guides the modeler to find those points at which
the ideas he or she wishes to include are incomplete or inconsistent.
These and other epistemological considerations will be addressed again, more thoroughly, in Chapter 3. For now we can consider how this framework can address the
different levels of knowledge that we hoped to achieve in the Hohokam context. Regarding reconstruction, there is no limit on the level of detail or the addition of
constraining details, so all of the benefits of the ’reconstructive’ level are available.
Regarding the second level, “what elements are necessary,” these kinds of questions
can easily be asked within the proposed framework.
It is at the third level that the framework begins to show its potential. Part of the
‘static’ data that can be provided can be provisional or hypothetical. This can accommodate any range of ‘what if’ questions that might be proposed. In the collaborative
setting, this is especially important because the hypothetical or provisional possibilities that one researcher brings into the simulation environment become available for others to make use of.
But the final, most abstract level is the most significant, and it is where the modeling framework offers the most advantages. The central advantage of the new framework
is that it allows the modeling process to work along the axis of most specific to most
abstract. Data can be entered that are highly detailed and specific to the Hohokam
context, or that are abstract and generic; relationships among elements can represent
the most complex and subtle components, or coarse proxies can be substituted for
relationships that are actually known to be much richer. The value of doing this is
that it makes it possible to ask which kinds of detail actually matter; it is proposed
here that this allows the modeling framework to move to highly abstracted principles, perhaps even to the level of abstraction at which Lansing’s fields and Curran’s forests are alike, thus illuminating the general principles that may have allowed the Hohokam to find the ‘sweet spot’ that let them live in a hostile desert for a thousand years.
1.5
Organization of the Discussion
The main contention of this dissertation is that understanding the archaeological
record in view of the possibilities offered by complexity theory requires a new approach. The various threads that have formed this history are not yet knit into the coherent form that this new pursuit will require. We are in need of
new intellectual tools. I will try to offer a view of what this new toolkit may look like,
and to provide not only a description of it but also a justification of it in theoretical
terms, and an example of it at work on the Hohokam example.
To these ends, the remainder of the text is organized as follows. Chapter 2 discusses the Hohokam context in detail, beginning with previous approaches and ending
with a more complete discussion of complex systems approaches and how they might
be applicable to the Hohokam case. Chapter 3 steps back from this to discuss the the-
45
oretical and epistemological foundations of the new and more exploratory approach
in archaeological modeling that is to be applied to the questions the Hohokam context raises. This is presented as a new ‘toolkit’, but in Chapter 3 this is disucssed
only abstractly; in Chapter 4 a software implementation called the Assertion-Based
Modeling Framework (ABCM) that makes the approaches in the proposed intellectual
toolkit possible is presented in more concrete detail, along with some additional theoretical implications. Chapter 5 presents the details of how the abstract framework
the ABCM modeling system provides is implemented in the Hohokam case- that is,
how the ABCM approach is applied to the Hohokam context. The ABCM simulation
created for the Hohokam case is called the Hohokam Water Management Simulation,
or HWM.
Chapter 6 puts the HWM simulation to use, demonstrating its application by
constructing an extended example through a series of simulations that show how its
various components interact. The conceit is the creation of an elaborate Hohokam
landscape, called ‘Landscape 3’, which includes varying topography, rivers, headgates,
canals, fields, and environmental aspects like rainfall and temperature changes, and
eventually is home to agents who interact with the landscape and with each other,
allowing the construction of a simple model Hohokam world. The emphasis is on how
the elements that we have decided impact our set of questions about the Hohokam
are integrated into a single framework, and how that framework provides for a wide
collection of possible avenues of exploration.
The prospects for the ABCM approach and the Hohokam are presented in Chapter
7, which summarizes the arguments in the preceding chapters and looks forward by
placing the ABCM in a wider context of archaeological theory and practice, and by
projecting the plans and goals of the HWM simulation environment.
Appendix A describes a small collection of sample simulation output available as
attachments to the electronic version of this document.
Chapter 2
The Hohokam as a Complex Coupled
Human/Natural System
In this chapter I turn attention to the specific context that is the focus of the simulation modeling effort described in this dissertation. The Hohokam present to archaeologists two related puzzles. First, it is known that they managed large scale canal
irrigation works along the Salt and Gila rivers, but the details of how they organized
this complicated undertaking are not yet understood. Second, the Hohokam arose
as a distinct society around the core area dominated by the canal systems, and persisted for over a millennium; but during the later portions of their trajectory their
society underwent an apparently dramatic transformation, the conclusion of which
was abandonment of the canal systems and depopulation of the areas the Hohokam
had inhabited. The causes of this restructuring are also poorly understood. It should
be obvious that these two issues may actually be two sides of a single coin, for the
canal system would have been so central to the Hohokam way of life, certainly in
the core areas and probably well beyond, that it is difficult to argue that the two
issues can be entirely independent. However, the details of each and of their likely
interactions remain unresolved.
I begin this chapter with an overview of the larger Hohokam trajectory, what
characterized it and the details of the larger change I intend to examine; this overview
begins with a discussion of the irrigation works and their clear central role
in Hohokam society. I punctuate this overview by noting that even though it is
relatively well-established there are difficulties with it, and these represent challenges
that will also impact the modeling effort to be described later. Following this I
address several more specific issues of current interest in Hohokam archaeology; each
of these contributes something to the understanding of the two larger puzzles, but I
also note that they are not merely separate threads but rather must be understood
to form a single fabric, and in later chapters I show that these are the key elements
that are selected for inclusion in the simulation model. Next I turn more explicitly to
explanations of social organization that have been applied to the Hohokam, focusing
on two categories of such approaches: the first set derives from traditional approaches within anthropology and the second, more recent set arises from complex systems
theory. These latter approaches offer new ways of framing our understanding of the
Hohokam trajectory, and specifically address questions of reorganization and even
collapse.
The most apparent purpose of this chapter, then, is to present an overview of the
Hohokam context and some of the approaches that have been applied to its study. The
larger purpose, however, is to show that each of these approaches has limitations,
and that when taken singly they inform on the larger questions about the Hohokam
trajectory only incompletely. I argue that a new approach is appropriate for attacking
the larger questions by integrating the various other approaches. This approach
will be rooted in simulation modeling, but it will be required to be an example
of the ‘middle ground’ of archaeological simulation modeling: neither an abstract,
hypothesis-testing model nor a simulation that purports to include everything, but
something in between. At the end of the chapter I raise the issue of the goals this
simulation pursues, but rather than settle on a single goal, I argue that the appropriate
goals are multiple and open-ended, and must be understood to be a product of the
set of related questions being pursued and the new kinds of understanding that we are attempting to achieve.
This chapter, then, presents the Hohokam background and the argument that a
new kind of simulation model is appropriate for its study. The model itself, however,
is not presented in this chapter; the modeling effort I propose raises certain epistemological and methodological issues that are addressed in the next chapter, and only
after these are brought into the light do I return to the specifics of the Hohokam
simulation.
2.1
The Outlines of the Hohokam Trajectory
What we know of the Hohokam comes from nearly a century of archaeology in the
areas of what is today Southern Arizona. Much of this work was intended to provide
a basic picture of who was living in the area at what times, how they lived their
lives, and how this picture changed through time. Throughout this the canal systems
have received considerable attention. In this section I offer a review of this basic
picture, and provide some additional detail on the larger Hohokam trajectory. I
close, however, by observing that there are difficulties with this basic picture that
will have implications for the modeling effort undertaken.
2.1.1
Environmental background
The geographic area in which the Hohokam lived is taken to include the northern
Sonoran desert generally, extending from the Growler mountains eastward to the
Dragoon mountains, and falling south of the Mogollon rim (per Fish 1989, figure
3, p. 20). Its focus is the areas around those today occupied by the modern cities
of Phoenix and Tucson. Its topography includes wide flat expanses with occasional
ridges of mountains and widely spaced rivers (Fish 1989). Its climate is unforgiving
for its heat in the summer and for its aridity all year round. The rivers would have
created genuine oases, though two of the main rivers in the Hohokam world, the Gila and the Salt rivers around modern Phoenix, are today dammed, and it is difficult even to imagine them running at their full widths. Monsoon rains in
the late summer and occasional storms in the winter (Crown 1990) give the area its
only water outside of what the rivers would have offered, but annual rainfall averages only 29cm (Crown 1990), and in some places may be as little as 15cm annually
(Bayman 2001).
Despite these challenges, the Sonoran desert must also be recognized as an ecologically diverse area that harbored a wide array of plant species, which the Hohokam
would have been able to exploit (Fish 1989). Within its boundaries are found many
areas at different elevations with noticeably different climate and attendant variation
in flora and fauna (Fish 2000). The bimodal rainfall, in contrast with adjacent areas,
provides the Sonoran desert with an abundance of gatherable species (Fish 2000). In
comparison to other deserts, the Sonoran is rich in species useful to humans, which
might have allowed the Hohokam to view the desert as a place of relative abundance
and plenty: certainly a place with possibilities and opportunities in counterbalance
to the difficulties that are so apparent to the modern eye.
2.1.2
Hohokam Irrigation and Subsistence
The distinctive and central method employed by the Hohokam for subsisting in this
environment was the use of irrigation to provide water for agricultural crops. The
critical role of irrigation in the Hohokam world cannot be denied: the scale and extent
of the irrigation works makes this clear. One simple measure of this scale is the length
of the system’s main canals. Several main lines stretching from the Salt River exceed 10 miles, and feed a network of additional branches and secondary canals.
Turney (1929b) estimated that the main lines on the north side of the river totalled
95 miles (approx 150 km), while those on the south totalled 135 miles (approx 215
km). Other, more recent, estimates are even higher, in part because archaeological
fieldwork continues to bring more canals to light; Doyel (2007) offers an estimate that
suggests that the Salt River canal networks now are known to total over 300 miles
(480 km).
A second useful measure of the scale of these systems is that of the area that could
be irrigated. Hunt et al. (2005) note that this is a more appropriate measure for use
in comparisons among systems, but also that it must be estimated and involves a
number of factors that cannot be known with certainty. As a result, estimates can
vary widely. Turney’s original estimate was about 100,000 acres, or just over 40,000
hectares, for the Salt River systems in total (Turney 1929b). Howard (1993b) more
conservatively estimates a total for just Canal System 2 of nearly 19,000 acres (approx
7,600 hectares), assuming strong limitations on the water that would have been deliverable through the canal systems and correcting to include contemporaneous canals
only.
These figures make the Hohokam system on the Salt River quite large when viewed
in comparison with other irrigation works from prestate societies. They are the largest
such examples in the New World north of Peru (Fish 2000) (where prehistoric irrigation works may have supplied over 600,000 ha across 36 valleys; see Denevan
2001). Hunt (1988) considers a range of irrigation examples from modern states,
some of which employed national control, while others did not; he considers any system over 4,000 ha to be “large from the point of view of operations” (p. 347). Hunt
et al. (2005) find that there are few ethnographic parallels to irrigation works of the
scale of that of the Hohokam worldwide from non-state societies, though they also
argue that the existence of a state may have less impact than is presumed on the
operation of a canal system.
The details of the canal systems’ functioning also shed light on scale of a different
kind, that of the challenges the Hohokam overcame in building and operating them.
The main lines that drew water away from the rivers were large channels; the largest
exceeded 5m in width and several meters in depth. Turney (1929d) suggested that
water control features were not needed at some intake points (though he also assumes
some were used at others); however, Howard argues that two water control features
were probably used: the first a weir extending into the river, to divert water into the
canal, and the second a gate placed some distance into the canal that could be closed if
the need arose to prevent too much water from invading the system. These he and
other colleagues (Howard 1990, Ackerly et al. 1987) infer from indirect evidence, as
the features themselves were lost to the archaeological record. Many authors agree
that the points from which water was taken from the river by the Hohokam were
those places where it was best to do so due to the morphology of the channel, where
the existence of bedrock or a channel constriction could determine whether the river’s
flow would be diverted easily or only with difficulty; Turney (1929d) notes that the
historic re-use of the Hohokam canals along the Salt River made use of all of the
original extraction points used by the Hohokam, and no others.
The main lines were frequently extended along the contours of the landscape, so
that they moved away from the river, but remained at nearly the same level; that is,
they have very little slope. The topography determines their placement in this respect;
often they traced along the downward edge of a terrace, to be able to water the land
on the downslope beneath them (Howard 2006). Evidence remains in some channels
of a lining of clay or of adobe created by burning (Doolittle 2000); channels could
have cross sections of parabolic, trapezoidal, or elliptical shape, though occasional
irregularly shaped channels are found as well. Main lines may have fed directly
to small channels used to distribute water to the fields, or may have branched into
smaller, but still arterial, channels. Post holes at branch points may be the signatures
of water diversion structures. Some of these, however, were more likely erosion control
features; these operated in conjunction with other techniques, such as cobble stone
linings, and indicate that the effort of controlling water through the system was a
challenge the Hohokam sometimes overcame only imperfectly (Ackerly et al. 1989).
Howard (1990, 2006) describes a pattern seen in Canal System 2. Diversion of
water from the main canals into fields was achieved through side canals that he terms
distribution canals: these were extended perpendicular to the main line and toward
downward-sloping ground. The bottoms of these channels were at a gentler grade,
even as the ground surface descended along them; the result was that after a certain
distance the ground surface and the water surface were at the same level. At this
point the distribution canal would turn parallel to the main line, and proceed along
it, punctuated by ‘turnouts’ that would deliver water to fields (see also Abbott 2000).
Howard later argues (Howard 2006) that this pattern was also employed in Canal
System 1, despite earlier interpretations, and that what appear to be branch points
are almost always points where two canals cross in space but were chronologically
distinct, the later one being built after the initial one had fallen into disuse. In some
cases ‘tapons’ were employed; these are gates that close a canal downstream of a
junction, backing up water in the primary channel and allowing the water surface to
rise so that flow could be more easily directed down the alternative channel (Howard
2006).
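The geometry Howard describes can be made concrete with a simple, purely illustrative calculation (the slopes and the starting depth below are hypothetical values, chosen only to show the relationship): the vertical gap between the water surface and the ground closes at a rate equal to the difference between the two gradients.

    # Illustrative calculation with hypothetical values: a distribution canal whose
    # bed falls more gently than the ground surface brings its water surface up to
    # field level after a predictable distance.

    ground_slope = 0.005     # ground falls 5 m per km (hypothetical)
    canal_slope = 0.001      # canal bed falls 1 m per km (hypothetical)
    initial_depth = 2.0      # water surface starts 2 m below the ground surface

    distance_m = initial_depth / (ground_slope - canal_slope)
    print(f"water surface reaches field level after about {distance_m:.0f} m")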
There are several possibilities for the last steps of water delivery. Ackerly and
Martynec (1989) suggest that three broad categories of delivery systems exist: flood
irrigation, furrow irrigation, and check dam techniques. Of these, furrow irrigation,
which uses a series of furrows to channel water across fields, would leave archaeological signatures that are absent in the Hohokam case. Flood irrigation, however,
cannot be discounted; it involves breaching the berms of the delivery channels and
allowing water to flow across the surfaces of the fields downslope. Check dam techniques include several kinds, but all involve berms around fields and a technique of
flooding one field, then allowing water to flow from it to the next fields in a series (see
also Doolittle 1990). The observation that smaller canal lines crisscrossed each other prompted Midvale (1968) to wonder whether the Hohokam were ’water stealers’, that
is, tapping each others’ water lines, but these may, again, simply be chronologically
distinct channels.
Drainage is mentioned by one source (Wilson 2003), but is little examined. Wilson
notes (ch VI, p. 4) that drainage is required to avoid waterlogging fields and to reduce
the rate of salinization. Howard (2006) notes that in some cases distribution canals
would feed into each other, so that water not turned off into fields by one distribution
canal would be diverted into the subsequent one, but also that some canals ended
abruptly without outflows.
These efforts were not wasted. Estimates of the productivity of their agricultural
systems are hard to discern, but if historic examples offer any guide, the potential
yield garnered from this system would have been quite high. Wilson’s review of the
history of the Pima Peoples on what is today the Gila River Indian Community
(Wilson 2003) provides agricultural census data from the 1860’s, during which the
Pima were practicing irrigated agriculture in many ways similar to that practiced
by the Hohokam. The 1860 data suggest that maize, raised as one of two crops
grown through the year (wheat was grown in the winter, corn in the summer), could
be produced at rates as high as 40 bushels per acre. Given the estimates of acreage
above, and assuming two crops of corn instead of just one, the estimates of production
for the Salt River are very high indeed: if Howard’s 19,000 acres is correct, Canal
System 2 could have produced 1.5 million bushels of maize; if Turney’s 100,000 acres
is correct, the Salt River systems collectively might have produced 8 million bushels.
The estimate of 40 bushels per acre, however, might by Wilson’s reckoning be too high; extending it backwards in time to the Hohokam carries myriad risks, because production technology may have been different, yields of ancient varieties may have differed from those of modern ones, and the figure does not account for fallow periods (which the Pima are reported to have used; Wilson 2003, ch. XI, p. 20). (In passing I will
note that the Pima’s fortunes turned downward during the later part of the 1860’s,
largely due to a shortage of water caused primarily by the removal of water by settlers
upstream; their productivity per acre declined so severely that by the 1870’s they were
forced to virtually abandon the summer crop of corn altogether.)
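The arithmetic behind these figures is simply the quoted acreage multiplied by 40 bushels per acre and by two crops per year; spelled out, using only the numbers already quoted above:

    # The yield arithmetic quoted above, made explicit. The inputs are the acreage
    # estimates and the 1860 Pima yield figure; the results are only as reliable
    # as those inputs.

    yield_bu_per_acre = 40               # 1860 census figure (Wilson 2003)
    crops_per_year = 2                   # assuming two corn crops instead of one

    for label, acres in (("Howard, Canal System 2", 19_000),
                         ("Turney, Salt River systems", 100_000)):
        bushels = acres * yield_bu_per_acre * crops_per_year
        print(f"{label}: {bushels / 1e6:.1f} million bushels per year")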
The critical and central role in Hohokam subsistence played by irrigation may seem
beyond dispute. However, it was only a part of the larger picture, elements of which
are only recently being illuminated. The large-scale irrigation works found along the
Salt and Gila Rivers are echoed at smaller scales in other river valleys throughout the
Hohokam region (Doolittle 2000). These smaller-scale examples would have brought
the benefits of water transportation away from the rivers to extended field plots, but
did not rise to the same level of complexity that must have inhered in the more complicated management scenarios of the larger Salt and Gila systems. At these sites, the
more typical pattern is one that shows a range of subsistence strategies as distance
from the major water source increases. Rainfall and surface wash provided another
source of water; these varied according to local topography and elevation. Fish et al.
(1992b) show that a Hohokam community could be discerned to consist of multiple
zones, within which subsistence activities appropriate to the local microclimate were
practiced. Fish and Fish (2004) note that rockpiles, previously given little systematic
treatment by archaeologists, may reflect a widespread agricultural practice centering
on the cultivation of agave, a perennial succulent that would have contributed calories to the Hohokam diet as well as raw materials for some Hohokam goods. They
comment that agave, because it is perennial, can be cultivated at multiple stages simultaneously and is therefore not susceptible to catastrophic loss due to a single season of low precipitation; it may thus have formed an important buffer that affected the Hohokam’s ability to withstand short-term disruptions in other sources of their diet. They also note that this practice may be only one of many that are either unrecognized in or absent from the archaeological record. Hence while irrigation would have occupied a central
and important role, it was accompanied by a number of other subsistence activities,
and may have been reliant on them for its long-term persistence.
2.1.3
The Hohokam Trajectory in greater detail
At the beginning of the Hohokam trajectory this region was inhabited sparsely by
groups practicing hunting and gathering lifeways, but sedentism (facilitated by the
comparatively rich environment), agriculture and the production of ceramics appear
early, and predate the appearance of what is typically considered ‘Hohokam’ by centuries if not millennia (Bayman 2001). Archaeologists typically begin the Hohokam
chronology with a period called the ‘Pioneer’ period (Crown 1990; Fish 1989) which
sees the appearance, against the Archaic background, of precursors of the material
hallmarks of what is termed ‘Hohokam’: red-on-gray ceramics are produced, and these
eventually become red-on-buff; disposal of the dead via cremation appears alongside
inhumation and eventually replaces it; and houses built in shallow pits appear, precursors to the true Hohokam ‘pit house’ (Crown 1990). In some places villages included
structures that appear to justify an interpretation of communal, ceremonial function
(Bayman 2001; Crown 1990; Reid and Whittlesey 1997).
Irrigation works appear at some very early sites (Bayman 2001); but the scale
of irrigation works expands rapidly in the period following the Pioneer, traditionally
called the ‘Colonial’ period. During this time, red-on-buff ceramics are widely used
and cremation burials are the norm. The expanding irrigation works are accompanied
by a new form of architecture, the ball court, which appears in many locations and
has been surmised to play a role in integrating communities along the canal systems
in both the Phoenix and Tucson basins (Bayman 2001). The growth in size of the
irrigation works is paralleled by a growth in the population, both in terms of number
of sites, which begin to be found in non-core areas and in the sizes of the populations
at the major sites (Bayman 2001).
Following this expansion is a period termed the ‘Sedentary’ period, which sees
the peak of the trends in the Colonial period and the beginnings of their reversals.
(Bayman (2001) collapses this into a single period that continues the Colonial.) The
construction of ball courts and other ceremonial architecture continued, but the influence of Hohokam at sites removed from the core areas diminishes during this time,
with some of these sites being abandoned, and there is a change in craft production away from the elaborate and specialized approach that had been increasingly
practiced (Crown 1990).
The ‘Classic’ period follows: it is during this period that dramatic changes occur.
Settlement patterning becomes more nucleated, and different kinds of settlements appear and become widespread, including “walled compounds, pueblos, and small platform mounds” (Bayman 2001, p. 281). Some ball court settlements are abandoned;
other settlements appear in areas removed from the core, including the Papagueria to
the west. Some sites in the Phoenix Basin, such as Casa Grande, constructed ‘Great
Houses’, multistory adobe structures (Bayman 2001, Crown 1990). The implications
of this reorganization are not well understood, but undoubtedly reflect changes in
the structure of Hohokam society in the core areas. Other archaeological evidence,
such as the change in the locus of production of high value and elite goods from the
periphery to the larger centers, suggests equally profound changes in economic ties
between core and outlying areas (Bayman 2001), while evidence from burials from
some of the large platform mound sites suggests increased social differentiation, in
terms of status and also separate social groups, within core sites (Bayman 2001).
Sometime prior to the arrival of Europeans the Hohokam cultural area became
largely depopulated; at the time of Spanish contact the valleys were sparsely occupied by groups known as the Pimans. The cause or causes that led to the eventual
depopulation of the area and the disappearance of the Hohokam way of life are little
understood; suggestions have included a wide range of endogenous and exogenous
possibilities (see Bayman 2001).
2.1.4
Difficulties in our Understanding of the Hohokam trajectory
This short overview provides the broad outlines that ultimately form the core of the
focus of this dissertation’s central project. But it is worth pausing here to note the
difficulties that can be easily leveled even at such a broad discussion. Three areas
of difficulty are easy to single out: issues related to time, to space, and to cultural
continuity.
With respect to time, the Hohokam region has long been one in which controversy has existed over the calibration of the sequence of periods. Other areas in
the archaeology of the American Southwest are blessed with extraordinarily precise
chronological markers; dendrochronology offers special advantages that are worthy
of envy. The Hohokam context lacks this convenience. Consequently, the Hohokam
chronology was the subject of considerable debate and discussion, with proposals for
the start and end dates of each period sometimes varying quite widely. Historically,
the controversial calibrations began early in Hohokam archaeology. Initial proposals
for the times to be assigned to early periods derived from estimates of the duration
of individual phases within the periods: arbitrary suppositions, in fact, about how
long each phase had lasted (Doyel 1992; Haury 1976), which provoked the extended
discussions that followed (see Fish (1989) for a table summarizing some of these). A
recent volume (Fish and Fish 2007) gives the following dates:
• 450 CE Beginning of the Pioneer Period
• 700 CE Beginning of Colonial Period
• 900 CE Beginning of Sedentary Period
• 1150 CE Beginning of Classic Period
• 1450-1500 CE End of Classic Period (PostClassic)
These are accepted in this dissertation, but cannot be claimed to be universally
recognized, in part because of the second issue to be raised here, variation across
space.
We can link spatial variation with chronology by noting that different areas within
the Hohokam region make use of different chronologies (see Bayman (2001) for examples). But beyond the issue of chronology, we have seen that Hohokam material
culture varies considerably from region to region. One example alluded to above is
that core areas are notably different from peripheral areas. It is worthy of note also
that Haury attempted to create a distinction between ‘river’ and ’desert’ Hohokam
(Doyel 1992), a distinction later argued to reflect seasonal occupation by one group
rather than two separate groups (Bayman (2001) cites Masse on this point), and in
any case an oversimplification of Hohokam adaptations to a range of environments
(Doyel 1992). The spatial variation is such that the geographic boundaries we place
on what we call ’Hohokam’ can be attacked, depending on the purposes driving our
definitions.
Similarly, the issue of cultural continuity is one that has caused great difficulty
among Hohokam scholars. The issue involves all of: the origins of the Hohokam; dynamics within specific periods of the Hohokam trajectory and possible influences on
the transitions between periods; and continuity with modern-day potential descendants of the Hohokam.
Early speculations on Hohokam origins were many; these were surveyed by Doyel
(1992). Gladwin and Gladwin proposed that the Hohokam were unrelated to Puebloan
peoples and had arrived from somewhere outside the Sonoran desert during the Colonial period (the Pioneer period being then unknown). Gladwin later offered the
revised proposal that a single group arriving in the area during the Pioneer was ancestral to both Mogollon and Hohokam, and that the Colonial period represented
their divergence. Haury proposed that a ‘Cochise’ culture was ancestral to the Hohokam and gave them many of their material culture traits, but that the Hohokam
later assimilated (over time) components of a Mesoamerican culture, including maize
agriculture. Gladwin offered still another proposal, suggesting that the Pioneer period was not Hohokam but rather Mogollon (or developed in parallel with Mogollon), while the Hohokam were immigrants from Mexico. Charles DiPeso suggested that the Cochise culture had given rise to the O‘otam at sites like Snaketown,
but that the Hohokam were immigrants from Mexico who brought stratified society
and irrigation. Schroeder went farther afield with a proposal that the Pioneer period
reflected a culture from Mexico or Guatemala, conquered and assimilated by the Mexican Hohokam, who brought with them irrigation technology. Haury offered a revised
proposal accepting the Hohokam as immigrants from Mexico and disregarding the
‘Cochise’ culture hypothesis. More recent research suggests that the actual origins of
the Hohokam are local; although some have suggested a genesis in the Phoenix Basin
(see Bayman 2001), evidence- including the recovery of canals dating back more than
3000 years (Mabry 1999, 2005, 2008)- suggests an origin in the Tucson Basin and a
radiation of Hohokam traits northward rather than southward.
Throughout the duration of the Hohokam trajectory, there are movements of
groups within the Hohokam area, possible incursions from without, and blending at
the boundaries of what we may choose to call ’Hohokam’. The ‘Salado’ phenomenon
may be an expression of Classic Hohokam, or it may be a group that migrated from the
Tonto Basin to influence the Phoenix Basin (Bayman 2001, Clark 2001). Certain sites
appearing in the southern and western parts of the Hohokam region and exhibiting
what are called cerros de trincheras- terraced hills- represent a concurrence of traits
that indicate a complicated relationship with the Hohokam core to the east. While
some have thought them to be defensive structures required by an increase in conflict,
others note that the sites may have served a primarily agricultural purpose (Bayman
2001; Reid and Whittlesey 1997). A recent appraisal (Bayman and Sullivan 2008)
suggests that they reflect an exclusionary strategy, practiced by a group initially separate from the Hohokam, that emerged, along with the adoption of some Hohokam traits, in response to changes within the Hohokam heartland.
Finally, the connection between the Hohokam and the (comparatively few) people
who were living in the Hohokam area when Europeans arrived has not been firmly
established, representing another case in which a group is attested in the Hohokam
area but may not be properly considered Hohokam (Bayman 2001, Wilson 2003, but
also Gumerman 2007).
We therefore possess a general outline in which we have much confidence, one
that describes: the early florescence of the Hohokam and the initial construction of
the large-scale irrigation works; the extension of Hohokam cultural elements to a
wide distance away from the rivers that anchored the Hohokam core areas; the appearance of settlements of increased size with greater ritual and public architecture
(ball courts); a period of consolidation into fewer settlements of greater size and corresponding new forms of public architecture (plazas); a period of reorganization into
newly founded large settlements with a distinctive residential architecture and new
forms of public architecture; and a period of abandonment and depopulation. But
we also have a picture that includes: difficulty understanding the ethnic or cultural
continuity from one period to the next in almost every case; difficulty reconciling common cultural markers across space with elements that are distinct between separate
regions, including a marked difference manifest between settlements that were closely
integrated into the main, large canal systems and those that were not; difficulty understanding the connections between the different areas within the territory we think
of as Hohokam, and between that territory and its surroundings; and difficulty reconciling the chronological framework on which all of these issues must be pinned. The
combined effect of these challenges led Bayman (2001) to propose that the definition
of Hohokam as a cultural entity might need to be reformulated or abandoned; here
I will argue that these difficulties are linked to the issues of scope, resolution, composition, and completeness that were raised in the first chapter, and will return to
prominence in a modeling context.
2.2
Other Approaches to Understanding the Hohokam
Archaeological research of the Hohokam phenomenon has taken a number of paths
over the last several decades, building on the work of earlier archaeologists whose
object was the basic chronological, culture-historical account just given. These have
employed an array of new techniques and been directed at a swath of objectives. In
this section I present a discussion of several of these that are selected for their bearing
on the larger questions posed at the start of this chapter regarding the management
of large-scale irrigation works and the overall trajectory of the Hohokam florescence.
These programmes, which I discuss under separate headings, do not form entirely
separate tracks; they are actually quite closely related. I do not intend to obscure
this, but it seems convenient to present them as distinct, and it is not a misrepresentation to suggest that they are often practiced separately even when the connections
between them are well-recognized. These connections, moreover, are central to the
larger argument of this chapter- that the questions they each address separately must
actually be considered in conjunction.
I have chosen to group the approaches in three broad categories: ‘regional’ approaches, studies of canals and river morphology, and investigations into agriculture.
2.2.1
Regional Approaches
By regional approaches I refer to those that cross spatial boundaries, including both studies of dynamics across the Hohokam area and those that address connections between the Hohokam and adjacent areas and with the
wider Southwest. I leave aside the issue of actual movement of peoples, however, and
discuss two classes of regional approaches: those that focus on climate and those that
address economics.
Climate-Driven The trajectory of the Hohokam parallels and is almost certainly
linked to broader changes in the prehistory of the southwestern United States. It
is possible (Cordell 1984; Cordell and Gumerman 1989) to place each of the periods
of Hohokam prehistory into a broader pattern that saw echoes of the same changes
throughout the region, widely affecting many peoples across a broad space. One explanation for this is that the Hohokam were compelled to respond to climate changes
that also impacted a much wider area; in this view, climate is an external driver for
both internal dynamics and interactions among groups of people over wide ranges of
space and on long time scales.
Economics and Regional Interaction A second approach to understanding interactions across space works through economics and trade. One area of research focuses
on the exchange of ceramics. Abbott and colleagues (2006) propose that ceramic
sourcing evidence supports a model of integration along canal routes, rather than
centered on single, “focal” villages; Abbott (2006) also proposes that the system of
ceramic exchange was related to the ritual observances at the ballcourts, and that
such exchange was impacted by the abandonment of the ball courts in the Classic.
Another (Bayman 2002) focuses on the control by elites of exotic goods (marine shell)
that are believed to have been necessary for social reproduction- that is, considered
symbolic and required to attain and perform certain social roles. Crown (1990) notes
that there have even been attempts to apply world-system theory to Hohokam core-periphery interactions, but also that the trend attested in the archaeological record is toward a divergence between those two areas, the opposite of that predicted by a world-system approach. More recent efforts (Bayman and Sullivan 2008) have applied a common-pool resource approach to understanding the changes in the Papagueria,
considered a ‘hinterland’ of Hohokam, Patayan, and Trincheras cultures, after about
C.E. 1200; they claim that the appearance of monumental architecture in these areas
signals the establishment of exclusionary property rights and the end of open access
to the area’s resources (including marine shell and obsidian). Such attempts grade
into discussions of not only economic interaction but also the construction of wider
regional systems, to which I will return under separate heading below.
2.2.2
Studies of Canals and Rivers: Paleohydraulics and Geomorphology
The second broad category of research relates to the flow of water, a crucial component
of the Hohokam trajectory and of clear relevance to both of the larger questions that
are the focus of this dissertation. This category encompasses two different lines of
research, the first examining specifically the operation of canal systems, and the
second exploring the implications of diachronic changes in river channel morphology on
the Hohokam’s ability to exploit them as resources.
Canal Studies and Paleohydraulics The large-scale irrigation systems of the Hohokam
were obviously essential to the people who created them, and are equally important
to modern investigators hoping to understand them. Interest in the canal systems began early; a series of articles by Omar Turney (1929a, 1929b, 1929c, 1929d) outlined
a number of investigations that had already taken place (which is, of course, unsurprising, given that the inhabitants of the new city of Phoenix needed to deal with
water in much the same way as the Hohokam had, and also had to understand the
extant canals both prehistoric and historic), and provided a map that impressively
shows the extent of the prehistoric irrigation works. Work on mapping the canals has
included aerial photography (e.g., Midvale’s work in the 1930s, cited as background
for Showalter’s work in the 1990’s [Showalter 1993]) and continues to this day.
Others have investigated the canals more recently and have focused specifically on
the engineering and social challenges of building, maintaining, and operating a large-scale irrigation system. Woodbury excavated a pair of canals near Pueblo Grande
(1960), and discussed these and the canal system in general (1961); in this latter
work he speculated that the canals could be built slowly over time, requiring no great
effort and growing in extent only through accretion.
Busch et al. (1976) made one of the earliest efforts to understand the Hohokam
canals in a more purely engineering sense, noting that channel cross-section and slope
are more important than seepage or evaporation in determining the amount of water
delivered by a given canal, and estimated flow capacity using the ‘Manning’ equation,
which relates water flow velocity to the slope of the canal, the cross-sectional area
of water flow, a linear measure of the channel surface touched by the flowing water
(the ‘wetted perimeter’), and a friction coefficient based on the character of the surface
of the channel bed. This was later echoed by Masse (1981), who further speculated
that the canal systems around Pueblo Grande might have been capable of draining
the entire Salt River.
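To make the form of this calculation concrete, the following sketch (in Python, my own illustration rather than a reproduction of Busch et al.'s or Howard's calculations) applies the Manning equation to a hypothetical trapezoidal cross-section; the dimensions, bed slope, and roughness coefficient are placeholder assumptions, not measurements from any excavated Hohokam canal.

    import math

    def manning_discharge(area_m2, wetted_perimeter_m, slope, n):
        """Estimate discharge (m^3/s) from the Manning equation:
        velocity = (1/n) * R^(2/3) * S^(1/2), with hydraulic radius R = A / P;
        discharge is velocity times the cross-sectional flow area."""
        hydraulic_radius = area_m2 / wetted_perimeter_m
        velocity = (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)
        return velocity * area_m2

    def trapezoid_geometry(bottom_width_m, depth_m, side_slope):
        """Flow area and wetted perimeter for a trapezoidal channel;
        side_slope is the horizontal run per unit of vertical rise."""
        area = (bottom_width_m + side_slope * depth_m) * depth_m
        wetted_perimeter = bottom_width_m + 2.0 * depth_m * math.sqrt(1.0 + side_slope ** 2)
        return area, wetted_perimeter

    if __name__ == "__main__":
        # Illustrative values only: 2 m bottom width, 1 m flow depth, gentle side
        # slopes, a bed slope of 1.5 m per km, and a roughness typical of earthen channels.
        area, perimeter = trapezoid_geometry(bottom_width_m=2.0, depth_m=1.0, side_slope=1.0)
        q = manning_discharge(area, perimeter, slope=0.0015, n=0.025)
        print(f"Estimated discharge: {q:.2f} m^3/s")

Even so small a sketch makes visible the sensitivity of delivered flow to cross-section and slope that Busch and colleagues emphasize.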
The most impressive example of research in a similar vein is Jerry Howard’s investigation of what he terms Paleohydraulics (Howard 1993b, 1990). These studies¹
directly address the engineering challenges in constructing a canal system; such challenges are considerable, and by themselves they impose boundaries on what can and
cannot be accomplished. One important outcome of his work is the recognition that
canal systems are very nearly impossible to build using the ‘accretionary’ model
proposed by Woodbury: expanding a canal system alters the way that water flows
through it in ways that are difficult to predict. Graybill et al. (2006) extend this
by suggesting (following other commentators) that the canals may have been built to
manage water in excess of the usual demand in order to mitigate flooding. They
also note that differences between the Salt and Gila river systems would have been
determined by such factors as the topography through which the canals would have
to be built and even the sediment loads on the two rivers (Gila canals, transporting
water with a higher sediment load, would have required more labor to maintain and
clean). The engineering details of how these canal systems would have worked must
be taken into consideration when asking about the social organization that must have
existed to support their construction and management.
¹ To which the HWM Simulation effort owes a debt that cannot be overstated.
An additional aspect related to the study of the canals has to do with the delivery
of nutrients to the fields, and especially with the accumulation of deleterious chemicals
that would also have been transported by the river water, primarily salts. This effect
would have varied based on the properties of the canal systems and according to
the positions of the fields along them, and it has been suggested that long-term
anthropogenic change in soil chemistry contributed to the downfall of the overall
system. This is a view that Haury himself mentioned (1976; see Bayman 2001) but
rejected, noting that sites not dependent on irrigation were abandoned at about the
same times as ones that were; it is raised again by Ackerly et al. (1987).
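The logic of this argument can be expressed, purely as an illustration, as an annual salt balance for a single irrigated field; the irrigation depth, salt concentration, and leaching fraction below are invented placeholders rather than reconstructed values for any Hohokam field.

    def simulate_salt_balance(years, irrigation_depth_m, salt_concentration_kg_m3,
                              leaching_fraction, initial_salt_kg_m2=0.0):
        """Track salt stored per square meter of field over repeated irrigation.
        Each year irrigation water deposits salt in proportion to its concentration;
        a fixed fraction of the stored salt is then removed by leaching."""
        salt = initial_salt_kg_m2
        history = []
        for _ in range(years):
            salt += irrigation_depth_m * salt_concentration_kg_m3  # added by canal water
            salt *= (1.0 - leaching_fraction)                      # removed by leaching
            history.append(salt)
        return history

    if __name__ == "__main__":
        # Placeholder values: 1 m of irrigation water per year carrying 0.5 kg of
        # dissolved salts per cubic meter, with 10% of stored salt leached annually.
        record = simulate_salt_balance(years=200, irrigation_depth_m=1.0,
                                       salt_concentration_kg_m3=0.5,
                                       leaching_fraction=0.10)
        print(f"Salt load after 50 years:  {record[49]:.2f} kg per square meter")
        print(f"Salt load after 200 years: {record[199]:.2f} kg per square meter")

In this caricature the stored salt climbs toward a ceiling whenever any leaching occurs, which is one reason the argument turns on canal properties and field position rather than on irrigation per se.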
Geomorphology A related area of investigation examines the impact on Hohokam settlement and irrigation of aspects of the landscape they inhabited. An example of such
research is Rice (1998), who contends that channel morphology dictated the locations of effective headgates and thus constrained the choices of actors competing for the most effective positions for canal systems, ultimately ensuring that competing blocks, rather than centralized authority, would arise. Research in geomorphology need not
assume the landscape is static, but can ask how that landscape changed through
time. Changes to the landscape can include aggradation and entrenchment, which by
themselves can affect settlement location (Ellis and Waters 1991). An arguably more
direct impact can be changes to the morphology of the rivers on which the Hohokam
depended. Waters and Ravesloot (2000) document that the Gila riverbed downcut
and significantly widened roughly during the late 11th and early 12th centuries AD,
and argue that this contributed to the reorganization of Hohokam lifeways that began
during that period and led into the Classic period (Waters and Ravesloot 2001); the
effect arose because a wider and deeper channel- particularly one that is widened
by some specific occurrence but later carries only the same volume of water as the
original- leads to greater difficulties in diverting water into a canal system.²
² This line of research led to a debate with Ensor et al. (2003), who argued that the original work failed to incorporate the broader perspective of Political Ecology; Waters and Ravesloot responded to this in 2003.
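The hydraulic intuition behind this argument can be sketched numerically: for a fixed discharge, the Manning equation implies a shallower flow in a wider channel, leaving the water surface lower relative to an intake built for the original geometry. The figures below are illustrative assumptions, not the measured Gila channel.

    import math

    def normal_depth(discharge_m3s, width_m, slope, n, tol=1e-6):
        """Solve for flow depth in a rectangular channel by bisection on the
        Manning equation: Q = (1/n) * A * R^(2/3) * S^(1/2), A = w*d, P = w + 2d."""
        def discharge_at(depth):
            area = width_m * depth
            radius = area / (width_m + 2.0 * depth)
            return (1.0 / n) * area * radius ** (2.0 / 3.0) * math.sqrt(slope)

        low, high = 1e-6, 10.0
        while high - low > tol:
            mid = 0.5 * (low + high)
            if discharge_at(mid) < discharge_m3s:
                low = mid
            else:
                high = mid
        return 0.5 * (low + high)

    if __name__ == "__main__":
        # The same illustrative discharge routed through a channel before and
        # after a hypothetical widening event.
        q, slope, n = 30.0, 0.001, 0.030
        for width in (20.0, 60.0):
            depth = normal_depth(q, width, slope, n)
            print(f"width {width:>4.0f} m -> flow depth {depth:.2f} m")

Under these assumed values the widened channel runs markedly shallower for the same flow, which is the mechanism by which widening would make diversion into existing canals more difficult.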
2.2.3
Studies of Agriculture and Subsistence
The large-scale irrigation systems in the core areas of the Hohokam world were of
irrefutable importance to Hohokam subsistence; and the key role that was played by
maize is equally undeniable. These facts have prompted researchers to follow two
related lines of research: first, to track the availability and timing of the delivery
of water by the canals (e.g., Graybill et al. 2006), and second to reconstruct the
properties of the varieties of maize available to ancient farmers (see Muenchrath 1995
and Muenchrath et al. 2002 for examples). The properties of maize that are of
interest range from the plant’s need for water (and the timing of that need during its
growing season) to its caloric yield, and the combination of the plant’s properties and
the timing of water impacted whether double or only single cropping was possible.
The sharpening of our information about the Hohokam agricultural calendar is of
great importance (Hunt et al. 2005).
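A minimal sketch of how such reconstructions might be combined: given a monthly index of water availability and a crop requiring some number of consecutive months of adequate water, one can ask how many plantings fit within a year. The monthly values, threshold, and four-month season below are placeholders, not reconstructed streamflow data or the maize parameters discussed by these authors.

    def count_possible_plantings(monthly_water, months_required, threshold):
        """Count non-overlapping planting windows in a twelve-month calendar.
        A window is usable if every month in it meets the water threshold;
        windows are packed greedily and are not allowed to wrap past December."""
        plantings = 0
        month = 0
        while month <= len(monthly_water) - months_required:
            window = monthly_water[month:month + months_required]
            if all(w >= threshold for w in window):
                plantings += 1
                month += months_required  # the next crop starts only after harvest
            else:
                month += 1
        return plantings

    if __name__ == "__main__":
        # Placeholder availability index, January through December, loosely shaped
        # like a spring runoff peak followed by a weaker late-summer peak.
        water = [0.4, 0.6, 0.9, 1.0, 0.8, 0.3, 0.7, 0.9, 0.8, 0.5, 0.4, 0.3]
        crops = count_possible_plantings(water, months_required=4, threshold=0.6)
        print(f"Plantings supported under these assumptions: {crops}")

The point of the sketch is only the structure of the question: change the water schedule or the crop's requirements and the number of feasible plantings changes, which is why sharpening the agricultural calendar matters.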
Despite the centrality of irrigated maize agriculture, the focus of our understanding of Hohokam subsistence must be broadened. I have mentioned the relative richness
of the Sonoran desert; this richness has another expression in the wide array of cultigens available to the Hohokam, and it is unsurprising that the Hohokam developed an
equally broad array of techniques to grow these plants, techniques adapted to varying
conditions that were found at different places throughout the Hohokam world. These
techniques ranged from artificially creating fields in the floodplain (Schaafsma 2007)
to creating berms, check dams, and other features appropriate to the specific microenvironment, even within zones of a single community (Fish 1989; Fish et al. 1992b).
Crops other than maize (for example, agave) were important to the Hohokam (Fish
et al. 1992a; Fish and Fish 2004), and their properties have also begun to receive
attention in the same way as maize (see Leach 2007 for an example). The class of
plants that were important to the Hohokam is undoubtedly very large; P. Fish notes
“Farming and gathering may not properly be defined as polar categories in Hohokam
subsistence” (1989, p. 46), and there is evidence (Dean 2005) that in the earliest
stages of adoption domesticated cultigens were only a part of a subsistence strategy
that was not completely sedentary but remained quite mobile, presumably exploiting
a range of resources in different areas. Finally, in addition to empirical studies of
plants and archaeological investigations of Hohokam farming practices, ethnographic
work (e.g., Castetter and Bell 1942, Wilson 2003) and consultation with informants
from the probable Hohokam descendants (Fish 2006) contribute important information to our understanding of the range of opportunities available to the Hohokam.
Collectively these approaches offer a wealth of information about the Hohokam trajectory. However, as with the view offered by the culture-history approach, important difficulties remain. We are offered a picture of a Hohokam society that mirrors
and was impacted by changes that occurred far beyond its own horizon; that relied on
irrigation agriculture, operating under the inarguable constraints of hydraulics and
impacted by short- and long-term changes brought about by their own efforts and
by geological dynamics beyond their control; that was linked through trade internally and beyond its horizons; and that practiced an agriculture based, potentially,
on a myriad of plant resources exploited with an impressive technology that we are
only beginning to understand. But we are also left with many questions: about the
nature of the Hohokam’s links to their neighbors; about the means to overcome the
challenges of managing water flow through a large canal system; about the nature of
economic interaction in the Hohokam world, including what was produced and traded
and who controlled that trade; and whether the opportunities afforded by different
plant resources could have been merged into a subsistence strategy that provided
benefits beyond our initial assumptions. There are others, too, and they are, again,
our modeling issues of scope, resolution, composition, and coherence. But the most
pressing question for our purposes is one of dominance: is the Hohokam trajectory to be explained primarily by one of these storylines- the simple accumulation of
salt on fields, for example, or the downcutting events on a major river system- or is
it some combination of all of these, with none of them holding the single answer or
primary cause?
2.3
The Hohokam Social System
The preceding section focused on approaches that examined elements of the natural
environment or that were the products of Hohokam efforts, but largely ignored
the people themselves; this section reintroduces them and focuses on the structure of
Hohokam society.
One strategy to gather information about the structure of Hohokam society takes
a regional approach and compares site sizes and settlement hierarchies across the
Hohokam area (see Fish et al. 1992b for an example); this is done in conjunction
with an analysis of markers that indicate special site function, such as ball courts and
platform mounds, and we have noted that this yields part of the larger picture of the
changes in Hohokam society with the onset and progression of the Classic Period.
A second approach attempts to apply templates of social organization to the Hohokam.³ The Hohokam have only rarely been subjected to analyses in the terms offered by neoevolutionary approaches involving “bands, tribes, chiefdoms, and states”;
these concepts are “at best an uneasy fit” (Fish 1999, p. 45). The elements of the
archaeological record typically interpreted as evidence of the social differentiation and
distinct roles associated with these kinds of social arrangements are generally lacking,
and Yoffee et al. (1999) make cogent arguments for discarding these units completely
in the Hohokam case (but see Doyel 2007 for a contrasting view). In the absence
of such markers, other researchers look to ideas drawn from other, similar contexts.
Rice (1998, 2000) suggests that the conditions of managing the canal systems were
such that they promoted a balance between cooperation and competition (including
violence) that fostered a ‘segmentary’ organization among relatively equal communities; Wilcox (1999) goes further, and invokes a segmentary state model (per Southall
1988) explicitly.⁴ Still others (Bayman 2001; Fish and Fish 2000; Harry and Bayman
2000; Mills 2000) draw from the theories of Blanton et al. (1996a, 1996b), and have
suggested that “Hohokam society may have contained dual systems of leadership
that involved both network [individual] as well as corporate [group] power-seeking
strategies” (Bayman 2001, p. 294).
³ For a critical review of this same process applied to another nonstate yet complex society, the Maya, see Murphy 2000.
⁴ Wilcox also proposes a ‘peer-polity-interaction’ model (per Renfrew and Cherry 1986) for most of the southwest but avoids applying these to the Hohokam specifically, preferring to consider the Hohokam as a ‘regional system’ in which ceremony and exchange linked the disparate Hohokam communities.
A third approach focuses more specifically on the management and operation of
the canal systems within the core areas; these presume organizational difficulties in
the systems’ operation and ask how the structure of Hohokam society may have been
related to these challenges. The connection between this second domain and the
‘hydraulic’ hypothesis of Karl Wittfogel (see Mitchell 1973) should be clear, but as
Mitchell (1973) notes, it cannot be assumed that large-scale irrigation works require
centralized authority. However, the construction, maintenance, repair, and operation of the largest Hohokam canals undoubtedly required the collective effort of large
numbers of people (particularly if Howard’s refutation of Woodbury’s speculation
that the canals could be constructed in an accretionary way is accepted), and, further, impacted virtually all of the residents of the core areas. Water has an iron
logic and its dynamics can be unforgiving. Unfortunately the range of possible social arrangements in the Hohokam case is wide; Scarborough (2003) has observed that water management schemes, and the character of the social arrangements that support them, exist in a rich array of forms that depend on the challenges specific to each environment, and these forms offer different opportunities for the social arrangements that can deal with them most effectively. Howard and Abbott (per Hunt
et al. 2005) both propose that there existed levels of management but that there was
some integration at levels above that of individual canals; Abbott does so on the basis of ceramic evidence that stretches across canal systems, while Howard makes the
case on the evidence of the structure of the canal systems, the challenges that these
would have imposed on the various parties attempting to use them simultaneously,
and ethnographic analogy, additionally proposing a shift from corporate to network
strategies in the later stages of Hohokam history. Hunt et al. (2005) essentially concur that higher-level organization must have existed, but attempt to find analogous
examples from the ethnographic record. They cite ethnographic cases suggesting that command areas exceeding 1,800 ha typically encompass several villages; the area irrigated by, in their example, Canal System 2 exceeded this threshold, and thus must have required some arrangements among the different parties involved and was too large to be acephalous. Ultimately, however, they are forced to conclude that
no available ethnographic analogies can be easily applied to the Hohokam case.
We can here echo the concluding remarks given in the preceding two sections, for
as in the case of the culture history approaches and the more recent archaeological
approaches that have studied the Hohokam, these present us with some solid clues- the need for some form of organization for building and managing irrigation works, the apparent settlement hierarchy and the ways that this changes through time, etc.- but also leave us with larger holes. But here, fortunately, the story need not end, because
there is another set of approaches that offers a means to integrate the social and the
natural components of the Hohokam phenomenon. These consider the Hohokam,
including both their social relationships and institutions and their physical landscape
and the natural processes therein, within an integrated framework as what may be
termed a “coupled human and natural system”.⁵ These approaches, however, also
open the doors to new ways of understanding the Hohokam- new questions to be
asked and new kinds of answers to be gained- and these demand a new approach in
their pursuit.
⁵ This term is used by the National Science Foundation; see http://www.chans-net.org/.
2.4
Coupled Human and Natural Systems: Approaches to
Complexity
We can introduce the coupled human and natural systems approach by referring to
Chapter 1’s discussion of the Lansing and Kremer model for Balinese irrigation. A
key point from this model is that the system it revealed included components of the
natural landscape, the irrigation system, and a set of social relationships and institutions; although coming from very different domains each element was shaped by
the operation of the system to which it belonged. The Balinese example is one of
managing an irrigation system without a central authority, and as we do not have
evidence of a central authority in the Hohokam case it is an easy leap to ask whether
the Hohokam florescence might also have operated in a similar fashion, and if so how
this system might have varied along with the shifts in the long-term Hohokam trajectory. If it did, we can then ask which of the elements in the natural landscape, what
characteristics of the irrigation system, and what social relationships and institutions
participated in this system, how they were shaped by it, and how this system changed
through time.
Broadly, the Bali example is an instance of a complex adaptive system (in the sense
defined by Miller and Page 2007); it illustrates an important principle found in such
systems: the principle that local effects can lead to changes in global properties of the
system- in this case the local decisions of the subaks led to a global improvement (and
even optimization) in the productive capacity of the system as a whole. We would
like to know if these kinds of dynamics are at play in other systems generally, and,
for our purposes here, if they can be applied to our understanding of the Hohokam
case. The difficulty, and opportunity, that this offers is that such systems can best be
understood through an approach not typically applied in archaeology. To understand
how the components of such a system interact requires asking new kinds of questions
about it. Commonly, this requires asking not only how the system acted under the
conditions we observe, but also how it would have reacted to a wide array of other
possibilities; in the archaeological context, this leads to the playing out of alternative
histories. The Lansing and Kremer model provides an example of this at work. In the
basic case of their model, subaks observed the success of their immediate neighbors
to determine whether to alter their farming strategy for the next year by matching
their most successful adjacent subak; the result was a pattern of synchrony of cropping
patterns and an optimization of the production from the system as a whole. However,
when subaks were allowed to observe neighbors beyond their immediately adjacent
subaks, the system became overconnected and failed to optimize. Our understanding
of the system is improved through demonstrations like this; they require, however,
that we consider the system not only as it was but as it might have been.
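To show how such an alternative-histories experiment is posed, the following is a deliberately toy caricature of neighbor imitation, not Lansing and Kremer's model: agents on a ring choose one of several planting schedules, pay an invented pest penalty for being out of step with immediate neighbors and an invented water penalty proportional to how many agents system-wide share their schedule, and each year copy the schedule of the best performer within a given imitation radius.

    import random

    def run_imitation(num_agents=100, months=4, years=50, radius=1, seed=1):
        """Toy caricature of neighbor imitation on a ring of irrigators."""
        rng = random.Random(seed)
        schedule = [rng.randrange(months) for _ in range(num_agents)]

        def yields(sched):
            counts = [sched.count(m) for m in range(months)]
            out = []
            for i, s in enumerate(sched):
                left, right = sched[(i - 1) % num_agents], sched[(i + 1) % num_agents]
                pest = (left != s) + (right != s)   # mismatch with immediate neighbors
                water = counts[s] / num_agents      # crowding on this schedule
                out.append(1.0 - 0.25 * pest - 0.5 * water)
            return out

        for _ in range(years):
            y = yields(schedule)
            new = []
            for i in range(num_agents):
                neighborhood = [(i + d) % num_agents for d in range(-radius, radius + 1)]
                best = max(neighborhood, key=lambda j: y[j])
                new.append(schedule[best])
            schedule = new                          # all agents update synchronously
        return sum(yields(schedule)) / num_agents

    if __name__ == "__main__":
        for r in (1, 10):
            print(f"imitation radius {r:>2}: mean yield {run_imitation(radius=r):.3f}")

Nothing about these invented payoffs should be read back onto Bali or the Hohokam; the sketch only illustrates how a single parameter- the reach of imitation- becomes the experimental variable in a comparison of histories that did and did not occur.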
The opportunities offered by approaches like this are wide and encouraging. Two
of these were discussed in Chapter 1. One of these is resilience, first introduced by
Holling (1973) and applied to the study of ecosystems. Resilience is the property of
some systems to endure perturbations and yet maintain systemic integrity; they do
not necessarily return to some equilibrium point, but provided the perturbation is not
too extreme they keep their state variables within ranges of accepted values and relationships. Holling later (2001) added to this, in a discussion that now included human
social systems, that certain qualities that permit a system to remain resilient are opposed to other properties that might make it in some way more efficient. Systems, he
argued, cycle among conditions in which resilience, potential, and connectedness are
interrelated. The least resilient states are those in which connectedness and potential
are high; in these conditions, the system has adapted to one set of conditions, but
the adaptations have become increasingly specific to a narrow range of circumstances.
When confronted with perturbations, systems in this state must reorganize.
The second concept, robustness, pertains to the ability of systems to change and
to withstand changing conditions. While difficult to define, it may be thought of as
a somewhat paradoxical effect in which, by virtue of the properties of the initial
system, failures, unexpected in their specifics but of anticipated kinds, contribute to
long-term success (sensu Jen 2005). Wagner (2005) provides an extended discussion
of this property in living systems at multiple levels from the basic alphabet of DNA
through larger ecosystems.
The question raised is whether the long-term success of the Hohokam system, as
well as its ultimate collapse, might not best be understood through the application of
these principles. Was the Hohokam system resilient to some kinds of challenges, but
ultimately not to others? How might the Hohokam system have responded to different
kinds of challenges, or challenges at different points in its history? Was the frequent
destruction of canal infrastructure by flooding an example of a failure to which the
Hohokam system was robust, so that the flexibility required to confront it was a key
component in the system’s long-term success? These frameworks appropriately redirect the focus of our inquiry to the persistence as well as the eventual end of the Hohokam
florescence. They additionally ask us to reconsider what, precisely, we are saying
persisted and then either ended or was reconfigured; it is worth noting in this respect
that the Hohokam’s irrigation system did not, strictly speaking, come to an end:
many of its channels were put to use again during the historic period, so that Turney
(1929b) could enumerate which modern canals followed which ancient ones.
What must have ended for the Hohokam trajectory was not the physical infrastructure,
but some component of the Hohokam’s ability to use it; the infrastructure remained
viable, and today forms part of the new system that supports the millions of modern
people living along the Salt River.
This direct link between past and present in the Hohokam case holds great appeal,
but even without it there is a logical link that is equally important. Questions drawn
from paradigms like robustness and resilience not only direct our efforts to understand the Hohokam system, but ultimately help work toward fulfilling a further promise
of such approaches: that the results may contribute to our understanding of our
own society’s trajectory and its resilience to expected and unanticipated kinds of
perturbations. The thrust of these efforts, then, is not only to understand a specific
context in the past, but to work toward more general principles that can be applied
across contexts.
2.5
A Modeling Approach
Emil Haury, in commenting on the frustrating problem of Hohokam origins and development as they were understood in the 1960s, wrote that “the only way to solve
the dilemma was by launching new studies, and the only reasonable way to argue the
points under contention was with the shovel” (Reid and Whittlesey 1997, p. 85). For
the problems of culture history, and given the evidence available at the time Haury
was speaking, this was an entirely acceptable approach. But Hohokam archaeology,
reflecting a wide range of trends in archaeology generally, has moved in a different
direction- or, perhaps better, in several of them. The various approaches to the Hohokam discussed above, and the new possibilities offered by complexity theory and
concepts such as resilience and robustness, raise a suite of questions of very different
kinds.
This was the motivating factor behind a workshop held in 2003, entitled “Social Dimensions of Hohokam Irrigation: Perspectives Across Cultures and Time.”
The general purpose of the workshop was to examine what old questions could be
answered, and what new questions asked, concerning the character and degree of Hohokam social organization and complexity. Confronted with these new challenges, one
recommendation of the workshop was the recommendation that the problem could be
addressed through the creation, drawing on Lansing’s example among many others,
of a simulation model. This model, the justification, design and structure of which
occupy the bulk of the remainder of this dissertation, came to be called the Hohokam
Water Management Simulation, or HWM.
Multiple goals were set for the HWM Simulation; some of these were open-ended.
Let me return to the observation that the domains of research I discussed in the
second section of this chapter are all interrelated: that irrigation, climate, agriculture,
economic interaction, etc., all impact one another. Consider, then, how each of these
individual components might contribute to some broader explanation regarding the
two overarching questions about the management of the canal systems and the longterm trajectory of the Hohokam. Could we find that some elements are crucial and
some are irrelevant? In fact, there is a long list of unknowns, including:
• We do not know where to draw boundaries for the Hohokam; possibly the
dynamics of cooperation along the canal system had little to do with the core-periphery dynamics of the River and Desert Hohokam, and these were, in turn,
effectively unrelated to the connections of the Hohokam to the broader southwest. Conversely, these connections may have been key.
• We do not yet fully understand in detail the engineering challenges of building
and operating the canal system given the specific details of the landscapes in
which the Hohokam lived.
• We do not know the repertoire of foodstuffs available to the Hohokam nor the
potential agricultural calendars that could have been supported.
• We do not know if the Hohokam collapse could be attributed to purely internal
and unidirectional processes, such as the inexorable buildup of salt concentration due to repeated irrigation, or if complex system dynamics were at work
and may describe a process with a nonlinear trajectory.
• We do not know (and cannot presume) that robustness, resilience, or complexity
theory can be applied to the Hohokam case; the Hohokam case must be explored
as a possible complex system, keeping open the possibility that the kind of
dynamics we see in complex adaptive systems were not in play at all.
These concerns reflect the suite of issues presented in chapter 1; they are issues
of scope, resolution, composition, and coherence. The interests and concerns raised
from the varying viewpoints of the conference participants reveal that these issues
are unresolved. Ultimately what actually happened involved all of the issues raised;
however, our understanding of what happened may need to include some elements
and not others. It may need to be broad or might be sufficient even if narrow, but
there is no a priori way to decide this.
For example, the conference attendees often differed in the scale at which they
were interested in the Hohokam problem. Some believed that the long-term trajectory
of the Hohokam system could be understood by asking questions at a time scale
of decades, i.e. periods of relative drought vs. relative moisture that might span
generations. Others believed that important elements had to be understood at a time
scale of days, i.e. the management of scarce water across a landscape of potentially
competitive farmers. It is difficult to refute the former position without testing the
latter; in any case, it is clear that some questions of interest, such as the resilience
of the social organization that structured Hohokam society, have both source and
implications in both time scales.
A second issue of scale is raised by several of the discussions cited earlier in this
chapter. We would like to compare the Hohokam system with other irrigation works,
but the most comparable would be works of similar scale. But scale must be addressed
in terms of other elements in the system. What makes the Bali system comparable,
or not, may not be the length of the irrigation system nor the area of the fields that
could be irrigated, but rather the number of decisionmakers and/or the structure of
the network of how those decisions affect the other participants. This illuminates a
distinction between the modeling approach and that of ethnographic analogy, in that
the ethnographic approach treats a wide array of variables as invisible to us but
presumed to act in similar ways across contexts, so that, by way of example, area
irrigated- which is presumed to involve similar labor and transportation costs as well as yield- provides a convenient proxy for what we assume to be constant or nearly so
as we move from context to context. The modeling approach allows- indeed, often
requires- us to consider such issues from the bottom-up perspective, in which we must
explicitly define the components to be used and what this implies when situations of
comparable ‘scale’ are to be compared.
This leaves us with a modeling interest in an array of questions that are all tied
to a single domain, but a range of options and opportunities within that domain.
One strategy for attacking these is to preface the more general questions of social
complexity, resilience, etc., with specific ones that are, in effect, prior to the larger
questions we would like to ultimately address, but that represent parts of the picture
that we might be able to provide before we have all of the information we would
need to attack the larger issues. These questions may be of any of the four kinds
I presented in the opening chapter- reconstructive (“what was”), delimitive (“what
was necessary”), hypothetical (“what if. . . ”), and causal or explanatory (“why”), but
the list of these questions is not known a priori. It is worthy of note that some of the
questions that were raised by preceding approaches are omitted, so that the simulation
is not a model of everything: for example, the issue of cultural continuity that so
occupied the early Hohokam researchers is pushed aside almost entirely. But the
vision for the simulation is also expandable; it is, first, open-ended, so that additional
questions not yet foreseen can be incorporated, and, second, accumulative, so that
as our knowledge of each domain increases what was used in that domain becomes
available in others. Rather than a single demonstration of one view of the Hohokam,
the HWM simulation is intended to be an expandable test-bed in which competing
views of the Hohokam can be played out on a single, unified platform, and wherein
our collective knowledge of the Hohokam will increase over time.
In short, the HWM Simulation derives its purpose and its structure from a range
of questions that must be considered collectively. This is a necessary danger. Models
that serve to address more than one question can fall into any of several traps, and
thus it is a practice that is generally to be avoided. Here, however, it is an inescapable
truth that the answer we get to any one of the questions will be related to the issues
involved in the others. More properly, any argument about one aspect could easily
be challenged by referring to another aspect. Ultimately the driving force behind the
open-ended approach to modeling is the recognition that the important pieces to the
Hohokam puzzle may not be recognizable in advance. Unlike Lansing and Kremer’s
ethnographic study, we cannot ask informants or make direct observations of the
Hohokam system ourselves. We must instead propose ideas and test them, making
use of other pieces of the puzzle as we need them. The HWM Simulation framework
is intended to permit this.
This returns us to the dilemma discussed in the first chapter: is it better to retire
to the abstract, single-hypothesis testing approach, or pursue the all-encompassing,
‘big-real’ approach? In the next chapter I will propose that there is, in fact, a middle
way: a modeling approach that embraces the challenges this kind of situation offers,
and a strategy for dealing with the difficulties and opportunities that it presents. I
will also show, however, that this modeling strategy takes us into unfamiliar logical
territory, especially when we look beyond a single archaeological context and apply it
to more general questions of complexity theory and the trajectory of human societies.
We lack an understanding of the appropriate logical structures: how several different
but related objectives can be pursued within a common architecture and crafted into
rigorous arguments. What is required is an intellectual toolkit that encompasses a
modeling framework with open-ended possibilities and the opportunity to balance
multiple goals. I present such a framework in the next chapter, before returning in
subsequent chapters to the specifics of the Hohokam case.
Chapter 3
Archaeological Models and Inferences:
Toward a new toolkit for exploring the
archaeological record
Models play an important role in science. But despite the fact that they
have generated considerable interest among philosophers, there remain
significant lacunas in our understanding of what models are and how they
work. (Frigg and Hartmann Spring 2008)
In Explanation in Archaeology, Gibbon (1989) notes that archaeology typically
deals with what he terms ‘open systems’. He characterizes open systems as those in
which:
“expected conjunctions do not necessarily occur, owing to the operation of
intervening mechanisms or countervailing causes. This mesh of influences
and cross-influences causes an instability of empirical relationships in open
systems in space or over time” (Gibbon 1989, p. 149).
Gibbon argues that these compel archaeology toward a scientific process that involves
“a constant interplay . . . between taxonomic and processual concerns, i.e. between
identifying the significant entities of concern and their explanation” (p. 103). Premo (in press) echoes Gibbon, and turns this observation into a call for archaeological modeling to
embrace an ‘exploratory’ approach. Similar calls have been put forward by others
(see the Mediterranean Landscape Dynamics project¹ for one example; also Hegmon
2003), and are growing stronger among those who want to investigate archaeology
from a complex systems perspective, where it seems appropriate and necessary.
¹ Online at http://www.asu.edu/clas/shesc/projects/medland/
One given is that the ‘modeling’ approach is sure to be one that involves computer
simulation modeling; the ever-increasing software toolkit permits investigations that were hardly conceivable at the time Gibbon wrote, and simulation is now, justly, considered an exciting avenue for archaeological research. There is, however, no
road map for an ‘exploratory’ approach; we do not know what such an approach
might look like. The fact that it must be called for explicitly, and that it is not
immediately embraced by the field of archaeology as a whole, suggests that it is novel
and unfamiliar; it can even be argued that it runs counter to the normal practices
of the field. And, further, because archaeological modeling tends to fall toward the
two poles discussed in Chapter 1- that is, between large-scale very detailed models
and small-scale abstract ones- there presumably exist some limitations that stand in
the way of the ‘exploratory’ approach, which might be argued to encompass and go
beyond both of these.
What is lacking, I propose, is a toolkit for the exploratory modeling approach.
Secondarily this refers to some kind of implementation- presumably software. But
primarily it refers to the intellectual toolkit that can be used to guide the activity.
What would an exploratory approach look like? What would be its goals? What
would be its permitted operations and its recognized constraints? In other words,
how would such an approach work?
In this chapter I propose that our current understanding of models and modeling
derives from two distinct and separate strands of intellectual history, from which we
inherit different pieces that apply differently to our scientific practices. One of these
strands, the later and predominant one, contains challenging ambiguities that add
to this confusion. From these strands we have received differing and contradictory
descriptions even of the basic intellectual operations of our scientific thinking: patterns of inference (such as deduction and, commonly, induction, but also others) are
defined in terms that elide and overlap. This in turn has left us with a less-than-clear
understanding of the role of models, which we have attempted to fill with the idea of
an experiment; this, I will argue, rests on a misunderstanding of the form of knowledge that models provide. Even more fundamentally, models and modeling rely on
epistemological foundations that are not clearly understood in any context. Applied
to the special tasks required in an exploratory archaeology targeted at potentially
complex systems, these points of confusion are magnified.
3.1
Models and Science in the 21st century: A Fractured
Toolkit
As a starting point to building such a toolkit we can look to broader literature on
modeling in science. The state of modeling as a practice in science is quite murky.
One recent overview (Frigg and Hartmann Spring 2008) outlined the following issues
related to modeling in science:
1. The ontological status of models is unresolved. Models may be physical things,
but may also exist purely in the realm of thought (a fact recognized with respect
to archaeological models by Clarke [1972a]). Among this latter category, models
may include descriptions, ‘fictional objects’, set-theoretic structures, equations,
or even (in some formulations) combinations of the above.
2. The means of representation- how a model is construed to stand for some other
phenomenon- can vary, and will certainly vary depending on the ontological
category of model being considered. Scale models, idealized models, analogical
models, and phenomenological models can be used in different ways.
3. The means of learning from models, including how the model operates to generate new knowledge, and how that knowledge is applied to the real world, is
poorly understood; in the case of computer simulation models, it is only newly
being explored.
4. The underlying epistemology of models (of each of the different kinds)- what
the modeling activity either requires of or implies about the real world and how
we gain knowledge of it- is not always clear.
It seems reasonable to argue that the ‘road map’ for exploratory archaeological
modeling should begin with a firm position on these issues. It is unlikely that this
position would be applicable- or, perhaps better, acceptable- across a wider domain
than that for which it is proposed; it cannot be taken to resolve the long-standing
issues of modeling in general. However, any framework offered should make the
position it takes clear to its audience, and show that the implications of that position
are useful with respect to the task being undertaken.
If the toolkit we have for thinking about modeling in the 21st century is fragmented
it is because of the history that has left it to us; if a new toolkit is to be created,
it will necessarily draw upon these pieces, even if selectively. A review of
this inheritance is thus useful, and the next pages will give an overview of the history
that has bequeathed them to us. It will be, necessarily, an abbreviated glance; while
the trajectory of 20th century intellectual history should not be painted with such
a broad brush as to oversimplify it, what is presented here is inarguably one of the
main story arcs: the rise of the intellectual stance called positivism and its subsequent
decline. The thread of positivism that will be the focus here was put forward by the
philosophers of the so-called Vienna Circle, and was termed Logical Positivism or
Logical Empiricism. Ultimately I will argue that the Logical Positivist programme has
much to offer to today’s efforts in archaeological thinking and, especially, simulation
modeling; these modeling efforts take up threads first spun by the Logical Positivists,
and the simulations that we create today are large-scale implementations of a kind
that they might have created had they possessed our technological aids.
3.1.1
Logical Positivism
The Logical Positivist program was a reaction against the work of Immanuel Kant,
which by the end of the 19th century had held sway among philosophers for nearly
a century. Kant had attempted to resolve a much older dispute between Cartesian
rationalism and Humean empiricism. His Critique of Pure Reason (Smith 1964)
provided a framework for classifying statements into four categories along two axes.
The first axis was a priori vs. a posteriori. A priori statements were those whose truth could be known without any experience, while a posteriori statements were those whose truth could be known only through experience with the phenomenological world. The second axis was
a distinction between synthetic and analytic statements. Analytic statements were
true by virtue of the definitions of the words themselves; the truth was ‘contained in’
the meanings of the words; synthetic statements, by contrast, required reasoning that extended beyond the meanings of the terms themselves.
Of the four possibilities, one (analytic, a posteriori) was impossible. Examples of
the other three include (these examples are traditional and appear in several sources,
for example Rey 2003):
• analytic, a priori: All bachelors are unmarried
• synthetic, a posteriori: All bachelors are happy
• synthetic, a priori: 7 + 5 = 12
Kant’s reasoning led him to the conclusion that synthetic a priori statements were
known to be true but were neither true by definition alone nor exclusively based on
experience. By permitting knowledge of this kind, Kant permitted philosophy to
explore metaphysical topics such as the existence of god or morality.
The Logical Positivists mounted an offensive against this. The crux of their argument was that statements like 7 + 5 = 12 were actually analytic, not synthetic.
Kant’s formulation of an ‘analytic’ statement had relied on the idea that one concept
‘contained’ another, as the concept of ‘bachelor’ might be said to contain the concept
of ‘unmarried’. The Logical Positivists contended that a better interpretation was
that the truth of ‘All bachelors are unmarried’ actually derived from the fact that
the concepts could be manipulated in accordance with the rules of language to reveal
an incontrovertible structure. In the above example, ‘bachelor’ could be replaced
with a synonym, ‘unmarried male’, leading to the structure “All unmarried males are
unmarried”; this is a structure that essentially says ‘All X’s are X’, which is true by
virtue of the rules of language, not because of the conceptual content of the terms.
With this definition of analytic, statements like 7 + 5 = 12 were brought out of the
synthetic category and into the analytic, thus folding all of math and logic into a
domain in which truth value could be said to be independent of experience.
The starting points for these deductive chains were empirical statements. Proper
science, the Logical Positivists’ argument went, began with observable phenomena.
In this way also they excluded metaphysical topics; statements that dealt with metaphysics they regarded as not merely false but meaningless (Feigl 1969). Hence a
properly scientific argument began by being grounded in observations and continued
through long sequences of deductions from these observations. Ultimately this led to
a framework known as ‘verificationism’: the belief that meaning is equivalent to the
sensory effect associated with a given term (this is a dramatic oversimplification, but
see Mayhall 2003 for more detailed discussion).
Modeling and Formal Logic: Two Views What of the Logical Positivist program and
models? One touchstone for understanding the role of modeling in science is the
relationship held to inhere between ‘model’ and ‘theory’; Frigg and Hartmann (Spring
2008) examine this in depth. One strong view, in keeping with the Logical Positivist
programme and persisting among some circles today, is that models are pale reflections
of theory. Theory is the formal statement, in axioms and deductions from those
axioms, of what is known about a given category of exploration or field of inquiry.
Models, conversely, are merely imperfect doppelgangers of theory; they are useful for
describing theory and communicating about it, and in some cases can be used as
placeholders for known gaps in theory, but they stand to theory as toys, and nothing
more. This view is commonly called the syntactic view of theories.
A second definition of the relationship between theory and model exists, which
is also strongly grounded in the kind of formal logic that the Logical Positivists
advocated; this is still in use by logicians, and will be useful to us later. Theory,
by this definition, is a collection of axioms and deductions from these. Each theory,
however, can be instantiated with any number of interpretations; an interpretation
essentially takes the variables of the theory and posits concrete values for them. Some
interpretations are consistent with the theory, and some are not; for a given theory, the
set of all interpretations that are consistent with the theory, such that all the theory’s axioms and deductions remain true for the set of values in each interpretation, is
called a model (Hamilton 1978).
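The logician's usage can be made concrete with a deliberately tiny example of my own (not drawn from Hamilton): treat a 'theory' as a set of propositional axioms, enumerate every assignment of truth values to its variables, and keep the assignments under which all the axioms hold; that set of satisfying interpretations plays the role of the model in the sense just defined.

    from itertools import product

    def models_of(axioms, variables):
        """Return every interpretation (truth assignment) satisfying all axioms."""
        found = []
        for values in product([True, False], repeat=len(variables)):
            interpretation = dict(zip(variables, values))
            if all(axiom(interpretation) for axiom in axioms):
                found.append(interpretation)
        return found

    if __name__ == "__main__":
        # A toy theory: "if it rained, the canal carries water" and "it rained".
        axioms = [
            lambda v: (not v["rained"]) or v["canal_has_water"],  # rained -> water
            lambda v: v["rained"],
        ]
        for m in models_of(axioms, ["rained", "canal_has_water"]):
            print(m)

Only the interpretation in which both statements are true survives; adding or removing axioms changes which interpretations remain, which is the sense in which the model is relative to the theory.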
This kind of logic both embodied and relied upon the rebuttal of the Kantian
‘analytic/synthetic’ distinction. It ultimately required a new kind of logic, which
came to be called the Predicate Calculus; this replaced the simple syllogism with
a more formal kind of logic derived from set theory. I will return to the Predicate
Calculus in Chapter 4. For now it is enough to note that it allowed statements in
natural language to be replaced with abstract variables, and the operations of logic to
be rewritten formally as one might write statements in mathematics (indeed, the power of the logical system was such that one goal was to rewrite mathematics in it; see Mayhall 2003 and Hamilton 1978 for comments on this); this sundered
the operations of logic from the meanings of the terms, so that Kantian notions of
the ‘containment’ of meaning were irrelevant. The result was that long chains of
deductive argument could be held to be true based on their structure, a reliance on
the correctness of the formal operations, and on the arguments’ initial statements.
This framework is analogous to computerized simulation models. We have technological means that allow us to go far beyond what was possible in turn-of-the-century
Vienna, and simulations can take many forms that sometimes seem to bear little
relation to a chain of logical deductions. I will argue that there is, nonetheless, a
fundamental congruity between creating a simulation and implementing a chain of
logical deductions, in just the way the Logical Positivist program described. However,
during the 20th century the foundations of this approach were attacked and rebutted;
what had been the major movement in the philosophy of science and had attempted
to become the template for scientific practice and progress was eventually abandoned.
If inspiration is to be drawn from the Logical Positivist programme, the criticisms of it must be addressed and understood. Some of these criticisms will also shape our modeling efforts today, both by contributing to them and by showing their limitations.
3.1.2 Postpositivism
The Logical Positivists’ program rose to prominence in the early part of the 20th
century, but was to fade quickly. In tracing out the impact of Logical Positivism in
archaeology, Gibbon (1989) focuses on both internal and external components of the
LP programme that made it falter and eventually led to its status as a ‘thoroughly
discredited’ approach. He addresses a suite of problems in both domains, but for the
purposes of this essay, three are of the most importance: the eventual dissolution of
the idea that a series of deductive arguments could unambiguously create statements
confirmed to be true; the idea that deductive efforts could isolate single elements
of some framework of belief and test these in isolation; and the refutation of the
underlying epistemological stance of Logical Empiricism.
The central argument of the Logical Positivists, the analytic vs. synthetic distinction, was attacked by Quine in "Two Dogmas of Empiricism". Quine argued:
It is obvious that truth in general depends on both language and extralinguistic fact. . . . Hence the temptation to suppose in general that the truth
of a statement is somehow analyzable into a linguistic component and a
factual component. Given this supposition, it next seems reasonable that
in some statements the factual component should be null; and these are
the analytic statements. But, for all its a priori reasonableness, a boundary between analytic and synthetic statements simply has not been drawn.
That there is such a distinction to be drawn at all is an unempirical dogma
of empiricists, a metaphysical article of faith. (Quine 1951, p. 34)
Quine was arguing that the Predicate Calculus was indefensible: the idea that abstract variables could contain logical structure devoid of semantic content was untenable. By refuting the distinction between synthetic and analytic statements, the
whole program of verificationism was undermined. This effectively ushered in a second breakdown, discussed at length by Gibbon (1989): the ultimate rejection of the
epistemological foundations of the LP programme.
A strong part of the LP programme rested on the elimination of concepts that were
‘metaphysical’ from proper scientific discourse (in this context the quote from Quine
given above can be seen as a quite pointed rebuke). However, the nature of human
perception became a recognized limitation of their approach. While they permitted
speculation about things that simply had not been perceived but theoretically could
be (Mayhall 2003), they strictly prohibited speculation about things that could not
ever be perceived. One defensive maneuver they executed was to make explicit
the notion of empirical evidence as ‘protocol statements’: a protocol statement was
a specified observation, understood to be contingent on the time, place, nature of
the object being observed and on the observer. A challenge was to find a language
in which this kind of statement could naturally be made; those who pursued this
language, which of necessity would have to derive from a priori principles but be
capable of making statements about the empirical world, termed the result ‘protocol
sentences’ (Cirera 1994). An implication of this was that the fundamental unit of
scientific data was not the external world but was limited to only our perceptions of
it. There are a number of recognized problems with this position; Gibbon notes that
this restricts our inquiries in an unduly anthropocentric way (the universe having
no obligation to make its secrets easily perceptible to humans- and that other, later
epistemological frameworks were more cognizant of the impact of our concepts on our
perceptions (see examples in Kuhn 1970). In any case, the strict empiricism of the LP
programme placed discussion of evidence that was not directly perceived out of scientific bounds, and this came to be a restriction that could not be maintained because it eliminated numerous interesting avenues of inquiry.
It is useful to contrast this epistemological approach against two others. Realism
is taken by Gibbon as a framework in which it is presumed that the concepts science
uses map to real entities; within this framework it is possible for a scientist to pursue
his work in the belief that reality is his objective (Gibbon 1989). Gibbon suggests that
the ‘open systems’ we study as archaeologists necessitate a realist epistemology. In
contrast, instrumentalism presumes that our scientific concepts and the units through
which we try to make sense of the universe are ultimately only artificial creations that
do not necessarily have any true congruence with reality. The applicability of these
two approaches to the modeling of archaeological complex systems is a topic to which
I will return in section 3.5.2.
For now, the most significant implication is that the critiques of Quine and his
successors illuminated a wide-ranging problem with the Logical Positivist programme:
effectively, verification of individual elements was impossible, with the result that any
framework of beliefs (a theory) that sought confirmation could only be confirmed in
toto, rather than by individual elements. Quine’s metaphor was of a fabric of belief,
upon which evidence could be brought to bear only at the edges and would impact the whole
of the cloth rather than the individual threads.
Postpositivist Science and Models

The critiques of Quine led philosophers of science to explore how real scientists actually operate, rather than deducing from first principles how science ought to operate. They found that the syntactic view represented an ideal that was
almost never achieved and, in many cases, not even sought. Instead, scientists tend to
operate with many competing models, not all of which are consistent with each other
or even complete in isolation (Achinstein 1965). Rather than these models being a
pale reflection of some formal theory, theory as a body of working knowledge came
to be seen as composed of the multiple models held by scientists at work. Models are
primary, theories only secondary. This view is termed the semantic view of models
and theories (see Frigg and Hartmann Spring 2008).
As a description of how science ‘works’ this is certainly correct. However, it must
also be recognized that this leaves practicing scientists with a fairly sketchy definition
of ‘model’. Models have a primacy above theory; yet models themselves are only
poorly understood. The main characteristics of models that differentiate them from
theory are that they are not claimed to represent the actual, correct way that phenomena in the real world behave, but rather some useful distortion of it, and
that they are permitted to be incomplete and even inconsistent.
The denial of the verificationism of the Logical Positivist program and the consequent requirement to test suites of collected knowledge in toto rather than in individual components is another side to the recognized need for an exploratory approach in
archaeology. But the parts that models play in this enterprise are multiple and poorly
defined. In the semantic view, models ‘mediate’ between theory and reality (Morrison and Morgan 1999, Read 1990) to make up for (inevitable) gaps between abstract
theory and observed phenomena. Some argue that models go beyond the deductive
conclusions that theory can apply in ways that are not only outside the original theory but contradictory to it (Cartwright 1999). Models become autonomous (Morrison
and Morgan 1999). Yet constructing models is usually described as having elements of 'theory, practice, and a bit of art' (as in the quote from Miller and Page
2007, p. 43), and considered to be not only difficult but impossible to characterize
formally. Models tell good ‘stories’ (Hartmann 1999), but they are able to achieve
this utility because of their flexibility: models have few rules. The result of this is a
fair amount of confusion over, as noted above, what models are, how they operate,
how they can be constructed and employed, and what are their constraints. To be
useful when the target is a potential complex system in the archaeological record, a
modeling toolkit must clarify these points.
3.2 Logical Bases for Modeling
One consequence of this relative disarray with respect to the role of models and
modeling in science is that the fundamental logical operations of modeling are also
poorly defined. In contrast with the relatively well-defined deductive operations that
inhere in theory in the syntactic view, the semantic view leads down other roads.
Cartwright (1999), for example, uses models to bridge theory to real-world situations,
and remarks that in this effort the Ansatz, or the set of assumptions proposed to bound
the solution so that the theory can be made appropriate to the situation under study,
is key. But if this is true, then modeling must encompass a range of intellectual
activities and operations- all conducted with a central object, a model, that is itself
indefinite.
Unsurprisingly, then, there is currently a debate about the kinds of logic that
apply specifically to simulation models. Relevant literature can be found easily in the
domain of a specific kind of modeling, agent-based modeling. As a relatively novel
field agent-based modeling is undergoing more discussion than modeling in general,
but the points it makes apply to the broader field without much modification. The
discussions about the logic of agent-based modeling fall into several categories. Critics
of the agent-based approach claim that it is nonscientific because it is inherently
inductive, and thus fails to conform to a proper, deductive scientific method. Epstein
(2005) has mounted the most extensive response to this by arguing that this claim
misunderstands the practice of agent-based modeling, and that agent-based modeling
is inherently deductive. More recently, Griffin (2006) has argued that the fundamental
logical operation in agent-based modeling is neither induction nor deduction, but
another operation called abduction. Returning to archaeological modeling in general,
Premo (in press) has argued, following Gibbon (1989), that exploratory archaeological
modeling may need to be retroductive. And a recent article in the flagship publication
for American archaeology (Fogelin 2007) has claimed that archaeology primarily uses
a form of inference termed Inference to the Best Explanation. (The discussion presented here addresses inference from the perspective of simulation modeling; a treatment of inference beginning in specifically archaeological literature might arguably be relevant here, but has been omitted for now. The topic of specifically archaeological inference is taken up again in Chapter 7.1.1.)
Like the muddiness of the more general state of modeling, this confusion has
its roots in history; as is often the case in debates like these, resolution is made
more difficult by terminological confusion. Different authors within the philosophy
of science literature define the various forms of inference in different ways. However,
one recent article (Borgelt and Kruse 2000) offers a framework
that is not only clear but contains some specific advantages for the programme offered
here (if, ultimately, only as signpost and eventual foil). Hence I will start from their
definitions and initially use their categorizations.
In the Borgelt and Kruse framework (echoing many others, in a tradition that may
be traceable to Aristotle) the first broad division is between deductive and what are
usually termed ‘inductive’ inferences. Deduction- putting aside for the moment the
counterarguments of Quine- is rarely seen as problematic. An inference is deductive
if the conclusion must be true if the premises are true. Borgelt and Kruse examine
inferences that can be depicted as two premises followed by a conclusion, and represented symbolically in a convenient notation; applying this, deduction can be shown
thusly (for convenience, Px represents a premise; A or Ax represents an antecedent; and C represents a consequence):

    P1: A ⇒ C
    P2: A
    ∴   C
An English translation would be “It is known that A implies C; A is observed; C is
concluded.” An example would be the well-known: “All men are mortal; Socrates is
a Man; therefore Socrates is mortal.”
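As a brief illustration (my addition, not the author's), the same argument can be written in the notation of the Predicate Calculus mentioned above, with a quantified rule standing in for the first premise:

    P1: ∀x (Man(x) → Mortal(x))
    P2: Man(Socrates)
    ∴   Mortal(Socrates)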
Non-deductive arguments are distinct from deductive arguments in an important
way: the conclusion may not be true even if the premises are accepted.
One way of looking at these kinds of arguments is to recognize that the conclusion,
in order to rise to the same level of certainty as a deduction, requires other premises
to be brought in; perhaps because of this, some authors simply hold that all non-deductive argument is inductive (e.g. Salmon, as cited in Borgelt and Kruse 2000, but also
others). Various names are used for these non-deductive arguments, and this gives
rise to confusion that is at least terminological, if not deeper.
Borgelt and Kruse offer a useful illustrative classification of non-deductive arguments. Following Łukasiewicz, they choose to define the complement to deductive
arguments not as “inductive” but as reductive. The characteristic form of a reductive
argument is:
    P1: A ⇒ C
    P2: C
    ∴   A
An English translation would be “It is known that A implies C; C is observed; A
is concluded.” Thus a reductive argument is one in which the inference goes in the
direction opposite the implication in the initial premise. “Retroductive” (the term
preferred by Gibbon and Premo) is a synonym for this kind of argument. A simple,
English example might be “If it has rained, the sidewalk will be wet; the sidewalk is
wet; therefore, it has rained.”
Borgelt and Kruse address another, commonly used form of argument, the extension of particular to general: having observed N objects of a given kind all having a
given characteristic, one concludes that all objects of that kind have that characteristic. This is a classic form of what is usually called an inductive argument. Formally
it would appear to be:

    P1: A
    P2: C
    ∴   A ⇒ C
However, Borgelt and Kruse show that this is only a transformation of the more
general reductive form given above, and can be restated as:
    P1: If all objects of this type have this characteristic, this implies that all observed instances of this type will have this characteristic.
    P2: All observed instances of this type have this characteristic.
    ∴   All objects of this type have this characteristic.
In fact Borgelt and Kruse note that this transformation better explains the operation
being performed (for example, why in the original form the conclusion is A ⇒ C
instead of C ⇒ A).
An abductive argument has the same formal structure as a reductive argument.
Borgelt and Kruse contend that all ‘inductive’ and ‘abductive’ arguments are ‘reductive’; all arguments are thus either deductive, on one hand, or reductive, on the
other.
However, they adapt the terms ‘inductive’ and ‘abductive’ to reflect a second
distinction apparent in the more common usages of the terms: abductive arguments
are applied to specific instances, and inductive arguments are applied to generalities
(see Figure 3.1). Hence their use of 'induction' covers the transformed case given above, and their use of 'abduction' covers the more common use of that term, which is the kind of explanation applied to a specific event like the rainy sidewalk.

[Figure 3.1. Categories of inference (drawn from Borgelt and Kruse 2000)]
3.2.1 Abduction: One Operation or Many?
There is, however, more to this story. Premo (in press) addresses with concern the issue of equifinality; this is the possibility that a single observed outcome can be the product of many possible antecedents. This is a well-recognized problem in archaeological reconstructions (find citations). Based on the above we can easily see that
equifinality is a result of the reductive form of argument. For example, to ensure that
the conclusion “it has rained” is correct in the sidewalk example above, we could add
another premise: “Only rain can cause the sidewalk to be wet.” However, absent that
premise, the existence of other possible causes- other processes that also result in the
sidewalk being wet- is unaddressed.
Following the notation used thus far, this case would be represented as:

    P1: A1 ⇒ C
    P2: A2 ⇒ C
    P3: A3 ⇒ C
    P4: A4 ⇒ C
    P5: C
    ∴   ?
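Equifinality can be made concrete with a small sketch (my own illustration, not part of the original text): given several rules that all imply the same observation, a reductive step can only enumerate the candidate antecedents, not decide among them. The alternative causes listed here are invented for the example.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    /** Toy illustration of equifinality: several antecedents imply the same
     *  consequent, so observing the consequent does not identify the cause. */
    public class Equifinality {
        public static void main(String[] args) {
            // Each entry reads: antecedent implies "sidewalk is wet".
            Map<String, String> rules = Map.of(
                    "it rained",            "sidewalk is wet",
                    "a sprinkler ran",      "sidewalk is wet",
                    "a water main broke",   "sidewalk is wet",
                    "someone washed a car", "sidewalk is wet");

            String observed = "sidewalk is wet";

            // Reductive inference: collect every antecedent whose consequent
            // matches the observation.
            List<String> candidates = new ArrayList<>();
            for (Map.Entry<String, String> rule : rules.entrySet()) {
                if (rule.getValue().equals(observed)) {
                    candidates.add(rule.getKey());
                }
            }
            System.out.println("Observed: " + observed);
            System.out.println("Possible antecedents: " + candidates);
        }
    }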
This issue is made more ambiguous by what it has inherited from its history, and
a step backward to understand this is warranted. The term abductive was introduced
by C. S. Peirce, who described abductive reasoning as follows (Flach and Kakas 2000):
    P1: Puzzling phenomenon A is observed.
    P2: But if H were true, A would be a matter of course.
    ∴   There is some reason to suspect that H is true.
It is important to note that, while there is a temptation to view the examples
given above as quite formal and mathematical (and indeed the interest in abduction
has recently grown in large measure due to its importance in the field of artificial
intelligence; see van Benthem 2000), logicians such as Peirce were attempting to explain how people actually think; that is, their starting point was not the manipulation
of symbols like A ⇒ B, but rather understanding the operations people go through
when they are confronted with opportunities for reasoning. Peirce observed that
people actually reasoned abductively and attempted to understand how.
Peirce reasoned that abduction must include two distinct operations: the generation of hypotheses and the selection from among competing hypotheses. Thus the
first is, “whence ‘H’ ?”, and the second is “why was that particular H selected from
among the alternatives?” Peirce’s writings shifted during his lifetime, and this has
contributed to the confusion surrounding this issue; over time he became less interested in the first question and more in the second. In his later writings he would
extend this far from the domain of an instantaneous thought process and into the
more drawn out and deliberate operations of science, in which hypotheses were evaluated more thoroughly and selected on various criteria. Ultimately in this he was even
concerned with some of science’s practicalities, and assumed that hypotheses were
chosen based on a principle of economy (Flach and Kakas 2000; Chauviré 2005), such
that the practice of science was guided down reasonable and achievable paths.
The broad logic Peirce was exploring contrasts with the more formal hypothetico-deductive approach that was advocated by Popper, whose writings are rather later than Peirce's and until late in Popper's career do not seem to address Peirce directly (Chauviré 2005). There is an important distinction between the two and their
approaches. Peirce, especially when he considered the first question of abduction
(‘whence H?’) was attempting to breach the walls of what has come to be called the
context of discovery (Borgelt and Kruse 2000, Chauviré 2005), and to categorize and
eventually understand how hypotheses were developed. Popper would later renounce
this, along with all induction (as he used the term), and claim that the origins of
hypotheses were mysterious, random, and/or irrelevant to truly scientific (deductive)
operations, but Peirce, writing in a different time, considered it a valid object of
study.
(Note that Peirce was examining abduction as a phenomenon, and asked how people could arrive at conclusions through some process that, he reasoned, must resemble the abductive process. This is in contrast to the kinds of symbolic manipulation that an abstract depiction of deduction and reduction gives. Note also that Borgelt and Kruse's transformation of 'induction' may not be valid in both of these senses, but only in the one related to symbolic logic; much of the recent interest in abductive logic comes from the field of artificial intelligence, and it is from this view that Borgelt and Kruse are writing.)
During the New Archaeology, archaeology flirted with the Popperian approach
(Gibbon 1989), but it was eventually supplanted by postpositivist and postmodern approaches. Today something akin to Peirce's approach may more accurately describe how archaeology is actually done. The abductive reasoning process, in the broader
formulation that encompasses both the ‘whence H’ and ‘which H’ formulations, has
been given the name “Inference to the Best Explanation” (Lipton 2004 [1991]); in
fact, some authors claim that abduction is entirely equivalent to “Inference to the
Best Explanation” (Josephson 2000, Josephson and Josephson 1994). According to
Fogelin (2007), Inference to the Best Explanation characterizes the most common
operations in the logical arguments of archaeology today.
Borgelt and Kruse (who, like Popper, explicitly avoid the 'context of discovery') would note, however, that the conclusions obtained by all reductive reasoning are
properly considered conjectures or hypotheses- as Peirce put it, “There is reason to
suspect that H is true.” As such they should be treated not as conclusions but as
hypotheses to be tested as a part of a larger program of study. This contrasts sharply
with the idea of modeling as an abductive operation, in the broad sense in which
Peirce used it and to which Fogelin, following Lipton, subscribes.
It is this larger program of study to which I will now turn, and building on the
definition of deductive and reductive inferences given in this section I will next address
the role that modeling can play in the broader context of scientific archaeological
investigation. A part of the challenge in this arises from the tension between the
appealing qualities of the Logical Positivist programme and the legitimate criticisms
of Quine and his successors. I will argue that modeling should be considered in a restricted
way, derived from the inspiration of the Logical Positivists’ programme, but freed from
that programme’s strictly empiricist epistemology. This role for modeling, however,
must appear in a larger production that allows for an investigation of the kind Quine
noted was necessary: the exploration of a fabric of knowledge in toto rather than in
single elements.
3.3 Using Models as Part of Larger Scientific Practice
In section 3.4 I will address characteristics of a modeling toolkit that specifically addresses the issues related to exploring archaeological complex systems. Here, however,
I will take a stand on one position that is intended as a resolution of the issue in the
preceding section, the nature of inference that characterizes models and modeling.
The need for this is more general than the specific modeling target of archaeological
complex systems, and once a position is established it can be used in the discussion
of the special concerns that arise in the modeling of archaeological complex systems;
however, the position I take here is not directly required by those special concerns.
I propose that there is a straightforward resolution to the debate about the form
of inference that best describes modeling. The action of a model is best considered
deductive, while models may be used in a larger process that includes activities employing either deduction or one or more of the reductive forms of logic.
3.3.1 Models as Deductive Engines
The principal action of a model is to provide a means of deducing, from some set
of principles, implications of relationships and interactions among model elements
that are not immediately apparent without the use of the model. This seems to me
to apply generally- that is, to both physical models and conceptual ones. Physical
models encode the ‘rules’ for deduction in the materials they use. While Frigg and
Hartmann (Spring 2008) give little attention to physical models, it bears noting that
physical models are just as much deductive engines as conceptual ones: the design
of the model enforces some set of rules for maintaining or transforming the elements
of which they are comprised, and provided those rules are strictly kept the outcomes
can claim to be deductions from them. This seems to apply to all physical models,
whether they are static models (such as scale models, which simply use their rigidity
to maintain a consistent set of spatial relationships among their elements) or dynamic,
as, for example, a model of an engine. We note intuitively that when the physical
properties of such a model are incapable of maintaining the rules- consider a map that
is originally drawn to scale but is stretched or distorted- the conclusions the model
reaches are suspect. Software, too, is a kind of physical model, even though the
physicality of it is usually hidden. Conceptual models (Frigg and Hartmann Spring
2008) including thought experiments are sometimes less secure and can introduce
errors, but this does not change the fact that their principal function is to provide a framework
wherein deduction from initial principles is possible (but see Brown Spring 2008).
The physicality of a model ensures its purely deductive nature. We can compare
it to the objections of Quine that the analytical/synthetic distinction does not exist.
The implication he draws is that any chain of argument relies intimately on the
meanings of the terms involved; this implies that the idea that an independent
calculus can proceed through a series of deductions without regard for the semantics
of the terms involved (i.e. without the meaning of one term ’containing’ parts of the
meaning of another, as Kant’s framework required) is poorly founded. Meaning of
terms is inescapable, and it is this that gives the semantic aspect primacy, and forces
theory to be composed of models, for it is in models that meaning inheres. But
even if this is carried to its conclusion, and deduction is merely social convention,
it is possible to establish that convention and use it. Implementing deductions in a
physical medium makes the convention clear and inarguable. If we are to use models
as deductive engines, the rules that connect deductive arguments must be made clear
and open; in a physical model they can be discussed without the difficulty of relying on merely verbal descriptions.

(Note that the physicality of software is hidden only from casual users; programmers must be aware of the underlying physicality of the processes on which software is based. For example, in the Java programming language it is possible to find that 999 ÷ 33.3 = 30.000000000000004; the reason is that the numbers 999 and 33.3, and the operation ÷, are represented imperfectly by 'bits' on a logic circuit, which can represent only ones and zeroes, and cannot easily represent decimals or fractions. For this reason simulation programmers sometimes run alternative versions of a given simulation using different underlying numerical implementations.)
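A minimal, self-contained sketch (my own addition) of the behavior described in the note above; the class name is invented, and the BigDecimal variant is offered only as one example of a different underlying numerical implementation:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class FloatingPointNote {
        public static void main(String[] args) {
            // Double-precision arithmetic cannot represent 33.3 exactly,
            // so the quotient carries a tiny representation error.
            System.out.println(999 / 33.3);   // typically prints 30.000000000000004

            // Decimal arithmetic with an explicit scale sidesteps that particular error.
            BigDecimal quotient = new BigDecimal("999")
                    .divide(new BigDecimal("33.3"), 15, RoundingMode.HALF_UP);
            System.out.println(quotient);     // prints 30.000000000000000
        }
    }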
In effect this means that the semantic aspect of models is repudiated; models must
be considered to behave like theory. There is another side effect of this that enhances
models: conceptual models can be incomplete, but physical models cannot.
While scientists may generally move forward with a multiplicity of models that are
incomplete, inconsistent, or incompatible, an approach that seeks to work deductively,
and certainly a simulation approach, must fill in these gaps (even if only provisionally)
and either learn from or work through any inconsistencies (see Frigg and Hartmann Spring 2008 for additional comments on learning from the construction of models, even before their operation).
If models are primarily deductive engines, then the principal task of a model is
to help determine whether a statement akin to A ⇒ C actually holds. The main
distinction is that the target can be far more difficult than “Rain ⇒ Wet”; the initial
conditions (A) can include large numbers of premises and multiple rules to connect
them. This echoes and extends the approach of the Logical Positivists: long chains
of deductive arguments can be assembled. Previously this was quite limited by the
intellectual faculties, time, and patience of individual thinkers; today, computers have
given us the ability to create ever larger models of this kind.
3.3.2 Intrinsic Generality of Models
One modeling operation should be kept in mind when thinking about these possible
modeling goals. The match between the model's A and C and the reality the model
hopes to capture can never be exact. There is always an accepted disconnection between the two, so that the claim that can be made is that in a situation approximately
equivalent to A the outcome will be approximately C. This touches on the unresolved
issue of representation as mentioned by Frigg and Hartmann (Spring 2008) and discussed above. A challenge it raises is that the ‘goodness of fit’ between the model
and reality is not measurable on an absolute scale.
This also introduces a complication with respect to the Borgelt and Kruse framework for classifying logical operations, as introduced above. Borgelt and Kruse’s
distinction between abduction (reductive arguments applied to specific cases) and
induction (reductive arguments used to arrive at generalities) is difficult to apply
because models have an intrinsic generality built in. Effectively, to apply a model
to a real-world case is to say that real-world instance X is an instance of A and
real-world outcome Y is an instance of C. Depending on the conceptual components
from which the model is constructed this may be more or less likely to encompass
a range of real-world cases beyond those of the intended target, and with a heavy
level of detail the probabilities are low that any other real-world instance would be
appropriately categorized. There is no logical reason why this should be the case,
however, and hence any model applied to a real-world case is effectively making a
claim that could be argued to be, in Borgelt and Kruse's terminology, both abductive
and inductive. It is inductive because it effectively claims that the model applies “For
all X’s such that (A) applies”; it is abductive only because we believe that the set of
X’s will have only one member.
3.3.3 Stochastic Models
Having defined a model as an engine that allows the implications of some antecedent
A to be traced to some consequence C, some further expansion is in order to accommodate common practice. Premo (in press) refers to stochastic or “one to many”
models, in which one model can lead to many outcomes. If we adhere to the idea that
a model is a simple deductive engine this outcome would seem to be impossible; it
seems especially so if the model is implemented as a computer simulation, given that
most such simulations are written so as to be entirely deterministic and if given the
same input will produce the same output. In practice, stochastic computer models
are created by using code to generate some form of randomness; a distinction is made
between this code and the rest of the model, but it could easily be argued that this
distinction is artificial.
In the notation of symbolic logic introduced above, an antecedent with several consequents could still be easily represented ('∨' means a logical 'OR'):

    P1: A ⇒ (C1 ∨ C2 ∨ C3)

And this could easily be used in either a deductive operation:

    P1: A ⇒ (C1 ∨ C2 ∨ C3)
    P2: A
    ∴   (C1 ∨ C2 ∨ C3)

or an abductive one:

    P1: A ⇒ (C1 ∨ C2 ∨ C3)
    P2: C2
    ∴   A
In the case of the deductive operation, we are left with uncertainty as to which
consequent is implied, but we know (if the premise is accepted) that one of them is.
Additionally, it is an easy extension to add a probability value to each of the
consequents; these values, which would sum to 1, could indicate that some of the
outcomes were more likely than others.
One way to deal with this is to concede that it is possible to create models that are
identical except for this stochastic element; these form a class of models that can be
treated identically. But a further consideration is useful: all of these models- or, if you
will, the single definition of the model that includes the stochasticity- are treatments
of an identical state space. The state of the model can be defined as the value of
all the variables in it; the state space of the model is the aggregate of all possible
combinations of variable values. For any but the simplest models this will be huge, but
it is not infinite. The simulation’s operations can be defined as transition rules from
each state to a next state. In a completely deterministic simulation each state would
have exactly one other state to which it can change; in a simulation with stochasticity,
each state may have several other states to which it can change, with probabilities
associated with each option. A single run of the model is simply a trace through
one of the possible routes through this space state. Because of the vast number of
possibilities it is usually impossible to trace all of them, so the simulation’s approach
is to sample from this population by tracing through some collection of paths and
recording their inputs and outputs. (As an aside, typically such runs can differ in two
ways: by beginning from different initial states or by using different random number
seeds; often both ways are used, and therefore rarely does a stochastic model actually
move from a single initial state to multiple outcomes, but instead the sample is across
multiple initial states and is considered to be from the space as a whole.)
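The state-space description can be made concrete with a minimal sketch (my own, not drawn from the Hohokam Water Management Simulation): the 'state' here is a single integer, the transition rule is stochastic with fixed probabilities that sum to 1, and the sampling loop varies both the initial state and the random seed, as in the aside above. All names and values are invented for the illustration.

    import java.util.Random;

    /** Toy stochastic state-transition model. */
    public class ToyStochasticModel {

        // The model 'state' here is a single integer; in a real model the state
        // is the value of every variable in the simulation.
        private int state;
        private final Random rng;

        public ToyStochasticModel(int initialState, long seed) {
            this.state = initialState;
            this.rng = new Random(seed);   // a different seed gives a different trace
        }

        /** One transition: move to one of several possible next states, each with
         *  an associated probability (the probabilities sum to 1). */
        public void step() {
            double roll = rng.nextDouble();
            if (roll < 0.7) {
                state += 1;                // consequent C1, probability 0.7
            } else if (roll < 0.9) {
                // consequent C2, probability 0.2: state unchanged
            } else {
                state -= 1;                // consequent C3, probability 0.1
            }
        }

        public int getState() { return state; }

        /** Sample the space of trajectories: vary both the initial state and the
         *  random seed, and record the endpoint of each trace. */
        public static void main(String[] args) {
            for (int initial = 0; initial < 3; initial++) {
                for (long seed = 0; seed < 3; seed++) {
                    ToyStochasticModel model = new ToyStochasticModel(initial, seed);
                    for (int t = 0; t < 10; t++) {
                        model.step();
                    }
                    System.out.println("initial=" + initial + " seed=" + seed
                            + " final=" + model.getState());
                }
            }
        }
    }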
There is another way to look at this, however, and it returns us to a point introduced earlier. It is possible to relate the idea of a state space to that of a 'theory'
in the syntactic, logical sense. A theory defines, first, the existence of an array of
entities and the values they may take on; this is essentially a state space. Following the definition given above, an interpretation would be any substitution of actual
values into the variables. However, as additional elements are added to the theory,
restrictions can be placed on valid values and combinations; not all interpretations
are valid when these restrictions are imposed. Extending this further, we can take
any two states and ask if the theory permits the transition from one state to the
next; put another way, if we assumed a list of all possible transitions, we could select
from this list only those that were valid according to some set of transition rules
(carrying forward probability values, if appropriate). Extending this to successive
transitions is also unproblematic (even if the transition rules change with each time
step, though this is uncommon). Each sequence that would result, being composed
of valid values for all state variables and valid transitions from one state to the next,
would be an interpretation of the complete theory; as defined above, the collected set
of interpretations forms the model.
3.3.4 Experiments
Along with the debate about the appropriate underlying logic of models and modeling, there is a teapot-sized tempest about the term ‘experiments’. Early proponents
of models, especially agent-based models, touted the ability to use models to do ‘experiments’ in silico rather than in the real world (Epstein and Axtell 1996). Others
have used this as evidence that modeling is inductive (Epstein 2005).
Non-scientists often think of experiments as activities performed to ‘see what
happens when X.’ This can be done in the real world, and can lead to interesting
insights; likewise, it can be done with a model- the modeler can try new combinations
of parameters or code- and lead to the same effect. However, to refer to the earlier
vocabulary of the larger pattern of scientific inquiry, this kind of activity properly
takes place in the context of discovery. It is not, however, the classic interpretation of
‘experiment’; experiments are rightly conducted in the context of justification. The
experiment is not conducted because the outcome is not known; rather, it is conducted
because it best demonstrates a specific principle.
This approach can be used in modeling. Suppose a model is constructed that gives
rise to some set of antecedent-consequent relationships such that when one parameter
is varied, A1 ⇒ C1 changes to A2 ⇒ C2 . We assume that the difference between
A1 and A2 is only this parameter, and the difference between C1 and C2 is some
quantitative or qualitative difference of interest.
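A minimal sketch (my invention; the 'model', its parameter, and its values are hypothetical) of this kind of paired demonstration: the same deterministic model is run twice, differing only in one parameter, and the two consequences are compared at the end.

    /** Two runs of the same deterministic toy model, differing only in one parameter. */
    public class ParameterComparison {

        /** A deliberately trivial 'model': grow a quantity for a fixed number of steps. */
        static double run(double growthRate) {
            double value = 1.0;             // identical initial state apart from the parameter
            for (int t = 0; t < 50; t++) {
                value = value * (1.0 + growthRate);
            }
            return value;                   // the consequence C
        }

        public static void main(String[] args) {
            double c1 = run(0.01);          // A1: growth rate 0.01
            double c2 = run(0.02);          // A2: growth rate 0.02
            System.out.println("C1 = " + c1 + ", C2 = " + c2);
        }
    }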
However, this rests on a more complicated epistemology than might be seen at
first blush. It appears to draw on a well-known parallel in empirical sciences in the
real world: the procedure of using experimental group vs. the control group. This
procedure draws from two presumably identical groups and sorts them into one group
that will receive an experimental treatment and another that will not. Presumably
differences that arise in the two groups can then be attributed to the experimental
treatment.
The modeling case is, however, not truly parallel. Consider again the symbolic
expression used above: what possible bearing can the conclusion A2 ⇒ C2 have on
the assertion that A1 ⇒ C1 ? If we take an absolutist view, A1 ⇒ C1 is one model
and A2 ⇒ C2 is a second model; had the outcome of the second model been different,
how would that have changed the results of the first?
And yet, when reading about or using a model we can often find that our confidence in our understanding of the model dynamics is increased by demonstrations of just
this kind. The reason lies in the distinction between a priori and a posteriori knowledge. As a purely deductive engine, a simulation generates knowledge that we accept
as a priori- that is, derivable from the internal concepts of the initial premises. If
this is accepted, then once A1 ⇒ C1 is demonstrated, there is no need for further
buttressing. However, we use models- especially computer simulations- to extend our
own thinking beyond the implications that we can arrive at ourselves. In doing so
we are purposefully distrustful. We wish to be sure that the simulation actually does
what we believe it to do; this is referred to as verification (Law and Kelton 2000).
When we attempt to verify a simulation model we slide into a domain in which we
study the model as if it were an artifact; in other words, we begin to address its
performance in an a posteriori way- that is, asking questions of it that can only be
answered by investigating whether this real-world implementation of our concepts
performs appropriately. It is from this standpoint that the demonstration of the A1
vs. A2 sort becomes reasonable; we have higher confidence, from the two examples,
that the model is doing what we expect.
However, this confidence is misplaced: if we doubted the ability of the simulation
to carry out the deductive transformations we intended for it, we should continue to
doubt it even in the face of two examples that behave as we expect. The parallel to
the experimental group is incomplete and inappropriate. In an experimental group,
we know that there will be many factors bearing on C that are out of our control,
but we hope that by randomly assigning subjects to each group we ensure that all
the factors that affect the first group also affect the second equally, so that differences
between the groups can be attributed with confidence to the experimental treatment.
In the modeling situation, we do not- or should not- assume that this wide array of
unknown and uncontrollable influences affects the outcome.
A better use for alternative versions of a model is demonstrative. We are able
to examine the model’s actions from an oblique viewpoint by observing its states in
progress. By doing so we can gain added insight into why the model produces the
results it gives. This insight, which is also a posteriori, can also be broadened by
examining other, similar models. This should not, however, be confused with the
experimental approach to which it appears superficially analogous.
3.3.5 Models in the context of a process of scientific inquiry
Peirce proposed that induction, deduction, and abduction (as he used those terms)
were rightly considered separate steps in the larger scientific process. Each kind of
logic occupied a place in the progress of a given scientific inquiry (Flach and Kakas
2000): abduction proceeds from observations and leads to hypotheses; deductions test hypotheses and make predictions; and induction moves from specific to general cases, which allows predictions to be made and, possibly, hypotheses to be disconfirmed.
Popper would propose an alternative framework, one that explicitly excluded any
questions about the origins of hypotheses. Chauviré (2005), however, argues that in
fact the differences between the two frameworks are largely terminological, and the
processes they describe are virtually identical.
What role does modeling play? If the principal function of a model is to determine
whether some antecedent-consequent relationship A ⇒ C holds, then modeling can
provide two benefits: one in the case where a relationship is found to hold and one
in which it is found to be untenable. In either case, the relationship in question may
be one that has been considered possible previously, or may newly be added to the
understanding of the problem. Hence:
• The model may support one of the previously recognized possibilities. It may also show that the supported possibility is more likely than had previously been thought.
• The model may add to the list of A ⇒ C possibilities. This has the paradoxical
effect of reducing our knowledge of the system. In the case where one or more
hypothetical A’s have been proposed, the new one may diminish the certainty
associated with the previous ones. This may be dealt with using ‘inference to the
best explanation’, wherein the new relationship may be much more likely than
the others or, alternatively, may be discounted as much more unlikely; it may
also be held roughly equivalent, demanding that some alternative strategy to
discern them. Broadly, this is the state of play with respect to complex systems
in archaeology: previously held conjectures about causes are being countered
with unanticipated with alternatives (i.e. Brantingham 2003).
• The model may show that a previously considered hypothesis is, in fact, untenable; this typically arises because the original was expressed more loosely
than simulation modeling requires, and when rigorous procedures are applied,
disjunctions are found that are insurmountable. Another way to think of this
is to recognize that the whole point to simulation modeling is to allow the computer to perform operations whose results we, as humans, can only surmise
in advance; in some cases we will find our expectations are unmet. We may
note that this is the inevitable result of the semantic approach to models: if
models are allowed to exist in something of a competitive soup, in which incomplete or even contradictory models are simultaneously employed, then the
process of reconciling them will lead to revealing these inconsistencies and other
shortcomings.
• The model may show that a newly proposed hypothesis is impossible; this is
the infamous ‘negative result’.
I began the previous section by mentioning Griffin’s (2006) argument that modeling (in this case specifically agent-based modeling, to which I will return in more
detail in Chapter 6), not only requires abductive reasoning, but that this characterizes
the process of modeling as a whole. Since that remark I have shown that abduction
can be defined narrowly or broadly; in its broadest definition it includes not only the
construction of hypotheses but their evaluation. If this broader definition is accepted,
Griffin’s view is not wide of the mark. Attempts to make modeling more secure by
incorporating null hypotheses and an experimental method misinterpret the kind of
knowledge that models produce. Likewise, criticisms that modeling is merely inductive because it merely 'finds' a solution to a problem also miss the important
contributions that this kind of discovery can bring.
The best use of models falls somewhere apart from these extremes. A clear example can be drawn by considering the second item from the list above, in which a
new hypothesis is added to the list of explanations for a given outcome. Consider the
special case in which the original list was empty: the phenomenon was without any
explanation at all. One can imagine a modeler working to find some explanation for
that outcome. Using abduction in the sense of creating a hypothesis, he attempts to
envision some set of initial conditions and rules to form the A portion of the model.
When the model’s deductive engine is applied, C results. Moreover, C only appears
to result when A is narrowed quite finely- continuing abduction in its broader sense
to include the process of evaluating the hypothesis even more; C, as a result, is also
able to match the real-world phenomenon quite closely. Given that no previous explanation was thought acceptable, and the close match between the model outcome
and the real-world target, and also in no small part due to the work that seemed
to be required to create the model and ‘get it to work’, the modeler may feel highly
confident in the results.
This is, however, the extreme of both the abductive and inductive approaches.
It is fatally flawed because it is, paradoxically, doomed to succeed. That is, it is
an accepted truism that a model can be created that will generate some expected
outcome. Refining a model to match reality is sometimes called ‘tuning’, and is to
be avoided. One strategy for avoiding it is to emphasize the generality of A and C
by targeting not one specific context but classes of them; comparative approaches
(e.g. Barton's Mediterranean Landscapes project; see page 34) are useful for this,
but even more general approaches are possible. A second alternative is to embrace
stochasticity, and to recognize that a model that can lead to a given outcome, but
that does not necessarily, may have a generality based on underlying dynamics that
may be more broadly applicable. It is these modeling approaches that are especially
important when modeling is applied to the pursuit of complexity in the archaeological
record.
3.4 Modeling for Exploring Archaeological Complex Systems
Archaeology can claim to be a unique field for the study of the progression of human
life on earth because it is the only field that has access to time depth; no other field
has millennia-long data sets with which to work. But to fulfill that promise means
pursuing a range of goals; that range of goals, always large, has been amplified by the possibilities offered by complexity theory. Broadly, where before the abduction of archaeology could work from a small or even restricted number of Ax ⇒ C
relationships, complexity theory has suggested that our understanding of the range
of possible relationships is much broader and, for the moment, quite poorly known.
The Hohokam offer a perfect example: we can no longer presume that a central state
was required to manage a large canal system, now that Lansing (1991) has shown
that another A can lead to the same C. This kind of logic extends to many areas,
and it has been suggested that there are senses in which the trajectory of humanity’s
existence is structured in accordance with principles we have as yet only glimpsed,
and yet which have common threads with many other such systems in the physical
and biological world. These are enticing connections to the rest of the universe (sensu
Kauffman 1995).
A complex systems-oriented archaeology will not only want to apply this larger
suite of possible A ⇒ C relationships, but discover more of them that may only be
visible through the lens that archaeology offers. Hence an exploratory approach is
tasked with different yet complementary sets of goals that include testing the applicability of newly understood system configurations to specific cases, and discovering
new system configurations if such exist.
3.4.1 A catalog of goals
In an earlier chapter (Chapter 1) I outlined four goals that might be pursued in a new, model-based archaeology, in the form of four kinds of questions: What was? What was
necessary? What if? and Why? Here I will expand upon these and state more
formally a collection of goals for an exploratory approach to archaeology.
Several different goals that an exploratory approach to complex systems archaeology might pursue can be distinguished. While they move roughly from those attached
to a specific case to those that are more abstract, they do not fall neatly along this line. I
offer a catalog of six related yet different goals that an exploratory complex systems
archaeology can reasonably pursue. It may be a slight mischaracterization to describe
them as goals; this suggests that they are the initial purpose of a given project. It
may be better to say that an exploratory approach should be open to all of these
possibilities, and that the course of the explorations may lead down all, some or only
one of these paths.
The goals are a product of the reductive form of reasoning that characterizes
archaeology. Complexity theory has shown that there exists a wide class of A ⇒ C relationships that
were hitherto unexpected. The Hohokam context offers two examples: the first is the
possibility of non-state level organization and management of the canal system, and
the second is the possibility of non-catastrophic causes of system decline and dispersal.
In both cases the previous narratives gave explanations that were unsatisfactory; in
both cases new intuitions have given us cause to believe that there are other causes
that might apply. Tellingly, we are at a point where archaeological research need not
only attempt to find which A applied in a given context, but can also be applied
to the discovery of new A’s that were previously unsuspected or that may only be
recoverable through the lens of archaeology. The goals of an exploratory archaeology
must therefore include several related goals that move focus among several competing
poles.
The first goal may be considered the pursuit of abstract systems. At this level the
purpose may be to reveal systems that apply across domains, so that in one context
they may involve one set of entities while in another context the components are
different but the structure of the system remains the same.
There are two useful things to keep in mind with respect to this approach. First,
recent studies have pushed forward our understanding of concepts like robustness (Jen
2005, Wagner 2005) and resilience (Holling 1973, 2001). These are useful, general
concepts, but they are essentially descriptive of systems’ behavior. I am referring here
instead to the idea that two systems may have a common structure, and not merely
that they behave in ways that we would describe as robust or resilient. Second, we
do not yet have an index of structures that we consider the canon- perhaps ‘bestiary’
is a not inappropriate term- of complex systems by structure; this may be a useful
endeavor, but it is also true that such a canon may not even be possible (see the other
possibilities given below).
The second is the pursuit of a system that is grounded in a specific context, with
no effort to abstract it to a general structure, but with no genuine restriction on doing
so, either.
The third is the exploration of a specific case in which the components are (or are
thought to be) unique. We cannot presume that the ‘abstract systems’ approach will
work; it may be that the dynamics that drive a particular system are dependent on
unique characteristics of the combination of elements found in that system, and the
hope of a general, abstract system is misguided.
A fourth is the exploration of a ‘system’ in a context where it is recognized that the
'system' aspect is being 'washed out' by other crosscutting aspects. In these contexts,
discovering the system may be very difficult, but the applicability of an abstract
system structure that has been revealed elsewhere may be tested.
A fifth relates to a concept introduced by Premo (in press): the notion of a “one
to many” model, understood to mean a model that has multiple outcomes. The
issue raised by ‘one to many’ models is that of alternative trajectories, and the fifth
goal, roughly stated, is investigating how likely the attested trajectory is versus other
possible trajectories. Some of the outcomes of the model may map to the C found
in the attested record, while others do not. This idea of alternative trajectories to
a model- alternative histories, in the context of archaeology- is a key component of
a complex systems approach to understanding the archaeological record. Gould (per
Lansing 2003) points us to the question: what would happen if history were to be played out again? We might find that the actual trajectory of history was one of
a large number of very similar trajectories; it may fall within a basin of attraction
(Lansing 2003) such that from a large array of initial states, and across a large array of
stochastic runs, common trends appear and a large fraction of the instances map quite
closely to the actual trajectory. It may also be, however, that the actual trajectory
is one of the very unlikely ones; our procedure for investigating this must recognize this possibility.
The final goal is reconstruction of an actual trajectory. In no way is this goal
diminished or abandoned by the fact that it is outnumbered by the other goals listed
here; in fact, if the fifth goal corresponds roughly to “What if?” from the earlier framework, this one subsumes both “What was?” and “What was necessary?”.
Whatever course is used to pursue those other goals must maintain a ground in the
real, archaeological record, and presumably the roads those courses trace lead back
to that record. And, presumably, our understanding of the complex systems we seek
must at some point contribute to our understanding of some real instance. We would
not want our approach to be completely disconnected from a real example, even if we
were to find, as suggested by the previous kind of goal, that the real example was far
from the ideal representative of the underlying system we believed to be at work.
Collectively these goals are all expressions of one underlying question: to what
degree is the world suffused with complex emergent systems? Do they surround us
in ways that we have not even begun to recognize? Or are there only a few, isolated
examples that can only stand as interesting exceptions? When studying any specific
archaeological instance the investigator has dual allegiances to pursue this broader
question while being open to the true nature of the instance under study.
3.4.2 Implications of the collective goals
An influential paper by Levins (1966) proposed that models must balance three competing goals: generality, realism, and precision. He argued that a model could choose
to emphasize any two of these but in doing so would be forced to sacrifice the third.
Orzack and Sober (1993) heavily critiqued Levins’s comments, on the grounds
that the three axes were poorly defined and not necessarily mutually exclusive; a
second paper (Orzack 2005) made the same arguments to parry a defense of Levins
mounted by Odenbaugh (2003). Despite the intuitive appeal of Levins’s approach, the
criticisms are quite sound. The issues these criticisms raise for modeling in general,
however, are even deeper when the milieu is the modeling of possible complex systems
in archaeology. Rather than merely moving in the direction of greater generality, for
instance, a modeler can shift orientation to one of the more abstract goals; precision or realism may well be affected by this shift, but the nature of that impact may be determined categorically rather than through the simple trade-off that Levins envisioned.
None of the goals is paramount, nor can we consider our exploration over when
any one of them is satisfied: they interrelate in such a way that each should contribute to the others. The most abstract levels offer contributions to wider fields, but lose their grounding if the most concrete ones are abandoned. An archaeology that omits this
complementarity misses important opportunities and fails to fulfill its promise.
3.5 An Exploratory Archaeological Complex Systems Modeling Toolkit
With these considerations in hand we can turn to a description of what a toolkit for
the modeling of archaeological complex systems might look like. In the preceding
sections I have outlined a role for modeling within the process of scientific inquiry,
and explicated the role of modeling ‘experiments’ within this; I have also addressed
some broader points concerning inference as we apply it to models and modeling. I
propose that, keeping these points in mind, the suite of goals that an exploratory
approach to archaeological complex systems pursues implies a unique set of modeling
operations; these are the ‘road map’ that exploratory archaeology has so far lacked.
In this section I will first consider these operations, and then return to the threads
of history I discussed at the beginning of the chapter to address the implications of
those operations for the epistemological and philosophical supports on which such a
toolkit must rest.
3.5.1 Operations of exploratory modeling in complex systems archaeology
A toolkit for exploring archaeological complex systems must be defined by the set
of operations that can be performed in it. Strictly, there are three operations. The
first is the declaration of entities to be modeled. This defines the ‘state space’ of
the model’s operation (see above). The second is the definition of the relationships
among these entities- how the actions of each impact the others. This amounts to
the transition rules through the state space.
These two operations are common to all modeling; complex systems modeling
must flexibly accommodate a third: the ability to easily substitute variations of the
elements or their relationships, to move upwards or downwards in complexity or
abstractness. This is the element that is required to permit the exploratory approach.
The substitution of variant elements can be motivated by any of the goals of the
endeavor, but no a priori assumption can be made about whether the complexity of
any given element is appropriate for any given goal. Hence the toolkit should allow
the substitution of a more complicated component when needed for some of the goals,
and a simpler one when needed for others.
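The three operations can be illustrated with a minimal sketch in Java, the language in which the simulation code is written. The class and method names below are hypothetical and are not drawn from the ABCM source; the sketch only shows how declared entities fix a state space, how a rule defines transitions through it, and how variant rules of differing complexity can be substituted for one another.

// Hypothetical illustration only; names are not taken from the ABCM code.
import java.util.List;

public class ToyModel {

    // Operation 1: declaring entities defines the state space of the model.
    static class Field {
        double soilMoisture;
        double cropBiomass;
    }

    // Operation 2: a transition rule describes how an entity's state changes.
    interface GrowthRule {
        void apply(Field field);
    }

    // Operation 3: variants of a rule, simpler or more complicated, can be
    // substituted without disturbing the rest of the model.
    static class SteadyGrowth implements GrowthRule {
        public void apply(Field f) { if (f.soilMoisture > 0) f.cropBiomass += 1.0; }
    }

    static class BurstGrowth implements GrowthRule {
        public void apply(Field f) { if (f.soilMoisture > 5) f.cropBiomass += 10.0; }
    }

    // One step of the model applies whichever rule has been selected.
    static void step(List<Field> fields, GrowthRule rule) {
        for (Field f : fields) {
            rule.apply(f);
        }
    }
}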
This is a strategy that resolves the dichotomy between small, abstract simulations
and what I termed in Chapter 1 ‘Big, Real’ models. The former can demonstrate
general patterns but fail to address real-world complexity, while the latter can become too entrenched in detailed reconstruction to allow the underlying and more
generalizable system dynamics to be teased away from these details. An exploratory
approach must presume that the detail will be necessary for some of the goals but not for others, while making no assumption as to which details are key. It should be possible to move fluidly from what may be termed more purely exploratory efforts that encompass a large range of presumptive values, to what may be termed a more cleanly empirical approach, in which some subset of these is selected and controlled in a more abstract, demonstrative experiment. Thus the modeling framework should
be useful in both the context of discovery and the context of justification, and should
allow each of these roles to be played as appropriate.
3.5.2 Characteristics of a toolkit for modeling archaeological complex systems
This approach can be understood in the context of the two historical trajectories presented at the beginning of this chapter. These two trajectories reflect the dichotomy
between the two standard approaches, and the tension between these two intellectual threads represents genuine philosophical concerns that have manifested themselves in the practical problems that have led to the two opposing tactics. A new toolkit should chart a course using this history as its landmarks, resolving what issues it can along the way, and making choices from among the alternatives they present when no resolution is clear.
If models are, as we have framed them here, deductive engines that generate a
priori knowledge, then they echo the programme of the Logical Positivists, who also
saw a programme of constructive knowledge generation from initial premises through
the operations of deduction. The congruence between a simulation models’ definition
of a state space and the definition of theory, interpretation, and model suggests that
an approach born of formal logic is worth pursuing, and this would also echo the
Logical Positivist’s programme. But the counterarguments of Quine- that we do not
and cannot address empirical knowledge in single units but it toto- apply as well; this
leads to the exploratory approach, and demands a formal description of how models
are to be used in the scientific process.
We can consider such a modeling toolkit in terms of the four modeling issues
raised at the start of this chapter. We have assumed a computer simulation platform;
our model ontology will be that the model is a physical thing. This carries multiple
advantages, not the least of which is the ability to objectively assess the actions of the
model. This also permits an easy approach to the second issue, that of representation.
This is an issue with several stages. The first is the way that the physical object of
the computer represents our concepts; the second issue, and the more important one
for our purposes, is how clearly our concepts represent the real-world target. I have
alluded to this already in mentioning that there is latitude in identifying any example
in the real world as a case of A or C; there are difficulties here, to be sure, but they
can’t be addressed generally, and must instead be tied to the specific context in which
they are posited.
What can be said, however, is that the formal definition required in a computerized
environment pushes us away from a semantic approach to modeling and toward a
syntactic one: terms implemented in the computer model have meanings that we
can define explicitly and, generally, associate with firm units of measurement. If the
semantic approach to the general use of models and theory in science, per the above discussion, allows models to be held simultaneously that are in competition, contradictory, or even incomplete, the simulation approach advocated here will push its models back to a syntactic approach, and will force the resolution of these issues; these resolutions, in keeping with the instrumentalist approach, can be provisional, but inconsistency cannot be accommodated. (This is one reason that
simulation models can be informative when they are still being developed, before
actually being run: inconsistencies reveal themselves rapidly.)
It should be clear that the toolkit will not rely on a realist epistemology exclusively
(contra Gibbon 1989), and we can avoid the Logical Positivists’ restriction to an
empiricist view as well. An instrumentalist approach is much preferable. The simplest
example can be drawn from the first, most abstract goal of the modeling toolkit: if
we wish to examine system dynamics that crosscut several contexts, then we would
like to be able to define abstract entities that represent several different elements,
without being constrained to claiming that this abstract element in some way reflects
something ‘real.’ Moreover, we should also like to posit purely hypothetical constructs
as part of our explorations; these, too, need not be claimed to be real.
With respect to how modeling knowledge is extended to knowledge of the real
world, we rely on the role of modeling in the scientific process described earlier. One
implication of this, however, is that the toolkit should be flexible enough to permit
an increase in our knowledge over time, as our explorations move forward and the
cyclic process of science goes through multiple iterations.
3.6 Discussion
I have proposed that a toolkit for exploring archaeological complex systems should
be a mixture of approaches, drawing from an older program of Logical Positivism
but accommodating, when possible, the Postpositivist critiques of Quine and his
successors. A final light can be cast on this by considering Premo’s concern for
equifinality in models. He proposes that there is a philosophical reason to believe
that there is no way to discern which of a potentially infinite set of models that can
give rise to the attested C was the one that actually happened. This is, briefly, a
case with many Ax ⇒ C relationships, which we have seen before in the context
of abductive reasoning. I am willing to concede that this is true in principle, but
in practice I believe it is not devastating. If we assume that we are to start down
the road to an infinity of models all of which are equally likely to give rise to the
archaeological record to the full degree that we know it, and that every improvement
on our knowledge- every result of every research program designed to resolve issues
raised by our previous knowledge, merely gives rise to a new set of models which
is a subset of the first and yet is still infinite- then we are simply doing two things.
First, we are acceding to Quine that we cannot modify our understandings piecemeal,
but rather in whole cloth, and must look at the totality of our knowledge (including,
I would argue, the results of our pursuits of all of the other goals) and then,
sensu Fogelin, come to a conclusion (if any can be reached at all) based on the
’best’ explanation. Second, we are surrendering both the instrumental and realist epistemologies: we are denying both that at the end of our investigations our concepts will match ‘reality’ (the realists’ hope) and that they will have utility (the hope of the instrumentalists).
At the heart of Premo’s concern, however, are our own limitations: if we reach a
day when we are able easily to envision many mutually exclusive models for any given
archaeological record, and thus to duplicate the recent contribution of complexity
theory by adding more and more hitherto unexplored possibilities, until every model
seems equally likely, then, yes, equifinality will become severely problematic: we will
know nothing for knowing too much.
Against this I argue that the scientific process of exploration and refinement is one
in which we can have faith. Our understanding may be limited, and we may someday
find out that we are very far from the kind of understanding we seek; but this is no
cause to abandon the project. The appropriate approach, I believe, lies in a fluid
research program that moves flexibly from abstract to concrete, from the context of
discovery to that of justification, and from the pursuit of what actually happened in a
specific case to the larger question of what underlying patterns the actual past might
merely reflect.
3.7 Conclusion
The ‘exploratory’ approach is novel and unfamiliar ground to archaeologists, and it
contains pitfalls of which they are rightly wary. The goals it pursues are quite
far from traditional archaeological approaches. Because of this, perhaps the most
difficult aspect of this proposed intellectual toolkit is acceptance. Many of the goals
of an exploratory approach- for example, the idea of alternative histories- are outside
the traditional goals of archaeology. But acceptance will be advanced if the goals are outlined clearly, the operations to achieve those goals are well justified and thoroughly defined, and the claims made on the basis of the results are not overstated.
The next chapter will turn to a software environment designed to meet the criteria
and permit the operations outlined above, and thus enable the exploration of the
Hohokam trajectory.
Chapter 4
The ABCM Modeling Framework
In the preceding chapter I proposed that an exploratory approach to an archaeology
of potential complex systems would demand a new intellectual toolkit. I alluded to
the fact that this would inevitably be a computer simulation toolkit, but focused on
the intellectual components- how we think about problems, the ranges of permitted
arguments that we might make about them, etc.- rather than any implementation- the external tools (software) that we would use to help us perform these intellectual
operations. In this chapter I turn to the issue of implementation, and describe the
software toolkit that I created for exploring the Hohokam context. The software is
compartmentalized so that the modeling framework, which corresponds in a general
way to the intellectual toolkit, is separate from those components that are specific
to the Hohokam context; I will follow this division here, and in this chapter discuss
the toolkit in the abstract before turning to concrete examples from the Hohokam
context in the next. Roughly, the software is a collection of building blocks from
which varying examples can be made; in this chapter I focus on the structure of those
blocks and how they can be variously assembled, and in the next I turn to examples
drawn from the Hohokam context and made out of these pieces.
4.1 The ABCM Toolkit: A database-centered approach
I refer to the modeling framework as the Assertion-Based Computer Modeling toolkit,
or ABCM [1]. The most important and fundamental aspect of the toolkit is that it links
more traditional simulation models with a database. Linking a simulation model to a
[Footnote 1: Source code and documentation for the ABCM framework will be made available via the author and from the Global Institute of Sustainability at Arizona State University.]
database resolves an array of practical problems having to do with data management:
it becomes easy to store input data and associate it with output data. All simulations use some structure to achieve the same goal. What distinguishes the ABCM
framework is that the database is not merely practical, but is applied in a manner
that matches the theoretical framework given in the preceding chapter.
In the preceding chapter I discussed certain aspects of the intellectual history of the
20th century, and noted that the intellectual tools we use have a history that shapes
what they contain and how we interpret them. The term ‘database’ is no exception.
It is commonly used to denote any collection of information. Somewhat more appropriately it can be used to denote a structured collection of information; a number of software products exist that refer to themselves as database software systems, and
these implement a range of different kinds of structures. However, the trajectory of
these products has been shaped by market forces in a direction that emphasizes
general applicability and usability over theoretical consistency. A body of database
theory exists, but most software packages either ignore it completely or routinely
violate its tenets.
That theory is termed ‘Relational Database Theory’ and was developed in the
early 1970s at IBM by Edgar F. Codd (1970, 1979, 1990). The theory was originally
designed to address a practical issue in database management, an issue called an
update anomaly. This problem arises when a database contains two copies of a single
piece of information: if one of these is changed but the other isn’t, the database
becomes inconsistent and contains at least one piece of information known to be
incorrect. There are, occasionally, good practical reasons for a database to have two
copies of one piece of information; usually this is done to improve the speed at which
data are retrieved. But there is no good logical reason for it, and Codd, focusing on
this logical aspect of a database, created a theoretical edifice about what a database
is that is quite useful in the ABCM framework.
Briefly, Codd argued that a database should consist of statements that are known
or assumed to be true. The issue of duplicate data was to be avoided in part because
this simply led to redundancy. However, by approaching databases as logical structures, rather than as merely data storage and retrieval structures, Codd was opening
a doorway to a very complicated theory- one that eventually was to be ignored by the
vast majority of database software packages and their users. Relational Database Theory defines a set of rules for
creating and arranging sets of information so that the resultant collection of data has
clear logical structure; it also defines sets of operations that can be performed on this
collection in accord with principles of formal logic. What is striking about the theory
is that it begins with abstract principles but works with and through the meaning of
the information to be collected, to arrive at a refinement of that information and an
array of permissible operations on it.
One simple example can be provided. Suppose a database were asked to contain
three pieces of information: the volume of dirt removed from the first and second
levels of an excavation unit, and the total volume of dirt removed from both levels
together. Presented in a table (units assumed to be liters), this might appear as:
Level 1 Vol    Level 2 Vol    Total Vol
7              4              11
However, suppose the volume in the second level were to be found to have been
entered inaccurately. Changing it- and only it- would result in something like:
Level 1 Vol    Level 2 Vol    Total Vol
7              5              11
If the total volume is assumed to be the sum of the volume in the other two levels,
there is now, clearly, a problem. One solution is to change the 11 to a 12. A better
solution, however, is to not store the 12 at all: it is dependent on the other values, and
hence storing it is effectively the same as storing a piece of duplicate data. Instead of
being stored, it should be recalculated whenever it is needed from the fundamental
data on which it is based.
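The same discipline can be sketched in code (the class name here is hypothetical): only the fundamental level volumes are stored, and the dependent total is recalculated on demand, so that it can never disagree with the values on which it is based.

// Hypothetical sketch: store only fundamental values; derive dependent ones.
public class ExcavationUnit {
    private final double level1VolumeLiters;
    private final double level2VolumeLiters;

    public ExcavationUnit(double level1VolumeLiters, double level2VolumeLiters) {
        this.level1VolumeLiters = level1VolumeLiters;
        this.level2VolumeLiters = level2VolumeLiters;
    }

    // The total is never stored; it is computed whenever it is needed.
    public double totalVolumeLiters() {
        return level1VolumeLiters + level2VolumeLiters;
    }
}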
The same principle can be restated in a different way: the database should store
only fundamental values and the rules from which dependent data can be deduced
from them. In practice this meant that an elaborate architecture was needed to
define how information could be resolved into fundamental pieces. Mathematical
dependencies such as the one shown above are not the only kind of dependency,
and real-world examples offer many other variants. In most cases the data can be
envisioned as collections of tables; typically rows in each table represent individual
entities and columns represent attributes of that entity (in the example above, the
‘entity’ would have been the excavation unit, and the ‘table’ presumably could have
contained an indefinite number of such units). Commonly the tables are related to
one another, such that each row in one table may have several related rows in another
(e.g. a list of ‘persons’ exists in one table, and each person has several ‘phone numbers’
which are listed in another table, each number in a different row). The process of
examining a collection of related concepts and formalizing an implementation of a data
structure that reflects the dependencies among these concepts is called normalization
(see Eiteljorg 2007, Date 1998, and Codd 1990, inter alia).
Most ‘database’ software today can accommodate this kind of structure, and
many people, making use of this software, believe that the ‘Relational’ in ‘Relational
Database Theory’ refers to the relationships that are created linking the tables to one
another. In fact, a ‘Relation’ is a term drawn from set theory; a relation is a special
kind of set. In structure it roughly matches a table as just described, but it is bound
by several other rules; the most important is that as a set-theoretical construct it is
forbidden to have two member sets that are identical in membership. This makes it
usable as a component in a set-based logic; Codd developed a database logic based
on set-theory and related to a body of logic called the predicate calculus.
It is beyond the scope of this essay to delve deeply into the predicate calculus. For
our purposes several points will suffice. First, the predicate calculus was a new kind
of logic that was developed at about the same time as the rise of Logical Positivism
at the end of the 19th and beginning of the 20th centuries. It replaced syllogistic
reasoning with a more general symbolic reasoning. Two particular advances were
the inclusion of two ‘quantifiers’: the existential and universal quantifiers. Roughly,
the existential quantifier amounted to the claim “There exists some x”; the universal
quantifier amounted to the claim “For all x”. Hence one could take statements and
represent them symbolically. Thus:
“All sheep are white”           ∀x : x ⇒ y
“Some sheep are not white”      ∃x : x ⇒ ¬y
Translated directly into English, the two symbolic statements say “For all x, x
implies y” and “There exists some x such that x implies that y is not the case.”
These two statements are logically inconsistent; this inconsistency can be seen in
the symbolic representation, without reference to the real-world concepts of sheep or
whiteness. Another way to say the same thing is to note that any real-world elements
substituted into the symbolic representation will also be inconsistent: “All birds are
black; some birds are not black”; “All pigs are fat; some pigs are not fat.”
The predicate calculus includes a collection of legitimate operations that can be
performed on the symbolic representations; for example:
∀x : x ⇒ y
transforms to:
¬∃x : x ⇒ ¬y
The latter means “There does not exist any x such that x implies that y is not the
case”. To return to the vocabulary introduced in the preceding chapter, it allows the
deduction of a priori, analytic knowledge; one does not need to know whether the
concept of ‘whiteness’ is ‘contained in’ the concept of ‘sheepness’ to see the inconsistency in the two statements; we arrive at the knowledge that the two statements
cannot both be true in a purely deductive way. In a leap that far exceeds the scope
of our attention, set theory and the predicate calculus can be used to create the fundamental rules of mathematics; algebra can be stated in terms of sets and operations
on them.
Codd envisioned the same collection of operations within databases. Most modern
databases treat collections of information and operations on them as tasks performed
on lists; hence the concept of ’filtering’ a list is merely the removal of certain elements
from the list according to set criteria, and the database performs this by mechanically
moving through the list in a way that is intuitive to users (alas, perhaps especially
intuitive for users accustomed to performing such drudgery themselves). But Codd’s
original formulation viewed these and similar operations as akin to an ‘algebra’ of
logic, and described a collection of operations that included addition, subtraction,
multiplication, and division: hence one table could be ‘divided by’ another. The
solitary and inflexible rule was that the result of any operation on one relation must
be a new relation- that is, it must conform to the same rules as any other relation.
The result of this was that operations on relations could be chained together in long sequences. If
the initial data were true, and the operations performed properly, the conclusions
could also be known to be true. In keeping with the belief that he was creating a
logical structure, Codd termed the initial, stored data, ‘assertions’: they were the
fundamental propositions assumed to be true and from which deductions were to be
performed.
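A small sketch can make the closure property concrete. The class below is purely illustrative (it is neither Codd's formalism nor ABCM code): a relation is represented as a set of tuples, and the 'selection' and 'projection' operations each return a new relation, so that operations can be chained indefinitely.

// Illustrative sketch of relational closure; not drawn from the ABCM source.
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class Relation {
    // A relation is a set of tuples; as a set, it cannot hold duplicate members.
    private final Set<Map<String, Object>> tuples;

    public Relation(Set<Map<String, Object>> tuples) {
        this.tuples = tuples;
    }

    // Selection: the tuples satisfying a condition, returned as a new relation.
    public Relation select(Predicate<Map<String, Object>> condition) {
        return new Relation(tuples.stream().filter(condition).collect(Collectors.toSet()));
    }

    // Projection: only the named attributes are kept, again yielding a relation
    // (and again discarding any duplicates that result).
    public Relation project(Set<String> attributes) {
        return new Relation(tuples.stream().map(t -> {
            Map<String, Object> projected = new HashMap<>(t);
            projected.keySet().retainAll(attributes);
            return projected;
        }).collect(Collectors.toSet()));
    }
}

Because each operation returns a relation, a query such as selecting the non-white sheep and then projecting their numbers is simply a chain of calls; if the stored tuples are true and the operations are performed properly, the chained result is true as well.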
There are more than superficial resemblances between this and the concepts of models
and modeling discussed in the preceding chapter. First, it should be clear from the
example of sheepness and whiteness that the predicate calculus is precisely the kind
of logic for which the syntactic definition of a model is a fit. In the example, the
‘theory’ corresponds to the symbolic representation of ∀x : x ⇒ y and ∃x : x ⇒ ¬y.
Interpretations are the replacement of variables with terms, such as “x = sheep, y =
white” or “x = birds, y = black”. The model consists of all interpretations such that
the theory is consistent- in this case, the model is the empty set.
Second, and more deeply, the definition of a theory begins with the specification
of the form that assertions can take. This precedes the actual assertions themselves.
These are existential assertions; they can be visualized by imagining the table structure that will contain them (though, again, this is an imperfect representation of the
underlying logic). For our ‘sheep’ example, we could imagine:
Sheep No.    Color
1            White
2            White
3            Not White
Prior to making the assertion that “Sheep Number 1 is White”, one must define the
universe of discourse: in this case, one must make a table of “sheep”- thus making the
claim that there will be something in the world that will be called a sheep- and one
must define an attribute called ‘color’. The attribute will have a set of permissible
values; Codd, again drawing from set theory, termed this a domain. In this example,
the domain is simple: a sheep’s color can be either White or Not White.
To relate this to the definitions of theory, interpretation, and model is straightforward: The theory defines the table structure; each entry in the table is an interpretation; the collection of values in the table as a whole is the model for the
theory.
This may not seem to make sense, but this is because only the existential issues in
the theory have been addressed so far: we have only defined what can be. The other
two statements make additional restrictions that go beyond mere existence. So far
the theory only claims that sheep exist and can be white or not white. The first claim
above was that all sheep should be white (or, there should be no non-white sheep).
This means that an operation that provides the set of non-white sheep should give
the empty set; in fact the model for this theory is not the empty set, as it includes
sheep number 3. Thus, this theory is disconfirmed. The second statement claims that
there should exist some sheep that is non-white; the model for this theory is also sheep
number 3, supporting this theory. (In the context of this example disconfirmation and
support are not rigorously explored, but it should be clear that the two statements
given are mutually exclusive and exhaustive and thus could form a clear framework
for hypothesis testing.)
Building from this we can return to a key point from the preceding chapter: the
instrumental construction of theoretical units. The definition of any unit of analysis
includes a range of acceptable values- a domain. These can be nominal or numerical,
discrete or continuous, or otherwise constrained, but whatever their nature they form
a crucial component of the theory they are to be applied to and the operations
that can be performed on them- in fact, it is probably more accurate to say that
the definition of the units must encompass the collection of legitimate operations on
entities described with those units. The database framework makes this not only
possible but quite formal.
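A sketch of this idea (with hypothetical names): a nominal domain can be expressed as an enumeration and a numerical one as a range check, so that only values drawn from the declared domain can ever be asserted.

// Hypothetical sketch: domains as constraints on the values that can be asserted.
public class SheepRecord {
    public enum Color { WHITE, NOT_WHITE }   // the domain of the 'color' attribute

    private final int sheepNumber;
    private final Color color;

    public SheepRecord(int sheepNumber, Color color) {
        if (sheepNumber < 1) {   // domain of the identifier: positive integers
            throw new IllegalArgumentException("Sheep number must be positive");
        }
        this.sheepNumber = sheepNumber;
        this.color = color;      // only a value from the declared domain is possible
    }
}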
Finally, the database framework described here can be easily linked to the overarching theme of the preceding chapter. Models arrive at conclusions in the form:
A ⇒ C
or, “assuming A to be true, C is the result.” Databases act in exactly the same
manner: assuming the theoretical structure defined by the relations in the database
(including the entities they define and the domains for the attributes of those entities), the database can find logical implications of these by carrying out sequences
of deductive operations. The knowledge they create remains a priori and analytic.
What is missing, perhaps, is the idea of a ‘simulation’; when one thinks of a simulation one generally thinks of something quite different from what one imagines as a
database operation. In fact, however, this distinction is false. Every simulation model
could be rewritten as a collection of database operations. This claim is theoretical
and I will not attempt to support it with an example, but interested readers should
note that two recent agent-based modeling frameworks, Simphony (North et al. 2005) and metascape (see http://www.metascapeabm.com), are incorporating database-like
query languages to allow agents to interact; simpler models (like cellular automata)
could be even more easily fit into a database framework. The cost would be immense
in practical terms because databases are profoundly slow when compared to most
simulation code, but there is no logical obstruction. In effect, the strategy adopted
in the ABCM framework is to store the initial input data and the results, along with
enough data to reconstruct the operations of the simulation in between; that the operations are undertaken by code that is not directly implementing database operations
is merely a convenience.
4.2 The ABCM framework in detail
The previous section hoped to show that a database framework is an appropriate
conceptual framework for our thinking about simulations. In this section I will expand
on this to address the larger challenges raised in the preceding chapter for the task
of exploring archaeological complex systems. The ABCM framework is designed
specifically to permit the most fundamental operation required in an exploratory
approach: the interchange of simple and complicated components in the simulation,
such that the divergent goals of an exploratory approach can be pursued.
The ABCM Modeling framework is a stand-alone software framework that exists
independently of the other components of the Hohokam Water Management Simulation. The ABCM framework can be obtained and used to create any other simulation,
and would provide the user with the suite of functions and flexibility described below.
One important aspect of the ABCM framework is the ability to provide specific details that the simulation will use in a given context, and allow the ABCM framework
to create the entirety of the supporting database framework for input and output
data storage. For more details, see the ABCM User’s Manual, in the appendices.
In the next several sections, I will describe the specific components of the ABCM
modeling framework that are used to compose the picture of both the A upon which
the simulation operates and the resultant C. The first of these is divided into several
specific components that allow for the kinds of flexible operations that an exploratory
archaeology demands, without breaking the integrity of the A ⇒ C connection. The
second is composed from simulation output, generically stored in a format that allows
for larger collections of output to be assembled in meaningful ways.
4.2.1 Code Models and the ‘Cartoon’
The core operations of an ABCM model are assumed to be some collection of code
that is run repeatedly; each iteration is a successive ‘step’ in the model, and in general
the time frame is assumed to equate one step to one day [2]. Typically this core code
is referred to as the simulation’s ‘cartoon.’ It outlines in a very general way which
components of the simulation are to be allowed to act, and in what sequence. The
term ‘cartoon’ is intended to draw attention to the abstract character of the simulation
effort: no attempt is being made to enforce ‘realistic’ detail. The cartoon is the most
fundamental aspect of the simulation’s dynamic function; it is the framework on which
all other components are supported.
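In outline, a cartoon is nothing more than a loop of the following kind. The component names below are hypothetical rather than the actual HWM components: each pass through the loop is one simulated day, and the only thing the cartoon fixes is which components act and in what order.

// Hypothetical sketch of a 'cartoon': a daily loop over components in a fixed order.
public class Cartoon {
    interface Rainfall    { void act(int day); }
    interface PlantGrowth { void act(int day); }

    private final Rainfall rainfall;
    private final PlantGrowth plantGrowth;

    public Cartoon(Rainfall rainfall, PlantGrowth plantGrowth) {
        this.rainfall = rainfall;
        this.plantGrowth = plantGrowth;
    }

    public void run(int days) {
        for (int day = 0; day < days; day++) {
            rainfall.act(day);      // components act in a general, fixed sequence,
            plantGrowth.act(day);   // with no claim to 'realistic' detail
        }
    }
}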
The actions of each of these components are the first interchangeable elements of
the ABCM system: variations of each dynamic component can be substituted freely.
Aside from a purely practical consideration- code will be found to have errors and will
have to be replaced (as surely as night follows unto day)- this allows one crucial aspect
of the exploratory program to be implemented: the ability to move from complicated
to simple model elements as needed.
Given that the operation of the simulation is conducted in Java, the database
element of this must store a record of the Java code executed. The framework for
doing this is relatively simple. One structure in the database contains a list of the
optional code elements; a second contains a list of the multiple versions available for
each element; and a third contains sets of combinations selected to be used in specific
runs.
[Footnote 2: Modification of this, so that a step is longer or shorter, is also possible, but no current code structure anticipates this at this time.]
Thus:
Element         Description
Rainfall        How rain is apportioned
Plant Growth    How plants grow
is accompanied by:
Element         Option    Description of Option
Rainfall        Even      Rain is distributed evenly across the landscape
Rainfall        Spotty    Rain falls heavily in selected spots
Plant Growth    Steady    Plants grow steadily whenever there is water and stop growing if no water
Plant Growth    Burst     Plants grow in bursts if they are given heavy amounts of water
These tables allow a Code Model to be constructed:
Element         Option Used
Rainfall        Even
Plant Growth    Burst
Any simulation run making use of this Code Model [3] will use the ‘Even’ rainfall
option and the ‘Burst’ plant growth option.
The impulse behind this structure is that as new versions of each option are considered they can be added to the code, then integrated with the larger framework
seamlessly. Doing so does require additional programming and direct modification of
[Footnote 3: In fact, the database structure that stores Code Models would store several, with the actual table structure being: Code Model Name; Element; Option.]
the existing code; the ABCM framework is not a framework that automatically generates code or manages the writing of code in the way that some other frameworks do
(e.g. metascape- see above). However, as the number of substitutable elements grows,
and the number of options available for each element grows, the ABCM framework
provides a simple way to manage useful or interesting combinations of these options.
In the code the ABCM framework provides a simple way to specify switches from
among options, and to link these switches to the database. (One additional note:
no restriction is placed on ‘switches within switches’: a version of code that has two
alternate tracks within it. A preferable approach might be to create two alternative versions, as this avoids carrying switches in a code model in which they go unused, but this is optional.)
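A sketch of how such a switch might look in the code (hypothetical names; the actual ABCM linkage is described in the User's Manual): the database supplies only the option names selected in the Code Model, and the code maps each name onto one of its interchangeable implementations.

// Hypothetical sketch of a code model switch; option names come from the database.
import java.util.Map;

public class CodeModelSwitch {
    public static Runnable rainfallFor(Map<String, String> codeModel) {
        String option = codeModel.get("Rainfall");   // e.g. "Even" or "Spotty"
        switch (option) {
            case "Even":
                return () -> { /* distribute rain evenly across the landscape */ };
            case "Spotty":
                return () -> { /* concentrate rain in selected spots */ };
            default:
                throw new IllegalArgumentException("Unknown Rainfall option: " + option);
        }
    }
}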
One shortcoming of the ABCM system may be noted in passing: the specific code
options employed define both the state space of the simulation and the transition
rules through that state space. Some versions of specific code options may express
only variations in the transition rules while others may vary in the state spaces they
define. In the previous chapter it was argued that there are reasons to classify and
analyze simulations that share a state space but differ on the transition rules; it would
be a convenience if the ABCM framework provided means to indicate whether two
such variations could be so classified. This is not provided; it is the responsibility of
the user to analyze results of runs employing various code model option combinations
properly.
4.2.2 Configurations
A Configuration refers to some collection of base data that is to be used by the
simulation. From the point of view of the ABCM framework, the Configuration is
open-ended; the implementation of a Configuration depends on the details of the
problem to be solved. I will discuss in the next chapter the Configurations available
in the Hohokam simulation, but for now I can mention that this consists primarily of
collections of plants with specific values for their behavior.
4.2.3 Histories and Narratives
Configurations primarily store static data: the characteristics of plants do not (in the
HWM) change over time. In contrast, Histories and Narratives store data that represent changes that occur at specific, scheduled times during the run of the simulation.
The structure used to achieve this permits a wide flexibility. It begins with a set
of commands that are permitted to be used during the course of the simulation run;
these can be assembled into time-based sequences called Narratives, and Narratives
assembled in groups into collections called Histories.
Commands An ABCM model includes a list of commands that must be defined and
provided to the ABCM database. These commands specify the actions that the
model can undertake; commands may include up to 10 arguments. The ABCM
framework provides a number of generic commands (e.g. ‘Step forward one unit in
time’); commands specific to the implemented model can also be provided. Three
uses of such commands are important: as Probes (to be discussed below); as actions
that agents can undertake (see Chapter 6); and as actions that can be specified in
Histories and Narratives.
Narratives The Narrative is the more fundamental unit. The idea of a Narrative is
drawn from work in an earlier simulation [4]. In the database, a Narrative is stored in
two related tables. The first of these includes a name to identify the Narrative and a
‘Base Date’; the Base Date is a simulation date on which the Narrative begins. The
second table contains the list of instructions that are to be executed, each marked
with the date on which it is to be executed. A Narrative can have any number of
[Footnote 4: The SAIL simulation on which I worked with Steve Lansing.]
instructions that are executed on a given date; each instruction has an additional
number that indicates the order in which it is to be executed from among other
instructions within that Narrative on the specified date.
A very simple Narrative would have a data structure that looks like:
Narrative Name: Sample-Narrative
Base Date: 01-JAN-1000-AD
Instructions for ‘‘Sample-Narrative’’:
01-JAN-1000-AD   1   (do something)
01-FEB-1000-AD   1   (do something)
01-MAR-1000-AD   1   (do something)
01-MAR-1000-AD   2   (do something else on the same day)
Note that this very simple Narrative provides instructions on three days, the first days
of January, February, and March of 1000 AD. On March 1st, the Narrative provides
two instructions, which are executed in sequence.
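In code, a Narrative of this kind might be represented roughly as follows; the names are hypothetical, and the real implementation stores the same information in the two database tables described above. The essentials are a base date plus a list of dated, ordered instructions, with the instructions for any given day executed in ascending order.

// Hypothetical sketch of a Narrative: dated, ordered instructions.
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class Narrative {
    public record Instruction(LocalDate date, int order, String command) {}

    private final String name;
    private final LocalDate baseDate;
    private final List<Instruction> instructions = new ArrayList<>();

    public Narrative(String name, LocalDate baseDate) {
        this.name = name;
        this.baseDate = baseDate;
    }

    public void add(LocalDate date, int order, String command) {
        instructions.add(new Instruction(date, order, command));
    }

    // The instructions scheduled for a given simulation day, in execution order.
    public List<Instruction> instructionsFor(LocalDate day) {
        return instructions.stream()
                .filter(i -> i.date().equals(day))
                .sorted(Comparator.comparingInt(Instruction::order))
                .toList();
    }
}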
Histories A simulation run includes exactly one “History.” Histories are sets of Narratives that are grouped together. A History is a list of the Narratives it includes;
however, there are two additional modifications. First, when a Narrative is used in
a History, it is assigned a new “Base Date”. All of the dates in that Narrative are
converted to new dates that are relative to the new Base Date. If our example from
above were included in a History but were assigned a Base Date of 01-JAN-0900-AD,
the instructions in that Narrative would be executed beginning on the first day of 900
AD instead of, as stored in the Narrative, 1000 AD. One implication of this is that
all Narratives could be created with Base Dates of 0 AD. This is a usage issue, and
it is more common for Narratives to be assigned Base Dates that carry reasonable
meanings.
Second, Histories can incorporate multiple Narratives. This raises the possibility
of conflict between the Narratives; it is possible for two Narratives to have instructions
that might execute on the same simulation date. Because of this, each Narrative
within a History is assigned an order; all instructions for Narrative 1 are executed
before those from Narrative 2, etc.
A History’s data structure is thus quite simple:
History: Sample-History
Narratives for ‘‘Sample-History’’:
1   Sample-Narrative   01-JAN-0900-AD
2   Other-Narrative    01-JAN-1000-AD
It is perfectly acceptable to have the same Narrative appear within a History several
times.
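The re-basing itself is simple arithmetic, sketched below with hypothetical names (the actual ABCM date handling may differ in detail): every instruction date is shifted by the offset between the Narrative's stored Base Date and the Base Date assigned to it in the History.

// Hypothetical sketch of Narrative re-basing within a History.
import java.time.LocalDate;
import java.time.Period;

public class Rebasing {
    public static LocalDate rebase(LocalDate instructionDate,
                                   LocalDate storedBaseDate,
                                   LocalDate assignedBaseDate) {
        // Offset between the Base Date stored with the Narrative and the Base
        // Date assigned by the History; e.g. 01-JAN-1000-AD to 01-JAN-0900-AD.
        Period offset = Period.between(storedBaseDate, assignedBaseDate);
        return instructionDate.plus(offset);
    }
}

Under this arithmetic, an instruction stored for 01-MAR-1000-AD in a Narrative with a Base Date of 01-JAN-1000-AD would, given an assigned Base Date of 01-JAN-0900-AD, execute on 01-MAR-0900-AD.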
Seasonal Narratives There is one special kind of Narrative that must be discussed
separately: the Seasonal Narrative. These are Narratives that are repeated every
year during the simulation run. A Seasonal Narrative’s instructions may have dates
associated with them, but the “year” portion is always disregarded, and the only part
of the date that matters is the day-month combination. If the base date is 01-JAN,
and if a History uses it with a base date of 01-JAN, the effect is the ability to specify
that certain instructions will be executed at specific times during the calendar year [5].
During execution of the simulation, Seasonal Narratives are treated differently in
two ways. First, it is possible to turn Seasonal Narratives ‘On’ and ‘Off’, so that
their execution may be suspended. Second, regardless of the order in which they are
listed within a History, all Seasonal Narratives in a given History are executed first,
[Footnote 5: It is also possible to specify other base dates and use Seasonal Narratives on yearly cycles that do not begin with January 1st, but this is uncommon.]
before the other Narratives in that History. Within each group, execution is always
in ascending numerical order, of course.
There are additional variations to the use of Histories and Narratives that will
be discussed in more detail in the next chapter, in the context of an example drawn
from the Hohokam simulation. For now, the effect of this data structure is key:
individual narratives can be created that depict specific sequences of events. These
can be ‘placed’ in different combinations within a larger ‘History.’ One effect of
implementing this as a data structure is that a domain in which wide variation is
possible is now equipped with a framework that allows categorical alternatives to be
implemented efficiently; the immense number of alternative histories that could be
created can be easily managed in a single data structure, which renders the various
instances more tractable for comparative analysis. A second, equally important effect
is that it is convenient to address questions of alternative trajectories by creating easy
variations on a single theme.
4.2.4 Parameters
The ABCM framework provides a structure that parallels that used for code model
switches as described above, but for individual parameters. Abstract parameters can be defined at the database level, and this definition can include ranges of
permissible values and units to be associated with the parameters; individual runs
select specific values for each parameter. The collections of specific values are assembled into sets called parameter sets, which can be reused across runs. The ABCM
framework makes the values of the parameters easily accessible within the code and,
as with the Code Models, stores the values used for every run of the simulation.
The use of parameters in simulation modeling is widespread, but the concept is actually rather poorly defined. The ABCM framework takes the position that there is an important
distinction to be made: parameters only modify transitions through state space; they
do not modify available state space. Only code can define state space, and only Code
Model Options are used in the ABCM framework to specify variations of state space;
parameters are not [6].
Note that a parameter may apply only to specific Code Model Options- that is,
other Code Model Options may not employ a given parameter at all. Parameters that
are set but never used are ignored. As a convenience, the database structure maintains
a record of which parameters are associated with which Code Model Options.
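A sketch of the idea (hypothetical names, not the ABCM API): a parameter definition carries units and a permissible range; a parameter set binds a specific value for a specific run; and the code merely reads that value when computing a transition, never using it to alter the state space itself.

// Hypothetical sketch of parameter definition, binding, and retrieval.
import java.util.HashMap;
import java.util.Map;

public class Parameters {
    public record Definition(String name, String units, double min, double max) {}

    private final Map<String, Definition> definitions = new HashMap<>();
    private final Map<String, Double> values = new HashMap<>();   // one 'parameter set'

    public void define(String name, String units, double min, double max) {
        definitions.put(name, new Definition(name, units, min, max));
    }

    public void set(String name, double value) {
        Definition d = definitions.get(name);
        if (d == null || value < d.min() || value > d.max()) {
            throw new IllegalArgumentException("Value outside the permissible range for " + name);
        }
        values.put(name, value);
    }

    public double get(String name) {
        return values.get(name);   // read by the code when computing a transition
    }
}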
4.2.5 Probes and Probe Sets
The preceding sections have focused on simulation input; I turn here to output. Data
must be extracted from the simulation as it runs or at the end of its run. The ABCM
framework includes a means to collect data and place it into a repository for analysis.
The framework can be described as a series of components:
• In the code, methods exist that return useful values. In contrast to some other
modeling systems (e.g. Simphony; North et al. 2005) these must be explicit,
but they may also usefully include summary values or complex calculations,
whereas the automatically generated probes of other modeling systems do not.
• Commands are created and added to the Command List that return these values.
• Probes are created that include a command to be used to retrieve a value and a
destination bin for that value. Probes additionally contain information on how
frequently they are to be executed (e.g. monthly, daily, or annually in simulation
time). Importantly, commands that are used by probes are assumed to be
without ‘side effects’: obtaining information about the state of the simulation
[Footnote 6: A programmer could violate this by using a parameter as a switch among Code Model Options. This is discouraged by the structure of the ABCM, to the point where it would require deliberate misuse of the framework to achieve, but it is not, strictly speaking, impossible.]
does not change the state of the simulation. One implication of this is that it
is possible to run the same simulation twice but collect different data on it.
• Probe sets are created; these are collections of probes that can be reused in
bulk.
• While the simulation is running, the Probes that were loaded as part of the
Probe Set are executed at their specified frequency. They dump their information into the appropriate bins, marked with the simulation time at which the
data were collected.
• At the end of the simulation, data from these bins is stored in the database,
associated with the specific simulation run that generated them. The run’s
Probe Set is recorded along with all of the other components of the run, and
hence it is simple to associate the output with the input that created it.
The ABCM framework provides a general-purpose method of storing simulation
results and determining the nature of those results. Special, separate output files
need not be maintained. The output of the Probes is stored in the database along
with the input to the simulation that created it; this links input (A) with output (C)
in a way that makes A ⇒ C easy to assess.
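The essentials of a Probe can be sketched as follows (hypothetical names, with an intentionally crude treatment of the calendar): a side-effect-free command that reads a value from the running simulation, a frequency at which it fires, and a bin that accumulates time-stamped observations for later storage in the database.

// Hypothetical sketch of a Probe; frequencies are handled only approximately here.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class Probe {
    public enum Frequency { DAILY, MONTHLY, ANNUALLY }

    public record Observation(int simulationDay, double value) {}

    private final Supplier<Double> command;   // reads state; must not change it
    private final Frequency frequency;
    private final List<Observation> bin = new ArrayList<>();

    public Probe(Supplier<Double> command, Frequency frequency) {
        this.command = command;
        this.frequency = frequency;
    }

    // Called once per simulated day; fires only at the probe's own frequency.
    public void maybeFire(int simulationDay) {
        boolean due = switch (frequency) {
            case DAILY    -> true;
            case MONTHLY  -> simulationDay % 30 == 0;    // illustrative only
            case ANNUALLY -> simulationDay % 365 == 0;   // illustrative only
        };
        if (due) {
            bin.add(new Observation(simulationDay, command.get()));
        }
    }

    public List<Observation> bin() {
        return bin;
    }
}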
4.2.6 Larger organization of these units
The preceding sections discussed Code Model Options, Parameters, Configurations,
Histories, and Probe Sets. If a simulation is thought to consist of collections of assertions, these are the various kinds of assertions that must be aggregated together. The
means of aggregating these components together is not arbitrary; the ABCM framework makes use of a database architecture to achieve a clean and useful organization.
A schematic of this structure is given in Figures 4.1 and 4.2.
Figure 4.1. Data structure underlying an ABCM Simulation Run. Arrows indicate
dependency of one element on another. The ‘Configuration’ chain is open-ended
because the ABCM framework can allow customized collections of configurations
that will depend on the specifics of the context of investigation.
Figure 4.2. Construction of an ABCM Argument. The argument is composed from
simulation output organized into Analyses and Summaries.
In the preceding chapter I briefly addressed, in the context of stochastic models,
the difficulty in drawing boundaries between components of a single ‘model’. The
ABCM structure divides each simulation run into three distinct components: a Simulation, a Parameter Set, and a Probe Set. The ‘Simulation’ contains two components
termed a Data Model and a Code Model. The latter was discussed above; the ‘Data
Model’ encompasses two of the other elements discussed above, namely Configurations and the Histories. Each of these, of course, is composed of subunits as well;
Histories are made from multiple narratives, as described above, while Configurations
are made of context-specific data.
Additionally, every Simulation Run is associated with a specific random number
generation seed; this is built into the ABCM architecture, even though theoretically
not every simulation will use stochasticity.
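The nesting in Figure 4.1 can be expressed compactly as a set of record types (hypothetical names, shown only to make the dependencies explicit): a Data Model wraps a Configuration and a History, a Simulation wraps a Data Model and a Code Model, and a Simulation Run adds a Parameter Set, a Probe Set, and the random seed.

// Hypothetical sketch of the aggregation shown in Figure 4.1.
public class RunStructure {
    public record Configuration(String name) {}
    public record History(String name) {}
    public record CodeModel(String name) {}

    public record DataModel(Configuration configuration, History history) {}
    public record Simulation(DataModel dataModel, CodeModel codeModel) {}

    public record SimulationRun(Simulation simulation,
                                String parameterSet,
                                String probeSet,
                                long randomSeed) {}
}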
This structure permits every combination of elements to be run, but it emphasizes
and constrains some kinds of variation over others. Once a ‘Simulation’ is constructed,
a set of entities (the Configuration), a set of operations (the Code Model), and a set
of external driver events (the History) is defined. This combination can be run easily with multiple Parameter Sets, and different data can be collected using various Probe Sets. It is somewhat more cumbersome to attempt to vary just the History, the Code Model, or the Configuration; within this subdivision, the ‘Data Model’ wraps the History and Configuration together, so that it is slightly more convenient to vary the Code Model against a single Data Model than it is to vary the History against the Configuration. The ABCM framework attempts to organize this and make it convenient and sensible, but ultimately the treatment of any two ‘models’ (i.e. models that do not use the same Simulation and Parameter Set [7]) as members of some class of model, and hence able to be treated in a specific way analytically, is up to the user.
[Footnote 7: Keeping in mind the comments in the preceding chapter on stochasticity.]
4.3 Completing the argument: Analyses and Summaries
In the preceding chapter I made reference to two situations in which multiple runs
of a simulation are classified together and grouped together for some type of analysis: the first was in the case of multiple runs of a stochastic model which could be
grouped together as examples of different paths through a common state space, and
the second was in the case in which two similar models were compared for verification
purposes. The ABCM framework has a built-in structure for grouping simulation
results into what are termed Analyses. The bins into which Probes deposit their data
can be individually brought into these collections; each bin remains tagged with the
information that identifies what it represents and which run created it.
Analyses can easily represent either:
P1:   A ⇒ C1 ∨ C2 ∨ C3 ∨ C4
in the case of a set of runs of a stochastic model, or:
P1:   A1 ⇒ C1
P2:   A2 ⇒ C2
P3:   A3 ⇒ C3
P4:   A4 ⇒ C4
in the case of several separate models whose results might be of interest to compare.
(Note that nothing precludes using an Analysis to group multiple results generated
by a single run.)
The ABCM framework provides another level just above this: the ability to group
Analyses together into Summaries. Summaries are sequences of Analyses connected
with text that supports some argument. In theory these could be used to demonstrate the implications of some collection of models or model runs. There is an equally
important, if mainly practical, function that the ABCM framework provides: a collection of multiple runs may involve many different combinations of all of the various
elements discussed above. The ABCM framework provides two additional components on the front-end of the argument that allow users to specify the sources and
character of the components: first, references may be entered that link the components to textual sources; second, any component can be annotated with commentary
that indicates why it was included or considered appropriate. At the end of the argument, the Analyses and Summaries are always presented with all of these references
and commentaries attached; the ABCM framework kindly organizes these comments
so that there is no repetition, even if an element is used in several of the multiple
runs being collected. In this way, the whole of the argument, from the fundamental
statements and source data- possibly akin to the ‘control statements’ of the Logical
Positivists- through the conclusions the simulations are being claimed to support, is presented holistically and completely.
An even more formal approach than the ABCM implements is possible within a
database framework. Additional database operations can be performed that distinguish among sets of results according to specific criteria. For example, all Simulation
Runs employing some specific ‘Simulation’ and values from within a specific range
for some specific parameter, yielding results within specific ranges, could be selected
using an appropriate database query statement. In effect, this extends the theory
defined by the database; recall that the ’theory’ is defined by existential statements
(the structure of relations or tables in the database) and sets of statements that lay
out logical implications or proposals from these, so by adding more statements the
theory is extended. The interpretations are the data inserted into the initial tables;
the ’model’ is the set of interpretations that are logically compatible with all statements of the theory. When a new statement is added to the theory, some of the results
that were acceptable within the more general model that encompassed all simulation
possibilities and results will be excluded by the requirements of the new statement;
what remains is the ‘model’ that satisfies the newly extended theory. This opens the
door to a formal approach to proposing hypotheses and disconfirming them within
the ABCM framework: if an extended theory does not include the results of some
runs that we consider valid, the theory can be rejected. For now, however, the Analysis/Summary structure lends itself more easily to arguments of a more traditional
kind, in which the output produced makes a case textually and graphically instead
of purely logically.
4.4 Conclusion: Databases as Platforms for Models, Theory, and Voice
Databases in their original formulation are platforms for theory construction and
logical argumentation; they thus form a sensible platform for the kind of modeling
effort involved in an exploratory approach to archaeology. Current database software,
however, is generally constructed for purposes other than logical argumentation; most
simulation operations, moreover, are better implemented in non-database software,
primarily for performance reasons. The ABCM framework attempts to take these
forms of extant technology and make them work in the way that makes the operations
of an exploratory archaeology convenient and clear.
However, the distinction between theory and model must be addressed again in
the context of a database implementation as I have described here. As described in
the preceding chapter, the syntactic view of the theory/model relationship is the basis
for how ‘theory’ applies in a database context. The semantic view, however, holds
that theory and model are of the same kind, with the distinction being mainly that
theories claim to be more correct and to better describe reality, and, as a consequence,
models are allowed to be incomplete or inconsistent- even to the point of being purely
hypothetical or counterfactual.
In the ABCM framework I employ the vocabulary of the syntactic approach because it provides a clearer structure to organize and evaluate simulation results. But
this may seem to be belied by the operation of modeling. If databases are used to
construct theory, how can they be employed to construct models? The resolution of
this paradox is the insistence that the claim that theories apply the ‘correct’ units
to arrive at ‘reality’ is misguided; instead, a full instrumentalist epistemology is embraced. This follows Dunnell (1971), who notes that theoretical units are always
essentialist, while reality is always materialist: essentialist concepts are abstracted
from time and space, and in their abstraction we can consider them identical, while
real objects are always unique. An essentialist approach may be employed and even
be appropriate in a field like physics, where it may be difficult even to imagine that
protons are anything but identical essences; but in archaeology, where every artifact
and every site is different from every other one, essentialism is sorely misplaced. All
attempts to construct ‘theory’ fall short of reality, and thus, in the semantic view,
are models.
One way to interpret this is simply that the claim of a realist epistemology is
entirely supplanted with an instrumental one; another way is that the situation is
the result of the primacy of modeling in the scientific process, so that models, even
though recognized to be imperfect, are moved to the fore. The problem can also be
likened to that faced by the Logical Empiricists, who hoped that the starting points
for their arguments would be absolute, but found instead that they were imperfect
and subjective. From whatever view, the database framework is an appropriate one
for simulation modeling, because the structure of its argument is simply ‘If all of these
assertions are true, then these are the implications.’ They remain assertions; their
connection to some external reality is a separate issue.
The assertions that must be marshalled in a simulation effort can be of varying
kinds, such as existential, static or dynamic. The structure of the ABCM framework
described above allows them to be interleaved in the way that the exploratory approach to modeling potential archaeological complex systems requires. But there is
one additional aspect to this that should not be left undiscussed. When the starting
points for the arguments become assertions, they are, in effect, given voice. In a collaborative environment, they can be made by individuals who can be recognized by a
larger community, and the construction of knowledge becomes one in which expertise
can be recognized: different assertions can be assessed differently depending on the
source. Even if a solitary individual is undertaking a modeling exercise, though, assertions may be considered to be of varying strengths. Provisional or counterfactual
assertions may be treated very differently from those held more firmly. The ABCM
structure accounts for this by allowing all initial assertions to be assigned truth values.
This framework employs the following categories:
• Certain
• Reconstructed
• Inferred
• Likely
• Possible
• Implausible
• Impossible
• Debugging
The use of these can vary somewhat. For example, one user may consider something ‘certain’ while another emphasizes that it is merely ‘reconstructed.’ ‘Debugging’ values are designated for testing the action of the code, but ‘Impossible’ is explicitly for testing counterfactual situations. For example, a value of 15 m/s² for acceleration due to gravity at the earth’s surface, when the actual value (which varies little by elevation) is 9.8 m/s², would be ‘impossible’ but might be theoretically interesting; a value of −10 m/s² would also be impossible but would be mainly useful for debugging.
The ABCM framework propagates all combinations of truth values in such a way that the final truth value of any given A ⇒ C scenario is equal to the least ‘true’ value of all of A’s elements (in the list above, the order is strictly from most to least true); this is referred to as poisoning the truth value. This allows users to be sure that hypothetical or merely interesting values are marked as distinct from reconstructive or more definitive efforts; others who later review the results or assess the individual components can do so with this demarcation in hand.8

8. In theory the truth value of any component could be malleable, and could change through time as individuals’ assessments of it change. This is not currently implemented; an element’s value is assigned by its creator and remains fixed.
This nicely illustrates the clarity offered by the database framework. Following
the hierarchy discussed above, a user of the ABCM framework might create a run of
a simulation using the following assertion:
Simulation Run Number | Simulation | Parameter Set | Probe Set
Run 1347 | Scenario 185 | Params 10 | CollectFieldData
Scenario 185 however consists of the asserted conjunction of a given data model and
a given code model; the data model will consist of a history and a configuration, and
each of these, in turn, is comprised of multiple subordinate components. Suppose a
portion of that configuration were only asserted to be ‘impossible’, and that this were
the ‘least true’ element of all of those in the run to be created. If the configuration
is impossible, any data model built from it will also be, as will any simulation built
from that data model; the resultant implication is the derived truth value:
Simulation Run Number | Truth Value
Run 1347 | Impossible
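To make the poisoning rule concrete, the following short sketch expresses it as taking the least ‘true’ member of a run’s components. It is illustrative only: the language (Python), the class and function names, and the numeric encoding of the categories are inventions for this example, not the ABCM implementation itself.

from enum import IntEnum

class TruthValue(IntEnum):
    # Ordered from most 'true' (lowest number) to least 'true' (highest),
    # following the list of ABCM categories given above.
    CERTAIN = 1
    RECONSTRUCTED = 2
    INFERRED = 3
    LIKELY = 4
    POSSIBLE = 5
    IMPLAUSIBLE = 6
    IMPOSSIBLE = 7
    DEBUGGING = 8

def poisoned(values):
    """A derived element is only as 'true' as its least true component."""
    return max(values)          # max() because larger numbers mean 'less true'

# Hypothetical run assembled from a configuration, a history, and a code model:
components = {
    "configuration": TruthValue.IMPOSSIBLE,   # a counterfactual configuration
    "history":       TruthValue.RECONSTRUCTED,
    "code model":    TruthValue.CERTAIN,
}
print(poisoned(components.values()).name)     # IMPOSSIBLE: it poisons the run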
Because the fundamental component of a database, per the original formulation
of Relational Database Theory, is the assertion, databases are entirely appropriate
platforms for the modeling effort. They allow the construction of thorough and rigorous chains of argument in a way that exactly suits the demands of an exploratory modeling approach. The ABCM framework provides a robust structure for constructing these kinds of simulation models on a database platform and making them into
complete simulation-based arguments. In the next chapter, I will examine how the
architecture provided by the ABCM framework is applied in the concrete example
provided by the Hohokam context.
Chapter 5
The Hohokam Water Management Simulation:
An ABCM Example
In Chapter 2 I outlined the problem domain that the simulation presented in this
dissertation is intended to address, but I argued that the specifics of that problem
domain, and the ways in which we have hoped to treat it, demanded a new modeling
solution. In Chapter 3 I addressed an array of epistemological issues that were involved in that solution and in a new, exploratory approach to archaeology. In Chapter
4 I outlined a model structure, the Assertion-Based Computer Modeling Framework,
that addressed the issues from the preceding chapter and allowed for the possibility
of an exploratory approach. The purpose of this chapter is to build on all of the preceding chapters and to present the components of the Hohokam Water Management Simulation (HWM) and show how they collectively enable the explorations necessary to attack the set of questions that are presented by the Hohokam case.1

1. Source code and documentation for the software discussed here will be made available via the author and from the Global Institute of Sustainability at Arizona State University.

The chapter begins by drawing attention to a small subset of the ABCM framework’s possible features; the HWM components discussed here will be drawn mainly from a handful of these. The description of the application of these elements
to the Hohokam case begins with an overview that outlines how the elements that
exist within the simulation were chosen, and how these choices were implemented into
the general ABCM framework. The intent of this is not to burden the reader with
additional software details, but rather to support the claim that the ABCM modeling
approach is a refinement of assertions about a given problem domain, and to show
how this is put to use in the Hohokam case.
The larger portion of the chapter lists the simulation’s elements. Each of the components to be described plays a specific role in the exploratory effort envisioned, and
the description here will explain these roles. Typically this will mean illuminating
why the component is important to the larger questions we have about the Hohokam,
and this is achieved by way of reference to the different proposals about the Hohokam
trajectory outlined in Chapter 2. A central issue in the exploratory approach is the
desirability of moving from abstract and highly simplified versions of each component to more detailed and elaborate versions, when appropriate; likewise, a central
benefit of the modeling framework employed is the ability to deploy these variations
by making use of the framework’s structures. Different structures permit different
kinds of variations, and different variations permit different questions; thus there is
an additional need to show that the structure selected is the appropriate one for the
given component.
To demonstrate this, the way each component is instantiated, and the way its construction makes use of the possibilities offered by the framework, will be described.
These descriptions will be brief; fuller discussions can be found in the appendices,
and the emphasis here will be to show that the framework allows the inclusion of
combinations of components constructed at the different levels of the modeling concerns given in Chapter 1: scope, resolution, and composition (coherence, the fourth
concern, is a property of the system created from these parts, which may or may
not be present, and may, if present, be built-in by the user or may be something
sought within the model’s operation). That is, the range of elements included in the
simulation, the detail with which they are composed and the coarseness or fineness at
which they operate, and their inclusion or exclusion in particular instances, may be
varied according to the part of the exploratory process underway. The result is the
ability to craft versions of the simulation that address questions of the four different
kinds discussed in Chapter 1 (what was, what was necessary, what might have been,
and why) and to pursue the range of goals of the exploratory approach I outlined at
the end of Chapter 3.
5.1 The Structure of the HWM Simulation
Chapter 4 laid out a larger picture of the ABCM framework that links many of its
structures to the construction of arguments and the epistemological framework of
Logical Positivism. Here we can concern ourselves with a subset of these structures,
those that are actively engaged in constructing the picture of the Hohokam; in a few
cases some of the other structures do make appearances, but they need not be our
focus. Instead we can work here with just three key groups of components. These
are:
• The History and Narrative structure, including Seasonal Narratives
• The Code Model structure, in conjunction with Parameters
• Configurations
Briefly, the History and Narrative structure assumes that the ABCM model provides Commands that can be used to direct action in the simulation. These Commands can be assembled into sequences called Narratives, and Narratives can be
assembled in different combinations to form Histories; these cause simulation events
to occur at specific times. Seasonal Narratives are sequences that repeat each year of
simulation time.
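The containment relationships just described (Commands within Narratives, Narratives within Histories, Seasonal Narratives repeating annually) can be pictured schematically. The sketch below is illustrative only; the Python class names, the day-based scheduling, and the sim.apply() dispatch point are assumptions made for this example rather than features of the actual ABCM code.

class Command:
    """A single directive to the simulation (e.g. 'set annual streamflow')."""
    def __init__(self, name, **args):
        self.name, self.args = name, args
    def execute(self, sim, day):
        sim.apply(self.name, day=day, **self.args)    # assumed dispatch point

class Narrative:
    """An ordered sequence of Commands, each scheduled at an offset in days."""
    def __init__(self, scheduled_commands):
        self.scheduled_commands = scheduled_commands  # list of (offset, Command)

class SeasonalNarrative(Narrative):
    """A Narrative whose offsets are re-applied within every simulated year."""

class History:
    """Narratives combined and anchored to absolute simulation days."""
    def __init__(self):
        self.entries = []                             # list of (start_day, Narrative)
    def add(self, start_day, narrative):
        self.entries.append((start_day, narrative))
    def run(self, sim, n_days, days_per_year=365):
        for day in range(n_days):
            for start, narrative in self.entries:
                for offset, cmd in narrative.scheduled_commands:
                    seasonal = isinstance(narrative, SeasonalNarrative)
                    if (seasonal and day % days_per_year == offset) or \
                       (not seasonal and day == start + offset):
                        cmd.execute(sim, day)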
Code Models are ways to specify which sections of simulation code are executed.
A common application of this is the creation of ABCM-based ‘Model Objects’. A
Model Object is one that has a collection of behaviors, but uses the ABCM Code
Model to select among different implementations of that behavior: different ways of
doing the same thing can be chosen as alternatives, as needed.
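A minimal sketch of this idea, with invented names standing in for whatever the ABCM code actually uses, might look like the following: a single behavior whose implementation is selected by a Code Model entry.

class PlantGrowthModel:
    """A 'Model Object' sketch: one behavior, grow(), with interchangeable
    implementations chosen by a Code Model mapping (names are illustrative)."""
    def __init__(self, code_model):
        self.code_model = code_model                  # e.g. {"grow": "linear"}

    def grow(self, water_mm):
        implementations = {
            "linear":    self._grow_linear,
            "threshold": self._grow_threshold,
        }
        return implementations[self.code_model["grow"]](water_mm)

    def _grow_linear(self, water_mm):
        return 0.01 * water_mm                        # growth proportional to water

    def _grow_threshold(self, water_mm):
        return 1.0 if water_mm >= 5.0 else 0.0        # all-or-nothing response

Swapping the Code Model entry from "linear" to "threshold" changes how the same verb is enacted without touching the rest of the simulation.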
Finally, Configurations represent static data stored in the ABCM-based database.
The word ‘static’ is key, however, for it should be kept in mind that all three of
these groups of structures store data, merely data of different kinds. Data, of course,
are ‘assertions’; the sets of commands that exist in Histories and Narratives, and
the code implemented in Model Objects, are assertions as much as the static data
in Configurations. The purpose of an ABCM model is to allow these three kinds of
assertions to be assembled into useful simulations.
Returning our attention to the HWM Simulation allows us to review a clear example of how this can be done. The software that underlies the simulation is divided
into four separate software packages; each of these plays a different role in refining
the three kinds of ABCM structures.2 The packages are dependent on one another in
a hierarchy. The ABCM package itself is the topmost component. A second package,
called the Flow and Agricultural package (FlowAg), consists of Model Objects built
to make use of the Code Model and Parameter structures from the ABCM toolkit.
A third package, called the Water Management package (WatMan), stitches these
components together into a simulation of irrigated agriculture, and defines the Commands that can be used on them. The fourth package is the HWM package proper;
it provides the link to the HWM database, and represents the implementation of a
WatMan simulation applied to the Hohokam context.

2. The division of software into packages is actually meaningless, except for two concerns. The first is simply allowing a programmer to keep components organized, a concern that is very important practically but rather less so theoretically. The second concern is that components that are packaged separately can, if assembled correctly, be reused in other contexts easily. None of the HWM components, to date, has been reused in another context, but the possibility that they could be remains open, and in other ABCM models this may become a key component allowing model interoperability and comparability.
To better understand this framework we can examine how it was derived. The
first step in the construction of an ABCM-style model is the establishment of what is
called the ‘cartoon’; this (see Chapter 4) is a distillation of the basic elements to be
included and their relationships, but with most of the detail excluded. It is effectively
a definition of the ‘nouns’ that will be instantiated in the model, and the actions they
can undertake, or the model’s ‘verbs’. The Hohokam cartoon was established to be:
In a landscape, water flows at some rate (called the streamflow rate)
through rivers, and is diverted through headgates into canal systems; the
water, probably carrying chemicals, flows through the canals, possibly
causing damage to the headgates or the canals, and possibly undergoing
seepage and evaporation; the water is delivered onto fields. Rain also delivers water to fields. In fields, water is used to grow crops that consist of
different plants. Plants grow in soil, and in response to chemicals present
in the soil and the presence or absence of water, in accord with specific
characteristics. Agents may construct, repair, maintain, and operate canal
systems, including fields, and may plant, tend, and harvest crops in those
fields.
The cartoon defines the simulation’s existential issues: what will be present. These
are highlighted in the text above to draw attention to them; they are the ‘nouns’
in the HWM Simulation, and they are the elements discussed in the text below.
The cartoons define these, however, only very generically: for each ‘noun’ there can
potentially be wide variation in the instances that are used to implement it, and
each variant may not only have a different expression within the simulation (such as
short or long canals) but may enact the verbs through different means (e.g., different
algorithms for water flow or plant growth). As an aside, it should be noted that the
cartoon is important not only for what it includes but what it excludes: the models
created through the proper use of the ABCM framework are not models that try to
include every possible element of the situation under study, they are merely flexible
models that will define a space in which an array of possibilities can be explored. A
well-crafted cartoon that can be used to address a wide range of variations, but does
not try to completely reconstruct reality in its entirety, is the best initial target.
Table 5.1 shows the hierarchy of the five software components (including the
HWM database) and places the HWM Simulation’s elements into association with
them. The implication that falls from this hierarchy is that assertions at the higher
levels are more general than assertions at the lower levels. One of the ways that
this plays out is that the upper levels make existential claims, while the lower levels
propose specific arrangements of the components that are permitted to exist. In
general the upper levels make a wide range of possibilities available, while the lower
levels manufacture workable simulations out of slices of those possibilities. This is
part of the exploratory approach: testing combinations of available components to
find those that are useful, workable, and informative.
The claim that this chapter ultimately intends to support is that the range of variations that can be created given this cartoon and the specifics of its implementation is
adequate to pursue the exploratory approach as outlined in Chapter 3, applied to the
Hohokam case. To that purpose I now turn to the elements in the HWM simulation,
and their implementations, in detail.
Package | Description | Includes
ABCM | The Assertion-Based Modeling Toolkit. Provides components for building an ABCM Simulation Environment. | Landscape
FlowAg | Flow and Agriculture Model Object Package. Contains collection of model objects for water flow through open channels and field agriculture. | Channel Cross-Section; Canal Segment; Canal System; Water; Field; Soil
WatMan | Water Management Model Package. Implements an ABCM Model by creating the Model object. | Commands; Basic Step Algorithm
HWM | Hohokam Water Management Simulation. Applies a WatMan model to a specific context, the Hohokam. | Links to database and interface
HWM db | HWM Database. Stores all data related to the HWM Simulation. | Plant Configurations; Narratives and Histories; Code Models

Table 5.1. Hierarchy of software for the HWM Simulation
5.2 HWM Simulation Components
The remainder of this chapter outlines the HWM Simulation’s components; this review brings the discussion more concretely back to the Hohokam case, and each
component will be dealt with in fairly specific detail. The goal of the list is to indicate how the inclusion of that element contributes to the exploratory approach that
is to be undertaken in pursuit of the larger understanding we hope to reach about the
Hohokam trajectory, and, more specifically, how the ABCM structure selected to implement that element makes this possible. To that end each component’s description
will include four elements: first, the role the real-world element being modeled is suggested to have played in the Hohokam trajectory; second, the ABCM structures
that are employed to create the simulated component; third, the data that we may
use to apply the component to the Hohokam case; and, fourth, the range of variation
those structures allow.
5.2.1 The Landscape
Role: The subsistence opportunities for the Hohokam were shaped by the landscape in
which they lived. The broad and mostly flat river valleys were key in allowing them
to build irrigation systems that splayed outward from rivers and stretched many
kilometers away from their heads. The map of the Salt River system illuminates the
way in which local topography dictates the shapes of the canal systems. The heads
are upstream, so that the east-west grade of the valley can be used to feed the canals.
However, the canals springing from the north side of the river arc mainly to the west,
running basically parallel to the river, while those from the south side move more
quickly to the south, avoiding the South Mountains and stretching downward toward
the Gila. As discussed previously (see page 51), a common approach in building the
canals was to follow the edge of a terrace, so that water could be dropped off the canal
and feed the fields on the downward slope; although in theory a canal can be built in
practically any landscape, the issue is cost, and a sensitivity to the local topography
is key. The Hohokam, however, did not only find opportunities in the flat valleys,
but also on the slopes of the hills and mountains around them. The land available in
these areas, the topography of washes and gullies draining them, and the proximity
of these different zones to one another all played a role in shaping the Hohokam’s
subsistence strategies.
Implementation: Because it is expected that ABCM models will commonly (though
not necessarily) take place on a landscape, the ability to create a basic model of terrain is built into the ABCM framework itself, and thus exists at the highest level of
the HWM hierarchy. Landscape specification is accomplished through the simplest
possible mechanism: a collection of points can be specified representing datum points
and assumed to represent the land surface. These points have x, y, and z coordinates, expressed in meters and corresponding to east-west, north-south, and altitude
values. There are very few complications to this framework; one, for example, is the
ability to specify that either the x or the y coordinates, or both, should be oriented
positively or negatively (so increasing values along the ‘y’ axis can go either north
or south), but aside from a few such examples the coordinate system includes none
of the complexities of a normal GIS system. This means that a landscape can be
specified using only a handful of values; three would be the minimum. On the other
hand, a landscape can be specified in much greater detail, if desired: datum points
drawing the contours of any complicated terrain can be added.
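A brief sketch of this minimal landscape idea, using Python and invented names for brevity (the actual ABCM implementation is not reproduced here), might look like this:

from dataclasses import dataclass

@dataclass
class DatumPoint:
    x: float   # east-west, in meters
    y: float   # north-south, in meters
    z: float   # elevation, in meters

class Landscape:
    def __init__(self, y_increases_northward=True):
        self.points = []
        self.y_sign = 1.0 if y_increases_northward else -1.0   # axis orientation

    def add_point(self, x, y, z):
        self.points.append(DatumPoint(x, self.y_sign * y, z))

# A gently graded valley floor sketched with three points (the minimum needed
# to suggest a surface); additional points add detail wherever it is required.
valley = Landscape()
valley.add_point(0.0,    0.0, 340.0)
valley.add_point(5000.0, 0.0, 335.0)
valley.add_point(0.0, 3000.0, 341.0)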
The action of adding points to (or removing them from) the model’s landscape is
dynamic. The creation of a landscape makes use of the ‘History/Narrative’ structure
of the ABCM framework. There is no specific file format for storing landscape data,
as with a GIS system. This was done intentionally to maintain the flexibility to
specify landscapes in any level of detail, but also because landscapes are expected
to change through time. The landscape narratives can be interchanged for different
simulation runs without necessarily altering other components of the simulation.
Data: Our knowledge of the terrain in which the Hohokam lived is now extensive;
there are large numbers of maps available, and these have been created from on-the-ground surveys and from remote sensing such as aerial photographs and satellite
imagery. But although the information is abundant, there are problems with employing it in the way we need. For example, there exists a digital elevation model for the
Phoenix basin, which purports to offer a very detailed model of the terrain of the valley (provided to me by the Global Institute of Sustainability; Peter McCartney, pers.
comm.). However, the resolution makes it difficult to use in the HWM Simulation:
elevations can only vary in 1 m intervals. The subtle topography that would have
been important in the placement of canals is washed out. However, the DEM and
other more traditional contour maps, both of the Phoenix basin and of other parts of
the Hohokam world, can be used to construct virtual landscapes by hand that match
the major features of the real Hohokam landscape.
Range of Variations: The flexible approach to creating topography in the ABCM
allows us to avoid the issues that fall from using DEMs or other GIS-type data sources
for our simulation, or, if not to avoid them, to confront them selectively, using only
data we need at only the level of detail we require. Moreover, the same approach
makes it easy to explore a variety of artificial landscapes and enquire how these
would theoretically have impacted the Hohokam (or, alternatively, how the Hohokam
case may be compared to other cases where the landscape was somewhat different).
The variety of landscapes that can be created is limited only by our imagination; this
leads to something that will be a recurring theme in this discussion, the necessity
of selecting, from a wide array of possibilities, some sets of useful combinations that
address categorical options of interest. In this case, the wide array of possibilities is
any possible terrain we might like to imagine, but the categories of interest are more
like “gently graded” vs. “steeply graded” or “flat” vs. “slightly hilly” vs. “quite
hilly”. The actual categories can vary according to the interests and purposes of the
researchers.
5.2.2 Rivers
Role: As the source for water in the irrigation system, the importance of rivers should
be obvious. Some critical aspects of rivers and the way they played their parts in the
Hohokam irrigation works and larger trajectory have already been discussed. First,
they offered the opportunity to draw water into canal systems, but this opportunity
was limited to specific points, reflecting variation along the length of the river in the
availability of water for use in canals. Second, these opportunities changed through
time, as river geomorphology changed due to processes like erosion and downcutting
(see page 65). Additionally the amount of water that rivers delivered varied from season to season and year to year; this is termed ‘Streamflow’ and is discussed separately
below.
Implementation: Rivers are defined as Model Objects in the FlowAg toolkit. Regardless of the inner options selected, all versions of Rivers perform the same set of
operations: they provide a sequence of points that can potentially be used as headgate
points; they indicate the amount of flow passing at these points, at any given instant;
and they are impacted by their flows such that their morphology may change. The
sequential nature of the headgate points is key because it structures a part of the
larger irrigation dynamics: water removed from a river at one headgate is unavailable
at the next one downstream. In this way even separate canal systems impact one
another. Rivers are created by narratives that specify the locations of these points
and set their values for streamflow as simulation time passes.
The issue of changes to possible headgate locations or maximum efficiency is not
yet addressed in the implementation of Rivers in the FlowAg toolkit; expansion to
do so is planned. Generally this may be achieved by endowing each headgate point
with a maximum possible efficiency for water removal, and then allowing certain flow
values to impact these as if the river were being downcut at that point.
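A rough sketch of this sequential-headgate-point idea (the names and structure are invented for illustration and are not the FlowAg code itself) is given below; it captures the way water removed upstream is unavailable downstream.

class HeadgatePoint:
    def __init__(self, name, location):
        self.name = name
        self.location = location        # (x, y) on the landscape
        self.offtake = 0.0              # m3/s currently diverted here

class River:
    def __init__(self, headgate_points):
        self.points = headgate_points   # ordered upstream to downstream
        self.inflow = 0.0               # m3/s entering at the upstream end

    def flow_at(self, point):
        """Flow remaining at a point: inflow minus all upstream diversions."""
        remaining = self.inflow
        for p in self.points:
            if p is point:
                return remaining
            remaining -= p.offtake
        raise ValueError("point not on this river")

salt = River([HeadgatePoint("upper", (0, 0)), HeadgatePoint("lower", (8000, 200))])
salt.inflow = 40.0
salt.points[0].offtake = 6.0
print(salt.flow_at(salt.points[1]))     # 34.0: water taken upstream is gone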
Data: The actual optimum locations for weirs and gates to divert water from
the Salt and Gila rivers have been long recognized, and long used.3 Research continues, however, on understanding how these points may have changed through time (following Waters and Ravesloot 2001).

3. Turney (1929b, p. 19) writes: “Ten modern canals follow the alignment of the ancient; three only of their headings remain unused today; and in no case has it been found feasible to divert water at any point which they had not utilized.”
Range of Variations: Because Rivers consist mainly of possible Headgate locations, the variations available in Rivers are mainly to vary the number, locations, and qualities of Headgates. The last kind of change may eventually include changes to
how the Headgate is impacted by flow events; some Headgates may be relatively permanent and others persist for only a part of the longer-term trajectory under study.
Varying the number and location of Headgate points offers different opportunities for
understanding the relationships among canal systems that must share water and may
need to interact in other ways, so that many possible Headgate points spaced closely
may suggest different relationships than a few distant ones.
5.2.3 Streamflow
Role: The quantity of water moving through the rivers upon which the Hohokam
based their irrigation systems was crucial to their agricultural efforts. Flow along the
Gila and Salt rivers was far from constant, and involved variation on at least three
time scales: flow varied seasonally within years; total flow varied from year to year;
and there are trends in the patterns of water amounts on the scale of decades and
even centuries. Seasonally, flow was maximized during times that upstream sources
saw melting snow or seasonal rains; this would have shaped the Hohokam agricultural
calendar. Annually, some years were relatively wet while others were comparatively
dry; this could mean the difference between a successful agricultural season, a season
in which production was lowered by lack of water, or a season in which flooding
damaged crops and canal infrastructure. And during the long course of the Hohokam
occupation, different strategies may have been required to deal with changes in the
high and low ranges and the frequency and predictability of wet and dry years.
Implementation: Two kinds of commands are used to adjust the value of streamflow available in the River object: commands of the first kind set the annual streamflow value, and those of the second kind set the daily flow value as a fraction of the
annual value. The first kind of command has two implementations, one that sets
the value absolutely and another that introduces some stochasticity by adjusting the
value by some fraction (determined by a parameter). The second command exists in
only one, bland flavor, setting the value exactly as specified.
These commands are executed as Narratives, allowing the construction of Histories that reflect water availability regimens in any combination needed. Seasonal
Narratives, which (per Chapter 4) are repeated each year, can create the intra-annual
variation for a given river’s streamflow, in effect simulating the different seasons during which flow reaches peak levels or is lowest. It is also possible (again per chapter
4) to make seasonal patterns stochastic, so that each year one of several patterns
is selected with some probability, but which pattern will hold is not knowable in
advance.
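The two kinds of streamflow commands, and the way a Seasonal Narrative might apply daily fractions to an annual value, can be sketched as follows; the function names, parameter values, and seasonal fractions here are invented purely for illustration.

import random

class RiverFlowState:
    def __init__(self):
        self.annual_flow = 0.0    # this year's flow index
        self.daily_flow = 0.0     # today's flow, annual_flow times a fraction

def set_annual_flow(state, value):
    state.annual_flow = value                        # sets the value absolutely

def set_annual_flow_stochastic(state, value, jitter=0.1):
    # the variant that adjusts the asserted value by some random fraction
    state.annual_flow = value * (1.0 + random.uniform(-jitter, jitter))

def set_daily_fraction(state, fraction):
    state.daily_flow = state.annual_flow * fraction  # the single 'bland' variant

# A Seasonal Narrative might assert higher fractions during months of snowmelt:
state = RiverFlowState()
set_annual_flow(state, 1.2)
for season, fraction in {"winter": 0.8, "spring": 1.6, "summer": 0.5}.items():
    set_daily_fraction(state, fraction)
    print(season, round(state.daily_flow, 2))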
Data: An extraordinary data set for streamflow exists (Graybill et al. 2006). The
actual pattern of wet and dry years has been reconstructed in detail from tree-ring
data, and from these have been calculated streamflow values for the Salt and Gila
rivers extending from early in the Hohokam occupation into the 1980s. The general
pattern of seasonal variation in flow is also known for both rivers (the rivers have
different sources, and their seasonal patterns differ slightly; Graybill et al. 2006).
The collective data set is one of the most impressive that can be brought to bear in
reconstructing the Hohokam trajectory.
Range of Variations: The range of variation that could be simulated encompasses
any level of specificity in reconstruction that could be imagined: streamflow values
could be set for each simulation day, if desired. But the ABCM framework offers
some useful ways to deal with this wide range of possibilities, and refine them into
interesting questions. Because of the way Narratives are assembled into Histories, it is
possible to take short sections of climate history that have been created artificially for
the purpose of investigation and overlay them on top of the reconstruction. A 30-year
drought, for example, can be created and superimposed on the real reconstruction at
any point.
The effect is that there is not only great flexibility, but a specific kind of flexibility
that makes the exploration of different kinds of variation possible. Considered in the
abstract, if Streamflow values could be set to any of 10 possible values, and could
be set for every year for 1000 years, the result would be 10^1000 possible variations, a
number so vast it goes beyond astronomical. In fact, the value for Streamflow could
be set to not just 10 possible values but millions (it is a decimal value usually between
0 and 2, but with no true limit on either maximum value or fineness of intervals),
and it could be set not only for every year but for every day. The division of the problem into two parts (annual flow vs. daily flow, the latter being specified by the Seasonal Narrative) splits this problem into not only something more manageable
but something manageable in a meaningful and intuitive way. The ability to create
Narratives that overlay one another is another simplification, allowing patterns to be
moved around neatly and tested in useful blocks. The fundamental idea of a Narrative
is that it takes the infinite gradations possible in the full framework and provides a
means to build these useful and meaningful blocks of possibilities, in the same way
that the landscape’s possibilities might be reduced to a small number of categories,
save only that the landscape’s state is spatial and the History/Narrative represents
time.
The kinds of variations we might wish to pursue with respect to Streamflow begin
with simple issues, like whether predominantly wet or dry periods have different
impacts at different times in the trajectory. They include, however, variations in
which not only the absolute values but also the temporal patterns of those values
matter: the stochasticity and predictability of the water supply may have been as
important as the absolute amount.
5.2.4 Headgates
Role: In the actual operation of the canal systems, it is believed (see Section 2.1.2)
that water was directed into the canal systems by large dams or weirs, and then a
short distance down the initial channel of the canal was placed a gate that could be
closed to prevent excess water from entering the canal system. The ability to move
water into the canal system may have been limited, and the apparatuses may have
been subject to damage or destruction by flood-level waters. Hence these structures
represented key points in the operation, and vulnerability, of the irrigation systems.
Implementation: The FlowAg toolkit defines a Headgate object that abstractly
encompasses these features, wrapping in one object the two functions of the weir and
the gate. A Headgate takes a current flow value from a River and translates it into a
flow value for the head of a canal. Headgates may be placed at the points the River
object allows (see above).
Following a simulation done by Howard (1993a), Headgates are defined to have a
specific efficiency in removing water from the river; the separate roles of weirs and
gates that may have existed in the real Hohokam systems are ignored. For the initial
implementation, setting a single value for this efficiency has been adequate, but other
variations are possible. One is the ability to make efficiency dependent on flow, so
that when the flow value in the River is high the Headgate is able to remove water
effectively, but not when it is low (for a discussion on how water channels change
dependent on streamflow values, see Graybill et al. 2006). The current implementation does not allow Headgates to be damaged, but proposed variations can make
them susceptible to damage from flow values on the River exceeding some settable
threshold.
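A hedged sketch of a Headgate along these lines is shown below; the efficiency value and damage threshold are arbitrary numbers chosen for illustration, not values asserted in the HWM database.

class Headgate:
    def __init__(self, efficiency=0.6, damage_threshold=None):
        self.efficiency = efficiency               # fraction of river flow removable
        self.damage_threshold = damage_threshold   # None means it cannot be damaged
        self.damaged = False

    def canal_head_flow(self, river_flow):
        if self.damage_threshold is not None and river_flow > self.damage_threshold:
            self.damaged = True                    # flood-level flow wrecks the gate
        if self.damaged:
            return 0.0
        return river_flow * self.efficiency

gate = Headgate(efficiency=0.6, damage_threshold=120.0)
print(gate.canal_head_flow(40.0))    # 24.0 m3/s delivered to the canal head
print(gate.canal_head_flow(150.0))   # 0.0: the flood has damaged the gate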
Data: The structures represented by Headgates are no longer present in the archaeological record; we know them mainly from ethnographic descriptions. Continued
archaeological work, in combination with studies of river morphology, will perhaps
allow us to refine our understanding of the characteristics of these structures.
Range of Variation: The ultimate goal of the framework is to permit Headgates to
vary along two axes: efficiency of removing water (as a function of River Streamflow)
and susceptibility to damage. If eventually it is appropriate to divide the structure
into weir and headgate structures separately, either by separating each of these two
axes into one of the new structures or by endowing both kinds of structure with
implications for both values, this can be done without great difficulty.
5.2.5 Canal Systems and Canals
The means of delivering water from the headgates to the fields were the canals. Canal
Systems and Canals offer an array of complicated issues; this discussion will separate
these into four parts: first, the structure of canal channels and canal systems; second,
issues related to modeling water flow; third, the issue of damage to the canals; finally,
loss due to seepage and evaporation.
Canal System Structure: The physical arrangement of canals.
Role: The physical structure of a canal system determines its ability to deliver
water to the fields it serves. Characteristics that impact a system’s capabilities include
the lengths of canals, the branching structure of the system and the nature of its
junctions, the slopes of segments of the system, the cross-sectional profiles of channels,
the number and kind, if any, of water control devices positioned through the system,
and the surface characteristics of the portions of the channels in contact with water
flowing through it.
Implementation: The building blocks of the implementation are defined in the
FlowAg toolkit. There Model Objects of three kinds are used to assemble simulations of entire irrigation systems: channel cross-sections; channel segments; and canal
systems.
Channel Cross-Sections exist in three varieties. Engineers typically use standard
shapes for constructed channel cross-sections (see Howard 1990 for an archaeological
discussion of these), and two of these (trapezoidal and parabolic) are pre-built. The
third is ‘irregular’, and allows the user to specify any cross-sectional shape desired,
provided only that it is convex.4

4. This is to prevent a channel from having two troughs through which water will flow; in fact, some Hohokam canals may violate this.
Cross-Sections of any of the three kinds are used to define the ends of Channel Segments. Channel Segments have an upstream and downstream anchor coordinate that
marks the bottom and middle of the channel; the slope of a Channel Segment is calculated from the coordinates of the points. Channel Segments also have a roughness
value. Channel Segments are considered to have shapes that are extensions of their
upstream and downstream cross-sections. When the two cross-sections are identical,
this is easy, but when the channel shape changes from the upstream to the downstream end the shape of the channel at any point is assumed to be an interpolation
of the two shapes.
Channel Segments can be chained together; when two segments are assembled in a
sequence, there is a small sleight of hand performed: the downstream cross-section of
the upstream segment is ignored; instead it is effectively replaced with the upstream cross-section of the downstream segment. Junctions are instances where a single upstream
segment has two or more downstream segments connected to it as outflow branches.
Channel Segments are all assumed to be straight, but the downstream segment need
not run in the same direction as the upstream segment, allowing the heading of the
canal to change. The junction between any two Canal Segments is assumed to have a gate
that can be in one of two positions, open or closed.
Channel Segments branch out from HeadGates and are encapsulated in a Canal
System; the Canal System object handles the calculations of actual water flow (see
below).
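The following sketch shows, with invented names and arbitrary dimensions, how cross-sections, anchor coordinates, and chained segments fit together; it is a schematic rendering of the description above rather than the FlowAg implementation.

import math
from dataclasses import dataclass

@dataclass
class TrapezoidalCrossSection:
    bottom_width: float      # meters
    side_slope: float        # horizontal run per unit of rise

class ChannelSegment:
    def __init__(self, upstream_xs, downstream_xs,
                 upstream_anchor, downstream_anchor, roughness=0.03):
        self.upstream_xs = upstream_xs
        self.downstream_xs = downstream_xs
        self.upstream_anchor = upstream_anchor        # (x, y, z) of the channel bottom
        self.downstream_anchor = downstream_anchor
        self.roughness = roughness                    # Manning's n
        self.branches = []                            # downstream segments (a junction)

    def slope(self):
        (x1, y1, z1), (x2, y2, z2) = self.upstream_anchor, self.downstream_anchor
        run = math.hypot(x2 - x1, y2 - y1)
        return (z1 - z2) / run                        # positive means downhill

xs = TrapezoidalCrossSection(bottom_width=1.5, side_slope=1.0)
main = ChannelSegment(xs, xs, (0, 0, 360.0), (2000, 0, 358.0))
lateral = ChannelSegment(xs, xs, (2000, 0, 358.0), (2500, 800, 357.2))
main.branches.append(lateral)           # the junction's gate would be modeled here
print(round(main.slope(), 4))           # 0.001, i.e. one meter of drop per kilometer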
This structure allows almost any canal system that can be envisioned to be simulated. One limitation is that no downstream segment is allowed to have more than
one upstream segment feeding into it; there is no way, therefore, to return water
to a canal after removing it. A second, and perhaps more important one, is that
water control structures except for gates are absent; there are no weirs, sluices, or
drop structures, though these are not thought to have been features of the Hohokam
system.
Like the landscape, Canal Systems are built using the Narrative/History structure.
Commands exist (defined in the WatMan toolkit) that create Cross-Sections and
Canal Segments and assemble them into Canal Systems. As with landscapes, this
allows not only a wide array of variations to be created, but it also allows canal
systems to change through time.
Data: The most fundamental data we have about the Hohokam canals is the
record of their locations. For parts of the canal systems that have been preserved
this is straightforward, but for the majority of canals we must rely on maps that were
made in the past. The most enticing maps are the early ones (such as Turney’s; see
Fish and Fish (2007) for a reproduction) that show canals that were still visible in
great numbers across the landscape. These maps suffer from some problems, however,
chief among them the absence of detailed information about cross-section and slope.
The many channels that appear to cross one another on these maps also indicate that
these canals must be not only spatially but temporally mapped as well. However, the
biggest issue is the gap between a map, which is ultimately only useful in producing
a visual representation of a data set, and a structured data set that can be used
to simulate actual water flow. An attempt to use a digitized version of the Turney
map revealed these issues in stark relief. The outcome of the digitizing process
was a set of line segments that collectively represented all the segments of the canals
depicted on the map. However, three things were missing. The first was the cross-section and slope information, and the second was chronology. The third was a deeper
issue: the structure of the canal system, with trunk lines feeding into branch lines,
was lost. The digitizing created nothing more than a list of groups of line segments.
Some of the problems this created could have been resolved by greater care on the
part of the person doing the digitizing; for example, each group represented a stretch
of channel, but a group did not correspond to any canal unit: sometimes groups
stopped at branch points, sometimes not, and sometimes the digitizing had been
done progressing upstream, and sometimes downstream. But the main problem was
data structure, as there was no hierarchical relationship among the canal sections to
be used to trace flow. Because there were more than 30,000 points, there was no way
to repair this. The result was convincing visually, but almost useless numerically.
The data structure employed in the HWM approach avoids these problems.
The shortfalls of earlier data collection efforts are being rectified in newer ones,
where cross-sections and temporal relationships are carefully documented. A second
front on which our understanding of Hohokam canal structure is improving is the
general pattern of the canal systems; this speaks to whether the canal was highly
dendritic, with many secondary, tertiary, and further subdivided branches, or, as
seems more likely, was characterized mainly by main lines and a few laterals.
Howard (2006) has done the most extensive study, and revised some earlier work,
especially in Canal System 1, to demonstrate that the pattern of main lines with
distribution canals and a few laterals was consistent on both sides of the Salt River.
Range of Variations: The flexible arrangement of the HWM system for simulating
canal systems allows nearly any canal structure to be investigated. In theory, a
collection of real canal information and measurements could be used to create a
simulation based on the reality of an operating Hohokam canal system. In practice,
we will likely never have a data set so complete, but we may benefit from the HWM’s
ability to create both spatially and temporally detailed systems. The ‘Truth Value’
feature of the ABCM framework can also be used to differentiate between attested
and surmised or reconstructed values. Note that the issue of canal structure echoes
the same kind of issue raised by landscapes and streamflow values: because an infinite
number of gradations of canal network structures is possible, it is incumbent on the
modelers to create nominal categories of different structures.
Water Flow: The simulation of the movement of water through the canals.
Role: The delivery of water in an irrigation system is determined by the physical characteristics of the canals, but governed by the principles of water flow. The
problem presents itself neatly to archaeologists, who, observing a particular canal
structure, would like to calculate the volume of water that would be delivered at the
ends of the system per unit of time. The Hohokam, of course, faced the problem forward instead of backwards, needing to know how to build a canal system that would
achieve a desired delivery capacity. In either case, however, the problem quickly becomes the kind of problem that mathematicians like to call ‘nontrivial’; especially in
the case of a complicated structure, in which channel cross-sections and slopes vary
widely and the structure of canal branches is highly dendritic, it can rapidly become
nearly insoluble.
Implementation: Archaeologists have used several techniques to estimate water
flow rate and capacity through the Hohokam canal systems. The standard approach
now uses open-channel flow equations (Howard 1993b). One of these, called the
continuity equation, relates the flow to the cross-sectional area of the water in the channel and to its velocity:

Q = V × A    (5.1)

where Q is the rate of flow in m³/s, V is the flow velocity, and A is the flow area. To
calculate V, and from this Q, an equation called Manning’s equation is used:
V = (R^(2/3) S^(1/2)) / n    (5.2)
where R is the hydraulic radius (determined by the shape of the cross-section of
the water flow), S is the hydraulic slope of the channel (the downward slope of the
surface of the water), and n is a coefficient of channel ‘roughness’, a measure of the
resistance due to friction along the channel and dependent on the characteristics of
the channel’s surface. By making an assumption about the depth of water in a given
channel segment and about hydraulic slope (see Howard (1993b, 1990) for a detailed
description of the various ways this can be calculated, but there are good reasons for
assuming that it is equal to channel bed slope, if that is known), V can be calculated, and from V and A, Q.
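As a worked illustration of equations 5.1 and 5.2, the short computation below finds velocity and discharge for a hypothetical trapezoidal channel; the dimensions, slope, and roughness are invented for the example and are not measurements from any Hohokam canal.

import math

def trapezoid_flow(bottom_width, side_slope, depth, bed_slope, n):
    """Return (velocity, discharge) from Manning's equation and continuity."""
    area = depth * (bottom_width + side_slope * depth)                  # flow area A
    wetted_perimeter = bottom_width + 2 * depth * math.sqrt(1 + side_slope ** 2)
    hydraulic_radius = area / wetted_perimeter                          # R
    velocity = (hydraulic_radius ** (2 / 3)) * math.sqrt(bed_slope) / n # eq. 5.2
    return velocity, velocity * area                                    # eq. 5.1

# 1.5 m bottom, 1:1 sides, 0.5 m of water, 1 m of drop per km, earthen channel:
v, q = trapezoid_flow(1.5, 1.0, 0.5, 0.001, 0.03)
print(round(v, 2), "m/s;", round(q, 2), "m3/s")    # roughly 0.52 m/s and 0.52 m3/s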
The HWM Simulation can easily recapitulate the earlier approaches to calculating
canal flow using summary values or Manning’s equation (see Chapter 6 for examples).
However, water flow offers still more complications. Manning’s equation applies to
non-branching channels in equilibrium; real-world systems may be much more complicated. As a simple example, in a given Y-shaped branch, with one main channel
diverging into two, if one knows the amount of flow going into the junction, it is
difficult to calculate the amount of flow coming out on either branch: which branch
will have more flow? The answer depends on the channels’ cross-sections, slopes, and
roughness values. Manning’s equation can be used to calculate flow if the depth of the
water is known or assumed, but in this case we know only that the sum of the water
flowing in both outflow branches will be equal to the water flowing in;5 Manning’s equation cannot tell us how the water will be apportioned.

5. Ignoring water loss due to seepage and evaporation; see below.
Moreover, the flow in a relatively flat open-channel system without hydraulic
control structures (and we do not know whether the Hohokam employed these) is a
linked system. So long as flow remains sub-critical (super-critical flow is the kind
of flow that is seen going over the top of a waterfall, while sub-critical is the more
common state of flow in a gently sloping channel), the amount of flow is determined
by not only what is occurring upstream but what is occurring downstream as well.
When our simple branch example is reconsidered in this light, and we conceive of each
branch as, potentially, having long reaches with cross-sections and slopes that vary
and additional branches further down the line, we see that the problem of calculating
how much water is found at each part of the system is highly complex.
The FlowAg toolkit approaches this problem using two alternative schemes. The
first is called the ’LPG’ algorithm; this is a very coarse substitute that has no genuine
grounding in flow dynamics, but produces a useful proxy that allows an array of
interesting dynamics to be provisionally explored. The other is a scheme derived
from algorithms used originally for the prediction and possible control of flood pulses
along rivers. The Army Corps of Engineers has written a software tool called HEC-RAS6 that tracks unsteady (meaning varying through time) flow through branches
and reaches of rivers. The algorithm begins with known conditions of depth and flow
rate at points along the system, including branch points, and assumptions about what
is occurring at the start and ends of the reaches being simulated; it then applies a
set of theoretically derived equations (derived from the assumption that energy and
water will be conserved throughout the system) and calculates coefficients that apply
to each point. These are then assembled into a system of simultaneous equations that
are solved to generate delta values for each point’s stage and flow, which can be used
to calculate the depth and rate of flow for the next time step. The FlowAg toolkit
re-implements portions of this algorithm.

6. At the time of this writing this software and documentation were available via the Army Corps of Engineers Hydrologic Engineering Center, on the Internet at http://www.hec.usace.army.mil/software/hec-ras/; see also Brunner (2002).
Data: Most of the archaeological discussion of water flow is based on indirect
inference from the structural properties of the canals; however, Howard (1990) additionally demonstrated their utility by testing them against sediment deposits in the
canals.
Range of Variations: These different implementations carry different implications
for our study and understanding of the Hohokam system. Summary values are adequate for coarse-resolution efforts; they effectively argue that fine-scale chronology
doesn’t matter. However, if the canal system’s operation on a smaller scale does
matter, due, for example, to the need to manage the allocation of water among water
users on a tight time schedule, the more precise chronological resolution offered by the
implicit finite difference approaches will matter. The game described by the simulation will be one in which gates are opened or closed along different branches, causing
shifts that could change the distribution of water to all other parts of the system; the
result is that managing the system would require a balance and a cooperative effort7
that the summary approaches fail to capture.

7. Jerry Howard has speculated that some simple management techniques may have been sufficient to manage the Hohokam canal systems, but even these require some communication among people stationed at various points along the system (Howard 2006).
Siltation and Erosion: The effects of water moving through the irrigation channels
on those channels.
Role: Flowing water brings two problems. First, if the water carries sediment
and the flow velocity is low, siltation can occur. Silt clogs channels by increasing the
roughness of the channel, and hence increasing drag on the water, and by changing,
and usually lessening, the slope of a given channel. Channels with significant siltation
can be far less efficient. Second, if water velocity is too high, erosion of the channel
bed can occur. Prehistoric channels lacked the advantage of modern concrete linings,
and high water velocity could wear away at a given channel’s sides, changing, and
usually diminishing, its ability to convey water at the rate and depth intended.
Implementation: Neither siltation nor erosion is yet implemented. When they
are included they will be alternative code models to the Canal System object, as
an integral part of whatever flow algorithm is employed. Different flow algorithms will require different implementations; summary approaches at the level of a canal system may make it troublesome to calculate the effects of these smaller-scale issues.
Within the implicit finite-difference approach, both of these can be implemented by
modifying the channel properties between time steps of the flow algorithm.
Data: The Hohokam were undoubtedly aware of the problems of siltation and
erosion, and undoubtedly built their canal system to avoid the low and high velocities
of flow that caused each problem (Howard 1993b). There is also evidence that
some channels were endowed with additional reinforcement to ameliorate erosion (i.e.
Doolittle 2000, Ackerly et al. 1989), and we may suspect that cleaning sediment from
the system was an ongoing process. The degree to which their canal systems were
impacted by each of these problems is unknown, nor do we know how to estimate
with any confidence the contribution of these problems to the tasks of maintenance
and repair of canals, as well as to their eventual abandonment.
Range of Variations: Siltation and erosion, being among the main challenges in
maintaining a canal system, would have played an important role in the Hohokam system, and they bear on the larger questions we have about the Hohokam trajectory; the degree of the contribution, however, is not clearly known. Still, relatively simple implementations can be envisioned in which water velocity directly modifies the
appropriate channel properties during the flow algorithm. More elaborate approaches
(for example, erosion of channel bed upstream increases sediment load for siltation
downstream) are not currently thought to be necessary, but could be considered.
Seepage and Evaporation: Loss of water during transport.
Role: An additional challenge to any open-channel irrigation system is the loss of
water in transit; this can occur through two means: seepage through the bottoms of
the water channels, or evaporation. While both result in a loss of water that can be
delivered to fields, they differ in important ways. The primary effect of seepage is
a loss of water that can be delivered to the irrigation targets; evaporation, however,
changes not only the volume but the nature of the water delivered. This is because
the water carries a chemical load. When evaporation occurs, some volume of water
is lost, but the chemicals it contained remain. This increases the concentration of
those chemicals in the water. Because evaporation occurs as the water moves down
the canal system, water at the end of the canal may have much higher concentrations
of chemicals than water at the head.
The impact of evaporation on chemical concentration may have contributed to
the salinization of Hohokam fields over time. Some authors have speculated that a
significant long-term problem affecting the Hohokam’s ability to make use of their
irrigation system was the buildup of deleterious chemicals, primarily salts, in the
fields; Ackerly et al. (1987) have gone so far as to chart out which plants might have
been more resistant to saline conditions, and to suggest a shift to these plants over time. There are others who contest this, and argue that the long-term effects of salination would not have played a prominent role in the Hohokam trajectory (e.g.
Haury 1976). Howard notes that flushing salts off the surface of fields would have
required additional water above that needed simply for use by the plants (1993a,
2006). The role played by edaphic processes remains an open question that the
HWM Simulation can be used to address. More about this issue will be discussed
when Plants, Fields, and edaphic processes are introduced below.
Implementation: Seepage is not yet implemented but will likely be represented
by a Code Model optionally used during the flow algorithm; as with siltation and
erosion, the modification will depend on the flow algorithm used.
Evaporation is handled in the same way, but separately from seepage, and with a
few additional issues addressed. ‘Water’ is a software object defined in the FlowAg
toolkit; each instance of water has a volume, but can also carry some collection of
‘Chemical’ objects, each of which has a value that gives its concentration (in ppm).
One of the actions that ‘Water’ can perform is evaporation, which reduces the volume
of water and adjusts the concentrations of the Chemicals accordingly. Each River has
a water source which contains the record of the chemical concentrations in the water
flowing into the canals. The rate at which evaporation takes place is specified by a
parameter as an absolute offtake in mm per day.
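The behavior described here can be sketched as follows; the class names are invented, and for simplicity the example removes a volume directly rather than applying the mm-per-day offtake parameter to a water surface area.

class Water:
    def __init__(self, volume_m3, chemicals_ppm=None):
        self.volume = volume_m3
        self.chemicals = dict(chemicals_ppm or {})    # e.g. {"salt": 500.0}

    def evaporate(self, lost_m3):
        """Remove volume only; the chemical load stays, so concentrations rise."""
        lost_m3 = min(lost_m3, self.volume)
        remaining = self.volume - lost_m3
        if remaining > 0:
            factor = self.volume / remaining
            self.chemicals = {k: ppm * factor for k, ppm in self.chemicals.items()}
        self.volume = remaining

parcel = Water(volume_m3=100.0, chemicals_ppm={"salt": 500.0})
parcel.evaporate(20.0)
print(parcel.volume, parcel.chemicals)    # 80.0 {'salt': 625.0}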
Data: Howard (1993b) uses the figure of 10 percent to estimate the water loss due
to seepage and evaporation in Hohokam canals, but hard data are not available.
Range of Variation: The rates of loss due to both seepage and evaporation will
be adjustable in ways that allow the loss to vary from zero to severe. Perhaps equally
important, the chemical concentration in the water of the Rivers can be varied, so that
variations in the water delivered to the fields, with increased concentrations of these
chemicals, can be explored; the impact of chemicals on soils and plants is discussed
below.
5.2.6 Rain
Role: Irrigation is essentially a way of taking advantage of nature’s generosity: precipitation over a wide area is concentrated in the rivers and delivered in a package to a
central spot, which irrigation then diverts to achieve a purpose that would be impossible if only the local precipitation were relied upon. And to someone familiar with
the Phoenix Valley today, relying on local precipitation would seem an extremely
risky proposition, given that there is so little and that so much of it is delivered
in short, quick, violent storms in the middle of extremely hot times during the year.
However, the role of rainfall in the overall subsistence strategy of the Hohokam should
not be ignored. We have seen (Chapter 2) that the Hohokam who lived at distances
from perennial rivers made use of an array of strategies to cultivate plants without
the benefit of irrigation. These relied on rainfall, either directly deposited onto the
plants or collected from surface wash or at the mouths of gullies. We can ask, then,
what role this cultivation played in the larger picture of Hohokam subsistence, and, in
particular, whether it might have acted as a buffer against times when the irrigated
crops failed to produce the expected amounts.
Implementation: Rainfall in the HWM simulation is implemented as part of the
WatMan package; it is actually a different way to make use of the same objects (Water
and Fields) that are defined in the FlowAg toolkit. Simple Commands deliver Water
to Fields directly, without the intervening step of irrigation. The Water delivered can
have a specified chemical concentration, but needn’t (no acid rain, we presume). The
execution of rainfall commands determines the schedule at which rain is delivered;
this can be controlled using the same kind of construction that is employed to control
Streamflow: a set of Narratives, Seasonal Narratives, and Histories.
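A minimal sketch of this idea follows. The names (deliver_rain, add_water, and the schedule format) are invented for the illustration and do not reflect the actual WatMan command vocabulary; the point is simply that rain is water delivered directly to fields on scheduled days, optionally filtered by geography.

    # Illustrative sketch only: rain as a scheduled command that puts water on fields
    # directly, bypassing the canal system. Names do not reflect the real WatMan API.
    class Field:
        def __init__(self, name, area_m2):
            self.name = name
            self.area_m2 = area_m2
            self.water_mm = 0.0          # water delivered to the field, tracked as depth

        def add_water(self, depth_mm):
            self.water_mm += depth_mm

    def deliver_rain(fields, depth_mm, min_elevation=None, elevations=None):
        """Deliver a rainfall event to fields, optionally filtered by elevation."""
        for field in fields:
            if min_elevation is not None and elevations[field.name] < min_elevation:
                continue
            field.add_water(depth_mm)

    # A 'narrative' here is just a schedule of (day of year, depth in mm) rain events.
    fields = [Field("basin", 10000), Field("north_plateau", 5000)]
    elevations = {"basin": 4.0, "north_plateau": 400.0}
    rain_schedule = [(15, 12.0), (200, 8.0)]
    for day, depth in rain_schedule:
        deliver_rain(fields, depth, min_elevation=100.0, elevations=elevations)
    print({f.name: f.water_mm for f in fields})   # only the plateau field receives rain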
Data: Actual rainfall data are not yet utilized, even though modern rainfall data
are collected in abundance and can theoretically be applied.
Range of Variation: The use of the Narrative and Seasonal Narrative structure
offers the same range of opportunities- variations may be crafted in the timing and size
of rainfall events, their predictability, and so forth- as are available for Streamflow values. Additionally, commands can easily be created that deliver water to fields within
certain geographic criteria, including within certain areas or at certain elevations.
5.2.7
Fields and Soil
Role: Irrigation systems and rainfall both deliver water to fields. The role fields
play in an agricultural system is straightforward, but can be differentiated into two
useful components: location and composition. A convenient way to view this is to
ask what makes two fields different; the answer is that they are in different places,
and they have different soil qualities. The FlowAg toolkit defines two objects that
work together to perform the actions of real-world fields: the Field object and the
Soil object.
Implementation: Field objects have only a few attributes. They are defined by four
corners (but needn’t be rectangular) represented by datum points on the landscape.
This implies not only a location (and an elevation) but an area. Additionally they
have one other attribute that may vary from Field to Field: the Evapotranspiration
rate, or EtO. This may be calculated in one of several optional ways, all dependent on
some additional variables about the environment (including temperature and daylight
hours), or it may simply be set directly by the user.
Each Field object contains one Soil object. Soil objects are abstract representations of edaphic processes. They contain Water and Chemical objects, which may be
added to or removed as needed. All measurements in Soil objects are linear, corresponding to depth; area and volume are inferred from the Field object that contains
the Soil object. Soil objects enact edaphic processes like seepage and evaporation; water can be lost over time due to both of these, and the content of Chemicals adjusted
accordingly, as was seen in evaporation in canals. Only simple algorithms for seepage
and evaporation are currently implemented; these are expected to be improved with
additional, optional algorithms if needed.
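The depth-based accounting described here can be sketched as follows; the names and the per-day loss rates are invented for the illustration, and only the trivially simple loss algorithms mentioned in the text are represented.

    # Illustrative sketch of a Soil object that tracks water as a depth (mm); the
    # containing Field supplies the area needed to convert depths to volumes.
    class Soil:
        def __init__(self, water_mm=0.0, seepage_mm_per_day=1.0, evap_mm_per_day=2.0):
            self.water_mm = water_mm
            self.seepage_mm_per_day = seepage_mm_per_day
            self.evap_mm_per_day = evap_mm_per_day

        def daily_step(self):
            # Simple constant daily losses; richer algorithms could make these depend
            # on temperature, soil texture, or standing crops.
            self.water_mm = max(0.0, self.water_mm
                                     - self.seepage_mm_per_day
                                     - self.evap_mm_per_day)

    def water_volume_m3(soil, field_area_m2):
        """Volume is inferred from the Field that contains the Soil."""
        return (soil.water_mm / 1000.0) * field_area_m2

    soil = Soil(water_mm=30.0)
    for _ in range(5):
        soil.daily_step()
    print(soil.water_mm, water_volume_m3(soil, field_area_m2=10000))   # 15.0 mm, 150 m3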
Data: Our knowledge of Hohokam fields continues to improve, though we have relatively
little data on actual fields; a Hohokam field may have been any area of land where
plants were cultivated. Schaafsma (2007) has shown that the Hohokam could
build fields in nonarable areas through the diversion of silt-laden river water. Others
(Fish et al. 1992a) have demonstrated that ‘rock fields’, in which arrangements of
rocks formed spaces in which evaporation was slowed and water could accumulate for
use by cultivated plants, could have been used for plants like agave. Ethnographic
sources (see Wilson 2003) describe Pima fields as divided by berms and fences of
sticks, which probably functioned as silt traps.
Range of Variation: Perhaps oddly, the claim here is that the very simple Field
object can, by virtue of emphasizing only the details crucial to plant growth (or, if
you will, by focusing on the effects of physical characteristics like berms or rock piles
instead of trying to model these directly), accommodate the range of variation
needed to simulate all or nearly all of the Hohokam fields we hope to represent. That
is, as our knowledge of Hohokam field techniques improves, the relevant aspects of those
techniques can be captured by the elements included in the Field object. Hence fields
with high EtO or low, and at high elevations or low, can be implemented, along with
any of an array of soil characteristics.
5.2.8
Plants and Crops
Role: The key role of plants in an agricultural system is obvious: they are its raison
d’etre, of course. What is subtler is how the characteristics of plants shape the way
they are used. The most basic issue is the plants’ demand for water over the course of
its development from sowing to harvest. Secondarily, a plant may require nutrients in
its soil, or may be adversely affected by chemical content in the soil; it may also impact
the chemical content, both by removing and adding chemicals. The characteristics of
crop plants act in concert with the capacity of the irrigation system to determine an
acceptable agricultural calendar to maximize production and minimize risk.
Implementation: Although the words can be used nearly interchangeably in common speech, the HWM simulation differentiates between Plants and Crops. In the
HWM framework, Plants are abstract, while Crops are instantiated. A ‘Plant’ is an
object that has abstract, and unchangeable, characteristics; for example, one Plant
may have a growing season of 90 days. A Crop, on the other hand, is an instance of
that Plant growing in a field; one can have a field in which several Crops all exist,
so that one was planted May 1st, one June 1st, and one July 1st, each occupying
1/3 of the field and each due for harvest at a different time. More simply, ‘Plants’
are unchangeable representations of the abstract characteristics of real-world plants,
while Crops are representations of production in fields.
This is a key point largely because the Plant object makes use of the ABCM
framework’s Configuration structure. A Configuration has two halves that mirror one
another: one is a collection of software objects that represent data while the simulation
is running, the other is a set of database structures that store data permanently. In
the HWM case, the FlowAg toolkit defines Plant objects abstractly, the WatMan kit
defines a Configuration that can make use of these, and the HWM database, with the
benefit of these structures behind it, is able to store raw data about Plants.
A specific implementation underlies this. It is possible to model plant growth in
any number of ways; ecologists and farmers are both interested in this, and many
efforts exist in the literature. The HWM framework is primarily interested in modeling how plants respond to the availability of water; for the initial efforts a very
simple algorithm for plant growth was created, and from it were crafted certain data
requirements. The template for this algorithm was the Food and Agriculture Organization’s field manual for irrigation systems (online at http://www.fao.org/docrep/S2022E/S2022E00.htm, as of May 17th, 2009). The core of the algorithm centers on
a few simple assumptions. First, it is assumed that the amount of water plants need
is dependent on the EtO of their environment, and, second, that the water required
can be calculated by applying a single, scalar multiple to the need of a standard grass
crop. Third, this need will change as the plant passes through distinct life stages.
The FAO manual also states that different plants have different degrees of susceptibility to drought. In the HWM implementation, this susceptibility is also considered to have the
potential of changing from one life stage to the next; the result is a data structure
that gives a plant’s characteristics in table form, stage by stage: the length of each life stage, its water requirement relative to the reference crop, and its susceptibility to drought.
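The table itself does not reproduce well here, but a sketch of the kind of per-stage record it describes might look like the following. The field names and numbers are placeholders invented for the illustration, not values drawn from the HWM plant database.

    # Illustrative sketch of the per-stage plant record described in the text.
    # Field names and numbers are placeholders, not values from the HWM database.
    maize = {
        "name": "Maize (illustrative)",
        "stages": [
            # days: length of the stage
            # kc: water need as a scalar multiple of the reference grass crop (EtO)
            # drought_susceptibility: categorical, as in the FAO manual
            {"stage": "initial",     "days": 20, "kc": 0.4, "drought_susceptibility": "low"},
            {"stage": "development", "days": 35, "kc": 0.8, "drought_susceptibility": "medium"},
            {"stage": "mid-season",  "days": 40, "kc": 1.1, "drought_susceptibility": "high"},
            {"stage": "late-season", "days": 30, "kc": 0.6, "drought_susceptibility": "medium"},
        ],
    }

    def daily_water_need_mm(plant, day_of_growth, eto_mm_per_day):
        """Water need on a given day: reference EtO scaled by the current stage's kc."""
        elapsed = 0
        for stage in plant["stages"]:
            elapsed += stage["days"]
            if day_of_growth < elapsed:
                return eto_mm_per_day * stage["kc"]
        return 0.0   # past harvest

    print(daily_water_need_mm(maize, day_of_growth=60, eto_mm_per_day=6.0))   # mid-season: 6.6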
Using this data structure, it is possible to begin to assemble a database of information about plants that might have been used by the Hohokam. This is not
without difficulty: there is no guarantee that the literature will contain information
on a given plant in precisely the structure we have employed here. One strategy to
deal with this is to make it possible to include multiple varieties of any given plant, so
different values can be stored as alternatives, to be selected and used as needed. The
database structure allows these to be grouped together, so that with care it is possible to construct simulations at the higher level (i.e. using ‘Maize’) and load specific
configurations interchangeably (i.e. for run 1234, use ‘Maize- Short Growing Season’,
but for run 1235 use ‘Maize- High Drought Resist’; in both cases the command ‘Plant
Maize’ will work as expected).
The next difficulty, however, is how these data are translated into simulation
dynamics. The FAO manual’s interest is in calculating the amount of water required
on fields growing certain kinds of plants. It does not, however, specify what happens
when those water needs are not met. Will one day with no water kill a crop? How will
a narrow but extended shortfall affect the final yield? Some of this is implied, but not
quantified, in the manual’s discussion of drought resistance, but drought resistance
for various plants is given categorically (high, medium, low) rather than numerically,
and there is no means given to translate the general data about plants into a plant’s
estimated growth performance on each day throughout its life cycle.
It is worth noting that this problem is not unique to this dataset; it inheres
in practically every translation of real-world data into simulation, and the ABCM
framework is intended to allow structures like those found in the FlowAg and WatMan
toolkits to be created to balance the needs of the simulated environment with the
challenges of real-world data.
The algorithm for growth is encapsulated in the Crop model object. Crops consist
of one kind of Plant, assumed to occupy some fraction of a field. It is always assumed
that the crop is planted at its optimum density. Crops grow, day by day, using the
values from the Plant’s growth stage on that day. Some variant algorithms allow
the impact of water shortages to be assessed; a typical one reduces the final yield
by some fraction each day the water need is not met, and kills the Crop entirely if
the water need is not met for some number of days consecutively. Note that current
implementations do not easily support perennials.
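A sketch of one such variant algorithm follows. The function and parameter names are invented, and the penalty and kill thresholds are arbitrary parameters of the kind the text describes, not values from any HWM configuration.

    # Illustrative sketch of the yield-penalty variant described above: each day the
    # water need is unmet, the final yield is reduced by some fraction, and the crop
    # dies outright after too many consecutive dry days. Parameter values are arbitrary.
    def grow_crop(daily_need_mm, daily_supply_mm,
                  yield_penalty_per_dry_day=0.02, max_consecutive_dry_days=7):
        yield_fraction = 1.0
        consecutive_dry = 0
        for need, supply in zip(daily_need_mm, daily_supply_mm):
            if supply < need:
                yield_fraction -= yield_penalty_per_dry_day
                consecutive_dry += 1
                if consecutive_dry >= max_consecutive_dry_days:
                    return 0.0              # crop killed by an extended drought
            else:
                consecutive_dry = 0
        return max(0.0, yield_fraction)     # fraction of the ideal yield achieved

    # 90-day crop needing 6 mm/day; irrigation falls short on ten scattered days.
    need = [6.0] * 90
    supply = [6.0] * 90
    for day in (10, 11, 12, 30, 31, 50, 51, 52, 70, 71):
        supply[day] = 2.0
    print(grow_crop(need, supply))   # 0.8: ten dry days, none long enough to kill the crop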
Crops interact with other Crop objects in the same field by virtue of their impacts
on the Soil object. One Crop object removing water from the Soil makes that water
unavailable to other Crop objects. Crops may also adjust the Chemical content of Soil
objects; this allows dynamics of nutrient depletion, nitrogen fixing, or the impacts of
deleterious Chemicals to be tracked.
Productivity of a given Crop will also be an attribute of the Plant variety, determined in bushels or pounds per acre, so that crops identical in other growing
characteristics may vary in terms of their value; however, this is not yet implemented.
Data: Information about plants can be continually accrued within this structure.
One challenge is generating reasonable values for prehistoric varieties of plants, which
may have been markedly different from their modern versions. Empirical and experimental studies contribute raw data about the characteristics of many agricultural
plants. The files attached to the electronic version of this document include a snapshot of the plant database; see Appendix A for details.
Range of Variation: Ignoring for the moment the impacts of chemicals on plant
growth, the range of variation available for plants is relatively easy to describe: plants
may grow quickly or slowly, may have many stages or few, and may in any of those
stages be drought resistant or not, or have high water demands, or not. Obviously,
however, this leaves wide avenues for variants, and, much as in other, similar cases
described above, part of the exploratory process will be staking out, from these wide
possibilities, the combinations that are of interest. One strategy will be to let real-world
plant data drive these; another might be to create generic possibilities representing
different points within the possible spaces of variation. What may ultimately be
of interest is not any individual plant, but how a given repertoire of plants can be
assembled into an agricultural calendar; different combinations will lead to different
timings of water distribution, as well as different balances of opportunity and risk.
5.2.9
Agents and Actors within the larger HWM system
I here put aside the format used for the preceding subsections (Role, Implementation,
Data, and Variation) because agents are quite different. In later chapters I explore the
role of agents in the simulation, and outline how the ABCM framework is especially
well constructed for agent-based modeling. Moreover, the implementation of agents
in an ABCM model is extremely flexible, and so the possible variations cannot be
easily projected here. However, some key points can be made.
It might seem enough to say that agents represent humans on the landscape, and
therefore represent the Hohokam; in fact, the situation is slightly more complicated,
because even if we restrict ourselves to the idea that agents represent humans (in other
contexts software agents can represent practically anything else, including plants or
even canals and water), they can represent individuals, households, communities, or
even entirely abstract entities that subsume the powers that human groups would
have held. What matters more in their definition is the role they play relative to the
other components of the model. Agents represent the decisionmaking portions of the
simulation. They operate on collections of the other elements present, and as such
they are the spokes that tie the other components together. One way to think of the
effect of agents is to compare them with the role of water in the HWM Simulation:
water operates on the rules of water flow and links rivers to canals, fields, and plants.
Agents can operate on any set of rules we would like, and can link any elements we
choose together.
Agents are Model Objects in the same sense that canal segments and plants are;
they may be defined at any level of the software hierarchy, according to their level
of generality. Structurally, however, they differ in that agents can make use of any
defined command (at or below the level at which they are defined); that is, they make use of the same commands that are used
for Narratives and Histories. While this introduces a potential for mischief on the
part of modelers (because it makes all commands structurally available, even though
some commands- like ‘Rain on all fields’- are semantically inappropriate for agents
representing humans to use), it provides the flexibility to encode the rules agents
follow and the actions they take in the same framework as all other model elements.
Whereas GIS data might exist mainly as pixels or vectors, and an agent would need
some software layer to translate between its rules and the GIS data (see the work of
Hessam Sarjoughian), in the ABCM the agent uses the same ‘nouns’ as the rest of the
modeling framework, made available via the collection of commands.
An important implication of this is that previously static ideas can become dynamic. The Hohokam agricultural calendar offers an example. Using the Narrative/History approach, the calendar can be specified as events that can be scheduled
to take place on different days. By creating an agent representing the farmer who is
making decisions about cropping, the calendar can become a set of alternatives: in a
given year, the farmer could assess options, consider planting some kind of crops at
a given time if conditions warrant, cut losses if plants die due to drought, etc. The
‘calendar’ is no longer a schedule but an algorithm.
Agents in the Hohokam system tie elements together in several important ways.
They represent available labor and population, which can serve both to limit opportunities for ways the canal system might be managed, and to generate demand.
Abstract agents may be endowed with a kind of omniscience and omnipresence: a
single agent might represent some capacity for labor and some demand, but be able
to manage an entire canal system, determining the crop schedule for the entire valley,
and so forth. Alternatively, there may be multiple agents limited in both the scope
of their knowledge and their abilities: communities along the canals, or households
in the valleys with the opportunity to join into communities. They may be spatially
located, and occupy positions in the larger context of the simulation: near the head
of the canal systems, near the ends, in the non-irrigated areas, etc. By virtue of
their different characteristics and their different positions they may have competing
interests, and these may interact to generate the dynamics of the system we observe.
As with the other elements in the simulation, agents reflect the operations of refining assertions about the Hohokam context and moving between abstractions and
reconstructions. As our data about Hohokam life, social structure, community organization, and household dynamics increase, we may wish to explore how these are
integrated into the larger systems of Hohokam irrigated agriculture. But we may also
choose to make our agents more abstract and follow simpler dynamics, and we may
find that these adequately address broader, more comparative questions applicable to
a wider range of contexts. The range of variation for agents is nearly limitless.
5.3
Conclusion: Model Variations, Questions, and the Exploratory Approach
Thus far I have focused on the range of variation possible for each included element
in the HWM Simulation. Because the framework allows the elements to be varied
in almost every combination, the array of possibilities for the entire simulation is
effectively the product of all of these. The issues of scale, resolution, and composition
are all addressed by the creation of different scenarios from the range of these elements
(including omission of some, when appropriate). The scale of the simulation may be
varied through the creation of more or fewer elements, or the creation of larger or
smaller versions of those elements. We may, for example, consider a simple system
with one headgate, a short canal, and a single field; or we may create more than one
river, with more than one headgate location, and several branching and extensive
canal systems projecting away from them, in conjunction with an array of rain-fed
fields on the peripheries. Composition is addressed by the inclusion of either elements (one kind of plant vs. many, for example) or dynamics (seepage and evaporation may
be severe but may also be omitted entirely). Resolution is limited only by the level of
detail we would like to attend to: the large canal system may feed a wide number of
small fields each with varied crop schedules, and all determined by streamflow that
varies daily, or one field may be a proxy for the whole system, with streamflow levels
the same every day and varying only from year to year.
There was, however, a further claim made in this chapter: that these variations permit
the pursuit of the kinds of questions that define an exploratory approach. It is here
that we begin to see the difficulty of lacking a ‘road map’ to such an approach, for
the questions we may pursue follow paths that intersect and crosscut in ways that
are not knowable at the start of the exploration.
Taken element by element, it is relatively easy, though not completely straightforward, to arrive at questions that can be put to a test in the model. If we assume
that a version of the model has been constructed, we can begin to vary that element
and perform what might be termed “ceteris paribus” arguments- “all other things
being equal”- and in so doing explore the impacts of that element on the others in
the system. If our interest is in, say, productivity of Hohokam fields, we can ratchet
up the level of seepage from zero to very prohibitive levels, and chart the effects this
has on production. Another similar approach is a sensitivity analysis; this has been
a common approach of the ‘big real’ models from Chapter 1.
But there are two problems with this approach. One is pragmatic, but important:
data informing particular questions may not immediately be available. The ABCM
approach envisions data collection as an ongoing process, and assumes that data collection will provide new input for the simulation, allowing the continual improvement
of our approach to many of these questions. The emphasis on the refinement of data
in the form of assertions downward through the software hierarchy, and on the need
to take wide possibilities and resolve them to smaller subsets, reflects the emphasis
on data within the simulation; this also leads to a sense in which some questions are
questions that can be asked now, while some cannot.
The more important issue is the logical, and not merely practical, relationships
among the questions themselves: they are not amenable to ceteris paribus approaches.
All other things are not equal. Changing the structure of the canal system may change
the way that seepage will impact it; assuming that seepage can be reduced by lining
canal channels opens a new issue of the cost of canal maintenance and the velocity of
water moving through the channels; and so on.
It is this interconnectedness in the real world that pushes us to ask for different
ways of understanding the phenomena we study; some of these different ways, like
robustness and resilience, describe the general behavior of systems, and the modeling
approach is intended to be useful in applying these strategies to a given context. The
questions are not merely whether we can see the impact of one change among many
different variables; the question is whether we can understand why the system as a
whole changes under different circumstances.
I have earlier argued that when this kind of thinking is applied to a trajectory
of history, such as the Hohokam trajectory, the modeling approach is requisite. We
would like to make a claim that the Hohokam were an expression of some system
that we understand. The model allows the creation of the ahistorical version of that
system, removing the accidental asymmetries of place and time and concentrating
on an abstraction in which the dynamics we propose to have been determinant are laid bare.
In parallel, the more realistic version can be created, with ever-increasing detail laid
down in a reconstruction of what we know about what actually took place.
In the HWM, this means that we can make use of the elements listed here to create
the abstract version we need to examine the system that we have laid out roughly in
the cartoon. But keeping in mind the multiple goals of the exploratory approach as
listed in Chapter 3, there are multiple purposes to which this approach can be put. Of
the six possible goals of the exploratory approach (see section 3.4.1), the only one not
easily supported in the ABCM framework is the most general, which aims to transcend
a given context; implementations of the ABCM framework tie their vocabularies to
specific contexts, making this effort one that will take place at a higher level than the
ABCM model itself. The other five goals all ask about the relationship between the
real system as it actually was and the abstract system it may have expressed. The
key to exploring those questions is constructing an abstract picture of that system
and moving back and forth from it to the picture more grounded in the real-world
example. What can’t be known a priori is how this process will go: it is possible,
and even likely, that pursuing one or more of the exploratory goals requires a very
fine-scaled, detailed and particular view of some of the components listed here, but
can employ the coarse and abstract depictions of others.
For the Hohokam case, I will close with an example; it is far short of complete, but
represents the kind of problem, and process, the ABCM framework and the HWM
Simulation are intended to address. We begin with the simple hope of estimating
the productivity of the Hohokam agricultural output around the Salt River systems.
We could assume two crops per year (from ethnographic sources), planted in the
most common or productive crop (maize, surely) and ideal conditions. From this we
could estimate the area of the fields and the total yield of these crops; Turney has
done exactly this, and the result is a convenient theoretical baseline. But, not content with this, we can ask what other limits there might have been, and also what
other opportunities. Would a different crop cycle have been better? Would water
have been sufficient to irrigate the entire area? If not, what would have been the
best strategy to deal with shortfalls? Now we are in an area where we must propose different solutions- a more complicated agricultural calendar with other plants
grown at different times or in different areas, perhaps- and these may require more
data than we have. Might flood damage have crippled production with effects lasting more than a year, as modeled by Howard (1993a)? Now we must ask about the
timing of floods and the vulnerability of the infrastructure of the irrigation system.
To introduce agents: who would have participated in the maintenance and repair of
these systems? We can assume it would have been different groups from different
places in the valley, with different advantages, challenges, and claims to the water
provided. This opens even more elaborate lines of thinking, but for now we can make
a small, illustrative assumption: that the more successful the irrigation system, the
more likely it would have been to increase the number of people it served. Now odd
dynamics are in play: suppose that headgates were stronger, and could withstand
more powerful flood events. If so, might this have induced the system to grow more
quickly, paradoxically leading it into a situation where it grew too large to be sustained? Might the improvement of headgates have actually been to the detriment
of the system? Demonstration of this dynamic might not need the elaborate crop
schedules we started with, but could rely on a simplified representation; the more
detailed one could also be used, of course, and would undoubtedly open new lines of
inquiry.
Chapter 6
Dimensions of Exploration in the Hohokam
Water Management Simulation
In the preceding pages I have argued that a special modeling approach is required
for a class of archaeological problems: problems that by definition include interactions
among humans, and of humans with their natural and artificially created environment, through time;
that are characterized by issues existing on a large scale but understood to be composed
of the incompletely understood interactions among a number of smaller-scale elements;
and to which we hope to apply principles drawn from research
in the operation and dynamics of complex systems. The questions we have about the trajectory of the Hohokam provide a
case-study, and I have proposed (following others, including Hegmon et al. 2008 and
Janssen and Anderies 2007) that our understanding of this context may benefit from
insights such as robustness, resilience, and self-organization. The more central issue
has been to explicate the modeling approach. Previously I have focused on more fundamental aspects of the Hohokam Water Management Simulation (HWM) and the
toolkit from which it was created, the Assertion-Based Computer Modeling (ABCM)
framework (source code and documentation for the software discussed here will be made available via the author and from the Global Institute of Sustainability at Arizona State University). This has been appropriate: the modeling approach requires that our
conceptions about the system be resolved to fundamental pieces; the ABCM framework provides a library of specific kinds of pieces, and in the last chapter I explored
how the conceptual elements in the Hohokam context were divided into components,
each one chosen from among these possible kinds of pieces, in order to construct the
HWM Simulation. In this chapter I shift the focus outward, and examine how the
components in the HWM context interact. This shift is crucial to understanding and
using the approach.
The chapter is built around the construction of a collection of related examples;
the issue of the interaction among the examples’ components is the thread that runs
throughout the chapter’s course. The chapter is built in two main sections: in the
first, the examples are presented in the form of an additive scenario, building from the
first necessary and foundational pieces of the simulation toward increasingly rich and
inclusive combinations of elements. Within this section, the interaction of elements
impacts the example in two clear ways: first, with each new element there appear
constraints that the new element imposes on the previously implemented ones, and
that the collection of existing elements among which the new element is placed impose
on it. I will demonstrate that this is not a technical issue- the software framework
of the ABCM can perform the calculations without problem- but, rather, is part and
parcel of the modeling approach; usually such constraints are previously unexpected
implications of the earlier modeling efforts- an example of simulation models teaching
us through their construction even before their execution (see Frigg and Hartmann
Spring 2008). Second, the components’ interactions yield results that are interesting,
unexpected, and useful. This is the action of the model in execution: when elements
are combined they behave in ways that we cannot think through ourselves. The
emphasis during this section is on showing the flexibility of the modeling framework
to pursue a range of interesting questions through the ‘exploratory’ approach proposed
in Chapter 3. In the second part of the chapter, the components are employed in the
construction of a chain of argument, using the ABCM audit and analysis tools, as
discussed in Chapter 4. This second section adds a level of rigor that is glossed over
in the first section, by requiring that the components be used according to the rules
imposed within the ABCM framework.
This second component is a step toward the construction of formal scientific arguments, so that simulation modeling in the ABCM framework can play the appropriate
role in the scientific process, as was raised in Chapter 3. However, here it also serves
to reinforce the same message that is carried through the initial section: the individual components within the modeling framework are necessarily abstractions- their
real-world targets will always be richer and more complicated than our conceptual
models; but this is an advantage of modeling, not a limitation. The fact that they are
abstractions is what gives us the freedom to manipulate them as we must in order to
pursue the range of interrelated goals laid out in the exploratory approach.
I return to these themes in a concluding section, in which I address them and a
few others, illuminated by the light of the examples of the simulation in action. For
now I turn attention to a patch of virtual territory I refer to as Landscape 3.
6.1
Life on Landscape 3: The HWM Simulation in action
In this section I will present a collection of simulation runs that begin with relatively
few elements but that progressively include more of the components that we established were necessary in Chapter 5 and that comprise the HWM Simulation ‘cartoon’.
This additive procedure is rhetorically useful, as each component can be introduced
and addressed in turn, but, more importantly, it is central to a larger point, which
is that the framework can be used to move in directions of increasing or decreasing
complexity.
The hope in this section is that each of the components is shown to meet three
criteria: it is flexible, it is interesting, and it is useful. By ‘flexible’ I mean that it
can participate in the exploratory programme by being included or removed, and by
having an array of expressions that are suited to moving along the different dimensions
of exploration I have outlined. By ‘interesting’ I do not mean, necessarily, that they
will be appealing to the reader; I may hope for this, but I mean something more
specific: namely, that the different variants of a component can give rise to dynamics
that offer surprises and hint at new questions, or, equally, that can be shown to
have an unexpected interplay with other elements in the simulated system. Finally,
by ‘useful’ I mean useful as measured by a standard rooted firmly in the Hohokam
system: the arguments for including each component were made in Chapter 5, but the
demonstration that each component will be (or, if this is uncertain now, seem very
likely to be) involved in the larger pictures that we are drawing from the Hohokam
context is given here.
The name ‘Landscape 3’ arises from the existence of other such landscapes that
are packaged in a collection of demos (a collection of these is included in the printed documentation for the HWM system; all demos and scripts are available to be reviewed and run in the online version), but if it is dryly evocative (suggesting that
other landscapes, 1 and 2 but also 4 and beyond, also exist), this serves my purpose.
The scenario is intended to be simple, so that it is manageable and useful as an
illustration, but not so simple that it offers no subtleties, which will allow me to
make broader points about the way the modeling framework is to be put to use.
6.1.1
Landscape 3: Topography
Rather than taking the landscape as a given and focusing on the richness of its details,
as in a GIS-based approach, or reducing a problem to some abstract grid of cells, as
in many modeling approaches from the Agent-Based Modeling milieu, the ABCM
framework allows a landscape to be constructed with either rich or limited detail,
and specifically configured to the kinds of problems being addressed. Special features
allow versions of the landscape to be substituted for one another, or for elements to
be included or excluded as needed. The scenario set on Landscape 3 uses several
optional variants.
The actual landscape part of Landscape 3- its virtual topography- begins with a
rectangular, central area that, employing a virtual unit that we will take to mean
one meter, measures just over 5 km from east to west and exactly 4 km from north
to south (see Figure 6.1). The coordinate system establishes an ‘equator’ at the N-S
midline; the origin point (0, 0) is offset, however, placed only 500 m from the western
Figure 6.1. The ‘central basin’ of Landscape 3. Note that the contour lines, in
green, run diagonally north and south away from the center line that will be the
position of the river, so that the most directly downhill slope runs back toward it.
edge. This point also marks a point of zero elevation. The landscape has a constant
slope downward from east to west; this slope is 1:1000 on the equator, making it easy
to calculate the elevation of points to the east and west (4km east = 4 m elevation).
There is also a gentler (2:10000, or 20 cm for every km) grade upward in both N and
S directions away from the equator, so that the land slopes downward toward the
central E-W axis, and movement directly N or S away from this axis means going
slightly uphill (mathematically, z = 0.001x + 0.0002|y| for −500 ≤ x ≤ 4700 and −2000 ≤ y ≤ 2000; the actual specification of the landscape, however, is via the placement of datum points at 100 m intervals across the area).
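For concreteness, the surface just described can be restated as a simple function of the E-W and N-S coordinates; in the simulation itself the surface is actually specified by the datum points placed at 100 m intervals rather than by this closed form.

    # Elevation (in m) of the central basin of Landscape 3, restated from the text:
    # a 1:1000 slope rising from west to east along the 'equator', plus a 2:10000
    # rise moving north or south away from it.
    def elevation(x, y):
        """x: metres east of the origin (-500..4700); y: metres north of the equator (-2000..2000)."""
        return 0.001 * x + 0.0002 * abs(y)

    print(elevation(4000, 0))    # 4.0 m: 4 km east of the origin, on the equator
    print(elevation(0, 2000))    # 0.4 m: the northern edge directly north of the origin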
Landscape 3 optionally includes two additional areas that lie to the north and
Figure 6.2. Landscape 3 with north and south mountains. Because this map shows
a larger vertical interval than Figure 6.1, the contour lines do not illustrate the very
small slope of the central basin.
Figure 6.3. Landscape 3 with north and south mountains in 3D. The view is from
the west looking east; the north mountains are 500 m further away and shorter than
their southern counterparts.
south of this central basin (see Figures 6.2 and 6.3); these represent mountains and
provide additional land area for fields at various elevations. Each set includes three
plateaus; in the standard version of the north range these occur at 400, 800, and
1200 m, while in the south they are found at 500, 1000, and 1500 m; other versions
can vary these. We will return to these highlands toward the end of the extended
example. For now it is useful to note, as a small example of the flexibility of creating
a landscape in the ABCM, that the three areas can be used in any of their 7 possible
combinations of presence and absence. Additionally, we will have use of a second
version of the central basin, in which the N-S gradients are steepened (to z = 0.001x + 0.0012|y|); this version
includes a set of datum points that use identical identifiers, and so can be substituted
for the original as needed, as could, in principle, any other kind of landscape, without
losing the ability to play out any of the other versions of the scenario that I present
here. For the discussion here I will take advantage of the flexibility and include the
mountains only when needed; when dealing only with the basin area they are omitted.
As more elements are added to the scenario, constraints that the landscape offers
on them will be revealed, as will opportunities that this particular design offers. It
should become clear that this landscape has been rather carefully chosen for this
demonstration. It is not the most featureless, abstract plain that could be imagined;
its characteristics have been selected for their effects on the other elements that will
be presented shortly.
6.1.2
Rivers, Headgates, and Stream Flow
Among the theories about the Hohokam that were presented in Chapters 2 and 5
was the suggestion that the number, location, and characteristics of headgates (or
possible headgate sites) played an important role in structuring the relationships of
different Hohokam groups along the major rivers that fed the irrigation systems. In
this section I show how the HWM Simulation can be used to explore some dynamics
that fall from the locations of headgates. We often assume that those who occupy the
upstream positions along a water system are invariably in the advantageous position.
The crux of the examples given here is the suggestion that the relationship between
upstream and downstream may be more complicated than we have realized; coupled
with long-term streamflow data, the Simulation can be used to explore the possibility
that there might have been changes through time in which position held the greatest
advantage. A key part of this example is that it is comprised of very few components:
that such rich dynamics can result from one small change, a single new link between
two variables (headgate efficiency and river flow level), represents both the power and
the challenge inherent in the exploratory approach.
The strategy for modeling a river in the HWM was described in Chapter 5, but can
be summarized succinctly: Rivers exist only as sequences of points on the landscape
that are marked as potential locations for headgates. This means they have none of
the characteristics of channel morphology, not even width, that might be expected.
They have only two attributes: an annual flow value and a value that determines what
Figure 6.4. Landscape 3 with Default River and headgates. Four headgates lie
along the river; two of these are at the edges of the map.
portion of that annual flow is occurring on a given day. These are set by narrative
values to indicate how much flow, in cubic meters per second, is available at the
headgate locations. The most significant aspect of the arrangement is that headgates
occur in a sequence, so that water removed by upstream headgates is not available
for use by downstream ones.
The River on Landscape 3 (named the ‘Default River’) runs east-to-west along
the ‘equator’ and consists of four headgates; two of these lie at the east and west
edges of the landscape and are not actually used- they remove no water, and exist
only so that when the river is drawn on a map it appears to flow through the whole
landscape (see figure 6.4). The other two, named HG1 and HG2, lie 500 m apart, at
E coordinates 4500 and 4000. This simple arrangement allows us to see the dynamics
that can arise when two components, even two fairly simple ones, interact.
We begin by following Howard (1993a), and assume that headgates have an efficiency that specifies the fraction of the river flow that can be diverted into the canal
system. We will see, below, that this approach can be used to determine the initial
conditions at the upstream end of a canal system, by specifying the flow that enters
the canal system in cubic meters of water per second. We can, with only back-of-the-envelope calculations, show what must inhere when two headgates coexist. If the
upstream headgate, for example, were to have an efficiency of 50 percent, then the
downstream headgate would have to have an efficiency of 100 percent to capture the
same amount of flow. If the upstream headgate is more than 50 percent efficient, the
downstream gate will be forced to make do with less water, no matter how efficient
it is. If we assume that the two gates are comparably efficient (say, each one removing 60 percent of the flow), then the first gate will be able to divert a significantly
greater amount of water. However, if the efficiency of both gates is relatively low,
and the downstream gate is slightly more efficient, a condition could occur in which
the downstream gate takes off more water than the upstream gate.
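The arithmetic behind these statements is easy to make explicit. The following sketch computes the absolute offtake of two headgates in series with fixed efficiencies; the flow value and the efficiencies are chosen only to reproduce the three cases just described.

    # Two headgates in series with fixed efficiencies. The downstream gate only sees
    # what the upstream gate leaves behind, which is the whole point of the example.
    def serial_offtake(flow_m3s, eff_upstream, eff_downstream):
        take_up = flow_m3s * eff_upstream
        take_down = (flow_m3s - take_up) * eff_downstream
        return take_up, take_down

    print(serial_offtake(10.0, 0.50, 1.00))   # (5.0, 5.0): downstream needs 100% to match 50%
    print(serial_offtake(10.0, 0.60, 0.60))   # (6.0, 2.4): equal efficiency favors upstream
    print(serial_offtake(10.0, 0.20, 0.30))   # (2.0, 2.4): both low, downstream slightly higher, downstream wins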
The role of headgates in the Hohokam world has been discussed by a number
of authors; these were reviewed more extensively in Chapter 5. Here we can note
Rice’s argument (1998) that the restricted possibilities for headgates and their relative
locations were sources of conflict and competition among the Hohokam along the Salt
River. We can also note Waters and Ravesloot’s (2001) argument that headgate
efficiency may have changed through time with changes to the river morphology. I
will say more about these diachronic issues below, but for now I will note that the idea
that headgate efficiency is determined by the specifics of river channel morphology
may render the assumption of a flat efficiency for the headgates too limiting. As a
next step, the HWM offers a small, optional change, in which the efficiency of any
Figure 6.5. Efficiency curves of two headgates. The downstream headgate is more
efficient than the upstream one at lower flows.
headgate is a function of the available river flow (following a proposal in Howard
1993a); some gates may be more efficient at low flow than they are at high flow, or
vice versa, dependent on the channel bed morphology and the topography of the land
leading away from them, and other factors. One needn’t assume that a headgate that
becomes less efficient at higher flows will necessarily draw off less water in absolute
terms, only that the possibility exists that some gates will be more efficient when flow
is lower.
We can construct an illustrative example of the new complications that this opens.
We assume that each gate has its own efficiency curve; the curves are given in Figure
6.5. Flow enters HG1 at some value; HG1 removes the percentage that it can at that
flow level. The reduced flow travels downstream and reaches HG2; HG2 is able to
remove water more efficiently because the flow level is lower. We can show that for
certain ranges of efficiency curves and flow, HG2 is removing more water than HG1. (This example is highly skewed; in theory it would be possible to use a single curve for both headgates, but this might narrow the range of the effect even further, and for a demonstration the approach used is simpler.)
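A sketch of this flow-dependent variant is given below. The two efficiency curves are invented stand-ins, not the curves plotted in Figure 6.5; they are chosen simply so that the downstream gate is more efficient at low flows and the upstream gate at high flows, which is enough to produce the crossover described in the text.

    # Illustrative flow-dependent efficiencies: the downstream gate (HG2) is more
    # efficient at low flows, the upstream gate (HG1) at high flows. The curves are
    # invented for this sketch and are not the ones plotted in Figure 6.5.
    def eff_hg1(flow_m3s):
        return min(0.6, 0.1 + 0.02 * flow_m3s)   # improves as flow rises

    def eff_hg2(flow_m3s):
        return max(0.1, 0.5 - 0.02 * flow_m3s)   # degrades as flow rises

    for flow in (2.0, 5.0, 10.0, 20.0):
        take1 = flow * eff_hg1(flow)
        take2 = (flow - take1) * eff_hg2(flow)    # HG2 sees only what HG1 leaves behind
        winner = "HG2" if take2 > take1 else "HG1"
        print(f"flow={flow:5.1f}  HG1={take1:5.2f}  HG2={take2:5.2f}  more offtake: {winner}")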
We could also, as noted, achieve this simply by saying that HG1’s efficiency is
Figure 6.6. Absolute flow from two gates of varying efficiency. The headgates use
the efficiency curves in Figure 6.5; flow increases daily from an initial value of zero
on Jan 1.
Figure 6.7. Salt River seasonal flow. Reconstructed seasonal variation in flow along
the Salt River; from Graybill et al. 2006.
Figure 6.8. Gila River seasonal flow. Reconstructed seasonal variation in flow along
the Gila River; from Graybill et al. 2006.
lower than HG2’s, but there is more to be seen in this example- more dynamics that
arise when more model elements are added. We know that flow along rivers varies
during the year; for the Salt and Gila rivers we have reconstructions of intra-annual
flow variations from historical records (Graybill et al. 2006). Figure 6.6 shows a
simulation run in which flow is varied from low to high over a series of days. It
is possible to read this graph by assuming that the horizontal axis is a proxy for
flow, which increases from left to right; the graph is therefore a graph of which flow
levels lead to which levels of offtake for each gate. But it is also equally important
to understand it as it really is: a graph of time. Read in this way it means that
for a while, the downstream headgate was the winner in the game, but during later
periods, when flow increased, the upstream gate was removing more.
Figure 6.9. Headgate efficiency demo using Salt seasonal flow. Graph shows paired
headgates with the same efficiency curves that were used in Figure 6.6 (see Figure
6.5) but using the reconstructed flow data for seasonal variation along the Salt River.
These abstract dynamics can be explored using examples of real flow variation over
time. We can begin with the reconstruction of seasonal variations. Data exist from
stations along the Gila and Salt Rivers in historic times that allow us to reconstruct
the seasonal variation in streamflow (see Figures 6.7 and 6.8). The patterns for the
two rivers, though similar, are different. The HWM simulation uses two Seasonal
Narratives to represent these; a flow level set along the main river for the entire year
is parsed into daily fractions so that the resulting monthly flows are in the proportions
represented on the graphs.
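A sketch of this parsing step is given below, using invented monthly proportions rather than the reconstructed Salt or Gila figures; the point is only to show how an annual flow level and a set of monthly proportions become a daily series whose monthly totals honor those proportions.

    # Illustrative sketch: turn an annual flow level plus monthly proportions into a
    # daily flow series whose monthly totals honor those proportions. The proportions
    # below are invented; the HWM Seasonal Narratives use the reconstructed data.
    DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

    def daily_flows(mean_annual_flow_m3s, monthly_proportions):
        """Return 365 daily flow values (m3/s) matching the monthly proportions."""
        total_days = sum(DAYS_IN_MONTH)
        annual_volume = mean_annual_flow_m3s * total_days   # bookkeeping unit: m3/s-days
        flows = []
        for days, prop in zip(DAYS_IN_MONTH, monthly_proportions):
            month_volume = annual_volume * prop
            flows.extend([month_volume / days] * days)       # constant within each month
        return flows

    # Invented proportions peaking in spring (they must sum to 1.0).
    props = [0.06, 0.08, 0.14, 0.16, 0.12, 0.06, 0.05, 0.06, 0.05, 0.06, 0.07, 0.09]
    series = daily_flows(20.0, props)
    print(len(series), round(sum(series) / len(series), 2))  # 365 days, mean ~20.0 m3/s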
When these values are used with our specially-configured headgates, we find that
the two rivers produce very different patterns (see Figures 6.9 and 6.10). Along the
virtual Salt, where flow is very high during one part of the year and relatively low
during the rest, the downstream gate takes off more during most of the year, except
during the peak. On the ersatz Gila, conversely, there are two periods of the year
where the downstream gate has the advantage.
Figure 6.10. Headgate efficiency demo using Gila seasonal flow. Graph shows
paired headgates with the same efficiency curves that were used in Figure 6.6 (see
Figure 6.5) but using the reconstructed flow data for seasonal variation along the Gila
River. Note that during two periods of the year the downstream (south) flow value
is higher (and they are roughly equal in January).
Figure 6.11. Salt River headgate efficiency over a range of values. The graph shows the offtake of two headgates
using reconstructed seasonal flow distributions of the Salt river over a range of values and given the efficiency curve
as shown in Figure 6.5. Absolute flow begins at zero and moves upward each year through time to the right. Note
how for different ranges the times of the year that the downstream gate gets more offtake vary.
Figure 6.12. Gila River headgate efficiency over a range of values. The graph shows the offtake of two headgates
using reconstructed seasonal flow distributions of the Gila river over a range of values and given the efficiency curve
as shown in Figure 6.5. Absolute flow begins at zero and moves upward each year through time to the right. Note
how for different ranges the times of the year that the downstream gate gets more offtake vary, but in contrast to the Salt
River (see Figure 6.11) there is no value on the graph for which there is no time of the year when the downstream
gate is drawing more flow.
To produce Figures 6.9 and 6.10, however, an arbitrary value for annual flow
was chosen. This raises a further question, which is to ask how this is affected by
annual variation in flow. During dry years, the flow arriving in our system may fall
below the level benefitting the downstream gate for more of the year than it does
in comparatively wet years. To chart this, the simulation was run using each of the
two reconstructions of seasonal flow, applied to a steadily increasing flow value. The
results are given in Figures 6.11 and 6.12. These graphs show that the interplay
between annual flow level, seasonal flow, and headgate efficiency has a richness that
might not initially have been suspected; this can be seen especially clearly in Figure
6.11, where the crests on the blue graph, representing the second gate’s offtake, curve
and twist as the amount of water they represent varies and different times of the year
are seen to be more or less favored depending on the absolute flow and the offtake of
the upstream gate.
We can use this in turn to examine reconstructions of annual flow- were there
more years with low flow during some periods, meaning that for extended periods
the downstream canal system would have had the better results? An example of
this approach, abbreviated to a 50-year sequence using the Salt River data, (using
reconstructed streamflow values provided to the author by Jeffrey S. Dean) is given in
Figure 6.13. The patterns are intriguing; again we see that the times of the year that
maximum offtake occurs for the downstream gate shift as the upstream gate’s offtake
reacts to changing absolute flow level. This is suggestive of additional directions the
simulation might be taken, as this kind of interaction between components along the
single river system may offer rich surprises and challenges.
6.1.3
Canal Systems and Water Flow
The next addition to the scenario is the inclusion of a canal system. A canal system
can only be modeled using some kind of algorithm for moving water through it,
Figure 6.13. Headgate efficiency on the Salt River using reconstructed data and seasonal variation.
and it is on this that our example will focus. I will show that there are different
possibilities for this, but that with the different algorithms there are slightly different
requirements for the data on which those algorithms will operate, and that these,
together, can impose more restrictive demands on the way that the canal system
is integrated with the elements we have already placed into the landscape (rivers
and headgates), with the data we are using with those elements (streamflow values
and efficiency curves) and even with the landscape itself. Thus different variations
of internal algorithms within the canal system impact the way that system can be made to
work with other elements in the system, even though the basic interface between the
components remains unchanged.
Ultimately we will have two canal systems, one branching to the north of the
Default River and drawing from HG1 and the second branching south, drawing from
HG2 (see Figure 6.14). We can carry on most of the discussion, however, using only
the north system; just as the topography of the central basin is symmetrical, so the
two systems are identical except in orientation.
The first illustrative aspect of the canal system is that it is constrained by the
topography of the landscape. There is a caveat, however: the degree to which this
constraint matters depends on whether we are using an algorithm for determining
water flow that relies on the physical characteristics of the system (or some of its
characteristics) or whether these are essentially ignored. This represents a link between three components, not just two, the third being the code that makes use of
the two different data sets. Either approach is possible; for simplicity in this example
the canal system is built the same whether these details are to be used or not, but
it should be borne in mind that for some flow algorithms there are more or less restrictive constraints than there may be for others. The canal system varies somewhat
from Howard’s (2006) description of the actual Hohokam canal system structure, but
is not irreconcilable with it; it consists of a trunk line extending away from the river,
distribution canals drawing water away from the trunk line, and field canals bringing
Figure 6.14. Landscape 3 Central Basin with canals and fields. The fields are the faint squares at the termini of
the smallest branches of the canals. Note from the contour lines that all canals always flow down the local gradient.
water from the distribution canals to the fields. The map in Figure 6.14 shows the
arrangement, with six laterals feeding a total of 30 fields on each side.
In the real-world situation, canal system structure is determined by the local topography of the landscape. This is reflected in the fact that the choices I made in
constructing the landscape were ones that allowed me to demonstrate a system of
this structure (a true reflection, reversing the direction: I made the landscape fit the
canal system I wanted, not the other way around). The topography of Landscape 3
is such that the river flows downward from east to west, nestled between two grades
sloping upward away from it. This is not an uncommon condition, given that water
often flows in the kind of trough described. But there is a question raised by it: How
does a canal system move water away from the river, when ‘away’ would seem to be
‘uphill’ ? There is a strong correlation between the slope of a grade of land and the
slope of a canal built through it, so an uphill slope, even a gentle one, is difficult to
ignore.
The answer is that the canal does not go away from the river perpendicularly, which would mean moving straight up the gradient, but rather moves away from
it at an angle, such that it is still going downhill, only not as steeply as the river.
On Landscape 3 this means cutting across the N-S gradient at an angle. When a
sufficient distance away from the river is reached, it can turn more parallel to it. At
that distance, the laterals come off the trunk and flow toward the river, allowing the
downhill gradient to help move the water toward the fields. The field canals flow
east to west, already a downhill grade. By doing this the simulated N canal system
and its mirror-twin to the south do an effective job at seeming to be reasonable
representatives of a working canal system.
(Technically, the claim that topography determines canal structure is only partially true; ‘constrained’ might be better than ‘determined’. History and even prehistory provide examples where topography was overcome through engineering in the construction of water-movement structures; however, effort and skill are required to achieve this, and the cost is often very high. The Hohokam made use of the opportunities the landscape offered them, so it is reasonable to assume the same kind of dependence throughout this discussion.)
But how reasonable they are is a function of the algorithm that makes use of them.
Strictly speaking, a water flow algorithm needs to do only one thing to function in the
HWM simulation: accept some measure of flow as input- this is the role of headgates,
whose connection to the canal is simply to provide this value at any point in time- and
distribute the water as output onto the fields to which it is connected. Any variation
on this theme would be acceptable, so a very simple flow algorithm might be that
the water coming in is distributed equally on all fields. This approach might actually
have a parallel in the archaeological literature: Howard (1993b) discusses the overall
capacity of the canal systems, and uses some coarse figures about the maximum that
the trunk canals could have carried to provide estimates of the total field capacity
that they could have served. However, Howard’s own research demonstrates that the
problem of understanding how the canal system functioned requires more than this
simple approach allows. Our ultimate goal with the HWM Simulation is to enquire
about the social relationships that might have inhered among the users of the canal
system, and how they may have settled issues arising from the challenges of water
distribution; assuming that water is simply to be distributed equally is insufficient.
We can move to one of several finer-scaled approaches; however, I will show that as
we do so the data requirements grow larger, and the need increases to work within the
constraints imposed by the landscape. This is an example, I contend, of the modeling
process I am advocating in action: pushing forward one element of increased detail
may require us to rethink other aspects of the simulation, either by being prima facie
incongruent or by returning simulation results we recognize as invalid.
To demonstrate this, we can begin to use the north canal system, employing a
flow algorithm called the LPG algorithm. ‘LPG’ is an in-house name that stands
for ‘Looks Pretty Good’;7 just how good it actually looks is subject to argument.

7. The name is in honor of Jim Kremer, who used it at a workshop held in Tucson, Arizona, in December 2005 to describe pithily the most common criteria by which we judge models.

The algorithm is very simple: assume that the canal system has three kinds of end
points: one kind trails off into nothingness and directs its flow outside our concern;
the second pours water onto fields; the third is a blocked outlet, closed off with a gate.
The LPG algorithm assumes that closed outlets equal closed branches and considers
these reaches empty. The total flow is first distributed to the field outlets, up to some
maximum value that each such outlet is specified to accommodate. This distribution
is prioritized by the outlets’ distances from the headgate, so that closer-in fields get
first draw of the water. If there is excess water above the total that can flow to the
fields, it is distributed to the outlet line (or lines) that leave the system. This algorithm
is such that one line acting as a kind of drain is virtually a requirement; this means
that the system is in what Howard terms ‘low hydraulic equilibrium’, meaning that
there is little need to regulate the balance between water intake and outflow.
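For concreteness, the sketch below shows the kind of allocation the LPG algorithm performs; it is an illustration of the priority-by-distance idea rather than the HWM code itself. The class and function names are hypothetical, and the numbers in the usage example are invented.

```python
from dataclasses import dataclass

@dataclass
class Outlet:
    name: str
    distance_from_headgate: float   # straight-line distance from the headgate
    max_flow: float                 # maximum flow this outlet can accept
    kind: str                       # 'field', 'drain', or 'closed'
    received: float = 0.0

def lpg_distribute(total_flow: float, outlets: list) -> None:
    """Hand out the headgate flow: closer fields first, any excess to the drain(s)."""
    remaining = total_flow
    # Closed outlets stand for closed (empty) reaches and receive nothing.
    fields = sorted((o for o in outlets if o.kind == 'field'),
                    key=lambda o: o.distance_from_headgate)
    for o in fields:
        o.received = min(o.max_flow, remaining)
        remaining -= o.received
    drains = [o for o in outlets if o.kind == 'drain']
    if drains:
        share = remaining / len(drains)
        for o in drains:
            o.received = share

outlets = [Outlet('A1', 1200.0, 10.0, 'field'),
           Outlet('F5', 3400.0, 10.0, 'field'),
           Outlet('trunk end', 5000.0, float('inf'), 'drain')]
lpg_distribute(15.0, outlets)
print([(o.name, o.received) for o in outlets])   # A1 gets 10, F5 gets 5, drain gets 0
```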
The Landscape 3 canal systems are built to work with the LPG algorithm: the
trunk line serves as the outlet for any overflow, and the remaining outlets all discharge
onto fields. We can see the effect in Figure 6.15. This map shows the fields after
several days of flow, during which the flow level began above the level of maximum
field discharge allowed and dropped steadily down to zero. The darker fields received
more water. There is a result visible in this figure that might surprise some readers:
the order in which fields lose water priority. Casually we might assume that all of
the fields along the westernmost lateral would be denied water first, then the second-westernmost, and so forth; however, the figure shows that this is not the case. This is
because the linear distance of the fields from the headgate is what matters, not any distance determined
by network node-structure. Measured strictly up-the-line, the fields at the end of
branch E are farther from the headgate than the fields at the near end of lateral F.
Intuitively we might wish to construct the algorithm differently, so that the network
structure is used to direct flow in a way that matches our expectations, but this is
more difficult than one might think (and may be impossible).
But, to continue our inspection, note that the characteristics of the canal system
that have been mentioned so far are of exactly one kind: into what does each terminus
discharge?8

Figure 6.15. Landscape 3 flow shortage demonstration. This map of Landscape 3 demonstrates the outcome of using
the LPG flow procedure with inadequate flow to supply all fields. Flow has decreased over time, leaving nearer fields
with more water than more distant ones. Note the gradient from NE to SW in the northern canal system (and its
reflection in the southern system); this reveals that linear distance determines supply level regardless of which lateral
a field is on.

There is only one other element: the maximum flow capacity of the field
termini. This is entered into the calculations through one of two means. In the first,
it is simply specified by the user. The simulation allows a user to enter in a flow
value as the maximum flow that is then used by the algorithm without question. If
this is done, there is no connection between the flow algorithm and the landscape;
we could construct a canal system that moves uphill or over mountains, and the
simulation would be content. But in the second approach, the simulation calculates
the maximum flow value from the physical characteristics of the field canals. The
now-standard approach, pioneered by Jerry Howard (1990), is to employ the Manning
equation, which is a simple formula relating channel profile and slope to rate of
discharge. By using this formula and assuming a depth of 2/3 the channel height,9
the simulation calculates what should be the flow through the field canals and onto
the fields.
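The Manning calculation itself is simple enough to illustrate. The sketch below computes the discharge for a trapezoidal channel of the dimensions used for the field canals described below, flowing at 2/3 of the channel height; the roughness coefficient and bed slope are assumed illustrative values, and the function is not drawn from the HWM code.

```python
# A minimal sketch of a Manning-equation discharge calculation (SI units).
# The roughness n and bed slope below are illustrative assumptions.
from math import sqrt

def manning_discharge(depth, bottom_w, top_w, height, slope, n):
    """Discharge (m^3/s) for a trapezoidal channel flowing at the given depth."""
    z = (top_w - bottom_w) / (2.0 * height)              # side slope (horizontal per vertical)
    area = depth * (bottom_w + z * depth)                # flow cross-sectional area
    wetted = bottom_w + 2.0 * depth * sqrt(1.0 + z * z)  # wetted perimeter
    radius = area / wetted                               # hydraulic radius
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * sqrt(slope)

# Field canal from the examples below: 25 cm top, 10 cm bottom, 1 m deep,
# assumed to flow at 2/3 of the channel height.
q = manning_discharge(depth=2.0 / 3.0, bottom_w=0.10, top_w=0.25,
                      height=1.0, slope=0.001, n=0.030)
print(f"{q:.4f} m^3/s")
```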
This simple change, however, links the canal system to the landscape, at least to
that aspect of the landscape that determines the slopes of the field canals. Because
we have added a new constraint, essentially linking flow to topography, we must
reconcile these previously separate issues. Doing so in this case means determining
the appropriate slope for the field canals (which should be close to the slope of the
surface of the ground through which they flow), then determining an appropriate
profile for a channel at that slope to carry an appropriate amount of water. This also
implies that the physical characteristics of the field canals (their cross-sections and
roughness coefficients) are now figured into the flow calculations. Where previously
these could have been of any configuration, now they must be carefully configured;
8. For completeness I will add that the algorithm calculates the flow that should be in each non-terminal segment of the system by summing the outflows of all of its downstream extensions; once this has been calculated, the algorithm uses the characteristics of each segment (profile and slope) to calculate an estimate of the water depth, given that level of flow and using an algorithm that finds a solution using Manning’s equation.

9. Howard (1990) notes the difficulty of establishing water depth in archaeologically attested canals, and the several means employed in doing so; for our simulated canals the use of 2/3 is arbitrary.
Figure 6.16. The flow shortage demonstration with physical constraints. This
is a run nearly identical to Figure 6.15 in outcome, but using a slightly different
algorithm, in which the flow through the canal systems is determined by the physical
characteristics of the terminal field canals.
in our examples here, they are trapezoidal in shape, 25 cm across at the top, 10
cm across at the bottom, and 1 meter deep. They parallel the surface gradient in
slope. (Note that there is yet another simplification here: these field canals deliver
their water 1m below the surface of the fields. In theory a more detailed landscape,
in which field terraces were flattened and sunk into the landscape, would be more
accurate, but the simulation algorithm for moving water from field canals onto fields
ignores this issue and we can pass it over here.) Figure 6.16 presents the results of the
simulation run with the new algorithm; the results very closely match the first run
from Figure 6.15 because the original field canal limits were chosen to be the same.
Two small asides are worth noting here. The first is an odd demonstration, but
one that illuminates the limitations of the kind of modeling that we are undertaking.
According to the modeling configuration given above, the field canals run directly
E-W, and thus their slope is determined solely by the E-W gradient. The link that
we have established between topography and flow is therefore only in one direction.

Figure 6.17. Flow shortage demonstrated on a steeper landscape. The conditions
are identical to those in Figure 6.16 except that the N-S gradient is steeper. Because the field
canals flow directly E-W, and because only the E-W gradient is used to determine
flow rates through the entire system, the simulation results are almost identical in
both runs, even though in this run many of the other canals have very different
characteristics; note that the main trunk lines actually run upslope.
Figure 6.17 shows the results of a run identical to that used in Figure 6.16, except
that the N-S gradient of the landscape has been altered; in fact, it has been made
fairly steep. Note that the results are virtually identical,10 even though some of the
canals in this second run flow uphill. This illustrates the limitations of the LPG flow
algorithm as well as the kind of unusual connections that can arise when modeling
using abstractions.
The second aside stands in contrast to the first: whereas the first showed that
two components that we would intuitively think were linked were actually not, this
one shows the kinds of extended links that can begin to accumulate when we begin
to assemble the parts of our model into a coherent whole. There is an additional link
that is formed when the physical characteristics of field canals are used to determine
system flow: flow values must now be appropriate to the volume of flow that can be
carried through channels of this configuration. The previous example, in which the
maximum flow capacity was set manually, could use any arbitrary numbers, and flow
would be distributed appropriately. If the total streamflow were 100 million acre-feet
per year, an absurdly high figure, we could instruct the field canals to convey enough
water that we could observe the same patterns of shortfall in the distal fields that
we saw at the flow levels given earlier. If the maximum capacity of the field canals
is given by their physical characteristics, however, we must ensure that the flow
entering the system is congruent with the outflow expected at the fields. For
our purposes this means that flow input must be within the right range; for example,
to demonstrate that periods of low flow leave some fields dry we would need to ensure
that the range from low to high flows encompassed the maximum field discharge,
now determined by the simulation. If we wished to use real streamflow values, as
10. That is, the amount of water deposited on the fields is almost exactly the same. The actual depth of water in the fields’ soils (see below) would be slightly less because the surface areas of the fields would be slightly larger, the x,y coordinates of their corners being the same but the slope on which they sit having been changed.
we did in the preceding Headgate example, and to employ headgate efficiencies in
the ranges we consider appropriate or interesting, we will have to re-scale the river
flow values to fit in our newly constrained system. The lesson continues, of course:
if the headgate efficiency function is a function of flow, and the river flow values
are re-scaled, the efficiency function will also have to be re-scaled. Discovering and
repairing issues like this are part and parcel of the exploratory approach, and as more
components are included in the system, issues like this can reveal connections that
were not initially recognized.
With greater detail, or the hope for it, come greater data costs. The benefit
of the LPG algorithm is the flip side of its cost: by ignoring most of the details
of the canal system it arrives at a simple proxy for water flow. However, it is a
straightforward exercise to show that the ‘Looks Pretty Good’ algorithm doesn’t look
very good at all. The first clue, of course, should be how many of the canal system’s
characteristics are ignored. Closer inspection shines light on other flaws. To begin, it
is not possible to control a canal system from the ends. Assuming that closed reaches
have no water in them is wrong: a reach with a blocked end will not become an empty
reach, but instead will fill until the obstruction is overcome or the water backs up,
affecting all other reaches. But we can ignore this by assuming (or pretending) that
the blocked end is actually a block at the junction leading to the entire reach. More
difficult to reconcile, however, is the ‘throttling’ of the canal system from the open
ends discharging onto the fields. A canal system can’t be controlled from the ends;
this would also lead to backwater effects that would change the rest of the dynamics
through the system. Finally, the assumption that in periods of short supply water
will flow onto close fields first seems generally true, but should also seem too absolute:
the algorithm states that the flow into the last field should decline to zero before the
next field is affected, while even a casual and intuitive understanding of flow should
suggest that low flow will cause problems in more than one location.
There is an alternative to the LPG algorithm. The HWM system includes an
optional algorithm derived from the Army Corps of Engineers HEC-RAS software
(Brunner 2002). This software is designed to track changes in flow in rivers undergoing
flooding, but the underlying mathematics is general enough to apply to the problem
of open-channel flow in other contexts, including here. The advantage this approach
offers is that it approaches something we would consider more correct: a better way
of representing the way water moves through a system like our north canal system.
Changes to one part of the system (say, blocking off the field canal leading to field
C3) might well have unexpected results for the fields both upstream and downstream
of it. Controlling a canal system is tricky business; even today, control of the system
of canals that support modern Phoenix is still the subject of research (Wahlin and
Clemmens 2006a, 2006b).
The cost is an additional data requirement, which for the HEC-RAS algorithm
is profile, slope, and roughness information for all segments of the system, not just
the field canals. This is a high bar, even if only a hypothetical canal system is to
be constructed. If we someday hope to use archaeologically recovered data on the
actual canal systems the challenges are even greater, even though work continues on
recovering as much of the Hohokam system as possible. The HWM system allows for
this possibility through various means (for example, missing canal segments would
have to be represented by conjectured segments, which could be marked as such using
the ‘truth value’ of the canal construction narrative; see Chapter 4), but assembling
the data would be a formidable, if laudable, task.
We would like to run the HEC algorithm on the Landscape 3 canal systems, but
this is still out of reach; the canal system data are not quite enough to allow the
system to be run. The kind of problem the HEC algorithm is intended to solve is
akin to the ‘thought experiment’ presented in Section 5.2.5, an apparently simple
water flow puzzle with only three branch points. We would like to put it toward the
very much more complicated problem of how water flows through Landscape 3 when
some gates are open and some closed. The results would be of great interest, but the
data requirements are large and are not yet met here.11 Moreover, we must keep in
mind that the artificial canal system being used is ingenuous: all lines maintain the
same profile and slope for their entire extent, the only exception being the portions
leading away from the river, which, especially in sections where they change direction,
are pinned to convenient datum points, leading to changes in slope from one section
to another. The fact that profile does not change means that the flow approaching
the last lateral, given that much flow has already been diverted upstream, would
be different in character than the flow approaching the first lateral. In a real canal
system, we might expect the profile would be changed to account for different flow
levels expected in each section. Of course, the Hohokam engineers faced this problem
as well; moreover, they realized that in years with low flow the whole system might
need to carry less water than in high flow years. The actual system would have been
full of such design compromises; Howard’s analyses (especially 1993b) can carry us
further into this if we wish to go.
To review, and return to the original message of this section: The HWM can
specify a canal system’s operation using some flow value at a headgate for input and
a collection of termini for output. Four possibilities for this were mentioned; one,
quickly dismissed, was to simply distribute water equally on all fields, but the other
three involved some more elaborate calculation to decide which fields got more or
less water. It is among these three that the most illustrative examples are found of
the interplay between the internal richness of a model component and its interactions
with other components. In the first, the LPG algorithm with user-specified maximum
outputs for field termini, no characteristics other than maximum output need be
specified internally for the canal system, and the canal system can virtually ignore the
landscape: canals can be of any shape and can flow uphill. In the second, maximum
11. One example of the limitations being faced: the FlowAg implementation of the HEC algorithm cannot accommodate a junction being connected directly to another junction, as occurs in the canal systems used here. This is an example of how a different data structure, not just greater information, is needed.
output for field termini are calculated from channel properties of the end segments.
These channel properties include slope, which begins to link the canal segment to
the landscape. This link carries two implications: first, that the landscape must be
minimally compatible with the desired behavior of the canals (that this link is minimal
is illustrated by the mischief of the example in Figure 6.17, but it is at least present);
second, that the discharge of the canal system can no longer be entirely arbitrary,
but begins to be constrained as well. Finally, we hoped to reach a different version of
the flow of water, and presumably one more reflective of reality, with the HEC-RAS
algorithm. We found the data requirements for this to be very high (channel slope
and profile for all segments must be managed closely), and this links the system even
more closely with the landscape and with the headgate discharge. It should be noted,
though, that these constraints are all defined by the meaningfulness of their results:
the points of contact between the components never change, but their interaction
leads to a constrained domain within which we recognize meaningful model behavior.
Canal Systems, simulated and real

It is worthwhile to return the discussion to more solid ground, albeit temporarily, and to examine how the HWM Simulation is specifically relatable to what is known of the Hohokam canal system. Decades of extensive and intensive research have examined the Hohokam canal system in both wide breadth and fine detail, and it is useful to consider how the HWM implementation can reflect or incorporate this and future work, and whether by doing so it can offer any improvement on our current state of knowledge.
What we know now of the Hohokam canal system runs from small scales to large,
and includes an array of detail on a wide suite of varying structures, which we can list
and briefly discuss here, beginning with our rhetorical lens zoomed in on the smallest
details and moving to larger ones.
We know that the interior linings of the canals were crucial. They determined how
quickly or slowly water was able to flow through the system and through individual
parts of the system. They determined the rates of seepage, erosion, and sedimentation. The Hohokam sometimes lined their canals or treated them in other ways (see
page 51) in recognition of these effects.
In serving this function, the lining of the canal would have worked in tandem with
the slope of the channel. This is something of an oversimplification, because a water
channel has three distinct slopes: the bed slope, the slope of the ground through
which the channel cuts, and the hydraulic gradient, which is technically the change
in energy through the length of the channel, but is usually equal to the slope of the
water surface along the channel. Howard (1993b) argues that these three are usually
the same slope (if they diverge too much the water surface will go into the channel
bed or over the channel ways), but they can be different.
The way that water flows through canals is also affected by the cross-sectional
shape of the channel. Common templates for channel cross-sections are parabolic,
trapezoidal, and rectangular, and for these, nice mathematical formulas provide hydraulically relevant values (see Howard 1990), but real channels only approximate
these and may be highly irregular.
The cross section and slope can, and almost certainly will, change from upstream
to downstream. Howard (1993b) has examined this in detail, and noted that if the
cross-section of a channel grows smaller in area this can indicate that water was taken
off along the stretch in question, even if the channels diverting this water are no longer
archaeologically visible.
We know a little about control features that the Hohokam may have employed,
beginning with the headgates and including weirs, the points of junction for the
canal systems (where there may have been water dividers used to apportion water
down alternative channels), and even features like tapons, gates that could be closed
downstream of a diversion channel so that the water surface at the point of diversion
would rise. Unfortunately these structures were usually temporary or archaeologically
ephemeral, so our knowledge of them remains incomplete.
Following the channel downstream, we can note, from Howard (2006, p. 40),
that “The ends of the Hohokam canals appear to end abruptly into the solid face of a
wall. . . .” This implies that the canal system acted in some ways more like a reservoir;
it was a system that, in Howard’s terminology, was at ‘high hydraulic equilibrium’,
meaning that considerable care must have been required to balance input into the
system and offtake.
At the largest static scale, the Hohokam canal systems have a structure that is
determined not simply by the characteristics of individual channels but by the arrangement of channels and the structure of the network they form. Howard (2006)
has argued that the canal systems along the Salt River conformed to a general template comprising four kinds of canals: main canals, branch canals, distribution canals, and field canals. Main canals are the large canals drawing water
directly off the river; branch canals are effectively alternative forks of main canals,
though their distinctiveness in an archaeological context is sometimes confused by
the possibility, in many cases, that what seem to be branching canals are actually
two non-contemporaneous canal sections; distribution canals spring from the main or
branch canals at intervals, moving a short distance away from the main canal and then
turning parallel to it, defining, in Howard’s terminology, a ‘command area’ of related
fields; field canals move water from the distribution canals to collections of fields.
This is only one arrangement of many possible ones, and Howard argues that it
carries implications for the organization and management of the canals. Branches
in a canal system represent points of decision and points of control: the options for
distributing water are determined by the branching structure of the canal system;
the opportunities for making those decisions are held by those who control these key
points; and the people who live along branches of a canal system find their destinies
linked by it in ways that are determined by its overall structure. Howard argues,
based on the physical evidence of the Hohokam case in comparison with ethnographic
cases from systems of similar scale, that in the Hohokam system the most prominent
unit of organization was the command area; branch canals, however, would not have
determined much of the organization structure of the management of the canals, due
to the different water management issues they would have faced.
Allowing our focus to expand through time as well as space introduces the fact
that we can trace diachronic changes to the canal systems. At the beginning, of
course, they must be constructed; we have alluded to general principles of canal
placement, with main lines following the contours of the landscape (see page 51), but
placing a canal on a landscape is inherently a complicated engineering question. And
there are additional issues: Howard (1993b) has argued that there was a shift in the
placement of canals in Canal System 2 from south to north during the beginning of
the Sedentary period that allowed the Hohokam better water control during periods
of higher variability and risk. And, of course, the canal system might have changed
(in fact, was probably constantly in flux) due to wear and damage that was either
constant or episodic.
I contend that the simulation has the potential to help us understand the canal
system at each of these levels. The first point in support of this is that the simulation
can represent each of these features; most of them have already been implemented
(some water control features, such as diversion gates, have not; this is due to their
poor archaeological preservation which, in turn, has contributed to them being given
little treatment in the archaeological literature, which has led more or less directly to
their invisibility in the simulation). The various means by which they are implemented
have been discussed at different places in this essay, and in some cases we can envision
ways that the representation might be improved. For example, channel lining is
incorporated through a combination of data and code; thus far it includes only the
single datum of a roughness coefficient for each channel and the code that supports it,
but an easy extension is to allow different kinds of linings that can erode at different
rates, etc. The flexible nature of the ABCM framework facilitates additions like this.
How can this flexibility be turned toward answering specific questions about the
canal system? We can better understand one way that the simulation can aid us
by considering what the simulation offers in light of previous analyses of the canal
system. Howard’s important work on Paleohydraulics (1993b) presents several analyses of the canal systems that we can consider as an exemplary model. The first
is the reconstruction of the channel dimensions from the archaeologically available
data, which consists generally of small samples at loci along longer canal routes. He
found that cross-sectional area could be related to distance from the canal head via a
logarithmic function. He used this to calculate estimates for the cross-sectional areas
of the canals at their headgates. These he translated into discharge values, which
were then converted into estimates of the capacity each canal had to irrigate fields, in
acres. An additional calculation, based on the volume of the canal channels, inferred
the amount of work required to build the canals. These analyses were combined with
chronological data to arrive at estimates of the changes in the canal system through
time and the labor, during these time periods, invested in them.
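As a rough illustration of this style of reconstruction (and emphatically not of Howard’s data or exact procedure), a logarithmic relation between cross-sectional area and distance from the canal head can be fit to a handful of sampled loci and read off near the head. The sample numbers below are invented for illustration only, and the whole sketch is hypothetical.

```python
import numpy as np

# Invented sample values, for illustration only (not Howard's data).
distance_m = np.array([500.0, 1500.0, 4000.0, 9000.0])  # distance of each sampled locus from the canal head
area_m2 = np.array([2.6, 2.1, 1.6, 1.2])                # cross-sectional area measured at each locus

# Fit area = a + b * ln(distance) by least squares, then read off the area near the head.
b, a = np.polyfit(np.log(distance_m), area_m2, 1)
print(round(a, 2))   # intercept: estimated cross-sectional area as distance approaches 1 m
```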
This kind of analysis is straightforwardly replicated in the simulation. The data
that have been incorporated to simulate the canal system can be converted into
equivalent measurements that match the archaeological data that Howard used in his
analyses. In fact, it would be possible to construct a simulation of an entire canal
system and then sample it in a manner that replicates the element of chance that
figures into our archaeological data collection, wherein we cannot always control
which portions of the canal system are available for study, and from these samples
replicate the calculations Howard employs, basing them on the sample. These could be
compared to the knowledge we have of the entire simulated canal system, wherein we
know exactly the cross-sectional area, volume, and acres irrigated, and we have perfect
chronological control. The analysis can help us work in two directions: we might find
that our simulation suggests that Howard’s approach can be improved; we might also
find that Howard’s approach allows us to refine our simulation. This dialogue could
have benefits to the applications of the simulation and to the understanding of the
archaeological record; it can also be applied to the other features I have listed above.
Where they inform our picture they can be incorporated into the simulation; once
there they will interact with other assertions we hold about the simulation, and in
their interaction with these other components we may learn either more about them
or more about the other components, or both. When features of the canal system are
not yet known we may find that we can infer them or find their boundaries when we
provisionally model them.
Turning, for a moment, to the theoretical issues raised earlier in this work, it may
seem that we have abandoned the ‘deductive engine’ idea for our model. We are
merely representing the canal system, and making measurements from our representation. I would argue that this is actually exactly in line with the idea of a deductive
engine: we still have a set of assumptions, and we are still retrieving implications of
those assumptions. In exactly the same way a scale model of, say, a car can be built
and allow us to deduce certain things about the car that we did not know before, so
our representation of a canal system can give us information that we did not have,
even if it is primarily a model that organizes rather than one that operates. Nevertheless, I will concede that our need for the model increases when we turn our attention
to dynamic questions: in the case of the canal system, primarily the question of how
water will flow through the system. It is here, however, that the real challenges lie.
The above analyses were intended to get past the limitation of our data. In
truth we may know a great many details about a particular archaeologically attested
canal system, and yet still fall very far short of knowing all of the details that we
would require in order to model it as a flowing canal system. Segments, short ones
or sizeable ones, will be missing, or data about other segments incomplete. The
simulation would allow us to fill in the gaps with provisional or inferred details, but
this is not completely satisfying as a solution. The situation worsens when we consider
what that additional detail would gain us, for the reality is this: no perfect model of
open channel flow dynamics through a system containing the kinds of detail we wish
to investigate in the Hohokam case exists. The HEC-RAS algorithm for unsteady,
open-channel flow is arguably the best that can be done, but it is not perfect and
should not be considered so. Work on modeling water flow continues (newer versions
of HEC-RAS approach sediment transport in a way that might someday be useful), but the dream of being able to create a canal system with a fine level of detail and
including much that we know about the Hohokam canals, then asking it to simply tell
us where the water goes, is likely to remain elusive. This is particularly unfortunate
because of the lacunae that remain, still, in our understanding of the Hohokam canals.
Despite the excellent and ongoing work of Howard and others, there are still gaps that
we would like to fill in. As but one example, the use of Manning’s formula to calculate
headgate discharge, as noted above, seems inconsistent with the way that water is
moved through the rest of the system, which frequently violates the assumptions
required for the use of Manning’s formula.
This is an additional argument for the ABCM system and the exploratory approach it supports. The LPG algorithm is included as one means to get around this.
Rather than attempt some realistic model of water flow, it takes some aspects of
the system (in this case the distinction between field and non-field termini) and allows us to consider other examinations based on this provisional and useful approach.
Whether we are committed to representing in our simulation some section of the
actual Hohokam canal system or are working with an abstraction from it, the LPG
algorithm allows us to move forward. The focus of these kinds of investigations moves
us beyond the canal system (or, if you prefer, to components of the system other than
the canals themselves), and it is to these, the fields and plants that the canal system
nourishes and the people that build it and are supported by it, that I will turn next.
6.1.4
Fields, Plants, and Plant Growth
The water borne by the canal system may have had several uses, but the primary one
was to be used by the agricultural crops the Hohokam planted, tended, and relied
upon. Many of the crucial aspects of our understanding of the Hohokam system
revolve around the operations of the Hohokam farms, and the HWM Simulation
addresses these through a general model of plant growth. As with other elements
of the simulation, the algorithm used could, if needed, be supplanted with some
more complicated variation, or a simpler one; the algorithm I present here is a first
effort that, as the other examples here, contains enough richness to allow interesting
dynamics to be explored but is no more complicated than it needs to be for an initial
foray. For our purposes here it illustrates three main principles. First, it is derived
from available sources from other contexts, but these other contexts have needs that
are very different from our bottom-up modeling approach, and so the descriptions
they provide (descriptions that are, apparently, perfectly adequate for guiding real-world farmers to care for real-world plants) are inadequate for our modeling needs.
This illustrates a difference between the bottom-up approach and the kind of rigor
it demands and more flexible approaches in which many aspects can go unspecified.
Second, in the context of addressing the insufficiencies of the source material for
our purposes it illustrates the principle that when an algorithm is implemented, new
necessary elements are often recognized and need to be implemented.
Finally, it illustrates the creation of a new kind of game, in which the range of
variation is wider than can be addressed by any single parameter. Instead, the set
of options it makes available is one that can be crafted and tested in many ways, so
that exploring it is, and must be, more like assembling a puzzle than making a single
graph.
The major axes of plant growth in the HWM map the distribution of water available to the crop through time; this is used to determine, first, whether the crop
continues to survive to harvest, and, second, the size of the yield obtained.

Figure 6.18. Plant yield with constant rainfall at optimum amount.

Note that
there is an immediate simplification here: plants in the real world need other things
besides water that we will ignore, either because they are probably not limitations (light, for example, is in abundant supply in the Valley of the Sun) or because they
would lead to complexity we are not ready to address (soil quality, for example, will
be discussed below). An illustration of a basic case is presented in Figure 6.18. In
it the crop planted has been supplied with water each day in the amount needed for
optimum growth. The horizontal axis is time, tracing the 120 growing days for this
plant. The two lines indicate yield; both lines are percentages of the optimum yield
for the crop (in the simulation, the percentage would be converted into an absolute
number by being rescaled for the size of the field and the fraction of the field planted
with this crop, but here the percentage value provides adequate illustration). More
specifically, both lines are representations of possible yield. The lower line (‘available
yield’) is an indication within the simulation of what the yield would be if the crop
were harvested on that date; for most of the crop’s life, the harvest would be zero,
until it reaches maturity. For simplicity, the simulation assumes that the crop cannot
be harvested until it has finished its growth (although it would be straightforward
to assume, in contrast, gradual or partial maturation), so the potential yield, in this
sense, is zero until the crop has completed its growth. Note that after the growing
season, the yield remains 100 percent for a short time, then degrades to zero if it is
left unharvested.
The upper line (‘potential yield’) is also a graph of potential yield, but in the sense
this time of what the yield could still be assuming that the plant receives optimum
water from that day through the end of its growth period. In general this line drops
through time as the plant experiences various hardships. In Figure 6.18, the crop’s
growth is optimum, so the upper line is constant at 100 percent until the post-harvest-window degradation. The underlying dynamics of how this was achieved must be
explicated before we move on to the next examples.
The guide used to create the HWM plant growth algorithm is the FAO manual
on irrigation methods (see page 177). It is an excellent example of a guide that is
useful for real-world work but must be adapted for implementation in a bottom-up
simulation, even though the pragmatic approach that needs to be bolstered for use
in the simulation is the root of the advantage it offers for implementation in the
simulation, which is its coarse-grained simplicity. It presents some theory for what it
advises, but very little; that theory, however, is our starting point.
The manual assumes that a plant’s need for water will be determined in part by
the climate in which it is grown. There are four basic factors to be used to determine a
baseline rate for all plants, which is called Evapotranspiration rate (abbreviated EtO):
these are wind, humidity, hours of daylight, and (primarily) temperature. The HWM
ignores wind and humidity; there are negligible amounts of these in the Phoenix area.
Temperature and daylight hours, however, are used to calculate the EtO using the
‘Blaney-Criddle’ method. This uses a simple formula:

EtO_month = p(0.46T + 8)    (6.1)
where p is the mean of the percentage of the annual daylight hours for the days in
the month being calculated, and T is the mean of the median daily temperature.12

Figure 6.19. Map view of Plant Demo 1. This map shows Landscape 3 with one
field (field A1 at the NE corner of the group of fields) under cultivation. The other
fields show dark colored soil because they are also receiving rain, but no water is
leaving them through plant evaporation. Field A1 shows four additional values. The
first is the soil water content: the soil color is clear, indicating virtually no water.
The second and third values indicate the status of the crop, and are reflected in the
green bar at the S edge of the field. As the crop grows, this bar will move toward the
opposite edge of the field, hence its progress reflects maturity. The bar also extends
completely across the field, indicating a density of 100 percent. If several crops are
present in the field, each will have its own bar in proportion E-W according to density
and moving N-S according to maturity. Finally, the color of the crop indicates its
yield potential; in this case, green indicates a perfectly healthy crop.
Key for our purposes is that the units are millimeters per day. A value of 5, then,
means that the plant is evaporating 5 mm of water every day.13
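A minimal sketch of the calculation in Equation 6.1; the function name is hypothetical rather than taken from the HWM code, and the values in the usage line are those used in Plant Demo 1 below (T = 30 degrees, p = .27).

```python
# Blaney-Criddle reference evapotranspiration, as in Equation 6.1.
def blaney_criddle_eto(mean_pct_daylight: float, mean_temp_c: float) -> float:
    """EtO in mm/day: p(0.46T + 8), with p the mean percent of annual daylight hours."""
    return mean_pct_daylight * (0.46 * mean_temp_c + 8.0)

print(blaney_criddle_eto(0.27, 30.0))   # about 5.886 mm/day
```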
12. That is, p is the average value of the fraction, expressed as a percent, of the hours of daylight on each day of the given month divided by the total annual hours of daylight; T is the average value of the median temperature (midway between the daily high and low) for each day of the month.

13. The one-dimensional unit of measurement is akin to that of a rainfall gauge; in the simulation water is converted from volumes delivered by canals into depths by dividing by field surface area, across which plants are assumed to be uniformly distributed.

Figure 6.18, then, is created through the following procedure. Landscape 3 is
given the same collection of fields used in the previous canal examples, on the N
side only. These, however, are not connected to a canal system. Instead, a crop is
planted in one field (the first field on the first branch, field ‘A1’; see Figure 6.19)
and rainfall is specified to supply water to the crop in exactly the amount needed.
At a temperature of 30 degrees centigrade, using a value for percentage of annual
daylight hours of .27 (12 hours / (365 * 12) = .0027, converted to .27 percent), the
Blaney-Criddle value would be 5.886 mm evaporation per day; for convenience in this
and the other demonstrations here this value is put aside and a value of 5 mm is
used: the Blaney-Criddle calculation is overridden and the EtO for the field is set
directly.14 The crop receives exactly this rainfall and processes it each day, resulting
in the optimum growth pattern observed.
This is, of course, highly unrealistic (with respect to the continuous rainfall for
120 days, it is especially unrealistic in southern Arizona), but there are other ways in
which it is both false and, for our purposes, less useful as well. To delve further into
the HWM model, we can address the issues that are apparent in this scenario.
The first issue is that the delivery of rainfall is not constant. Rain occurs on
some days and not on others. Irrigation can provide a constant water supply, but
often doesn’t: consider the difficulty of arranging the delivery of exactly 5 mm of
water every day on a wide collection of fields. Instead, the model must accommodate
punctuated delivery of water. To achieve this, the model must contain a component
that manages the water directly available to plants: a component that accepts water
on fields and keeps it there for some time while plants consume it over the course of
days.
The FAO manual here offers an example of how a pragmatic approach fails to
deal with details that must be addressed in a bottom-up simulation. The manual
14. Versions of Plant Demo 1 exist for each of these alternatives (using a calculated Blaney-Criddle by setting the temperature and daylight hours vs. specifying an evapotranspiration rate directly); the results are nearly identical, but small differences (probably rounding issues) do exist.
discusses the EtO rate as a value per day, but converts this into a total water need
per month. It also addresses the total water various crops might need for their entire
growing lifespans. But the details of the timing of the delivery of that water are left
ambiguous. Whether the water is to be delivered daily, as in our initial example, or
in a single event once per month, or somewhere in between, is left undiscussed.
If we assume, however, some pattern of irrigation or rainfall that leaves
periods of days in between water delivery events, then there are some difficulties we
must address in our simulation, if a daily time step is to be used and our guide for
a plant’s water consumption is an EtO per day. Two possibilities can be offered: in
the first, the plant consumes all water delivered on the day it is delivered, and this
fortifies it for some period thereafter; in the second, water delivered remains available
for a time after delivery, with the plant slowly consuming it at the specified EtO rate.
We have opted for the latter; the ‘Field’ object in the simulation contains a ‘Soil’
object, which can contain and process, over time, some amount of water. Water can
then be added to the Field and stored in the Soil, where it is then used by the plant
over some series of days. Soil has limits to the water that it can contain, and we will
see below that there are other ways water can be moved out of it, but generally this
allows us to place water on fields at intervals larger than a single day and allow plants
to survive.
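The arrangement can be sketched as follows; the class and method names are illustrative stand-ins rather than the actual HWM objects, but the division of labor (the field’s soil stores delivered water, up to a capacity, and the crop draws it down daily) follows the description above.

```python
# Hypothetical sketch of a Field that holds a Soil store of water; not HWM code.
class Soil:
    def __init__(self, capacity_mm: float):
        self.capacity_mm = capacity_mm
        self.water_mm = 0.0

    def add_water(self, depth_mm: float) -> None:
        # Water beyond the soil's capacity is simply lost in this sketch.
        self.water_mm = min(self.capacity_mm, self.water_mm + depth_mm)

    def draw(self, demand_mm: float) -> float:
        taken = min(demand_mm, self.water_mm)
        self.water_mm -= taken
        return taken

class Field:
    def __init__(self, soil: Soil):
        self.soil = soil

    def step_day(self, rain_mm: float, crop_eto_mm: float) -> float:
        """Deposit the day's water, then let the crop consume up to its EtO."""
        self.soil.add_water(rain_mm)
        return self.soil.draw(crop_eto_mm)

field = Field(Soil(capacity_mm=100.0))
print(field.step_day(rain_mm=45.0, crop_eto_mm=5.0))  # 5.0; the rest stays in the soil
```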
Figure 6.20 illustrates this. Here the mechanism of rainfall is again used to deliver
water to the fields, but this time in an episodic way. The engine for generating
rainfall now causes rain to happen stochastically (incidentally, the first appearance of
a random process in our discussion). This occasional shower15 douses the fields with
roughly a week’s supply of water for the plants. The notable aspect of this graph
is that in this run, as with Figure 6.18, the plant’s potential yield remains optimum
throughout its life, even though it is not consistently receiving rain.
15. The actual settings allow a 15 percent chance of rain, which will be in an amount chosen randomly from 40 to 50 mm.
Figure 6.20. Yield and water for Plant Demo 2. The plant is receiving water only sporadically but consuming it
constantly.
Figure 6.21. Yield and water for Plant Demo 3. The plant is receiving water only sporadically but consuming it
constantly. The actual rainfall amount is smaller than in Figure 6.20 but is concentrated onto a field with an effective
area one-ninth of its actual surface area.
We have, however, replaced one unrealistic set of assumptions with another, if only
for the moment. Rainfall of 50 mm per week over a span of months is unheard of in
Phoenix. Water delivery on this scale may be possible in an irrigated system, but if we
are to envision rainfall as playing a role (here it functions as our arbitrary engine for
water delivery for our unrealistic demonstrations, but we hope to put it to use in other
contexts later), some additional modification must be offered. The HWM proposes
two means for concentrating rainfall from a wider area onto a smaller, cultivated
area: the first is by specifying that only a fraction of a large field is being used for
cultivation, and the water delivered to that field as a whole will be concentrated onto
that area; the second is by linking fields together, so that water delivered onto one is
moved onto the recipient field. Both of these methods can be employed to implement
something like the water control features attested in the Hohokam case; the second
can be put to use to form series of fields and act as ‘check dams’ (see Chapter 2).
Both can act to take rainfall from a wide area and concentrate it onto a smaller one
for cultivation. In Figure 6.21, the first procedure has been used: the effective area
of the field is set to 100m x 100m, or one-ninth of the 300m x 300m field. This means
that for every millimeter of rain, nine millimeters are available to the plants. Now
depositing only 6 mm of rainfall on the field per week is able to support the crop at
optimum, although, of course, the yield in absolute terms will be one-ninth the size of
the original. The figure shows the yield and water level for a simulation run in which
this is done, this time with the same possibility of rainfall but a more reasonable
rainfall amount of 6 to 7 mm per event. Note the initial period of the plant’s growth
is hindered slightly by the fact that no water is present, but it recovers to reach a
reasonably high yield.
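The first concentration mechanism amounts to a single scaling step, sketched here with a hypothetical function name; the 6 mm case reproduces the one-ninth effective area used in Figure 6.21.

```python
# Rain falling on the whole field surface is concentrated onto the cultivated
# (effective) fraction, multiplying the depth available to the plants.
def effective_rain_depth(rain_mm, field_area_m2, effective_area_m2):
    return rain_mm * field_area_m2 / effective_area_m2

print(effective_rain_depth(6.0, 300 * 300, 100 * 100))  # 54.0 mm reaches the crop
```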
This broaches the thornier issue raised by the timing of the delivery of water:
what, exactly, happens if the plant doesn’t get enough water? How is the yield
affected? It is here that the FAO manual is the least helpful. It discusses the plants’
overall water needs; it mentions that some plants are more sensitive to drought than
others; and it, as mentioned, parses the daily EtO into monthly water requirements.
But what it is not (and this is the crucial difference between the manual’s interests
and approach and those of the HWM simulation) is a way to calculate what the crop’s
final yield will be if water is, or is not, delivered according to a specific schedule. This
is an example of what is characteristic of the bottom-up approach of simulation, and
how it differs from the aims and needs of other contexts.
The HWM implementation of impacts of water shortages on plant yield uses an
algorithm that is intended to be useful for our purposes, but cannot claim to be derived
from the FAO manual or any other authoritative source. In essence, it balances
three factors: total water supplied to the plant; the timing of water supplied to the
plant; and the plant’s inherent capability to survive water shortages. The latter is
loosely derived from the FAO manual’s mention of drought sensitivity, but the manual
provides no means for using this notion in any calculation.
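As one way of picturing how such a balance might be struck, the sketch below applies a daily penalty proportional to both the day’s shortfall and the crop’s drought sensitivity; it is a hypothetical illustration of the interplay of the three factors, not the HWM algorithm itself, and every name in it is invented.

```python
# Hypothetical daily update of potential yield under water shortage.
def update_potential_yield(potential, water_available, water_needed, sensitivity):
    """Return the new potential yield (0..1) after one day of growth."""
    if water_needed <= 0:
        return potential
    shortfall = max(0.0, 1.0 - water_available / water_needed)  # 0 means fully watered
    # Damage scales with both the day's shortfall and the crop's drought sensitivity.
    return potential * (1.0 - sensitivity * shortfall)

print(update_potential_yield(1.0, 2.5, 5.0, 0.5))  # 0.75 after one half-watered day
```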
There are, of course, limitations to this scheme. For example, we might reasonably
assume that damage to a crop early in its development would have a more serious
effect than damage late; the current algorithm does not make this kind of adjustment.
But the issue for our simulation use is being able to establish a payoff framework
for how choices about water allocation can impact yield (or, in the case of rain-fed agriculture, how anticipated or unanticipated rainfall patterns can impact the crops
planted on the basis of these predictions). The algorithm provides at least this.
The next issues that make our initial demonstration unrealistic follow from the characteristics of plants. First, the Blaney-Criddle method arrives at a standard evapotranspiration rate for all plants, but plants vary in the amount of water that they
need: some simply need more than others. The FAO manual addresses this by specifying a ‘crop factor’, which is a multiplier by which the standard EtO is converted
into the value appropriate for that plant.
A related aside: We have already noted that water control structures may need
to be implemented in the model, at least in some abbreviated form. We should also
note that the simulation can accommodate procedures such as that proposed by Fish
et al. (1992a), in which piles of rock are used to house plants, like agave, and restrict
evaporation. Two avenues are possible for this. First, while EtO can be calculated
from the Blaney-Criddle approach, it can also be explicitly set: the value desired
is simply assigned, and this value overrides that derived from the temperature and
daylight hours. One difficulty with this approach is that the EtO is assigned to the
field, not to the crop; in the case of a field with more than one crop, this approach
may not be appropriate. The second avenue, however, is to assume that the field
technique is part and parcel of the plant variety; that, in effect, agave planted under
rocks has a different crop factor from agave planted in the open, and create a separate
agave variety that has the requisite values.
Second, and more crucially, this value changes for a plant during the course of its
life. Generally, the plant’s water needs are fairly low during the first stage of its life,
when it is small, but increase dramatically during the main parts of its growth and
especially during the time that the harvestable elements are developing.
Third, a plant’s life usually spans several months; during the course of its
life the environment in which it grows will change with the season, and these changes
in temperature and daylight hours will impact the Blaney-Criddle EtO rate. Both
temperature and hours of daylight can be set in a manner exactly analogous to streamflow: through the use of seasonal narratives.16 Data sets for these are given in Table 6.1.
Plant growth, via the Blaney-Criddle equation, brings the mountains on the north
and south sides of Landscape 3 back into play. Their role is to provide additional
areas for fields, but also to provide another axis on which fields may vary: elevation.
Elevation allows two kinds of specific variation: rainfall and temperature.
16. Hours of daylight can be calculated from latitude, and one planned extension to the ABCM system might be to do this; however, the calculations are not without difficulty (see Forsyth et al. 1995 for a summary), and for the moment a seasonal narrative offers a simpler alternative.
Table 6.1. Phoenix average mean daily temperature, by month, and mean percent of annual daylight hours. Percent daylight hours is derived from the FAO manual given Phoenix’s latitude. Temperature data are taken from http://www.wrh.noaa.gov/psr/climate/climatetable.php?wfo=psr&month=All&parm=MonthlyTemps&site=PHX, averaging temperatures by month from 1896 through 2008. Web site available as of August 2nd, 2009.

Month   Mean Temp (°C)   Mean Pct Daylight Hrs
Jan     11.70            0.23
Feb     13.80            0.25
Mar     16.60            0.27
Apr     20.70            0.29
May     25.40            0.31
Jun     30.50            0.32
Jul     33.30            0.32
Aug     32.40            0.30
Sep     29.40            0.28
Oct     23.00            0.25
Nov     16.20            0.23
Dec     12.00            0.22
We can use these data on fields in the central plain just as easily as we can for fields
at elevation. Temperature affects plant growth by modifying the evapotranspiration
rate: at higher temperatures, plants need more water. By varying the temperature
throughout the year we can see that there must be interplay between timing of water
need and the ambient temperature.
Figure 6.22. Lapse rate demonstration map view. The fields are shown as outlined
squares. The maturity of the crops grown within each field is shown by the progress
toward the north of the colored bar representing each crop. The crop’s potential yield
is represented by the color of the bar. The westernmost fields have all died; the other
variations in color between crops are subtle. Note also that each field may contain
multiple crops, which may have varying potential yields.

Figure 6.23. Lapse rate demonstration results. The four rainfall regimens are presented above the graphs of potential
yield through time of the crops being grown at varying altitudes (coded by name: H = high, M = middle, L = low,
B or D = on the central basin).

However, the temperature in the basin needn’t be the same as the temperature at
higher elevations. Meteorologists and pilots use the term ‘lapse rate’ to describe the
drop in temperature with increasing elevation; it averages 6.5 degrees centigrade per
1000 m (Lutgens and Tarbuck 1989). In the HWM, this is settable by specifying the
temperature at any two elevations; these are assumed to define a linear gradient. In
Landscape 3, when the 6.5 degrees/km value is used, it means that crops being grown
on the tops of the south mountains (elevation 1500 m) experience temperatures nearly
10 degrees centigrade lower than those on the plains at the same time. Note that
there is a nice illustration of the issue of scale: we chose the elevation for the south
mountains somewhat arbitrarily, and if we were to consider a 10 degree difference
too high we could easily reduce the mountains’ height, or reduce the lapse rate: until
some other issue is added into the mix, these two values have meaning only with
respect to each other. However, the flexibility of the means by which the topography
is generated makes it easy to keep the standard lapse rate and change the height of
the mountain ranges as needed.
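The linear gradient described above is simple to sketch; the reference points below (30 degrees at the basin floor, a 6.5 degree per kilometer lapse) are illustrative, and the function name is hypothetical.

```python
# Temperature at a given elevation from two reference (elevation, temperature) points.
def temperature_at(elevation_m, ref_low=(0.0, 30.0), ref_high=(1000.0, 23.5)):
    """Linear interpolation/extrapolation between the two reference points."""
    (e1, t1), (e2, t2) = ref_low, ref_high
    lapse = (t2 - t1) / (e2 - e1)      # degrees C per meter; -0.0065 here (6.5 C/km)
    return t1 + lapse * (elevation_m - e1)

print(temperature_at(1500.0))          # 20.25: nearly 10 degrees below the basin's 30
```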
The example depicted in Figures 6.22 and 6.23 encapsulates all of the dynamics we have discussed here. It presents us with an array of
plants of varying characteristics being grown simultaneously under different water
timing schedules and at different altitudes. The crops include plants with three distinct drought sensitivity values (.25, .50, and .75). Under the first rainfall schedule an
amount that should be enough (depending on temperature) falls each day; under the
second a slightly lower amount, just below the threshold for perfect plant development, falls each day; under the third rainfall is episodic but regular, with days of rain
alternating with drier days; under the fourth, the plants experience a long drought
during the middle of their growth. The graphs indicate the widely varying responses
that can arise from the different combinations of plants, altitudes,17 and rainfall regimens; needless to say this is only one possible set of configurations, arbitrarily selected
out of many that could be created.
In sum, the algorithm used for plant growth in the HWM Simulation is one that
is intended to allow a wide array of variations in the arrangement of crop scheduling
decisions. It creates a framework for a suite of options: the timing of sowing and
harvesting, the variety of crops planted, the delivery of water to those crops; and a
establishes set of constraints for those options: the growing characteristics of plants,
17
The lapse rate used in this example was -5.5 per 1000m, with a base temperature of 30 degrees
centigrade at ero altitude.
241
including their needs and explicit effects when those needs are not met, and the
characteristics of the environment in which the plants grow. These requirements are
drawn from our hope for understanding the choices the Hohokam made and the effects
of those choices; this purpose differs from that of our primary sources, whose task is
to direct action in the real world, and therefore our model simplifies complexity found
in those other source and fills in the gaps those sources lead. This kind of approach,
albeit tentative, is appropriate and may be adequate given our larger goals, and will
likely be typical of the exploratory approach.
6.1.5 Chemicals: Evaporation, Soil Processes, and Plant Growth
There is an additional thread that connects several aspects of the HWM Simulation
that has not yet been mentioned here. It has been suggested (and debated) that the
long-term trajectory of the Hohokam was influenced, and possibly even determined,
by changes to the soil chemistry of the fields they farmed during the centuries they occupied the river valleys in which they built their canals. In the HWM Simulation this story is
implemented in a way that impacts several of the elements we have already discussed.
It also presents an illustration of the demands of resolving a reasonable narrative into
a useful implementation; specifically, it presents several examples where components
proposed to accomplish one part of the task seem by their own logic to demand
a complement that will balance and complete them; on occasion this even opens
unexpected opportunities. This phenomenon is another aspect of learning through
the accumulation of constraints that is part and parcel of the modeling approach
here; it also speaks to another issue, which is the way that disparate elements end
up crosscutting each other in ways that are not always apparent initially. This is
a deliberate effect of the bottom-up approach: if we decide that the vocabulary of
the system is that it consists of certain elements, then we are compelled to include
dynamics in the system in terms of those elements.
The first component in the story of chemicals is that they reach the fields via
the water in the canals. Often only one kind of chemical is of interest- salt- but
the HWM includes a general structure for chemicals of any kind; no characteristics
of a chemical matter, however, except the chemical’s name and its concentration
in whatever element it is found (basic algorithms for mixing elements and combining
their concentrations are provided). The important aspect of delivery via canals is that
canal water is lost in transit to evaporation. For the purposes of our examination
of chemicals, what matters is that when the water is lost, the concentration of the
chemicals increases. The effect is that water delivered to the fields at the end of the
canal system has higher concentrations of chemicals than the water delivered to the
upstream fields.
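As a rough illustration of the mass-balance reasoning behind this effect, the concentration update after an evaporative loss might look like the sketch below; the function name and the assumption that dissolved mass is conserved in transit are mine, not necessarily the HWM's implementation.

```python
# Illustrative sketch (assumed, not the HWM code): when water is lost to
# evaporation in transit, dissolved chemical mass is conserved, so the
# concentration of the remaining water rises.

def concentration_after_loss(volume, concentration_ppm, volume_lost):
    """Return (remaining_volume, new_concentration_ppm) after evaporative loss."""
    remaining = volume - volume_lost
    if remaining <= 0:
        raise ValueError("all water lost in transit")
    mass = volume * concentration_ppm      # dissolved mass, in volume*ppm units
    return remaining, mass / remaining

# 100 units of water at 500 ppm that lose 20 units in the canal arrive at 625 ppm.
print(concentration_after_loss(100.0, 500.0, 20.0))    # (80.0, 625.0)
```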
It should be immediately apparent, however, that there is a tension between two
dynamics: chemicals should be delivered in higher concentration to fields at the
ends of the system, but- depending on the flow algorithm- the fields at the ends of
the system may be watered less frequently. The first question, then, is what is the
balance between these two components.
The second issue implicitly raised by this scenario is a genuine example of a
new demand placed by our hopes to address the chemical issue. We are interested
in the long-term effects of chemicals accruing on the soil. The effects of these on
plant growth will be different from simply assessing the chemical content of the water
that is arriving on the fields at any given time. We will need to allow chemicals
to accumulate over time; we therefore need some model of edaphic processes. The challenge with implementing this is that it is not strictly summative: one can't simply add to the existing value representing concentration in the soil (if for no other reason than that sooner or later the figure, which is a fraction expressed in ppm, would go over one, which is impossible- we cannot have a value of 2 million parts per million). Instead,
the situation provides us with yet another illustration of the need for a modeling
approach: the actual dynamics of how chemicals accumulate in soil must rely on
interactions among the molecules that make up the chemical, the water, and the soil,
but if our enterprise is to be worthwhile we must believe that we can model at a level above
this scale. Put another way, if the only way to understand the rise and fall of the
Hohokam over a millennium is to model the orientation of ions in the soil, we are lost.
While we are addressing modeling components that are demanded by an internal
logic that we find when we follow our initial assumptions, we can note another element
that has been passed over. There is something odd about Figures 6.20 and 6.21, which
show water available during crop growth. When the plants in the crop are consuming
water, the water level goes down consistently. After the plants die or are harvested,
the water level rises without limit. We have not provided a means for water to leave
the fields, except to be evaporated by plants. We note this problem now because
it is directly connected to the question of how chemicals in the delivered water are
transferred into the field soil.
Instead of modeling molecules, the HWM Simulation assumes a simpler algorithm
that meets the following criteria: when water arrives bearing chemicals, some of those
chemicals can be transferred to the soil; there is a ceiling to how intense the total
soil concentration can be, and if water with more chemicals arrives when the soil is
at its limit then the chemistry may be adjusted to reflect the arrival of different
kinds of chemicals, but the limit will not be exceeded. Water consumed by plants
for evaporation leaves no trace in the soil, but we allow water to leave by two other
means: evaporation and seepage. Evaporation will increase soil chemical content,
up to the limit; seepage, on the other hand, we will allow to reduce soil chemical
content. This allows the ‘flushing’ of fields, which Howard (2006) suggests may also
have allowed the fields to avoid long-term chemical problems. Flushing may occur
when water with lower concentrations arrives either via canals or rainfall.
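A minimal sketch of an update rule that meets these criteria follows; the ceiling value, the proportional flushing factor, and the function names are illustrative assumptions rather than the HWM's actual algorithm.

```python
# Minimal sketch of a soil-chemistry update meeting the criteria above
# (ceiling on total concentration; evaporation adds; seepage flushes
# proportionally).  Names and values are assumptions for illustration only.

SOIL_CEILING_PPM = 50_000.0   # hypothetical ceiling on total soil concentration

def update_soil(soil_ppm, evaporated_ppm, seeped_ppm, flush_fraction=0.1):
    """One day's update of a single chemical's soil concentration.

    evaporated_ppm: concentration of the water evaporated from the field
                    (its chemical load is deposited into the soil).
    seeped_ppm:     concentration of the water that seeped away; if it is less
                    concentrated than the soil, it flushes part of the difference.
    """
    # Evaporation deposits chemicals, but never past the ceiling.
    soil_ppm = min(SOIL_CEILING_PPM, soil_ppm + evaporated_ppm)
    # Seepage flushes proportionally toward the seeping water's concentration.
    if seeped_ppm < soil_ppm:
        soil_ppm -= flush_fraction * (soil_ppm - seeped_ppm)
    return soil_ppm
```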
This allows enough of a picture for a demonstration. Figure 6.24 depicts the soil
chemical concentration in fields A1 and F5 (the first and last fields along the first and last branches of the North Canal System).

Figure 6.24. Chemical flush demonstration. The graph shows the chemical concentrations of two fields receiving the same amount of water over time but on different schedules. During the first ten years a small amount of water is added daily, and evaporation removes most of this, increasing the chemical load in the soil. During the second ten years the water is added once per ten days in larger amounts, allowing seepage to remove more; seepage has the effect of flushing chemical content from the soil. The field with the lower concentration is at the head of the canal system, where the water reaching it has a lower chemical content.

Evaporation [18] occurs as water flows along the
canal, so that the concentration of salts entering the field further downstream is
higher. The illuminating aspect of this demonstration is the difference, apparent
in the graph, between the first 10 years and the simulation’s second half. During
the first period, the fields are watered daily with a small amount of water (5mm).
Evaporation from the fields is contrived at a specific level to take the majority of this
amount (4mm). This occurs before seepage. The remaining 1mm of water seeps out
immediately, leaving the field dry for the next day. Evaporation has a strong chemical
effect: all of the chemicals contained in the evaporated water are put into the field’s
soil chemical content. Seepage of water with concentrations below this content can
flush the soil, but for the initial period of this demonstration, seepage is kept too low to counteract the effect of the evaporation. During the second period, however, the fields are watered with 50 mm of water once every ten days. This is, of course, exactly the same amount of water over time, but on a different schedule. Under the new schedule, 4 mm per day of water evaporates, but 16 mm seeps. Because there is heavy seepage, the impact of the seeping water can outweigh that of the evaporation, and the soil chemical content goes down. Note that there are limits to how much seepage can clean from the field; this is due to three causes: the continuing effect of evaporation, which does not cease; the fact that the water being seeped contains some chemical load itself; and the fact that the flushing effect is proportional and not absolute.

[18] Evaporation of water from the canals is, in this example, set to an absurdly low rate. The evaporation rate is set in absolute mm per day, but in this example very little water is being moved by the canals, which are therefore running very shallow and slow. If a more typical evaporation rate were used, so much water would be lost that the concentration, especially at the downstream end, would be absurdly high. This is yet another illustration of how subtly interconnected the simulation can become.
Our picture is still not complete; soil chemistry does not yet matter. We are now
required to consider the effects of these chemicals on plants and crops. Practical
guides (such as the FAO manual) for real-world farmers are unhelpful, for we do not
need to know what to do to keep the plant growing; we need to know how to calculate
what will happen to the plant when that is not done.
Instead we again turn to external criteria. Plants should be able to be differentially
affected by chemical content. Some plants are ‘salt resistant’ while others are not.
The HWM implements a simple algorithm that checks the soil chemical content (for
each named kind of chemical) against a threshold level; if that level is exceeded, yield
is diminished by an amount calculated using the ratio of the threshold level and the
actual soil content.
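A sketch of that check follows; the specific penalty form- scaling yield by the ratio of the threshold to the actual content- is one plausible reading of the rule as described, not a verified transcription of the HWM code.

```python
# Sketch of the threshold check described above; the exact penalty form is an
# illustrative assumption, not necessarily the HWM's formula.

def chemical_yield_factor(soil_ppm, tolerance_threshold_ppm):
    """Return a multiplier in (0, 1] applied to the crop's potential yield."""
    if soil_ppm <= tolerance_threshold_ppm:
        return 1.0                                 # below threshold: no penalty
    return tolerance_threshold_ppm / soil_ppm      # above: penalize by the ratio

print(chemical_yield_factor(8_000, 10_000))    # 1.0
print(chemical_yield_factor(20_000, 10_000))   # 0.5
```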
The HWM implementation adds two additional components to this; these pursue
opportunities that the initial framework makes available. First, plants can also have
chemicals that they need to grow at optimum levels; if these needs are not met, the
yield is negatively impacted. Second, plants can impact the chemical concentration
in the soil, either by consuming chemicals (one imagines that they consume chemicals
that they need) or by giving off chemicals.
The driving issue behind these elements is the ability to use plants in yet another
kind of ‘game’. The real-world analog is soil nutrient level, and, more specifically,
the effect of fallowing fields or growing crops that fix salutary chemicals, especially
nitrogen. The implementation- the internal component- is undoubtedly far simpler
than other plant models may offer; but the effect is to allow the larger-level interactions between crops to be played out: field choice, and the success or failure of that
choice, can be impacted by a game of soil chemistry, and the long-term changes to
that chemistry from chemical-laden water or overfarming can be explored. Another
way of viewing this is that it is now possible for crops in the same field, simultaneously
or sequentially, to impact each other through their actions on the chemical content
of the soil in which they are growing.
We can return to the initial question about the dynamic between upstream and
downstream fields and their soil chemical content vs. the frequency of water delivery.
If we believe that the larger picture of the Hohokam trajectory was impacted by this,
then we can easily use the HWM to explore this avenue of questions that, as has been
claimed, lie between us and our clear view of this larger picture. We can supply it
with more specific data, or better models of each process, or play through the effects
on a landscape that more closely resembles one the Hohokam lived on. We might
also choose to assume these factors had relatively little impact and turn them ‘off’ for
simulation runs to explore other factors. The point for our purposes is that whether
soil chemistry does or does not play a key role isn’t yet known. The HWM provides a
framework for pursuing the possibility that it did as part of an exploratory approach.
The big-picture concerns prompted us to choose elements (canals, chemicals, soil,
plants) that allowed us to assemble an arrangement of interactions among them,
putting at a lower level (for now, at least) the details of their internal workings and
focusing on how each looks from the outside and the way it interacts with the
other elements. This is a key aspect of the modeling approach advocated here. In
the case of soil chemistry it drew together several components that were previously
unlinked- another element in the modeling approach, and one that figures prominently
in the next section.
6.1.6 Agents
If the point of the components discussed so far has been to establish an array of
interrelated elements that interact to produce challenging and interesting dynamics,
the larger point of the simulation is to introduce and attempt to understand the way
humans participated in those dynamics. It is not pejorative to say that the purpose
of the model is to structure a set of games; the term is used in the mathematical sense
(see Sigmund 1993) to indicate a situation in which actors have options from which
they can choose and which will result in payoffs to them, although in our case we
make no distinction about whether those payoffs are immediate and measurable or
whether the term simply encompasses the range of changes to a given actor’s overall
situation. In this section I present a brief overview of agents in the HWM Simulation,
and some examples of their use, with special emphasis on how this parallels and is
integrated with the preceding discussions of modeling theory. I divide this into a few
parts, beginning with some remarks about the special features built into the ABCM
framework that are specifically intended to facilitate agent-based modeling.
6.1.7 Special features of the ABCM for Agent-Based Modeling
Definitions of agent-based modeling can vary widely (compare Epstein and Axtell
(1996), Gilbert (2008), and Griffin (2006)), but common to all of them is the idea that
a fundamental unit of the model is some entity that can perceive some things about the
rest of the model and can take some actions based on these perceptions. This stands
in contrast to top-down approaches that rely on descriptions of overall population
behavior. This shift in perspective has been taken to heart both in the interpretation
of agent-based models and in their construction; regarding the interpretation, the
elegance of an agent-based model is frequently recognized to be the fact that entities
acting on limited (often ‘local’, depending on the kind of simulation) information can
lead to patterns that are recognized at levels above the individual. Regarding the
construction of agent-based models, the process of creating an agent-based model is
one of attempting to grasp what information is available to each agent and how each
agent will act upon it- or learning to ‘think like an agent’ (sensu Griffin 2006). These
common elements allow several distinct strands to coexist; some agent-based models
focus on the interactions of simple agents in large numbers that mainly interact among
themselves, while others emphasize small numbers of agents that are constructed
to enact complicated decision rules, with the question being primarily how these
complicated rules will play out.
The link with the ABCM framework lies at a level above these issues, the level at
which an ‘agent’ is an ‘agent’: that it can perceive something about its environment
and take action. The ABCM framework’s emphasis on defining units for the simulation also leads to the construction of the units that the agents can perceive; likewise,
the definition of what can occur in the model defines the actions that the agents can
take. This is in contrast to approaches that try to graft agent-based models onto existing simulations, wherein it becomes difficult to construct agents that can perceive
the simulated environment in meaningful units and take action within it.
The architecture that permits this allows any kind of action that can occur in a
narrative or seasonal narrative, and any kind of probe, to be used by agents. Not all
such commands are meaningful for agents to use (for example, ‘Rain’ is not a directive
that we would expect our agents to be able to issue), but all commands are available
for agents to use when appropriate. If an action or a way to perceive the model is
properly implemented, it will be available to any agents that are implemented.
One interesting example of this: the simulation can be run interactively, so that
as it proceeds step by step the user is allowed to evaluate the simulation and, if
desired, make changes to it. A special feature makes it possible to issue commands
directly to a specific agent. In theory, the simulation can be run in such a way that
the user assumes the role of an agent. This is not mere videogaming: research being
undertaken at Arizona State University asks people in a lab setting to play various
economic games, including some based on the dynamics of an irrigation setting (Marco
Janssen, pers. comm.). One long-term goal of the HWM Simulation is to use it to
create ever more detailed irrigation settings and observe how they are dealt with by
living subjects. (One small aside is noteworthy here: the 3-D representation of the
model (see Figure 6.3) is not merely a gimmick nor a visual nicety to supplement
the 2-D maps; whereas a map conveys a bird’s-eye view of an entire landscape, a
3-D representation allows the depiction to reflect a limited viewpoint, akin to (or,
possibly, matching) the limited viewpoint that an agent or a human playing the role
of an agent might actually have.)
The ABCM framework makes no further distinction among agents, meaning that
it is possible to create agents that represent any real-world entity; a single agent
may represent a real-world individual or a group. It should be noted, though, that
the motivation behind the ABCM’s structures is that agents are humans who will
interact with the environment and with each other. The ABCM Framework is not
a general, all-purpose agent-based modeling toolkit; such toolkits (see, for example,
North et al. 2005) provide rich scheduling features that allow highly complicated
collections of agents to interact and maintain the illusion of simultaneity. However,
one reason they expect collections of agents to be complicated is that they tend to
ask all of their elements to be agents: if this approach were implemented in the HWM
framework, plants and crops, for example, would be considered agents.
6.1.8 The need for Agent-Based Modeling in the HWM
The trajectory in which we are interested for the Hohokam is ultimately not a trajectory of canals, or crops, or streamflow or soil, but of people. When we speak of
our interest in the self-organization of the system, or in the resilience of the system
to perturbations, we are referring to the behavior and arrangements of the Hohokam
themselves. Indeed, it can be argued fairly neatly that the part of the system that collapsed was not the canals or the fields- which, after all, were put to use again when
the Europeans arrived only a short time after the Hohokam left- but the social system
that had, apparently, worked for the Hohokam for a millennium and then dispersed.
Agent-based modeling is the appropriate strategy for building our understanding of
this.
There is an important point to emphasize in using this approach, however. As with
the rest of the modeling framework, agent-based modeling is a bottom-up endeavor.
It presumes separate components that may (but won’t necessarily) come together
in some kind of concert or pattern, but that may also move away from this into
fragments. To model this means to determine ways that these components can act
in concert or not; moreover, if the crux of the modeling framework is refining our
vocabulary of potential action, it means being very specific about how this is done.
This contrasts with a top-down approach, such as that proposed by Hegmon et al.
(2008). There the archaeological evidence in favor of a resilience interpretation of the
Hohokam is marshalled, but in a way that we would find difficult to incorporate in
the HWM framework. For example, certain archaeological indicators (ball courts and
large-scale architecture) are interpreted to mean higher interconnectedness among
social factions, which is then taken to mean a greater fragility to problems that may
occur in the system. However, in our approach- moving, arguably, from the semantic
to the syntactic- we would need to define the actions that lead to the creation of these
structures, and allow our agents to choose to undertake them or not; this would likely
be an elaborate undertaking, even if it is entirely technically feasible in the HWM
system.
6.1.9 Examples: Agents on Landscape 3
In this section I present an example of agents existing in Landscape 3; their task is to
farm the fields along the north and south canal systems. The agents are not spatially
located per se, but each agent is given control of exactly one field.
In the interest of simplifying this initial example, agents can perform a very limited
repertoire of actions. Beginning on March 1st of each year, each agent can plant a
crop in its field. The agent can thereafter control the amount of water deposited onto
the fields via the canal system. This control is limited: once per day the agent can
choose to open or close only the gate that directs flow onto its own field; it has no other
means of controlling the operation of the canal system. Note that this relies on the
LPG algorithm to control flow dynamics, because in the LPG system controls placed
at the terminal nodes of the canal system have predictable and intuitively satisfying
effects that are not representative of real-world canal system dynamics.
The agent choice to open or close the gate is based on the level of water in the
field, measured in mm. If this level is below a threshold the gate is opened, and it
remains open until the level exceeds another threshold. In the simplest version, the
thresholds are the same for all agents and are immutable.
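Sketched as code, this is a simple hysteresis rule; the threshold values below are placeholders rather than the values used in the demonstration.

```python
# The gate rule just described, sketched as a simple hysteresis check.
# The two threshold values are hypothetical placeholders.

OPEN_BELOW_MM = 20.0    # open the gate when field water drops below this
CLOSE_ABOVE_MM = 60.0   # close it once field water exceeds this

def decide_gate(field_water_mm, gate_open):
    """Return the gate state (True = open) for the next day."""
    if field_water_mm < OPEN_BELOW_MM:
        return True
    if field_water_mm > CLOSE_ABOVE_MM:
        return False
    return gate_open        # between the thresholds, leave the gate as it is
```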
The agent selects from a catalog of available crops; if this is the first time the agent
has planted a crop at this time of the year the selection from the list of possible crops is
random, but subsequent choices are directed based on prior outcomes. All of the crops
available in the Configuration are organized along a simple axis by length of growing
time. For the first demonstration, the five available plants vary in two respects.
First, they vary in length of growing season from 80 to 120 days (in increments of
10); second, they vary in crop factor from .8 to 1.2 (in increments of .1), but inversely
with the length of the growing season, so that the crops that grow fastest also need the
most water. Note that the agents assess all of the plants in whatever Configuration is
being used [19]; different collections of plants can be swapped in or out of the simulation
as needed, and the choice to have growing season inversely correlated with water need
was made in the hope of demonstrating interesting and surprising dynamics.
When the agents have planted their crops (and automatically opened their gates
on planting day) they follow them for the entire length of the growing cycle (even if
the crop dies along the way). On the date when the crops mature, the agent harvests
them. Naturally this begins to stagger the schedule of agent actions, because some
agents will have planted crops that mature earlier than others. Once the crop is
harvested the agent will plant its next crop of the year. This crop will also be followed
and then harvested; if there is time left- all crops must finish growing by the end of
the year- the agent will continue to plant crops in succession.
Each time a crop is harvested the agent evaluates the crop’s yield. If the yield is
above .95 (in arbitrary units, but common for all plants), the agent will, effectively,
decide that this is an indication that a shorter growing crop could be planted next
year (the rule is based on the assumption that shorter growing plants need more
water per day). The agent knows which crops it planted during the sequence of the
year, and will substitute a shorter growing crop in this position next year, if one is
available from the configuration.
If, conversely, the crop is poor, the agent will move in the other direction, attempting to plant a slower growing and less water-needful crop at the same time next year.
(‘The same time’, of course, meaning at the same position in its crop schedule; the
actual date may have shifted if the crops preceding it were replaced with shorter or
longer growing crops.) The main advantage to planting faster-growing crops is that
more of them could, theoretically, be fit into the March 1 - December 31 window;
with the configuration presented here, the maximum number of crops is three. (But
note that the agent does not strategize in terms of how many crops can be planted;
it only selects shorter or longer growing crops one at a time.)
[19] Technically the agent class performs this, so that it is done only once and not by each agent.
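The crop-selection adjustment just described can be sketched as follows; the crop list mirrors the five abstract plants of this demonstration, and the exact cutoffs for a 'good' versus a 'poor' harvest are simplified assumptions.

```python
# Sketch of the crop-selection rule described above.  The crop list mirrors the
# first demonstration (growing seasons of 80-120 days, crop factors 1.2-0.8,
# inversely related); names, data structures, and the 'poor' cutoff are
# illustrative assumptions only.

CROPS = [  # ordered from shortest to longest growing season
    {"name": "crop80",  "days": 80,  "crop_factor": 1.2},
    {"name": "crop90",  "days": 90,  "crop_factor": 1.1},
    {"name": "crop100", "days": 100, "crop_factor": 1.0},
    {"name": "crop110", "days": 110, "crop_factor": 0.9},
    {"name": "crop120", "days": 120, "crop_factor": 0.8},
]

def next_years_choice(current_index, harvested_yield,
                      good_yield=0.95, poor_yield=0.5):
    """Shift one step shorter after a good yield, one step longer after a poor one."""
    if harvested_yield > good_yield and current_index > 0:
        return current_index - 1      # try a shorter, thirstier crop next year
    if harvested_yield < poor_yield and current_index < len(CROPS) - 1:
        return current_index + 1      # fall back to a slower, less thirsty crop
    return current_index
```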
Figure 6.25. Agent Demo 1. Results are from the agent-based dynamics on the
north and south canal systems using Gila flow reconstructions and arbitrary plant
values.
The idea behind this arrangement is that when those agents at the start of the
canal have ample water they will be able to close their gates and water will be available
to those at the tail end; when the first recipients take water off they may, depending
on the flow coming in to the system, remove so much that none will be available to
those downstream. The downstream agents may therefore find that they must employ
different cropping schedules and strategies than their upstream competitors.
The demonstration represented in Figure 6.25 shows results from a typical run.
The summary is coarse, and shows the total yield for all agents for each year in the
north vs. the south canal systems. There is some additional background that explains
the results. First, the variation of flow within the year was performed using the data
for the Gila seasonal flow reconstruction (see above). Second, the headgate efficiencies
were configured according to the demonstrations above, so that for some portions of
the year the downstream headgate, feeding the south canal system, captures and
moves more water. Flow is not varied by year, however.
The results indicate that the southern canal system does just fine with the amount
of water it receives: after a very short time adjusting its crop schedules it is able to
achieve a nearly perfect full yield- three crops are harvested by all agents and all are
almost entirely untouched by water troubles (and hence all three are of the fastest-growing kind of crop). In contrast, the north system cannot do this; its total yields are lower.
Inspection of the simulation while it is running (not reproduced here) explains
this. During the summer months when the flow is lowest the agents at the tail ends
of the north canal system do not receive enough water; their crops not only suffer,
they die completely, even after they all switch to the longest growing crop. In theory
they could receive enough even if there was not sufficient water entering the system,
provided some of their upstream colleagues had their gates closed; but this does not
happen.
There may be little “interesting and surprising” in these results. It is gratifying
to see that the south canal agents can figure out how to achieve a nearly perfect
score, but if we are hoping to observe how a system can find a solution to cope with
greater adversity, we are disappointed in the north system’s performance. One thinks
intuitively that there ought to be some solution in which water is distributed so that,
even if it is in short supply, the distribution allows all the crops to come closer to
surviving and being harvested. Our agents, configured as they are, do not find it.
This is not unexpected: there is no mechanism built into this example that provides
any incentive for the upstream agents to help the downstream agents. In terms of the
flow of information, there is nothing that even conveys the message to the upstream
agents that their downstream brethren are suffering. The only thing they do is, when
their fields are saturated, close their gates. We, knowing that plants can go a day
without suffering any loss in yield, might want them to close their gates in such a
way that the downstream fields receive water, but there is no way that they can do
this given the information they have at hand.
This could be a jumping-off point into more elaborate and well-constructed examples; instead it will serve as a cautionary tale about both the power of the approach and its difficulty.
For the possibilities for agents here are nearly infinite. We could change what agents
sense: they could gauge the health of their crops and use that to judge whether to
add water or not. This would be challenging- given that the agents will not have
direct access to the plant growth algorithm it may be difficult for them to predict
what will happen if a plant is deprived of its daily draught- but it might be better than
relying on field water depth. We can let the thresholds for when to open and close
gates vary, and build in more cooperation by biasing this rule toward keeping the
gates closed as much as possible (even simple additions might help: an agent should
close its gates when its crop is dead, for example). We could institute something like
a market in which if an agent wishes to use water there is a cost associated. We could
ask agents to manage more than one field, at different positions along the system. We
could borrow from the Lansing and Kremer model in Bali (J. Stephen Lansing, pers.
comm.) and allow pests to enforce some kind of specific timing to watering events.
But if I have previously alluded to the lack of a ‘bestiary’ of complex systems, this
provides the example: we cannot pull off a shelf the recipe for getting 30 agents who
have this kind of relationship to one another to ‘self-organize’. Work to find general
principles that might allow us to do so is an ongoing front. Any solution, however,
must rapidly move through even more open doors: does it address the challenges of
different intra-annual schedules, or those that arise if annual variation is allowed?
Does it scale to different numbers of agents or, if this matters, different positions and
groupings along canal branches?
Perhaps the richest area for exploitation is in the relationships among the agents.
The simulation is intended to help understand human social behavior- even if so
far in this discussion there has been a heavy emphasis on the particulars of the
environment in which that behavior will be examined. Agents permit examination
of numberless combinations of social rules that we might wish to consider. Suppose,
to offer just one example, we assume that kinship ties can exist between agents at
different positions along the system, and that these ties precondition whether an
agent will accede to either a general scheme for distributing water or to some specific
instance of a decision; the rules for creating these kinship ties- marriage rules- and
their effects could be explored.
There are too many possibilities to give more than this insufficient sample. However, it is possible to offer another demonstration of the simulation's potential to use
agents in a different way (even if at the same time I expand the list of possibilities
even further). In this example, there is an additional agent of a different kind, an
agent that operates at a level above the individual agents discussed above. This
agent is called a Canal System Manager, and one exists for each of the two canal
systems on the landscape. The Canal System Manager has the power to insist that
water move through the system in specific ways. However, the power is limited, first
because the agent is not endowed with any kind of perception of the actual field
conditions; this could be changed for other examples, but for now we assume that it
is entirely schedule-based. Second, and more importantly, it is limited because the agent cannot
control individual fields but only entire branches of the system. For this, for the first
time, we formalize a distinction between the main canal line and the N-S distribution
canals by marking the correct branches of the canals appropriately and, importantly,
by adding this new distinction to the agent’s vocabulary; this new agent can close the
gate leading to any or all of the distribution canals. In this way the structure of the
canal system enters into the agents’ collective means of taking action. The agent can
be thought of either as a central controlling authority or as the collective agreement
of all of the agents; from an information flow perspective, it is disconnected from the
individual situations of any of the other agents.
The proposed rule for how this central agent chooses to close or open the canals
is: every fourth day it closes, for that one day only, four of the first five distribution
canals. The one that is not closed cycles among the five. This should ensure that
the sixth distribution canal- the numbers are hard-wired and the rule would have to
be rewritten for any other context- gets some water on at least a few days, possibly
allowing it to avoid having too many consecutive days without water and killing the
crops.

Figure 6.26. Agent Demo 2. Results are from the agent-based dynamics on the north and south canal systems using Gila flow reconstructions and arbitrary plant values, with Canal System Managers.
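The manager's schedule rule can be sketched as a simple function of the day number; the indexing below is illustrative and, as noted, hard-wired to this particular landscape.

```python
# Sketch of the Canal System Manager's schedule rule described above: every
# fourth day, four of the first five distribution canals are closed for that
# one day, and the canal left open cycles among the five.  Purely illustrative.

def distribution_gates_for_day(day, n_branches=6):
    """Return a list of booleans (True = open) for each distribution canal."""
    gates = [True] * n_branches
    if day % 4 == 0:                       # every fourth day...
        stay_open = (day // 4) % 5         # ...cycle which of the first five stays open
        for i in range(5):
            gates[i] = (i == stay_open)
    return gates

# Day 4: only the second of the first five branches stays open; the sixth is untouched.
print(distribution_gates_for_day(4))   # [False, True, False, False, False, True]
```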
It’s a good theory; it doesn’t work- at least not as well as expected. Figure 6.26
shows the results from a typical run. The south canal system continues to thrive,
even with this rule in action. The north system improves, but only very slightly.
Extending the speculation above that information flow is significant, we can ask whether, if the central agent had the ability to perceive more of the system than any individual agent, it could make better decisions than the arbitrary one we have asked it to make here. This is not to say that it will necessarily make better decisions than
the agents acting in their individual or small-group interests, given what they can
perceive (and assuming that ‘better’ is well-defined). It demonstrates, however, a
range of such situations that can be explored within the framework we’ve created.
6.1.10 Conclusions: Agent-Based Modeling and exploratory archaeology
Two extended points have run through this discussion: first, the importance of defining a suite of model elements whose interactions are what matters to us, rather than
their internal workings; second, the action of establishing connections among those
elements and its impact on the overall behavior of the model. Agents can be viewed
in this light. They are something like connective wild-cards: they can link any set
of elements with any other, through their ability to perceive across the whole of the
model (if we permit them) and their ability to act across it. We can see this in action in a simple example. Previously I noted that the Landscape 3 north and south
mountains were not directly opposite each other but were, unlike the canal systems, offset by 500 m. In no example presented so far has this mattered, because nothing connects
the mountains and the canals. But imagine that we have agents that are located in
space, and that can move, but their movement is limited by distance (a trio of related
propositions, but it is possible to have non-located agents, and also to have agents that
move without limit; here I propose that agents must be limited in how they move- a
sort of speed limit). In these circumstances the distances from the canal-based fields
to the mountain ones might matter; and if this is so, then comparisons of the two
canal systems might be skewed if one were 500 m farther away from its fields than
the other.
The limitations of agents- to perceive and to act- are important to our understanding of how their behavior can affect the model, but it is their role in connecting
model elements that may be of greater value. I will note that the earlier discussion
on understanding features of the canal system is only completed when agents are
added: while the low-level features of the canal system- the slope and nature of the
channels, etc.- are amenable to analysis purely on physical grounds, the higher level
structures- the nature of the networks it forms and the chronological changes to it- are approachable only when agents are added. That this derives directly from the
agents’ limitations even opens the possibility of a new definition of agent-based modeling based on how information is permitted to flow through the model rather than
on how the model was constructed. This is an intriguing suggestion, but it takes us
away from the center of our purpose; for now I will turn to another aspect of the
HWM framework, the construction of arguments from simulation input and results.
6.2 Building argument chains
The ABCM framework provides a mechanism for organizing the input and output
data for collections of simulation runs and using them to form larger chains of argumentation. Ideally this would participate fully in the scheme laid out in Chapter 3,
to wit, that the construction and evaluation of theories and hypotheses could be performed in a strict logical environment for which we would use a relational database,
and that this would ultimately allow the simulation to be put to use as a deductive
engine and play an appropriate role in a larger scientific enterprise that would lead
to real-world validation and to new hypotheses. Not all of this is fully implemented
in the ABCM framework; nevertheless, the framework provides a large number of advances in that direction, some of which offer mainly practical advantages while others
are more strictly logical. In this discussion, I will begin with the practical issues and
return briefly to the logical ones, using the latter part as a starting point for the
overview in the concluding section of the chapter.
The practical advantages of the ABCM framework for organizing information are
numerous; they are generally referred to as ‘auditing’ functions, and this refers to the
proposition that for any simulation run, it should be possible to determine exactly
the input used for that run and the code it used, so that it should be possible to
reproduce any run exactly at any time in the future. A closer look at how this is
achieved, however, reveals that it is something more than mere accounting.
Thus far I have omitted details about how the simulation is actually performed- that is, what the procedure is for designing and executing a simulation run; this is not
directly relevant to the issues addressed so far, but brief mention of it is appropriate
here because it can be run in two distinct ways, and the differences between them
illuminate an important aspect of the simulation framework. The two ways are to
either run the simulation in ‘Interactive’ mode or by structuring more formal runs.
Interactive mode (see Figure 6.27) allows the simulation to be incrementally stepped
forward by a user who may, between steps, interrogate the simulation or make changes to it. The changes can be of any kind: changes to parameter values, adding or removing instances of simulation objects such as fields or canals (or even mountains), or changes to the state of these objects, such as arbitrarily re-setting the temperature. There is even a means for directing agents to undertake actions, effectively allowing a user to step into the role of an agent (see above). This mode allows what might be termed an intimate approach to the simulation, and is often useful in working intuitively toward understanding some particular simulation issue.

Figure 6.27. The HWM Interactive Window. The left area is a menu bar of standard command buttons; the right allows the user access to maps and graphs of the simulation's current state and data being collected. The top of the central area allows the user to enter commands that interrogate or direct the simulation, with the results shown in the area just below it. The lower central area contains standard forms that allow some commands to be used via menus rather than text entries.

Figure 6.28. The HWM Simulation Menu. Note that the organization of this screen matches the organization of the HWM and the ABCM framework. On the left are project-specific data in static form. In the central area the data are divided into three kinds: simulation data, parameter data, and probes. Within the simulation data, divisions are into Data Model and Code Model variations; Data Model variants encompass Configurations and Histories, which are comprised of Narratives. In the rightmost area are options to run the simulation and to review data from previous runs.
In contrast, a formal run requires a ‘hands-off’ approach: the run must be configured and executed to its conclusion without making changes of any kind. One reason
for this should be obvious: it makes it easier to keep track of what collection of input
led to what output [20]. But it also conforms to our logical framework: the simulation is a deductive engine that begins with fundamental assertions about the way its components should interact and traces out their implications to some collection of final values.

Figure 6.29. The HWM Run Simulation Screen. Note that this reflects the way the ABCM framework structures simulation run data, and therefore requires only three elements: the 'Simulation' contains the Data Model (which contains the Configuration and the History) and the Code Model, while the Parameter Set contains specific values the algorithms can use during this run, and the Probe Set specifies which information is to be collected during the run.
We can note the practical advantages of the simulation framework easily. Because
the input data requirements can become very high, grouping information together is
key to managing the construction and recording of each run. In the ABCM framework, this is accomplished through the structures described in Chapter 4; the HWM interface that permits this allows the management of different kinds of source data on one screen (see Figure 6.28 and annotations), and the construction of a simulation run on another (see Figure 6.29). Making sure that components are grouped properly when the simulation input is crafted makes the aggregation and comparison of output from multiple runs more convenient. And by providing a guide for how
input should be structured- in packages of ‘parameters’ vs. ‘code models’ vs. ‘histories/narratives’, etc.- the framework acts to structure our thinking about the nature
of the hypotheticals we are proposing for our simulated world, and the initial axioms from which the deductive engine will work.

[20] It is possible to keep track of the actions the user undertakes in interactive mode, too; the HWM interface does this, making a full transcript of all such sessions (even including annotations).
There are some important comments to be made on the implications of the different kinds of input packages. Code models are effectively statements that the relationships that inhere among elements will be made up of specific dynamics. Histories and narratives are drivers for the simulation; they represent the dynamics that will occur in a world in which these things also exist or come into existence, regardless of the internal state of the simulation. Data models include Configurations, which are statements about the attributes of things in the world- in the HWM Simulation this refers especially to what plants are available, but there is an assumption that this will be a constant throughout a given simulation run. Parameters are close kin to code models but can be arranged into meaningful comparative groups. Probe sets specify what we want to extract from the simulation; they are separate from all of the other elements in that they do not affect the behavior of the simulation in action. Each of these has a clear role to play; the entire package, however, is an ensemble: they represent, collectively, the set of assumptions that we are using as the source for our deduction.
We can use these components to achieve our stated logical simulation goal: assume
a large collection of statements to the effect that A⇒B; trace out all of these to
their conclusions. The special structures- Histories, Narratives, Code Models, Data
Models, Parameters, and Configurations- represent a clean way to divide the kinds of
assumptions we work with in the exploratory approach I have proposed.
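As a sketch of how a single run's input might be grouped under this scheme (the class and field names are illustrative assumptions, not the ABCM framework's actual schema):

```python
# A sketch, following the description of Figure 6.29, of how one run's input
# package might be grouped.  Names are illustrative assumptions only.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class DataModel:
    configuration: Dict[str, dict]   # e.g., the catalog of available plants
    history: List[str]               # an ordered collection of narratives

@dataclass
class SimulationRun:
    data_model: DataModel
    code_model: str                  # which variant dynamics are in force
    parameter_set: Dict[str, float]  # specific values the algorithms may use
    probe_set: List[str]             # what to record; does not affect behavior
```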
When a set of simulation runs has generated its data, the ABCM framework
allows the user to assemble collections of related runs into packages, and takes, for
each kind of data, the set of all unique assumptions and arranges these into a single,
easily reviewable package. In this package each kind of statement (configuration,
narrative, etc.) is described in full, but only once, even if it is used in multiple runs
and even if it is accompanied by other alternatives that are also used by other runs
in the same set. When coupled with comments and references, the effect approaches
a complete argument laid out from raw data to output. Two examples, provided in
the electronic version of this document (see Appendix A) demonstrate this. They
are based on example runs that extend our agent-based example from the preceding
section. Briefly, the example is expanded by providing two additional alternatives: the
ability to use the Salt intra-annual flow in addition to that of the Gila, and a second
collection of plants from which the agents can choose.
Taken together, these runs use a large combination of elements. They share a
single parameter set (those aspects being kept common between them). They differ
in their code models in only one respect, the generation of the ‘Canal Manager’ agent.
Their plant configurations differ widely: the one used above is run for half of the runs,
while a new configuration that includes some realistically named plants is used for
the other half. The realistically named plants include one plant with two different
plant configurations (just as ’mythweed’ existed in several varieties above) but also
several other independent plants. At least one of the plants has multiple stages with
different characteristics in each stage. The difference in intra-annual flow variation is
created by varying one narrative in two distinct histories.
The first example takes the results from the four runs using the ‘Canal System
Manager' agent and combines them into one table and associated graph; this is an
example of what the ABCM framework terms an ‘Analysis’; a parallel analysis exists
for the four runs that did not use the Canal System Manager agents. The second
example presents the ‘Summary’ that combines all of these. The multiple individual
components of each are listed so that the entire chain of input data is associated with
the output data and, eventually, the conclusions.
(For those who are curious, the results are given in Figures 6.30 and 6.31. The
runs using the Salt River seasonal flow are more successful than those using the Gila
data; the runs using the newly concocted plant collection are dismal failures compared
to the abstract plants used in the runs above. The likely cause of this is the lack of
congruence between plant longevity and water need, rendering the agents’ rule for
changing crop schedules very poor.)

Figure 6.30. Field Manager Analysis graph. The graph shows the combined results of four runs of the agent-based field management simulations.
One additional benefit of this approach, revealing one of the advantages we gain
by clearly structuring our input data, is what is produced by following the strictures the approach requires: when input data are packaged into discrete elements, these elements begin to form a library of possibilities. A key need in the exploratory approach
is to break the immense space of possibilities for exploration into more manageable
volumes; we can look to the small library of components that have been discussed in
this chapter to see this already in action: the various data on streamflow (annual and
intra-annual), landscape and canal systems, and plants are the beginnings. These
provide a wide set of combinations to be explored; when coupled with the collaborative element that the ABCM hopes to foster, so that experts who work closely with
separate parts of the problem can contribute the packages of input data they think
are appropriate and worthwhile, this library becomes not only a convenience but a
contribution that the ABCM approach facilitates.
But the logical aspect of constructing these arguments is equally important. The
chains of logic begin- must begin- in discrete statements about the world. These
must be structured in a way that allows them to be played out through the engine
of the simulation, and this forces a construction of a vocabulary for them. That this vocabulary is necessarily common across variations of the simulation is an additional benefit: it means that working through possibilities in the simulation means revising the theory being applied. This is the starting point for our return to the larger logical issues of the bottom-up simulation approach.

Figure 6.31. Canal Manager Analysis graph. The graph shows the combined results of four runs using the 'Canal Manager' agent, for both the north and south canal systems.
6.3 Conclusion: Dimensions of Exploration outward from the Middle Ground
We can now re-examine the claims made in the initial parts of this dissertation in the
light of the examples presented.
The fundamental theme underlying the modeling approach presented here is this:
models are only imperfect reflections of the reality we hope to understand. They are,
metaphorically, square and blocky when the world is curved. But if we choose our
blocks with care we can see that their blockiness is not a liability, but an asset. The
models are useful not in spite of their blockiness, but because of it.
We see this at the end of our discussion in the ability to use these blocks to create
chains of argument that contain no ambiguities and that can, when properly used,
reveal the inconsistencies and support the patterns we seek, and contend are in, our
simulation results. This is one side of the transfer from a semantic approach to a
syntactic one; another, though, is the ability to see the inconsistencies before the
blocks we build are even put together- and in some cases because we see that they
cannot be.
The idea put forth in the initial chapter was to find a modeling ‘middle ground’;
the examples above are a first attempt at doing so. In some cases it is possible to
imagine simpler components, but each of the examples also marks a point from which
we could go further. We can increase the detail of the landscape; we can add to the
characteristics and dynamics of rivers and the headgates on them; we can make our
virtual canal systems more like the ones we knew the Hohokam to use; we can use the
fullest model of plant growth, chemical transport, and soil processes, and any other
components we might feel we need to include. But before we begin to do this we can
pause and ask whether the additional detail is worth the cost. More importantly, we
can assess the cost, and hope to understand it, because we have built and limited the
other pieces that impact it. And we can use these lightweight and tentative pieces
to fill in initial questions that may stand between us and the larger ones we might
ideally wish to ask.
The recognition that our fundamental pieces are abstract focuses our attention
on their external aspects and on how they interact; this, in turn, is what frees us to
pursue the range of goals the exploratory approach offers. When the components of
the model are seen to be abstractions, and we assume from the start that the target
of the model matches only imperfectly, we are positioning ourselves to take advantage
of the ‘intrinsic generality’ of models (see Chapter 3, page 100); through this we can
pursue any of the six goals I listed in Chapter 3. At one end of the continuum these
goals form is the pursuit of how the dynamics of a specific context played out, in the
deepest detail we wish to pursue. We can, if our interests and our purposes compel
us to, drill downward through the axes of scope and resolution; we can adjust the
composition of our model to include more components if we think they might impact
our findings; and we can question whether the elements we have included cohere into
a system with dynamics that we perceive are more telling than just a collection of
events- or, if not, then the events, in our model and in our target system, may actually
be just the accidents of history.
At the other end, however, is the idea that the dynamics we seek to understand
apply not only in the original context but in entirely different ones- different archaeological societies, or within non-human social and ecological systems, or even entirely
physical, nonliving systems. The goal in this approach is to look at the pattern formed
by the elements in the model and by the connections among them. If complexity theory can describe system behavior as robust or resilient, we may hope to find what are
the boundaries of the classes of systems that exhibit these, and perhaps other similar,
behaviors. Perhaps agent-based models, when defined by these formal characteristics,
can be shown to be one unique class; perhaps there are others.
The process of discovering this relies on the abstract nature of the models. This
allows the examination of counterfactual cases and of cases where the outcomes are
not fixed but are instead ranges of possibilities. This moves us past the questions of
‘what was’, and beyond the sketching of boundaries of ‘what was necessary.’ It allows
us to go beyond what we would know if we had that miraculous machine that gave us
the power to see everything that ever happened in the attested history, and instead
consider the questions of 'what if'. My contention is that this is the appropriate path
to reach the final class of questions, and to take in the scope of a millennium or more
of human history, and to begin to find a more general explanation, and an answer to
the question, ‘why?’
Chapter 7
Summary and Prospectus
In several of the preceding chapters I have referred to broader threads of intellectual
history and their impacts on the activities and approaches of archaeology. The longest
of these that I have discussed is that which runs through the course of twentieth-century philosophy of science, with the rise of Logical Positivism and its subsequent
decline. I have reviewed how the syntactic view of theories came to be supplanted
with a semantic view; we have inherited a framework for science in which models
are primary. Yet I have also shown that this leaves us somewhat adrift: models are
vaguely defined and poorly understood instruments. If they are to be pushed even
more to the fore in our archaeological pursuits (see, for example, Kohler and van der
Leeuw 2007) these difficulties must be resolved.
Alongside this is another trend, in which complexity theory is opening new and
previously unimagined possibilities for explaining how our world has come to be
structured in the way we see it. In the Hohokam case it has raised the new possibility
that no central state was required to manage the canal system, but rather that the
system components were such that the system was ‘self-organizing.’ But this kind of
possibility seems to exist in many other areas, so that we are left wondering whether
self-organizing systems are idiosyncratic oddities, able to exist only under unusual
circumstances, or whether they characterize the world in which we live- if only we
could recognize them. Archaeology offers a unique, long-term lens for asking about
the trajectory of human life on earth, surely one of the most challenging and important
arenas in which to investigate the universe’s subtle complexity.
These two threads are linked, of course: models and modeling have helped us
discover that complexity, and agent-based modeling has played a key role in extending
complexity theory into the biological and social sciences. Neither of these trends
would have been possible without a third thread: technology. Agent-based approaches
may have been dreamt of by scientists in the early 20th century but could not have
been implemented; only with higher processing power could they actually be created.
I have argued that a second technological thread is also of importance: the diversion
from ‘pure’ relational database theory to commercially saleable database programs
which sublimated the important logical aspects of databases in favor of a metaphor
of simple storage and retrieval. The result is that the computing tools with which
we are most familiar and which we have most easily available are not the ones that
can be most gainfully used in performing the tasks required by the new modeling
approach.
In this closing chapter, I would like to briefly address some additional threads that
I propose give an indication of the trajectory of the new model-based archaeology and
of the possibilities of the ABCM and HWM frameworks. I begin with three salient
intellectual issues that I believe will require more work to resolve; these are discussions
that I introduce here and expect to see played out in the literature in the near-term
as issues in the modeling approach make themselves more manifest. After this I will
turn more specifically to the future prospects of the ABCM framework that I have
introduced here, looking backward at a few other influences that helped shape it and
impact its future possibilities. I close with a status report for the more specific project
of the HWM Simulation. Because the HWM Simulation is intended to be an ongoing
investigation, this will be something of a snapshot, but it should be possible to give
an idea of how far it has come and where it is going.
7.1 Three intellectual issues: glancing backwards and looking ahead
I present here three intellectual issues that seem to demand more attention as the
model-based archaeology moves forward. First, I refer to some issues that archaeology
inherits from debates that took place in the late 20th century, recast in the light of
the new modeling approach; second, the general issue of how to interpret the results
of an approach that involves the construction of ‘alternative histories’, one of the
issues in a modeling framework that will be furthest from the traditional purview of
the archaeological endeavor; and finally, the general issue of formally bridging the gap
between a model’s predictions and the empirical work that will be required to fit it
into the larger scientific process.
7.1.1 How ‘New’ is a Model-Based Archaeology?
A major theme in American archaeology in the second half of the 20th century was
the rise of the so-called ‘New Archaeology’, especially a main trunk called ‘Processual’
archaeology along with a second branch called ‘Behavioral’ archaeology. These can
be examined, usefully, in the light cast by the intellectual history given in Chapter 3- that is, against the decline of Logical Positivism and its replacement with a semantic
approach to the relationship between model and theory. The touchstones for the New
Archaeology were the writings of its major proponent, Lewis Binford, and I will use
these as the starting point for the discussion and, for brevity’s sake, will refrain from
straying far from them.
The first historical point to be made is that the trajectory of the New Archaeology
tracks fairly closely to what might be expected from the outlines of broader intellectual history already given. Binford’s early writings strongly advocated a hypothetico-deductive approach drawn from Karl Popper’s writings, in which induction is essentially written out of science, the generation of hypotheses (‘context of discovery’) is considered unknowable and hence irrelevant, and scientific knowledge is obtained
through falsifiable hypotheses only. Constructing these hypotheses, however, required
additional elements. In Binford’s 1968 work (Binford 1972a), he refers to ‘bridging
arguments’ between the aspects of a phenomenon that were unknown and those that
were testable. Later (according to his own account; see Binford 1981b) he recognized
the larger lacuna, and began to call for a “Middle Range” theory that could connect
dynamic aspects of a cultural system to the static archaeological record. For our
purposes, we can recast this as a shift from a desire for pure theory and testable
hypotheses to one in which a collection of interim relationships among elements- a
model- is pushed to the fore.
The Behavioral Archaeologists followed the same impulse but in a different direction. Where Binford hoped to gain insight into the archaeological record, Behavioral
Archaeologists proposed that the purview of archaeology be expanded to the larger
domain of all interactions between human beings and their material world (Schiffer
et al. 1995). The two approaches were quite different from a logical standpoint, to the
degree that Binford forcefully denounced much of the Behavioral program (Binford
1981a).
This dissent is revealing. The Behavioral program rested on the idea that the
archaeological record is a transformation of the operational cultural system (Schiffer’s
C- (cultural) and N- (natural) Transforms; see Schiffer 1995). Binford vigorously
opposed the idea that the archaeological record as it was created by humans be
considered a distortion at all: it could only be considered so if it were to be compared
to some set of expectations, and instead Binford insisted that it be considered on its
own terms only. His argument did not apply to postdepositional changes of nonhuman
origin, which were of a different character; but he firmly believed that it was
inappropriate to speak of some human behavior as ‘distorting’ some expected record
that a misguided (in his view) archaeologist might be hoping for.
There were two important and intertwined logical issues that underlay this debate,
one related to inference and one to explanation. Binford had positioned his new
approach against what he termed the ‘traditional’ model; in this model, he held,
culture was considered a mental construct- that is, existing in the minds of individuals- and human agency was given a causal role in explanations. A repeated target of
Binford’s was the construction of what came to be called ‘Culture History’, especially
if that history involved processes of migration or diffusion; Binford emphasized that
migration and diffusion were phenomena to be explained, not explanatory in and of
themselves.
In contrast to this approach Binford hoped to study culture as something more
abstract: an adaptive ‘system’. The ethnic identity or mental constructs of the
people in such a system were irrelevant or epiphenomenal in this view. Explanations
that invoked such mentalist concepts were, in Binford’s view, historical rather than
scientific. Moreover, the Behavioral Archaeology approach, in Binford’s view, was
flawed because it repeated the sins of the traditionalist model: culture was given
primacy and modeled as existing in human beings’ minds. This meant that the
explanations created in the Behavioral approach would also be historical and hence
flawed.
These arguments impacted the issue of inference in the following way: if one’s
interest is in the set of transformations that formed the archaeological record, one
arrives at a question of chain of inference: can one make inferences about mentalist
concepts like religion from archaeological evidence, and are there degrees of inferential distance from the solidity of the artifactual remains? The Behavioral approach
attacked this as a problem and attempted to find general ways to move up that chain
of inference; Binford, distrusting the results of such an endeavor anyway because any
explanation crafted from them would be faulty, demanded that the archaeological
record be considered only in its own terms.
These arguments continued, but we will leave them here and turn to one of Binford’s more recent efforts, which will illuminate the way these issues are transformed in
an explicitly model-centered context. In 1999 Binford published a critique of an earlier
model created by Ammerman and Cavalli-Sforza (1984) that addressed the spread of
pottery and domesticated plants across Europe. In the original model, which can be
named the demic diffusion model, the advantage of domesticates caused populations
using them to grow and eventually to be forced to expand outward. Hence the beginnings of domestication in the Near East led to a population pressure that compelled
a movement of people into Europe. The key component in support of this model is
the timing of the appearance of the new elements in the archaeological record; this
timing reflects the idea of a wave of advance outward from the Levant, covering, over
the course of several thousand years, most of the European continent like a blanket
spread roughly from southeast to northwest.
Binford proposes that the reliance on “Time as a clue to cause” is misguided. His
countermodel is quite different. He argues that local populations in all parts of Europe
would have experienced population growth beginning with the retreat of glacial ice,
and that this growth would have continued for all populations at rates determined
by their subsistence strategies. He provides several categories of such strategies (like
reliance on hunting terrestrial animals vs. gathering wild plants), assigns them to
initial areas based on climate, and associates rates of population growth with each.
He proposes that when a given population density passed certain thresholds, a shift
in adaptive strategy was required. The catalog of such transitions was limited and
the shifts were deterministic, none more so than the final one: the shift to the use of
domesticated plants, which Binford calculated to occur at a density of 9.098 persons
per hundred square kilometers.
Using this model, elaborated by a few other considerations, Binford calculated
the time at which populations would reach the critical ‘packing density’ at which the
shift to domesticates would occur. When plotted this timing approximates that of
the ‘wave of advance’ model: sites at the southeastern extent of Europe undergo this
switch earlier, followed by their neighbors just to the northwest, and so forth. This
timing, however, is incidental: no actual transmission or movement is implied.
Binford mentions that his model fits the archaeologically attested evidence more
closely than the demic diffusion model, but this is incidental to his larger point,
which is that the demic diffusion model is another repetition of what he considers
to be historicism passing as explanation. In fact, he makes this criticism not only
of the full demic diffusion model, but of two separate variants of it: one in which
population pressure forced a genuine movement of people across Europe, and the other
in which only the technology moved because it was adopted by successive neighboring
indigenous local populations. Either of these, he argues, is a retreat to the unscientific
and historical mode of explanation of the ‘traditional’ model in archaeology (see
Binford 1972b).
Before returning to the larger topic, one final note should be made about Binford’s model. He documents an exception that his model does not accommodate: the
appearance of Linear Bandkeramik. This, he argues, is not merely a shift among subsistence strategies, it represents the introduction of a means of exploiting a genuinely
new ecological niche. He writes:
“The emergence of a new niche is frequently followed by a relatively rapid
filling of the niche space- that is, the geographic region in which the essential habitat conditions necessary to the success of the new niche are
distributed- which results from either increased reproductive rates within
the new niche space or in-filling by adjacent and local populations that
adopt the new niche strategies.” (Binford 1999, pp. 27–28)
This is, of course, exactly analogous to the scenario that the original model (in its
two permitted variations) proposes for the broader pattern across Europe; Binford
agrees (explicitly) that this is the correct model for what occurred with respect to the
Bandkeramik culture, and thus it would seem that there is a contradiction between
Binford’s assessments of the two cases.
This is an extended background, but it allows us to ask in what way the new
‘model-based’ archaeology addresses the issues that were so contentious in earlier
frameworks, and whether these issues are resolved, ignored, or even obscured.
The first observation in pursuit of these questions is that both the original ‘wave
of advance’ model and Binford’s countermodel could easily be created as simulations.
We can for rhetorical reasons put aside two claims that inhere in this specific case- the
claim that one model matches the empirical evidence better than the other, and the
claim that Binford makes early in his critique that the original model’s description of
the effects of population pressure is not congruent with observations of the behavior
of modern populations- and easily envision a condition where two models equally
predict the same outcome (presumably one could consider resolving this through
further research, but our purpose is better served by assuming that this cannot or at
least has not been done). In fact, we could consider three models, the third being a
combination of the two in which the impetus to shift adaptive strategies is driven by
Binford’s proposed dynamics and the actual shift is implemented as either population
movement or technological diffusion coincidental to this shift.
This raises the dilemma that illustrates the central issue: all of these are valid
models, but one of them was considered to be of a different explanatory character than
the other two. This raises a key question: can the formal properties of these models
be used to understand why they would be treated in epistemologically different ways?
We can consider in greater detail the criteria that Binford uses to judge his model
different, and compare that to a simple implementation as an agent-based model. In
the simple implementation, agents are located across the landscape and represent the
adaptation used by the local inhabitants. In keeping with Binford’s approach they
would not necessarily represent distinct groups; we would assume that through time
the individuals participating in each system would change (through birth and death
if not migration), and so the agents in our model would represent the system, not
the people. Agents have identical sets of state transition rules; they are thus uniform
(even though they may possess different values for their state variables). The uniform
set of state transition rules is achieved by invoking an adaptive system drawn at a
high enough level of abstraction that local idiosyncrasies are ignored: when a system
shifts from hunting to gathering, it is irrelevant if it moves from hunting rabbits to
gathering berries or from hunting squirrels to gathering nuts. This uniformity seems
central to his claim that the model is explanatory and not historic. The deterministic
rules that he uses to direct these agents’ transitions from one state to another have
an additional cachet in that they are drawn from biology; Binford can claim that they are not a strictly human phenomenon, but rather represent systems that are found
more generally. This, too, supports the claim that his model is not merely historical.
Because the agents in Binford’s model are entirely deterministic and uniform, they are
relegated to non-causal roles. Causality is pushed to the truly independent variables:
geography and climate.
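To make this formal characterization concrete, the following sketch (in Python, purely for illustration) shows what such a 'Binfordian' agent reduces to: a uniform, deterministic state-transition rule driven only by exogenous conditions. The strategy names, growth rates, and intermediate thresholds are hypothetical placeholders, not values from Binford's model; only the packing density figure quoted above is taken from the text.

# A minimal, illustrative sketch of a 'Binfordian' agent: every agent shares the
# same deterministic transition rules; only exogenous conditions (climate,
# geography) and the resulting local density differ. Names and rates below are
# hypothetical placeholders, except the packing threshold of 9.098 persons per
# 100 km^2 cited above.

PACKING_THRESHOLD = 9.098  # persons per 100 km^2; shift to domesticates

# Uniform rules: (current strategy, density threshold, next strategy)
TRANSITIONS = [
    ("hunting",   3.0,               "gathering"),
    ("gathering", 6.0,               "aquatic"),
    ("aquatic",   PACKING_THRESHOLD, "domesticates"),
]

GROWTH_RATES = {  # annual growth rate tied to each subsistence strategy
    "hunting": 0.003, "gathering": 0.004, "aquatic": 0.005, "domesticates": 0.006,
}

class LocalSystem:
    """One cell of the landscape: an adaptive system, not a named group."""
    def __init__(self, strategy, density):
        self.strategy = strategy   # current subsistence strategy
        self.density = density     # persons per 100 km^2

    def step(self):
        # Density grows at the rate tied to the current strategy...
        self.density *= 1.0 + GROWTH_RATES[self.strategy]
        # ...and the same deterministic rule set applies to every agent.
        for current, threshold, nxt in TRANSITIONS:
            if self.strategy == current and self.density >= threshold:
                self.strategy = nxt
                break

# 'Causality' lives only in the initial conditions set by climate and geography:
cells = [LocalSystem("hunting", d) for d in (0.5, 1.0, 2.0)]
for year in range(2000):
    for cell in cells:
        cell.step()
print([(c.strategy, round(c.density, 2)) for c in cells])

Nothing in this sketch depends on which people occupy a cell; the agents never exchange information, which is the property taken up below.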
In contrast, an implementation of the demic diffusion model might use agents that
represent specific groups. Agents might further be of two distinct kinds: those who
use domesticates and pottery and those who do not. Either rules of movement or of
cultural transmission would be implemented, and in this way agents would influence
one another. These agents are more ‘solid’ than Binford’s: whereas the agents in the
Binfordian model might (implicitly) exchange members, these agents effectively bump
into one another. Moreover, as they play out the game that is effectively defined by
their interactions, they appear to take on a causal role: they are endowed with a
range of options and seem to exercise choices among them (i.e. whether to split off a
portion of population, or to switch identities from one group to the next, depending
on the implementation.) Hence it might seem possible to use the differences in
the kinds of agents to assess the model’s explanatory value differently.
The problem is that these differences may also simply be illusions that depend on a
superficial understanding of the mechanisms of implementation. All agents are really
deterministic; leaving aside the possibility of randomness, all agents are endowed with
a menu of options and a set of rules to move among them, and hence the seemingly more abstract agents in Binford’s model and the more personified agents in the other are formally identical. Even the issue of ‘kinds’ of agents may be merely an aspect of implementation: an agent may contain all the possible states and rules for moving among them, and the ‘identity’ of an agent is resolvable entirely to its state. This issue is one avenue where there is room for more work: the question of when there are two ‘kinds’ of agents and when there are simply agents in different parts of a common state space remains underaddressed. On the other hand, the property of information flow among the agents is a difference between the two models that can be formally defined and recognized in any implementation. The independence of climate as a ‘driver’ (no
information flows back to climate in this model) is one aspect of this.
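A brief sketch can illustrate the point that a 'kind' may be nothing more than a position in a common state space. Again this is illustrative Python; the attribute name and the transmission rule are hypothetical, standing in for either population movement or cultural adoption.

# Illustrative sketch: two apparently different 'kinds' of agents (forager vs.
# farmer) implemented as a single agent class whose 'kind' is just one of its
# state variables. The rule below is a hypothetical transmission rule.

class Group:
    def __init__(self, uses_domesticates=False):
        self.uses_domesticates = uses_domesticates  # the 'kind' is only state

    @property
    def kind(self):
        return "farmer" if self.uses_domesticates else "forager"

    def interact(self, neighbor):
        # A forager adjacent to a farmer adopts domesticates. Information flows
        # between agents here, which is the formally recognizable difference
        # from the Binfordian sketch above.
        if neighbor.uses_domesticates and not self.uses_domesticates:
            self.uses_domesticates = True

a, b = Group(uses_domesticates=True), Group()
b.interact(a)
print(a.kind, b.kind)   # farmer farmer: the 'identity' changed, the class did not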
The final problem is that the actions in the model are ultimately all equally causal:
constructing the model means asserting that X will impact Y, A will impact B, and
so forth in a large collection of dynamic assertions. ‘Cause’, in the sense that Binford
hoped to demonstrate, is obscured. The more fundamental nature of this problem
is reflected in Binford’s own example, and the fact that the Linear Bandkeramik
represent an ‘historic’ exception to his model. His argument, ultimately, is not that
history doesn’t happen, only that it doesn’t serve as an explanation.
This returns us to the issue of the ‘chain of inference’, and the ‘reconstructionist’ program that Binford decried; indirectly, it also takes us back to the proposition
that started this dissertation: what if we could observe the entirety of activities that
produced the archaeological record? The Processual approach insisted that the archaeological record be treated in its own terms, and not as a distortion, because,
paradoxically, we knew it not to be: it was “most likely a structured consequence of
the operation of a level of organization difficult, if not impossible, for an ethnographer to observe directly.” (Binford 1981a p. 198). This relied on a tautology (the
archaeological record includes only those sites that met these needs, and not, as the
Behavioralists would counter, the world as a whole), but in effect meant that part of
the definition of scope, scale, and resolution was already done for us. Modeling, in
effect, allows us to break away from this- and, to be sure, this is necessary, because we
certainly should not rely exclusively on the beneficence of time to provide us with the
record that will ‘most likely’ inform us in our investigations- but it raises the opposite
question: if we could see everything (or, better, if we employ a method that allows us
to propose an array of possible ‘everythings’ and compare them to what we can see),
do we run the risk of reconstructing everything, but missing the explanations? The
challenge is that while modeling offers an exciting approach for crafting explanatory
frameworks, it does not have an explanatory nature built into it. Going forward, there
will have to be different strategies of modeling developed to accommodate different
explanatory frameworks as they emerge.
Two other points are related to this. The first is that in the preceding chapter I
effectively made use of the formal characteristics of models and the quality of information flow among elements in the model to define the distinguishing characteristics of
agent-based models. I propose that work of this kind, expanding the kinds of models
by finding new categories of ways that information flows through them, will be an
important direction in which the field of modeling progresses. Second, I earlier made
reference to the fact that we do not have a firm map of the constellations of complex
systems that we may find ourselves addressing. Work with the formal characteristics
of models, at the level at which, to refer again back to material I presented in the
opening chapter, the Bali Model of Steve Lansing (pers. comm.) and the model of
Indonesian forest dynamics are the same (Lisa M. Curran, pers. comm.), will push
this effort forward. We may find that this is a rich map of a widely varied world, or we
may find that there are a few categories of such dynamics that apply very generally.
I propose that this will be an exciting area of work, and one in which the unique
long-term perspective offered by archaeology will play an important role.
7.1.2 Running the tape again
Of all the challenges of the new approach to archaeology, perhaps the most far-reaching
is the idea of ‘alternative histories’- of asking what would happen if history were to
happen again. This kind of thinking has become so pervasive that it has entered
the zeitgeist. Decades ago James Burke produced a series of television programs and
a book called Connections (Burke 1978) in which he outlined the unexpected and
occasionally far-flung links between seemingly unrelated events in history. The net
effect was to leave the viewer astonished at how small chance occurrences led to wide
ranging results. But the new approach is coming to the fore. Jared Diamond’s Guns,
Germs and Steel (1999) may be the pinnacle (so far), with its thesis that the large-scale
dynamics that played out between the societies established on the several continents
of the world were the nearly inevitable product of those continents’ arrangement and
the character of their resources. The wonderment is no longer at how small chance
effects determined history, but at how those small chance effects were conditioned by
larger historical context, and how, therefore, history (writ large) was likely to happen
in more or less the same way no matter what.
We are not good, yet, at playing the game of ‘what if?’ but a recent archaeological
example grapples with this issue, and offers an enticing approach. Griffin and Stanish
(2007) recently published an analysis of the settlement pattern for a 2000-year period
around Lake Titicaca. Their agent-based modeling approach found that in many (but
not all) of their simulation runs a general pattern arose which had not been built into
the simulation’s rules in any way, a pattern of initial settlement in the south and an
ultimate decline in the south and shift to the north. They attempted to understand
this by crafting an analytic solution, a description of the general dynamics that would
have led to this pattern, abstracted from many of the details of their simulation
and related instead to more general patterns of trade, settlement, and conflict given
the position of the lake and the routes available for movement and trade with the
surrounding areas.
Thus can an analytic approach (such as, perhaps, Janssen and Anderies’s (2007)
approach for modeling resilience in the Hohokam case) bolster our confidence in a
simulation. But it does not rescue us completely; we are still left with the tension
between whether the real outcome was likely or unlikely, and whether we can accept
or must reject a model that suggests that reality played out in a highly unlikely way.
Having two models make the same claim is not really more helpful than having one;
they mutually verify but do not validate. More broadly, we still lack a guide to how
the analytical model is supposed to relate to the simulation and to the real-world
situation, though the argument must clearly be that the analytical approach at some
level of abstraction describes them both.
There is a sense in which this question of how history would unfold if it were to happen twice is a path already trod. The Annales school of historians wrestled with the same issue,
and argued that single events were meaningless without a place in a larger trajectory
that determined them; even what seemed to be the most profound watershed was,
in some larger analysis, the result of underlying tensions that would have eventually
found expression in some other way if not the one that actually played out. We would
like to know, somehow, whether we are writing history correctly; all of our histories are models, but we do not have a guide for knowing if they are models of the right
things.
7.1.3 Models, Experiments, and Scientific Research
A second important front in the current state of play of a model-based archaeology is the set of epistemological issues concerning how models are related to the real world. A fuller
review of this was given in Chapter 3, but it is worthwhile here to turn forward to
see some of the unresolved issues that must be on the docket for future work.
One issue is that of representation: how models represent elements in the real
world. There can, as I noted earlier, be multiple modes for this, but each one will
carry prescriptions and proscriptions for how a model constructed using it can be
applied. This contributes to the problems that lie in a second issue, the grouping
of models for purposes of comparability. I alluded to some cases in which models
shared a common state space but different rules for moving among those states,
and proposed that grouping these simulations together was entirely acceptable, but
there are other cases in which multiple versions of a model might be compared,
but the justification for doing so is weaker than might be hoped. This is, of course,
crosscut by the actions of interchanging components, as I have argued is necessary for
exploring potential complex systems; this introduces additional complications both
with components and at the joints of the system they define, complications that are,
as yet, poorly worked out.
Where these issues are played out most keenly is in the larger scientific process I
also described in Chapter 3: the hope that model results will be taken as hypotheses
and subject to empirical tests. A brief exercise can illustrate the difficulties with
this. Suppose our hope is to determine which of two models correctly describes the
dynamics reflected in the archaeological record. We assume, then, that each model
proposes some set of antecedent conditions and dynamics rules (A1 and A2, to use the earlier notation) that each arrive at some consequent, C1 and C2. Presumably both
sets of consequences are considered to reflect what is observed in the archaeological
record (C); but we have not yet defined what it means to do so: in what way does a
model consequent ‘match’ the real-world? Moreover, if there exists any hope of the
two models being differentiated then the outcomes C1 and C2 must be different in
some way, but must nevertheless also be ‘the same’ in some way (because they both
‘match’ reality). One way to deal with this is to divide each C into components (Ca, Cb, etc.) and show that each model yields a different subset from the domain of possible component combinations, which might presumably then direct an investigation
designed to see whether the actual record contained some specific component found
in one but (necessarily) absent in the other.
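The logic of this exercise can be set out in a few lines. The sketch below (illustrative Python; the component labels are invented) treats each model's consequent as a set of components and isolates those predicted by one model but necessarily absent from the other, which is where an empirical test would be directed.

# Illustrative sketch of the comparison exercise above: each model's consequent
# C is treated as a set of components (Ca, Cb, ...), and the discriminating
# evidence is whatever one model predicts and the other excludes. The labels
# are hypothetical placeholders.

C1 = {"Ca", "Cb", "Cd"}   # consequent of model 1 (from antecedents A1)
C2 = {"Ca", "Cb", "Cc"}   # consequent of model 2 (from antecedents A2)

shared = C1 & C2          # the respects in which both models 'match' the record
only_in_1 = C1 - C2       # candidate discriminating evidence for model 1
only_in_2 = C2 - C1       # candidate discriminating evidence for model 2

print("shared:", shared)
print("look for", only_in_1, "to support model 1; for", only_in_2, "to support model 2")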
Noting only that these issues quickly grow complicated, I will leave this kind
of work for future research, and turn instead to the more concrete prospects for the
ABCM system (which, while it makes possible a suite of approaches to these problems,
does not resolve all of them) and, finally, for the HWM Simulation.
7.2 The ABCM Framework
Several other threads of history might also be said to be affecting the prospects for
the ABCM framework proposed here. If the core of the ABCM framework is the
integration of a database and a simulation, the increasing importance of databases
in archaeology should certainly be mentioned. Datasets from archaeological projects
in the (sometimes distant) past are being transformed into electronic resources. One
area in which this is being played out is in the collection of new archaeological data,
where old view of data as merely data is being forced out by the concern for the
‘voice’ of a given datum: who is asserting that a particular datum is true. When
the data set is created from a chain of operations beginning with some observation
(i.e. a protocol statement) and leading to the construction of some final report, issues
like the correction of obvious errors and re-classification immediately raise the issues
of voice and assertion that the ABCM framework attempts to lay bare. Meanwhile,
there is also an impetus to assemble as many as possible into integrated bundles, so
that archaeological research in a given topic area can use common gateways to explore
the archaeological record in new ways (Dentamaro et al. 2007, Kintigh 2006). This
imposes a number of technical and conceptual challenges, most commonly the need
for a unified thesaurus so that the terminology applied from one excavation is related
to another, at least to the degree that searches across the database can be performed
without missing important data. The ABCM framework implicitly projects that this
trend will continue in two ways: first that the task of refining these thesauri (the
technical term is an ontology) is in essence the same task that is required to define
the vocabulary of a simulation model, and second that the eventual uses of these large
data sets may someday include not only static, searchable data but data put to work
in dynamic simulations.
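The thesaurus problem itself is simple to state in code. The sketch below (illustrative Python; the terms, sites, and mappings are invented, not drawn from any actual Hohokam data set) shows how excavation-specific vocabulary might be mapped onto a shared term so that a query across the combined records does not miss data.

# Illustrative sketch of the unified-thesaurus problem: terms recorded by
# different excavations are mapped onto a shared vocabulary so that a query
# across the combined database does not miss records. All terms and mappings
# below are invented for illustration only.

SHARED_VOCABULARY = {
    # excavation-specific term -> shared (ontology) term
    "red-on-buff sherd": "ceramic",
    "plainware fragment": "ceramic",
    "mano": "ground stone",
    "metate": "ground stone",
}

records = [
    {"site": "Site A", "item": "red-on-buff sherd"},
    {"site": "Site B", "item": "metate"},
    {"site": "Site B", "item": "plainware fragment"},
]

def query(records, shared_term):
    """Return all records whose local term maps onto the shared term."""
    return [r for r in records
            if SHARED_VOCABULARY.get(r["item"]) == shared_term]

print(query(records, "ceramic"))   # finds both sherd records despite differing labels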
This emphasis on archaeological databases is one part of a larger interest in applying technology to archaeological data in general. This is often thought of in the
context of the delivery of archaeological data (see the aptly named Delivering Archaeological Data Electronically, Carroll 2002), a domain that includes a large number
of concerns, up to and including the design of accessible websites (see Carroll and
Marable 2002 in that volume). Deeper issues related to this have also begun to be
explored. One is that of funding (a strong concern in Aldenderfer 2002); another concerns the characteristics befitting an archaeological data infrastructure (McCartney 2002;
Kintigh 2006).
A second trend that impacts the ABCM framework may be the increasingly interdisciplinary and collaborative approach required to apply complex systems dynamics
to the archaeological record. Because information from so many domains may be
brought to bear on a given problem, it is increasingly common for multiple researchers
to work together. The ABCM facilitates this by providing a common framework in
which multiple participants can propose ideas and thus gain insight from the proposals of others and the results these proposals obtain. Audit issues are a longstanding problem in simulation modeling: keeping track of which runs were run for what reasons and produced what results.[1] This is a challenge even for a single researcher, and would, of course, be magnified if the modeling task were undertaken by a team of individuals. What might easily become an unstructured and dispersed collection of forays into a problem is neatly audited by the ABCM’s structure for building arguments; the integration of source data (including proposed dynamics) and output into a single structure provides a convenient forum for collaboration, while also serving to reconcile (or at least make transparent) differences in the approaches of different participants. The collaborative approach that is permitted in the ABCM also melds syntactic and semantic approaches nicely: models (and theories) are defined syntactically and thus transparently, but the broader task of exploring a fabric of interconnected propositions that are difficult to test singly is now shared, so that knowledge construction becomes a social act, and each participant may contribute his or her expertise to the broader understanding being created by the team as a whole.

[1] Eiteljorg (2002) discusses the importance of audit trails in the context of transformations of archaeological data; I wholeheartedly agree that this is a problem that must be reckoned with, but I believe that the context should be considered to be the construction of arguments, a conceptual effort, rather than a mechanical or purely technical one.
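Returning to the audit problem raised above, a purely illustrative sketch of the kind of record involved follows. The field names are hypothetical and are not the ABCM's actual schema; the point is only that each run is tied to its parameters, its rationale, and its results.

# A minimal sketch of a run-audit record: each run keeps its inputs, the reason
# it was performed, and its results together. Field names and values here are
# invented placeholders.

import datetime

class RunRecord:
    def __init__(self, run_id, parameters, rationale):
        self.run_id = run_id
        self.parameters = dict(parameters)   # inputs and dynamic assumptions used
        self.rationale = rationale           # why the run was performed
        self.started = datetime.datetime.now()
        self.results = None                  # filled in when the run completes

    def complete(self, results):
        self.results = results

audit_log = []
run = RunRecord(1, {"streamflow_series": "reconstruction_A", "crop": "maize"},
                rationale="test sensitivity of yield to low-flow years")
run.complete({"mean_yield": 0.72})
audit_log.append(run)
print(run.run_id, run.rationale, run.results)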
Ultimately it may be possible to define theories in advance of defining simulations
to test them, and to watch values initially considered possible either become more widely accepted or be discarded on the basis of the ongoing work in
the simulation environment. The ABCM may act as a kind of truth maintenance
system in which proposals that contradict earlier assertions are flagged for evaluation
or resolved (see Stanejović et al. December 1994 for an overview that is far more
technical than the rather fuzzy hope expressed here). The semantic view of science as
a practice permits inconsistencies to linger as working hypotheses, but the danger, in
my view, is that they may be unresolved or even unnoticed; instead, an active system
for knowledge construction, involving a database that links even observational data with a structure for drawing implications from those data, could offer a
context in which such inconsistencies were automatically flagged so that appropriate
resolutions could be sought.
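A toy sketch of the kind of automatic flagging imagined here follows; it is illustrative only (real truth maintenance systems, such as those surveyed in the reference just cited, are far more sophisticated), and the quantity names and ranges are invented.

# Toy sketch of automatic contradiction flagging: each proposal asserts a range
# for a named quantity, and a new assertion that cannot overlap an existing one
# is flagged for review. Names and values are illustrative placeholders.

assertions = {}   # quantity name -> (low, high, who asserted it)

def assert_range(name, low, high, author):
    if name in assertions:
        old_low, old_high, old_author = assertions[name]
        # No overlap between the old and new ranges: flag the conflict.
        if high < old_low or low > old_high:
            print(f"CONFLICT on '{name}': {author} asserts [{low}, {high}], "
                  f"but {old_author} asserted [{old_low}, {old_high}]")
            return False
        # Overlapping ranges: narrow the accepted range to the intersection.
        low, high = max(low, old_low), min(high, old_high)
    assertions[name] = (low, high, author)
    return True

assert_range("canal_velocity_m_per_s", 0.5, 1.2, "researcher A")
assert_range("canal_velocity_m_per_s", 0.9, 1.5, "researcher B")   # narrows to [0.9, 1.2]
assert_range("canal_velocity_m_per_s", 2.0, 2.5, "researcher C")   # flagged as a conflict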
7.2.1 Status and Prospects
Whether the ABCM framework becomes a primary vehicle for these kinds of explorations, or whether it exists merely as an early example, is not yet known; only time will reveal this. One touchstone will be whether the divisions that the ABCM employs among the separate model elements will be useful for the purposes of the areas
of model-based research that I proposed in the preceding section lie on the field’s
horizon; some may be very useful (for example, the distinction between static data,
dynamic data, and simulation drivers), while others may only be starting points, and
it will be interesting to watch how others adopt or modify the framework to these
ends.
For now, the ABCM Framework as described in this dissertation can be downloaded separately from the HWM Simulation and used to construct additional simulations of the same kind. The actual distribution of the ABCM has yet
to begin, and only time will tell if the theory, and the software, are appealing.
7.3 The HWM Simulation: Status, Prospects, Possibilities
The HWM Simulation is to be officially installed and made available to Hohokam
researchers during the summer and fall of 2009. As with the ABCM framework itself,
the ultimate breadth of use for the HWM simulation (and the additional components,
like the FlowAg toolkit) will not be known for some time. In lieu of concrete examples
of its extended use beyond my original designs, I will instead offer some possibilities
for how it could be used. In doing so I will raise two logical issues that can affect the
short- and long-term trajectory of how the HWM simulation may move forward. I
begin, however, by emphasizing the distinction between the large-scale trajectory of
the Hohokam, which on one level has driven the form and content of the simulation,
and the other, smaller scale issues to which the simulation can be applied.
There has been a strong emphasis in this discussion on the use of the HWM
Simulation to explore issues related to the so-called ‘collapse’ of the Hohokam. This
is an important arena for research if our goals are, as put forward in the opening
chapter, to relate our understanding of the Hohokam trajectory to our own decisions
as we try to plan for the challenges that our own impacts on our environment will
create. We would like to know why the Hohokam experiment in the Phoenix Valley
ended, and what lessons this can teach us so that ours can continue.
But if it is appropriate to emphasize this, it is also inappropriate to overemphasize
it. We are learning (see Dean 2007) that the collapse was not a dramatic, single event,
and its lessons, though valuable, may be subtle and plural. Moreover, what makes the
Hohokam ‘collapse’ remarkable is not that the collapse occurred but that it took place
after a millennium of continued success. We can take as many lessons, or perhaps
more, from understanding how the Hohokam persisted through centuries as we can
by understanding what eventually led to their departure.
This is more than an abstract point. From a simulation perspective- and this,
perhaps, explains why the collapse has been a recurring theme in the discussion here- one cannot study either persistence or failure without the simulation allowing for
both possibilities. A simulation aiming for such a goal must include states that we
consider successes and failures, and must allow for trajectories that move from one
to the other. In this sense failure is ‘built in’ to the simulation. However, this is only
accommodation of a possibility: it is not preordained. A simulation such as we need
to study the Hohokam must allow for ultimate failure of the system, but it should
not- and cannot- require it. In keeping with the multiple goals of the exploratory
approach we are as interested in scenarios in which the collapse does not occur as in
those where it does. The understanding we seek will be born from comparing these.
We are also interested in movement through a wider range of varying states- more
than just moving from success to failure. The differing periods of the Hohokam trajectory suggest large-scale reorganizations at different times; these are not collapses but
are simply alternative adaptations. Frameworks like resilience (Holling 1973, 2001;
Hegmon et al. 2008) and robustness (Jen 2005, Wagner 2005) ask us to examine these
transitions in strategic ways: the emphasis is on how the systems might be stressed
and what responses might have occurred to those stresses. There are, importantly,
a variety of stresses. For example, a recurring theme in this discussion, and an important aspect of the Hohokam world, is the availability of streamflow on the rivers
that fed their irrigation systems. But within ‘availability’ there are different kinds
of problems: shortages needed different responses than floods, and the timing of any
shortage or flood event can change the responses needed. But what may also have
mattered is the patterning of these events over longer spans of time: consistent, predictable
difficulties may be accommodated via responses that are unable to deal with events
that are unpredictable either in severity, timing, or both, even if the absolute measures
of the unpredictable challenges are lower. In our simulations we may be interested
in taking our systems to their breaking points, but we are also interested in finding
characteristics of systems that can accommodate ranges of such challenges, or move
among different states to those that best deal with the demands pressing upon them.
We can shift the emphasis of the simulation even further by zooming in more
closely, and asking what comprises the ‘states’ whose transitions we are hoping to
explore. While the major thrust of the simulation is understanding the trajectory
that results as the simulated system changes, there is, in advance of this, the question
of what these states should be. Each forms a kind of picture of Hohokam life. We
have at our disposal the range of modeling options laid out in Chapter 1: we can
vary the scope, resolution, composition, and coherence of our model’s fundamental
components. The picture that we choose to make can vary along any of these axes.
This implies- or at least is more than coincidental with- the fact that the picture can be one that accommodates one of several organizational scales: we can look at the whole
of a Hohokam canal system or only at some of its components, even at the level of
the household, or upwards to interactions across canal systems and among villages.
At any scale we may encounter the issue of elements in our picture for which
we do not have complete data; we can paint only an incomplete picture, and must
fill in gaps provisionally. The function of the modeling environment is to facilitate
this, and its ability to organize, integrate, and find inconsistencies among data is its
strength; alternatively, it is possible to use the simulation to refine our knowledge
of the boundaries of possibilities, so that we move from believing a wide range of
conditions might have prevailed to knowing, because we have narrowed what is compatible with other things we believe, that the actual condition must have been within
a narrower range. Any domain that falls within our picture, or that can be constructed out of the components in the cartoon, can be investigated within the model framework.
An easy example is the Hohokam agricultural calendar. Our knowledge of this will be
heavily dependent on the characteristics of the plants the Hohokam grew, which we
are still acquiring. It will also depend on the timing and capacity of water availability, which can be represented flexibly in the simulation. The agent-based example in
Chapter 6 allows agents to find one solution given one set of data, but this is only a
reflection of the broader general problem of how to fit the elements in a collection of
plants (their water needs and growth times) into the constraints of a water schedule.
That this problem has several additional levels- the idea that a schedule must incorporate flexibility to deal with interannual variation and other unforeseen conditions,
and the idea that the schedule that is optimum at one position in the canal system
depends on the schedules being implemented by the other actors along the system- illustrates that there are rich areas to explore far removed from the larger question
of the eventual end of the trajectory; that there are more components we can add
(for example, communal participation in the labor to maintain the canal system, and
means to integrate ‘tailenders’ (see Howard 2006) into the system even if their crops
are lost during times of shortage) carries this even further.
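As a concrete illustration of the scheduling problem just described, the sketch below (illustrative Python) checks whether a set of crops, each with a planting period, growth duration, and per-period water need, can be fit within a given water-delivery schedule. All crops, periods, and quantities are invented placeholders, not reconstructed Hohokam values.

# Illustrative sketch of the agricultural calendar problem: can a set of crops
# be fit into a fixed water-delivery schedule? All figures are invented.

water_available = [30, 40, 50, 45, 20, 10]   # deliverable water per period (arbitrary units)

crops = [
    # (name, first period, number of periods grown, water needed per period)
    ("maize",  0, 4, 8),
    ("beans",  1, 3, 5),
    ("squash", 2, 3, 6),
]

def schedule_feasible(crops, water_available):
    demand = [0] * len(water_available)
    for name, start, duration, need in crops:
        for t in range(start, start + duration):
            if t >= len(water_available):
                return False          # crop's season runs past the schedule
            demand[t] += need
    # Feasible only if demand never exceeds availability in any period.
    return all(d <= w for d, w in zip(demand, water_available))

print(schedule_feasible(crops, water_available))

A fuller treatment would add the complications named above: interannual variation in the water series, and the dependence of one field's optimum schedule on the schedules chosen elsewhere along the canal.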
The range of questions that can be addressed within the simulation framework
is quite broad, and is not fully knowable in advance. In some cases they will derive
from previous investigations. In Chapter 6 I described how the simulation could be
used to replicate some of Howard’s (1993b) paleohydraulic analyses. One part of his
efforts was to calculate an estimate of the productive capacity of the canal system.
We can see easily that the simulation can improve on this: the rough values for water
supply can be altered by refining our knowledge of the schedule that the plants- of
many varieties- would have required and that the canal system would have allowed;
changes in the canal system can be more closely tracked as well, as can limits borne
from the demands of management- a social issue- as well as damage to the system
and need for improvement and repair.
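For orientation, a back-of-envelope sketch of the kind of capacity estimate at issue follows. The discharge relation Q = A * V is standard hydraulics (and is the relation named in Busch et al. 1976); the canal dimensions, velocity, season length, and crop water duty below are invented placeholders, not Howard's reconstructed values.

# Back-of-envelope sketch of a paleohydraulic capacity estimate. All numeric
# values are hypothetical placeholders.

cross_section_m2 = 2.0             # canal cross-sectional area (m^2)
velocity_m_per_s = 0.8             # mean flow velocity (m/s)
season_seconds = 120 * 24 * 3600   # a 120-day irrigation season
duty_m3_per_ha = 9000.0            # water required per hectare per season

discharge_m3_per_s = cross_section_m2 * velocity_m_per_s       # Q = A * V
seasonal_volume_m3 = discharge_m3_per_s * season_seconds
irrigable_hectares = seasonal_volume_m3 / duty_m3_per_ha

print(f"Q = {discharge_m3_per_s:.2f} m^3/s; "
      f"roughly {irrigable_hectares:,.0f} ha irrigable under these assumptions")

The simulation's contribution, as described above, is to replace each of these fixed placeholders with values that are tracked, sourced, and allowed to vary over time.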
All of this builds on work that has been done before, but allows additional levels
to be constructed around it. I have discussed the possibility that these various questions and issues may be found to be crosscutting, so that assumptions that are made
to pursue one avenue may be undermined by the explorations along another; we are
following threads of a Quinean fabric (see Chapter 3) in which our holistic conceptions of the problems compel us to work opportunistically within domains of larger
questions; this is entirely appropriate for exploring a past that included all of these
elements operating at all of these scales simultaneously. But it raises a question of
boundaries. At the HWM level, the problem has been driven by an interest in the arc
of the larger Hohokam trajectory, which drove the selection of the initial attempt at
a cartoon. The ABCM level, however, could accommodate the addition of practically
any new component. Agents offer an almost-too-easy area where more elements could
be added (demography or kinship relationships, to name just two that have not been
discussed so far), but other aspects could be inserted at higher levels of the framework as well (tool raw material procurement, for example). The issue of boundaries
is not logical but normative: what should be incorporated into the HWM system?
If someone proposed that kinship relationships or resource procurement were key to
understanding the larger trajectory of the Hohokam or their eventual ‘collapse’, then
it would be easy to argue that these components, while not in the original ‘cartoon’,
should be added. But what if, in the context of using the HWM Simulation to pursue some set of questions not easily or apparently related to the larger trajectory, but making use of the components the HWM Simulation has provided, new elements were desired? These might allow investigation to move in useful directions, but may
not contribute to the questions about the Hohokam trajectory. How would this be
decided?
One could argue that there should be a boundary, so that the HWM System
remains cleanly devoted to one set of problems, even if fairly broadly defined, and
does not include too many elements that are unnecessary. However, some strong
counterarguments can be made. Even if the additional elements are available, they are only optional and could be omitted; and if available, they could be used as controls- showing that they make no difference whether included or not. Alternatively, the fact that
there exists a hope of integrating them into the original problem domain may indicate
that they might be related to it, even if the way they are is not immediately apparent.
But ultimately the strongest argument is that anything that helps clarify our picture
at the closer-in scales is likely to help understand the broader scale, even if we are,
in the end, able to show that it is not consequential to the larger trends we wish to
explore.
Speaking more practically, the hope is that the HWM Simulation will be used
and be found to be useful; the more pressing issue is to make the components within
it more available to those interested in the larger Hohokam trajectory, and to allow them to explore it as they need to. There is no need to squelch this in pursuit of only
the larger questions of the Hohokam trajectory or its endgame; the better purpose of
the HWM Simulation may be to address the many other possible issues first, leaving
the questions of the collapse for some later time when we understand these other,
prior issues more thoroughly.
7.4 Conclusion
Our archaeological investigations are always a product of the times in which we live.
We approach the past with the tools we have at hand, and these can, and do, change
through time. We are, at the moment, intrigued by what has been termed a ‘model-based’ archaeology, and have at our disposal software and hardware tools with which
to build certain kinds of models that would earlier have been out of reach. But our
toolkit is more than just the hardware and software; it comprises all of the
components of our intellectual approach. I have proposed that our current toolkit, in
this broader definition, is only coming into view. The ‘exploratory’ approach, which
permits us to pursue new kinds of questions, has lacked an intellectual framework
that guides our explorations. The software and hardware tools that have offered such
promise are not necessarily the ones that would fit best with our explorations. I have
offered the ABCM framework as an example and a first foray into providing the kinds
of tools that I believe are necessary for these new kinds of efforts. Beside it I have
proposed a collection of epistemological considerations and an organized set of goals
and maneuvers that I hope will form the beginning of a road map as we move into
this new territory. At the very least, epistemological vagueness and confusion over
the pursuits of similar but differing goals should not slow us as we move ahead.
Hohokam archaeology, too, is at a specific point in its history. We have the new
questions and the new ways to approach it that are offered by new theories, but we
have only the old data with which to work, and in general these data are not in
structures that are easily put to the new uses. The framework I have provided will,
I hope, offer researchers interested in the Hohokam new ways to assemble their data
and their insights, and thus new ways to pursue the sets of interconnected questions
that are known and will arise. The Hohokam bequeathed us a legacy that can be
measured only by the centuries through which they lived; my hope is that the HWM
framework will help further illuminate that legacy, and allow us to learn all that we
can from their remarkable civilization.
Appendices
Appendix A
A Guide to the Attached Files
A file attached to the electronic version of this document contains samples of three
aspects of simulation output. Two of these three are examples of an Analysis and
a Summary; Analyses and Summaries are defined in Chapter 4, and these specific
examples are referred to in Chapter 6. The actual output of an analysis or summary
from the HWM system would be a web page in html and supporting image files; here
these examples are reproduced in two forms. In the first, the entire web page is given,
including the image files; these can, in theory, be opened in any web browser. The
image files include .svg files for the graphs and may require a special reader appropriate to that browser (the original rendering was on Microsoft’s Internet Explorer with
Adobe’s SVG plug-in installed). Also included is a stylesheet file that provides formatting information, and a javascript file that allows the graphs to be interactive in a
limited way (the lines representing data series may be toggled on and off by clicking on
the series’ label). The second, provided to avoid difficulties with the first, is a manual
conversion of these to individual .pdf files. This is a somewhat distorted version and
the graphs are not interactive, but it may be more easily viewed by some readers.
The html versions also depict editable versions and illustrate how the user interface
allows the analysis or summary to be modified (though this is nonfunctional).
The third sample is a snapshot of the HWM system’s database of plant data. This
is intended mainly to show the structure of plant data, and to provide an indication of
the incipient data that are being collected in the simulation’s database. This file is a
.pdf version of the entire plant database; this is generated automatically by a special
routine packaged with the HWM system (it is not a manual conversion). However,
it does differ from the online version, in that the online version allows the user to
navigate through the hierarchy of Plants and Plant Configurations.
References
Abbott, David R.
2000 Ceramics and Community Organization among the Hohokam. University of
Arizona Press, Tucson, Arizona.
Abbott, David R.
2006 Hohokam Ritual and Economic Transformation: Ceramic Evidence from
the Phoenix Basin, Arizona. North American Archaeologist 27(4):285–310.
Abbott, David R., Scott E. Ingram, and Brent G. Kober
2006 Hohokam Exchange and Early Classic Period Organization in Central Arizona: Focal Villages or Linear Communities? Journal of Field Archaeology
31:285–305.
Achinstein, Peter
1965 Theoretical Models. The British Journal for the Philosophy of Science
16(62):102–120.
Ackerly, Neal W., JoAnn E. Kisselburg, and Richard J. Martynec
1989 Canal Junctions and Water Control Features. In Prehistoric Agricultural
Activities on the Lehi-Mesa Terrace: Perspectives on Hohokam Irrigation Cycles, edited by T. Kathleen Henderson and Neal W. Ackerly, pp. 146–183.
Northland Research Inc., Flagstaff, Arizona.
Ackerly, Neal W., and Richard J. Martynec
1989 Descriptive Characteristics of Canals within the Las Acequias Irrigation
System. In Prehistoric Agricultural Activities on the Lehi-Mesa Terrace: Perspectives on Hohokam Irrigation Cycles, edited by T. Kathleen Henderson and
Neal W. Ackerly, pp. 94–145. Northland Research Inc., Flagstaff, Arizona.
Ackerly, Neal W., Jerry Brian Howard, and Randall H. McGuire
1987 La Ciudad Canals: A Study of Hohokam Irrigation Systems at the Community Level. Office of Cultural Resource Management, Department of Anthropology, Arizona State University, Tempe, Arizona.
Aldenderfer, Mark
2002 The Larger Context of Data Dissemination and Preservation in Archaeology.
In Delivering Archaeological Data Electronically, edited by Mary S. Carroll, pp.
101–112. Society for American Archaeology, Washington, D.C..
Altaweel, Mark
2008 Investigating agricultural sustainability and strategies in northern
Mesopotamia: results produced using a socio-ecological modeling approach.
Journal of Archaeological Science 35:821–835.
Ammerman, A. J., and L. L. Cavalli-Sforza
1984 The Neolithic Transition and the Genetics of Populations in Europe. Princeton University Press, Princeton, New Jersey.
Bayman, James M.
2001 The Hohokam of Southwest North America. Journal of World Prehistory
15(3):257–311.
Bayman, James M.
2002 Hohokam Craft Economies and the Materialization of Power. Journal of
Archaeological Method and Theory 9(1):69–95.
Bayman, James M., and Alan P. Sullivan, III
2008 Property, Identity, and Macroeconomy in the Prehispanic Southwest. American Anthropologist 110(1):6–20.
Beekman, Christopher S., and William W. Baden
2005 Continuing the Revolution. In Nonlinear Models for Archaeology and Anthropology: Continuing the Revolution, edited by Christopher S. Beekman and
William W. Baden, pp. 1–12. Ashgate, Burlington, Vermont.
Bentley, R. Alexander
2003 An Introduction to Complex Systems. In Complex Systems and Archaeology:
Empirical and Theoretical Foundations, pp. 9–23. University of Utah Press,
Salt Lake City, Utah.
Binford, Lewis R.
1972a Archaeological Perspectives. In An Archaeological Perspective, pp. 78–104.
Seminar Press, New York City.
Binford, Lewis R.
1972b Model Building-Paradigms, and the Current State of Paleolithic Research.
In An Archaeological Perspective, pp. 244–294. Seminar Press, New York City.
Binford, Lewis R.
1981a Behavioral Archaeology and the “Pompeii Premise”. Journal of Anthropological Research 37(3):195–208.
Binford, Lewis R.
1981b Middle-range Research and the Role of Actualistic Studies. In Bones:
Ancient Men and Modern Myths, pp. 21–30. Academic Press, New York City.
Binford, Lewis R.
1999 Time as a Clue to Cause. Proceedings of the British Academy 101:1–35.
Blanton, Richard E., Gary M. Feinman, Stephen A. Kowalewski, and Peter N. Peregrine
1996a A Dual-Processual Theory for the Evolution of Mesoamerican Civilization.
Current Anthropology 37(1):1–14.
Blanton, Richard E., Gary M. Feinman, Stephen A. Kowalewski, and Peter N. Peregrine
1996b Reply. Current Anthropology 37(1):65–68.
Borgelt, Christian, and Rudolf Kruse
2000 Abductive Inference with Probabilistic Networks. In Abductive Reasoning
and Learning, Handbook of Defeasible Reasoning and Uncertainty Management
Systems, Vol. 4, edited by Dov M. Gabbay and Rudolf Kruse, pp. 281–314.
Kluwer Academic Publishers, Dordrecht, The Netherlands.
Brantingham, P. Jeffrey
2003 A Neutral Model of Stone Raw Material Procurement. American Antiquity
68(3):487–509.
Brown, James Robert
Spring 2008 Thought Experiments. In The Stanford Encyclopedia of Philosophy,
edited by Edward N. Zalta, URL: http://plato.stanford.edu/archives/spr2008/entries/thought-experiment/.
Brunner, Gary W.
2002 HEC-RAS: River Analysis System Hydraulic Reference Manual. U.S. Army
Corps of Engineers Hydrologic Engineering Center, Davis, California.
Burke, James
1978 Connections. Little, Brown, Boston.
Busch, C. D., Mark Raab, and R. C. Busch
1976 Q = A * V: Prehistoric Water Canals in Southern Arizona. American
Antiquity 41(4):531–534.
Carroll, Mary S. (editor)
2002 Delivering Archaeological Data Electronically. Society for American Archaeology, Washington, D.C.
Carroll, Mary S., and Bart Marable
2002 Where have all the data gone? Issues in website design. In Delivering Archaeological Data Electronically, edited by Mary S. Carroll, pp. 63–72. Society
for American Archaeology, Washington, D.C.
Cartwright, Nancy
1999 Models and the Limits of Theory: Quantum Hamiltonians and the BCS
Model of Superconductivity. In Models as Mediators: Perspectives on Natural
and Social Science, edited by Mary S. Morgan and Margaret Morrison, pp.
241–281. Cambridge University Press, Cambridge, UK.
Castetter, Edward F., and Willis H. Bell
1942 Pima and Papago Indian Agriculture. Number 1 in Inter-Americana Studies,
University of New Mexico Press, Albuquerque, New Mexico.
Chauviré, Christiane
2005 Peirce, Popper, and the idea of a logic of discovery. Semiotica 153(1):209–
221.
Christiansen, John H., and Mark Altaweel
2006a Simulation of Natural and Social Process Interactions: An Example from
Bronze Age Mesopotamia. Social Science Computer Review 24:209–226.
Christiansen, John H., and Mark Altaweel
2006b Understanding Ancient Societies: A New Approach Using Agent-Based
Holistic Modeling. Structure and Dynamics 1(2):Article 7.
Cirera, Ramon
1994 Carnap and the Vienna Circle: Empiricism and Logical Syntax. Rodopi,
Atlanta, Georgia.
Clarke, David L.
1972a Models and Paradigms in Archaeology. In Models in Archaeology, edited
by David L. Clarke, pp. 1–60. Methuen and Co., Ltd., London.
Clarke, David L. (editor)
1972b Models in Archaeology. Methuen and Co., Ltd., London.
Clark, Jeffery J.
2001 Tracking Prehistoric Migrations: Pueblo Settlers among the Tonto Basin
Hohokam. University of Arizona Press, Tucson, Arizona.
Codd, Edgar F.
1970 A Relational Model of Data for Large Shared Data Banks. Communications
of the ACM 13(6):377–387.
Codd, Edgar F.
1979 Extending the Database Relational Model to Capture More Meaning. ACM
Transactions on Database Systems 4(4):397–434.
Codd, Edgar F.
1990 The Relational Model for Database Management: Version 2. Addison-Wesley, Reading, Massachusetts.
Cordell, Linda S.
1984 Prehistory of the Southwest. The New World Archaeological Record Series,
Academic Press, Inc. (Harcourt Brace Jovanovich), San Diego, California.
Cordell, Linda S., and George J. Gumerman
1989 Cultural Interaction in the Prehistoric Southwest. In Dynamics of Southwest
Prehistory, edited by Linda S. Cordell and George J. Gumerman, pp. 1–17.
Smithsonian Institution Press, Washington, D.C.
Crown, Patricia L.
1990 The Hohokam of the American Southwest. Journal of World Prehistory
4(2):223–255.
Crumley, Carole L.
1995 Heterarchy and the Analysis of Complex Societies. In Heterarchy and the
Analysis of Complex Societies, edited by Robert M. Ehrenreich, Carole L.
Crumley and Janet E. Levy, pp. 1–5. American Anthropological Association,
Arlington, Virginia.
Date, C. J.
1998 Normalization is No Panacea! Database Programming and Design (April):54–61.
Dean, Rebecca M.
2005 Site-use Intensity, Cultural Modification of the Environment, and the Development of Agricultural Communities in Southern Arizona. American Antiquity
70(3):403–431.
Dean, Rebecca M.
2007 Hunting intensification and the Hohokam “collapse”. Journal of Anthropological Archaeology 26:109–132.
Denevan, William M.
2001 Cultivated Landscapes of Native Amazonia and the Andes. Oxford University Press, Oxford.
Dentamaro, Federica, Paolo G. De Luca, Ludovico Genco, Giulia Perrino, Chiara
Cannito, Mariannunziata A. Stufano, and Maria G. Sibilano
2007 A CIDOC CRM-Based Ontology System. In Digital Discovery: Exploring
New Frontiers in Human Heritage: Computer Applications and Quantitative
Methods in Archaeology Proceedings of the 34th Conference, Fargo, United
States, April 2006, edited by Jeffrey T. Clark and Emily M. Hagermeister, pp.
437–444. ARCHAEOLINGUA, Budapest, Hungary.
Diamond, Jared
1999 Guns, Germs, and Steel. W. W. Norton and Co., New York City.
Doolittle, William E.
1990 Canal Irrigation in Prehistoric Mexico: The Sequence of Technological
Change. University of Texas Press, Austin, Texas.
Doolittle, William E.
2000 Cultivated Landscapes of Native North America. Oxford University Press,
Oxford.
Doyel, David E.
1992 A Short History of Hohokam Research. In Emil W. Haury's Prehistory of
the American Southwest, pp. 193–219. Tucson, Arizona.
Doyel, David E.
2007 Irrigation, Production, and Power in Phoenix Basin Hohokam Society. In
The Hohokam Millennium, edited by Suzanne K. Fish and Paul R. Fish, pp.
83–89. School for Advanced Research Press, Santa Fe, New Mexico.
Dunnell, Robert C.
1971 Systematics in Prehistory. Free Press, New York.
Eiteljorg, II, Harrison
2002 Introduction. In Delivering Archaeological Data Electronically, edited by
Mary S. Carroll, pp. vii–xii. Society for American Archaeology, Washington,
D.C.
Eiteljorg, II, Harrison
2007 Archaeological Computing. Center for the Study of Architecture, Bryn
Mawr, Pennsylvania.
Ellis, G. Lain, and Michael R. Waters
1991 Cultural and Landscape Influences on Tucson Basin Hohokam Settlement.
American Anthropologist 93(1):125–137.
Ensor, Bradley E., Maria O. Ensor, and Gregory W. De Vries
2003 Hohokam Political Ecology and Vulnerability: Comments on Waters and
Ravesloot. American Antiquity 68(1):169–181.
Epstein, Joshua M., and Robert Axtell
1996 Growing Artificial Societies: Social Science from the Bottom Up. Brookings
Institution Press, Washington, D. C.
Epstein, Joshua M.
2005 Remarks on the Foundations of Agent-Based Generative Social Science. Unpublished MS available at URL: http://www.santafe.edu/research/publications/workingpapers/05-06-024.pdf. A later version appears in 2006: Handbook of Computational Economics, Volume 2: Agent-Based Computational Economics, edited by Leigh Tesfatsion and Kenneth L. Judd, pp. 1585–1604, North-Holland Press.
Feigl, Herbert
1969 The Origin and Spirit of Logical Positivism. In The Legacy of Logical Positivism, edited by Peter Achinstein and Stephen F. Barker, pp. 3–24. The Johns
Hopkins Press, Baltimore, Maryland.
Fish, Paul R.
1989 The Hohokam: 1,000 Years of Prehistory in the Sonoran Desert. In Dynamics of Southwest Prehistory, edited by Linda S. Cordell and George J.
Gumerman, pp. 19–63. Smithsonian Institution Press, Washington, D.C.
Fish, Suzanne K.
1999 How Complex Were the Southwestern Great Towns' Polities? In Great
Towns and Regional Polities in the Southwest and Southeast, edited by Jill
Neitzel, pp. 45–58. University of New Mexico Press, Albuquerque, New Mexico.
Fish, Suzanne K.
2000 Hohokam Impacts on Sonoran Desert Environment. In Imperfect Balance:
Landscape Transformations in the Precolumbian Americas, edited by David L.
Lentz, pp. 251–280. Columbia University Press, New York City.
Fish, Suzanne K.
2006 Cross-cultural Perspectives on Prehispanic Hohokam Agricultural Potential.
In Environmental Change and Human Adaptation in the Ancient American
Southwest, pp. 46–68. University of Utah Press, Salt Lake City, Utah.
Fish, Suzanne K., and Paul R. Fish
2000 The Institutional Contexts of Hohokam Complexity and Inequality. In Alternative Leadership Strategies in the Prehispanic Southwest, edited by Barbara J.
Mills, pp. 154–167. University of Arizona Press, Tucson, Arizona.
Fish, Suzanne K., and Paul R. Fish
2004 Unsuspected Magnitudes: Expanding the Scale of Hohokam Agriculture.
In The Archaeology of Global Change: The Impact of Humans on Their Environment, edited by Charles L. Redman, Steven R. James, Paul R. Fish and
J. Daniel Rogers, pp. 208–223. Smithsonian Books, Washington, D.C.
Fish, Suzanne K., and Paul R. Fish
2007 The Hohokam Millennium. In The Hohokam Millennium, edited by
Suzanne K. Fish and Paul R. Fish, pp. 1–11. School for Advanced Research
Press, Santa Fe, New Mexico.
Fish, Suzanne K., Paul R. Fish, and John H. Madsen
1992a Evidence for Large-Scale Agave Cultivation in the Marana Community. In
The Marana Community in the Hohokam World, edited by Suzanne K. Fish,
Paul R. Fish and John H. Madsen, Anthropological Papers of the University
of Arizona, pp. 73–87. University of Arizona Press, Tucson, Arizona.
Fish, Suzanne K., Paul R. Fish, and John H. Madsen
1992b Evolution and Structure of the Classic Period Marana Community. In
The Marana Community in the Hohokam World, edited by Suzanne K. Fish,
Paul R. Fish and John H. Madsen, Anthropological Papers of the University
of Arizona, pp. 20–40. University of Arizona Press, Tucson, Arizona.
Flach, Peter A., and Antonis C. Kakas
2000 On the relation between abduction and inductive learning. In Abductive
Reasoning and Learning, Handbook of Defeasible Reasoning and Uncertainty
Management Systems, Vol. 4, edited by Dov M. Gabbay and Rudolf Kruse,
pp. 1–33. Kluwer Academic Publishers, Dordrecht, The Netherlands.
Fogelin, Lars
2007 Inference to the Best Explanation: A common and effective form of archaeological reasoning. American Antiquity 72(4):603–625.
Forsythe, William C., Edward J. Rykiel, Jr., Randal S. Stahl, Hsin-i Wu, and
Robert M. Schoolfield
1995 A model comparison for daylength as a function of latitude and day of year.
Ecological Modelling 80:87–95.
Frigg, Roman, and Stephan Hartmann
Spring 2008 Models in Science. In The Stanford Encyclopedia of Philosophy,
edited by Edward N. Zalta, URL: http://plato.stanford.edu/archives/spr2008/entries/models-science/.
Gibbon, Guy
1989 Explanation in Archaeology. Blackwell, Oxford.
Gilbert, Nigel
2008 Agent-Based Models. SAGE Publications, Los Angeles.
Graybill, Donald A., David A. Gregory, Gary S. Funkhouser, and Fred L. Nials
2006 Long-Term Streamflow Reconstructions, River Channel Morphology, and
Aboriginal Irrigation Systems along the Salt and Gila Rivers. In Environmental
Change and Human Adaptation in the Ancient American Southwest, pp. 69–
123. University of Utah Press, Salt Lake City, Utah.
Griffin, Arthur F., and Charles Stanish
2007 An Agent-based Model of Prehistoric Settlement Patterns and Political
Consolidation in the Lake Titicaca Basin of Peru and Bolivia. Structure and
Dynamics 2(2).
Griffin, William A.
2006 Agent-Based Modeling for the Theoretical Biologist. Biological Theory
1(4):404–409.
Gumerman, George J.
2007 The Hohokam: The Who and the Why. In The Hohokam Millennium,
edited by Suzanne K. Fish and Paul R. Fish, pp. 141–146. School for Advanced
Research Press, Santa Fe, New Mexico.
Hamilton, A. G.
1978 Logic for Mathematicians. Cambridge University Press, Cambridge, England.
Harry, Karen G., and James M. Bayman
2000 Leadership Strategies among the Classic Period Hohokam. In Alternative
Leadership Strategies in the Prehispanic Southwest, edited by Barbara J. Mills,
pp. 136–153. University of Arizona Press, Tucson, Arizona.
Hartmann, Stephan
1999 Models and Stories in Hadron Physics. In Models as Mediators: Perspectives on Natural and Social Science, edited by Mary S. Morgan and Margaret
Morrison, pp. 326–346. Cambridge University Press, Cambridge, UK.
Haury, Emil W.
1976 The Hohokam: Desert Farmers and Craftsmen. The University of Arizona
Press, Tucson, Arizona.
Hegmon, Michelle
2003 Setting Theoretical Egos Aside: Issues and Theory in North American
Archaeology. American Antiquity 68(2):213–243.
Hegmon, Michelle, Matthew A. Peeples, Ann Kinzig, Stephanie Kulow, Cathryn M.
Meegan, and Margaret C. Nelson
2008 Social Transformation and Its Human Costs in the Prehispanic U.S. Southwest. American Anthropologist 110(3):313–324.
Holling, C. S.
1973 Resilience and Stability of Ecological Systems. Annual Review of Ecology
and Systematics 4:1–23.
Holling, C. S.
2001 Understanding the Complexity of Economic, Ecological, and Social Systems. Ecosystems 4:390–405.
Howard, Jerry Brian
1990 Paleohydraulics: Techniques for Modeling the Operation and Growth of Prehistoric Canal Systems. Unpublished Master’s thesis, Arizona State University,
Tempe, Arizona.
Howard, Jerry Brian
1993a Between Desiccation and Flood: A Computer Simulation of Irrigation
Agriculture and Food Storage During the Hohokam Classic Period. Submitted
to the Ruppe Prize Competition, Department of Anthropology, Arizona State
University.
Howard, Jerry Brian
1993b A Paleohydraulic Approach to Examining Agricultural Intensification in
Hohokam Irrigation Systems. Research in Economic Anthropology, Supplement
7, pp. 263–324.
Howard, Jerry Brian
2006 Hohokam Irrigation Communities: A Study of Internal Structure, External Relationships and Sociopolitical Complexity. Unpublished Ph.D. thesis, Arizona State University, Tempe, Arizona.
Hunt, Robert C.
1988 Size and the Structure of Authority in Canal Irrigation Systems. Journal
of Anthropological Research 44(4):335–355.
Hunt, Robert C., David Guillet, David R. Abbott, James M. Bayman, Paul R. Fish,
Suzanne K. Fish, Keith Kintigh, and James A. Neely
2005 Plausible Ethnographic Analogies for the Social Organization of Hohokam
Canal Irrigation. American Antiquity 70(3):433–456.
Janssen, Marco, and Marty Anderies
2007 Stylized Models to Analyze Robustness of Irrigation Systems. In The Model-Based Archaeology of Socionatural Systems, edited by Timothy Kohler and Sander van der Leeuw, pp. 157–173. School for Advanced Research Press, Santa Fe, New Mexico.
Jen, Erica
2005 Introduction. In Robust Design, edited by Erica Jen, pp. 1–6. Oxford University Press, Oxford.
Josephson, John R.
2000 Smart Inductive Generalizations are Abductions. In Abduction and Induction: Essays on the Relation and Integration, edited by Peter A. Flach and
Antonis C. Kakas, pp. 31–44. Kluwer Academic Publishers, Dordrecht.
Josephson, John R., and Susan G. Josephson (editors)
1994 Abductive Inference: Computation, Philosophy, Technology. Cambridge
University Press, Cambridge, UK.
Kauffman, Stuart
1995 At Home in the Universe: The Search for Laws of Self-Organization and
Complexity. Oxford University Press, New York.
Kintigh, Keith
2006 The Promise and Challenge of Archaeological Data Integration. American
Antiquity 71(3):567–578.
Kohler, Timothy, James Kresl, Carla Van West, Eric Carr, and Richard H. Wilshusen
1999 Be There Then: A Modeling Approach to Settlement Determinants and
Spatial Efficiency Among Late Ancestral Pueblo Populations of the Mesa
Verde Region, U.S. Southwest. In Dynamics of Human and Primate Societies: Agent-Based Modeling of Social and Spatial Processes, edited by Timothy Kohler and George J. Gumerman, pp. 145–178. Oxford University Press,
New York City.
Kohler, Timothy, and Sander van der Leeuw (editors)
2007 The Model-Based Archaeology of Socionatural Systems. School for Advanced
Research Press, Santa Fe, New Mexico.
Kuhn, Thomas S.
1970 The Structure of Scientific Revolutions. 2nd ed. The University of Chicago
Press, Chicago.
Lansing, J. Stephen
1991 Priests and Programmers: Technologies of Power in the Engineered Landscape of Bali. Princeton University Press, Princeton, New Jersey.
Lansing, J. Stephen
2002 Artificial Societies and the Social Sciences. Artificial Life 8(3):279–292.
Lansing, J. Stephen
2003 Complex Adaptive Systems. Annual Review of Anthropology 32:183–204.
Lansing, J. Stephen, James N. Kremer, and Barbara B. Smuts
1998 System-Dependent Selection, Ecological Feedback, and the Emergence of
Functional Structure in Ecosystems. Journal of Theoretical Biology 192:377–
391.
Law, Averill M., and W. David Kelton
2000 Simulation Modeling and Analysis. McGraw-Hill, Boston, Massachusetts.
Leach, J. D.
2007 Reconsidering Ancient Caloric Yields From Cultivated Agave In Southern
Arizona. Journal of the Arizona-Nevada Academy of Science 39(1):18–21.
Levins, Richard
1966 The Strategy of Model Building in Population Biology. American Scientist
54(4):421–431.
Lipton, Peter
2004 [1991] Inference to the Best Explanation. Routledge, London.
Lovelock, James
1990 Exploring Daisyworld. In The Ages of Gaia, pp. 45–64. Bantam Books, New
York.
Lutgens, Frederick K., and Edward J. Tarbuck
1989 The Atmosphere: An Introduction to Meteorology. Prentice Hall, Englewood
Cliffs, New Jersey.
Mabry, Jonathan
1999 Las Capas and Early Irrigation Farming. Archaeology Southwest 13(1):14.
Mabry, Jonathan
2005 Changing Knowledge and Ideas about the First Farmers in Southeastern
Arizona. In The Late Archaic Across the Borderlands, edited by Bradley J.
Vierra, pp. 41–83. University of Texas Press, Austin, Texas.
Mabry, Jonathan (editor)
2008 Las Capas: Early Irrigation and Sedentism in a Southwestern Floodplain.
Center for Desert Archaeology, Tucson, Arizona.
Masse, Bruce
1981 Prehistoric Irrigation Systems in the Salt River Valley, Arizona. Science
213:408–414.
Mayhall, C. Wayne
2003 On Logical Positivism. Wadsworth/Thomson Learning, Inc., United States
of America.
McCartney, Peter
2002 Long-Term Management and Accessibility of Archaeological Research Data.
In Delivering Archaeological Data Electronically, edited by Mary S. Carroll, pp.
91–100. Society for American Archaeology, Washington, D.C.
Midvale, Frank
1968 Prehistoric Irrigation in the Salt River Valley, Arizona. Kiva 34(1):28–32.
Miller, John H., and Scott E. Page
2007 Complex Adaptive Systems: An Introduction to Computational Models of
Social Life. Princeton University Press, Princeton, New Jersey.
Mills, Barbara J. (editor)
2000 Alternative Leadership Strategies in the Prehispanic Southwest. University
of Arizona Press, Tucson, Arizona.
Mitchell, William P.
1973 The Hydraulic Hypothesis: A Reappraisal. Current Anthropology 14(5):532–534.
Morrison, Margaret, and Mary S. Morgan
1999 Models as Mediating Instruments. In Models as Mediators: Perspectives on
Natural and Social Science, edited by Mary S. Morgan and Margaret Morrison,
pp. 10–37. Cambridge University Press, Cambridge, UK.
Muenchrath, Deborah Ann
1995 Productivity, Morphology, Phenology, and Physiology of a Desert-Adapted
Native American Maize (Zea mays L.) Cultivar. Unpublished Ph.D. thesis,
Iowa State University, Ames, Iowa.
Muenchrath, Deborah Ann, Maya Kuratomi, Jonathan A. Sandor, and Jeffrey A.
Homburg
2002 Observational study of maize production systems of Zuni farmers in semiarid New Mexico. Journal of Ethnobiology 22(1):1–33.
Murphy, John Todd
2000 Approaching Maya Polities From the Side: Models of Classic Maya Political
Structure. Unpublished Master's thesis, University of Arizona, Tucson, Arizona.
Neitsch, S. L., J. G. Arnold, J. R. Kiniry, and J. R. Williams
2005 Soil and Water Assessment Tool Theoretical Documentation: Version 2005.
Grassland, Soil and Water Research Laboratory, Agricultural Research Service, Temple, Texas.
North, Michael J., Tom R. Howe, Nick T. Collier, and J. R. Vos
2005 The Repast Simphony Runtime System. In Proceedings of the Agent
2005 Conference on Generative Social Processes, Models and Mechanisms,
ANL/DIS-06-5, edited by C. M. Macal, Michael J. North and D. Sallach, pp.
151–158. Co-Sponsored by Argonne National Laboratory and the University
of Chicago, October 13-15.
Odenbaugh, Jay
2003 Complex Systems, Tradeoffs, and Theoretical Population Biology: Levin's
“Strategy of Model Building in Population Biology” Revisited. Philosophy of
Science 70(5):1496–1507.
Orzack, Steven Hecht
2005 Discussion: What, if anything, is “The Strategy of Model Building in Population Biology?” A Comment on Levins (1966) and Odenbaugh (2003). Philosophy of Science 72:479–485.
Orzack, Steven Hecht, and Elliott Sober
1993 A Critical Assessment of Levins's The Strategy of Model Building in Population Biology (1966). The Quarterly Review of Biology 68(4):533–546.
Perez, Pascal, and David Batten (editors)
2003 Complex Science for a Complex World: Exploring Human Ecosystems with
Agents. Australian National University E Press, Canberra, Australia.
Premo, Luke S.
2006 Patchiness and Prosociality: Modeling the Evolution and Archaeology of
Plio-Pleistocene Hominin Food Sharing. Unpublished Ph.D. thesis, University
of Arizona, Tucson, Arizona.
Premo, Luke S.
in press On the Role of Agent-Based Modeling in Post-Positivist Archaeology.
Courtesy of the author. To appear in Archaeological Simulations: Into the 21st
Century, edited by A. Costopoulos and Mark Lake. University of Utah Press.
Quine, Willard Van Orman
1951 Main Trends in Recent Philosophy: Two Dogmas of Empiricism. The Philosophical Review 60(1):20–43.
Read, Dwight W.
1990 The Utility of Mathematical Constructs in Building Archaeological Theory.
In Mathematics and Information Science in Archaeology: A Flexible Framework, edited by Albertus Voorrips, pp. 29–60. Holos, Bonn, Germany.
Redman, Charles L., and Ann Kinzig
2004 Water Can Flow Uphill: A Narrative of Central Arizona. Unpublished MS
courtesy of the authors.
Reid, J. Jefferson, and Stephanie Whittlesey
1997 The Archaeology of Ancient Arizona. University of Arizona Press, Tucson,
Arizona.
Renfrew, Colin, and J. F. Cherry (editors)
1986 Peer Polity Interaction and Socio-Political Changes. Cambridge University
Press, Cambridge.
Rey, Georges
2003 The Analytic/Synthetic Distinction. Stanford Encyclopedia of Philosophy
URL: http://plato.stanford.edu/entries/analytic-synthetic/.
Rice, Glen
1998 War and Water: An Ecological Perspective on Hohokam Irrigation. Kiva 63(3):263–301.
Rice, Glen
2000 Hohokam and Salado Segmentary Organization. In Salado, edited by Jeffrey S. Dean, pp. 143–166. University of New Mexico Press, Albuquerque, New
Mexico.
Scarborough, Vernon L.
2003 The Flow of Power: Ancient Water Systems and Landscapes. School of
American Research Press, Santa Fe, New Mexico.
Schaafsma, Hoski
2007 Hohokam Field Building: Silted Fields in the Northern Phoenix Basin. Kiva
72(4):431–457.
Schiffer, Michael Brian
1995 Behavioral Archaeology: First Principles. University of Utah Press, Salt
Lake City, Utah.
Schiffer, Michael Brian, J. Jefferson Reid, and William Rathje
1995 The Four Strategies of Behavioral Archaeology. In Behavioral Archaeology:
First Principles, Foundations of Archaeological Inquiry, pp. 67–73. University
of Utah Press, Salt Lake City, Utah.
Showalter, Pamela Sands
1993 A Thematic Mapper Analysis of the Prehistoric Hohokam Canal System,
Phoenix, Arizona. Journal of Field Archaeology 20(1):77–90.
Sigmund, Karl
1993 Games of Life: Explorations in Ecology, Evolution and Behaviour. Penguin,
New York City.
Skyrms, Brian
1996 Evolution of the Social Contract. Cambridge University Press, Cambridge,
England.
Smith, Norman Kemp
1964 Immanuel Kant's Critique of Pure Reason. MacMillan and Company Ltd,
London.
Southall, Aidan
1988 The Segmentary State in Africa and Asia. Comparative Studies in Society
and History 30(1):52–82.
Stanojević, Mladen, Sanja Vraneš, and Dušan Velašević
December 1994 Using Truth Maintenance Systems: A Tutorial. IEEE Expert pp.
46–56.
Tang, Shui Yan
1992 Institutions and Collective Action: Self-Governance in Irrigation. Institute
for Contemporary Studies Press, San Francisco, California.
Turney, Omar A.
1929a Prehistoric Irrigation I. Arizona Historical Review 2(1):12–52.
Turney, Omar A.
1929b Prehistoric Irrigation II. Arizona Historical Review 2(2):11–52.
Turney, Omar A.
1929c Prehistoric Irrigation III. Arizona Historical Review 2(3):9–45.
Turney, Omar A.
1929d Prehistoric Irrigation IV. Arizona Historical Review 2(4):33–73.
van Benthem, Johan
2000 Foreword. In Abduction and Induction: Essays on the Relation and Integration, edited by Peter A. Flach and Antonis C. Kakas, pp. ix–xi. Kluwer
Academic Publishers, Dordrecht.
Wagner, Andreas
2005 Robustness and Evolvability in Living Systems. Princeton University Press,
Princeton.
Wahlin, Brian T., and Albert J. Clemmens
2006a Automatic Downstream Water-Level Feedback Control of Branching
Canal Networks: Theory. Journal of Irrigation and Drainage Engineering
May/June:198–207.
Wahlin, Brian T., and Albert J. Clemmens
2006b Automatic Downstream Water-Level Control of Branching Canal Networks: Simulation Results. Journal of Irrigation and Drainage Engineering
May/June:208–219.
Waters, Michael R., and John C. Ravesloot
2000 Late Quaternary Geology of the Middle Gila River, Gila River Indian Reservation, Arizona. Quaternary Research 54:49–57.
Waters, Michael R., and John C. Ravesloot
2001 Landscape Change and the Cultural Evolution of the Hohokam along the
Middle Gila River and Other River Valleys in South-Central Arizona. American Antiquity 66(2):285–299.
Waters, Michael R., and John C. Ravesloot
2003 Disaster or Catastrophe: Human Adaptation to High- and Low-Frequency
Landscape Processes-A Reply to Ensor, Ensor, and Devries. American Antiquity 68(2):400–405.
Wilcox, David R.
1999 A Peregrine View of Macroregional Systems in the North American Southwest, A.D. 750-1250. In Great Towns and Regional Polities in the Southwest
and Southeast, edited by Jill Neitzel, pp. 115–141. University of New Mexico
Press, Albuquerque, New Mexico.
Wilson, John P.
2003 Peoples of the Middle Gila: A Documentary History of the Pimas and
Maricopas, 1500s–1945. Unpublished MS on file at Arizona State Museum
Library, Tucson, Arizona.
Winterhalder, Bruce
2002 Models. In Darwin and Archaeology, edited by John P. Hart and John Edward Terrell, pp. 202–223. Bergin and Garvey, Westport, Connecticut.
Wobst, H. Martin
1974 Boundary Conditions for Paleolithic Social Systems: A Simulation Approach. American Antiquity 39(2):147–178.
Woodbury, Richard B.
1960 The Hohokam Canals at Pueblo Grande, Arizona. American Antiquity
26(2):267–270.
Woodbury, Richard B.
1961 A Reappraisal of Hohokam Irrigation. American Anthropologist 63(3):550–
560.
Yoffee, Norman
2005 Myths of the Archaic State: Evolution of the Earliest Cities, States, and
Civilizations. Cambridge University Press, Cambridge.
Yoffee, Norman, Suzanne K. Fish, and George R. Milner
1999 Communidades, Ritualities, Chiefdoms: Social Evolution in the American
Southwest and Southeast. In Great Towns and Regional Polities in the Southwest and Southeast, edited by Jill Neitzel, pp. 261–271. University of New
Mexico Press, Albuquerque, New Mexico.