Author(s): Reynolds, James V.; Smith, Craig L.
Title: Virtual environment training on mobile devices
Publisher: Monterey, California: Naval Postgraduate School
Issue Date: 2013-09
URL: http://hdl.handle.net/10945/37700
NAVAL
POSTGRADUATE
SCHOOL
MONTEREY, CALIFORNIA
THESIS
VIRTUAL ENVIRONMENT TRAINING ON MOBILE
DEVICES
by
James V. Reynolds
Craig L. Smith
September 2013
Thesis Advisor: Joseph Sullivan
Second Reader: Erik Johnson
This thesis was performed at the MOVES Institute
Approved for public release; distribution is unlimited
REPORT DOCUMENTATION PAGE
Form Approved OMB No. 0704-0188
Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing
instruction, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection
of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including
suggestions for reducing this burden, to Washington headquarters Services, Directorate for Information Operations and Reports, 1215
Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget, Paperwork Reduction
Project (0704-0188) Washington DC 20503.
1. AGENCY USE ONLY (Leave blank)
2. REPORT DATE: September 2013
3. REPORT TYPE AND DATES COVERED: Master's Thesis
4. TITLE AND SUBTITLE: VIRTUAL ENVIRONMENT TRAINING ON MOBILE DEVICES
5. FUNDING NUMBERS
6. AUTHOR(S): James V. Reynolds, Craig L. Smith
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Naval Postgraduate School, Monterey, CA 93943-5000
8. PERFORMING ORGANIZATION REPORT NUMBER
9. SPONSORING / MONITORING AGENCY NAME(S) AND ADDRESS(ES): N/A
10. SPONSORING / MONITORING AGENCY REPORT NUMBER
11. SUPPLEMENTARY NOTES The views expressed in this thesis are those of the author and do not reflect the
official policy or position of the Department of Defense or the U.S. government. IRB protocol number
____NPS.2013.0061IR-EP7-A____.
12a. DISTRIBUTION / AVAILABILITY STATEMENT: Approved for public release; distribution is unlimited
12b. DISTRIBUTION CODE: A
13. ABSTRACT (maximum 200 words)
Over 100 million tablet computers have been sold in the last three years. They now have the computing power of a
state-of-the-art laptop of just a few years ago. This computing power and market saturation allows them to become
viable virtual environment (VE) trainers. Tablets have a different set of input modalities and user expectations, which
need to be taken into careful consideration when a VE trainer is designed. The authors developed a VE call for fire
(CFF) trainer and explored the processes necessary to make it successful. In order to utilize tablet hardware to its full
potential, the authors devised the Window to the World (W2W) paradigm as it applies to a mobile device. The authors’
tablet CFF trainer, Supporting Arms Trainer—Mobile (SAT-M), was compared to the Marine Corps’ current laptop CFF
system, ObserverSim. Despite being in early development, participants with and without CFF experience
overwhelmingly preferred SAT-M (p=0.002). Reasons included the ability of W2W to mimic real world physical motion,
an easier to use interface, and a decrease in extraneous cognitive load.
14. SUBJECT TERMS: Virtual environment, mobile, tablet, simulation, training, call-for-fire, window to the world
15. NUMBER OF PAGES: 151
16. PRICE CODE
17. SECURITY CLASSIFICATION OF REPORT: Unclassified
18. SECURITY CLASSIFICATION OF THIS PAGE: Unclassified
19. SECURITY CLASSIFICATION OF ABSTRACT: Unclassified
20. LIMITATION OF ABSTRACT: UU
NSN 7540-01-280-5500
Standard Form 298 (Rev. 2-89)
Prescribed by ANSI Std. 239-18
Approved for public release; distribution is unlimited
VIRTUAL ENVIRONMENT TRAINING ON MOBILE DEVICES
James V. Reynolds
Major, United States Marine Corps
B.A., Bucknell University, 1996
M.S., University of Rhode Island, 1999
Craig L. Smith
Major, United States Marine Corps
B.S., Iowa State University, 1997
Submitted in partial fulfillment of the
requirements for the degree of
MASTER OF SCIENCE IN
MODELING, VIRTUAL ENVIRONMENTS, AND SIMULATION
from the
NAVAL POSTGRADUATE SCHOOL
September 2013
Authors:
James V. Reynolds
Craig L. Smith
Approved by:
Joseph Sullivan, PhD
Thesis Advisor
Erik Johnson
Second Reader
Christian Darken
Chair, MOVES Academic Committee
Peter J. Denning
Chair, Department of Computer Science
ABSTRACT
Over 100 million tablet computers have been sold in the last three years. They
now have the computing power of a state-of-the-art laptop of just a few years
ago. This computing power and market saturation allows them to become viable
virtual environment (VE) trainers. Tablets have a different set of input modalities
and user expectations, which need to be taken into careful consideration when a
VE trainer is designed. The authors developed a VE call for fire (CFF) trainer
and explored the processes necessary to make it successful. In order to utilize
tablet hardware to its full potential, the authors devised the Window to the World
(W2W) paradigm as it applies to a mobile device.
The authors’ tablet CFF
trainer, Supporting Arms Trainer—Mobile (SAT-M), was compared to the Marine
Corps’ current laptop CFF system, ObserverSim.
Despite being in early
development, participants with and without CFF experience overwhelmingly
preferred SAT-M (p=0.002). Reasons included the ability of W2W to mimic real
world physical motion, an easier to use interface, and a decrease in extraneous
cognitive load.
TABLE OF CONTENTS

I. INTRODUCTION
   A. PROBLEM STATEMENT
   B. MOTIVATION
   C. RESEARCH QUESTIONS
   D. ORGANIZATION OF THE THESIS
II. BACKGROUND
   A. INTRODUCTION
   B. CALL FOR FIRE PROCEDURE TRAINERS TIMELINE
      1. M32 Sub-caliber Mortar Trainer, ca. 1960
      2. M31 14.5 mm Field Artillery Trainer, ca. 1976
      3. Training set Fire Observation (TSFO), ca. 1982
      4. MiniTSFO, ca. 1985
      5. Indoor Simulated Marksmanship Trainer-Enhanced, ca. 1998
      6. Forward Observer Training Simulator, ca. 1998
      7. Forward Observer Personal Computer Simulator, ca. 2002
      8. Joint Fires and Effects Trainer System, ca. 2003
      9. Guard Unit Armory Device Full-crew Interactive Simulation Trainer II, ca. 2003
      10. Call for Fire Trainer, ca. 2005
      11. FOPCSim 2, ca. 2005
      12. Deployable Virtual Training Environment, ca. 2005
      13. Supporting Arms Virtual Trainer, ca. 2009
      14. Observer Simulator, ca. 2010
   C. PREVIOUS WORK
III. TASK ANALYSIS
   A. HUMAN ABILITY REQUIREMENTS REVIEW
   B. COGNITIVE TASK ANALYSIS
IV. REQUIREMENTS
   A. OVERVIEW
   B. USE CASE SCENARIOS AND SYSTEM CHARACTERISTICS
      1. SAT-M
         a. Case 1
         b. Case 2
      2. Characteristics of SAT-M and tablets
         a. User Expectations
         b. Device Input
         c. Limitations and feedback
         d. Centralized Distribution
      3. DVTE
         a. CASE 1
         b. CASE 2
      4. Characteristics of the DVTE and Laptops
      5. Physical Interaction with the DVTE / CAN
         a. Input
      6. SAVT / MSAT
         a. CASE 1
         b. CASE 2
      7. Characteristics of the SAVT
   C. SUMMARIZATION OF THE SIMULATORS
   D. SUMMARY OF CAPABILITIES
   E. FUNCTIONAL REQUIREMENTS
   F. NONFUNCTIONAL REQUIREMENTS
   G. PRODUCT FEATURES
   H. CONFIGURATION MODULE
   I. VIEW MANAGER MODULE
   J. USER ACTIONS FIRE MISSION PROCEDURE
   K. AFTER ACTION REVIEW
V. SYSTEM DEVELOPMENT
   A. BACKGROUND
      1. Model-View-Controller
         a. Model
         b. View
   B. INTERFACE DESIGN STUDY
   C. OPERATING SYSTEM AND HARDWARE SELECTION
   D. BACKEND LIBRARY SELECTION
   E. SOFTWARE PRODUCTION
   F. LIMITATIONS
   G. CONCLUSION
VI. EXPERIMENT
   A. BACKGROUND
   B. HYPOTHESIS
   C. METHOD
      1. Participants
      2. Apparatus and Location
         a. Equipment
         b. Location
      3. Scenario
      4. Procedures
         a. Tasks
         b. Conditions
VII. RESULTS
   A. GENERAL
   B. LIKERT SCALE QUESTIONS
      1. Analysis of Likert Questions
         a. Question 1: Training with this Device on a Regular Basis Will Improve My Ability to Conduct CFF in the Field
         b. Question 2: It Was Difficult Navigating through the Device to Find the Appropriate Information While Completing the Tasks
         c. Question 3: The Real-World Physical Actions and Conducting A Task In The Virtual Environment Are the Same
         d. Question 4: The Button Icons Provide Intuitive Inference of What Would Happen When They Are Pressed
         e. Question 5: It is Easy to Move through the Screens without Losing One's Place
         f. Question 6: Having This Software Available at My Unit Would Improve My Unit's Ability to Perform Their Mission
         g. Question 7: It Was Hard to Understand what the Buttons Did
         h. Question 8: The 3D View Interface Was Intuitive
         i. Question 9: The Device Accurately Represents the Real World Physical Motion Required to Conduct the Task
         j. Question 10: The Overall Interface is Intuitive
         k. Summation of All 10 Likert Question Answers
         l. Summation of Likert Questions, Eliminating Redundancy
      2. Summary of Results
      3. Analysis Tools
   C. DIRECT QUESTIONS
      1. Analysis of Direct Questions
         a. Question 11: Which device was more intuitive to use?
         b. Question 12: If the software on both devices were about equivalent I would prefer to use?
         c. Question 13: If each device had the same feature set I would prefer to use?
         d. Question 14: This device is more convenient to train with?
      2. Summary of Results
      3. Analysis Tools
   D. TRAINING AND ORDER
      1. Summary of Results
      2. Further Analysis
   E. OPEN ENDED QUESTIONS
   F. DISCUSSION
      1. Is a VE trainer on a tablet possible?
      2. Is the "window to the world" paradigm seen as a valuable addition to VE training?
      3. Would military officers both trained and untrained in CFF see a value in VE tablet CFF training?
      4. Further Discussion
VIII. CONCLUSION
   A. GENERAL OBSERVATIONS
   B. SUCCESS
   C. LIMITATIONS
IX. FUTURE WORK
   A. IMPROVING SAT-M TRAINING SOFTWARE
      1. CFF
         a. Tier One, The Need to Haves
         b. Tier Two, Viable Trainer
         c. Tier Three, Individual Training
      2. New Features
         a. Voice Recognition
         b. Map Data Downloaded off the Internet
      3. Other Applications
   B. ADDITIONAL EXPERIMENTS
   C. NEW PLATFORM
APPENDIX A. INTERFACE DESIGN TESTING
   A. BACKGROUND
   B. INTERFACE DESIGN STUDY
      1. Success Criteria
      2. Method
         a. Target participant population
         b. Proposed demographics
         c. Actual demographics
      3. Procedures
         a. Tasks
      4. Likert Survey Results
         a. Open Ended Questions
         b. Structured Interview
      5. Discussion
APPENDIX B. EXPERIMENTAL DOCUMENTATION
LIST OF REFERENCES
INITIAL DISTRIBUTION LIST
LIST OF FIGURES
Figure 1. Depiction of "window to the world" using SAT-M
Figure 2. Photo of students using TSFO to practice CFF procedures (From United States Army Field Artillery School, 1989)
Figure 3. FOPCSim screen capture
Figure 4. Marines using SAVT (From Bilbruck, 2009)
Figure 5. Simulator resource requirement over time
Figure 6. Screen capture of SAT-M's vector 21b view
Figure 7. Screen capture of ObserverSim's naked eye view
Figure 8. Screen capture of SAT-M's naked eye view
Figure 9. Improvement builds as new technology is adopted into training system design process
LIST OF TABLES
Table 1. Cognitive and specific knowledge / skills needed to perform CFF tasks (After McDonough & Strom, 2005)
Table 2. Psychomotor and sensory perceptual abilities needed to perform CFF tasks (After McDonough & Strom, 2005)
Table 3. HARs comparison between real world and FOPCSim (After McDonough & Strom, 2005)
Table 4. HARs comparison between real world and a CFF tablet system (After McDonough & Strom, 2005)
Table 5. Summary of current simulation tools described in use case scenarios (After A. DiBenedetto, email to author, July 30, 2013; J. Gavin, email to author, July 27, 2013; J. Gralin, email to author, July 29, 2013)
Table 6. SAT-M capabilities
Table 7. Two by two cross-over design
Table 8. Wilcoxon signed-rank test results for Likert scale questions asked post experiment
Table 9. Direct Question Sign Test Results
Table 10. Two Sample t-test for Training and device order
Table 11. Results of Oneway ANOVA on Q3
Table A1. Interface design success criteria
Table A2. Likert survey results of interface testing
LIST OF ACRONYMS AND ABBREVIATIONS
AO - Air officer
CAN - Combined arms network
CAS - Close air support
CFF - Call for fire
CFFT - Call for fire trainer
CLRF - Common laser range finder
COTS - Commercial off-the-shelf
GOTS - Government off-the-shelf
CTA - Cognitive task analysis
DAGR - Defense advanced GPS receiver
DoD - Department of Defense
DOS - Disk operating system
DVTE - Deployable virtual training environment
EWTGPAC - Expeditionary warfare training group pacific
FDC - Fire direction center
FiST - Fire support team
FOC - Full operational capability
FOPCSim - Forward observer personal computer simulator
FOTS - Forward observer training simulator
FSCC - Fire support control center
GLTD - Ground laser target designator
GPS - Global positioning system
GUARDFIST II - Guard unit armory device full-crew interactive simulation trainer II
GUI - Graphical user interface
HARs - Human ability requirements
HE - High explosive
HE/MT - High explosive / mechanical time
HLA - High level architecture
HMD - Head mounted display
HOB - Height of burst
HSV - High speed vehicle
ICM - Improved conventional munitions
IOC - Initial operational capability
ISMT-E - Indoor simulated marksmanship trainer-enhanced
ITS - Individual training standards
ITX - Infantry training exercise
IZLID - Infrared zoom laser illuminator designator
JFETS - Joint fires and effects trainer system
JFO - Joint forward observer
JSAF - Joint semi-autonomous force
JTAC - Joint terminal attack controller
LTD - Laser target designator
MCAGCC - Marine Corps air ground combat center
MCCRES - Marine Corps combat readiness evaluation system
MCO - Marine Corps order
MEF - Marine expeditionary force
MiniTSFO - Miniature training set fire observation
MOVES - Modeling, virtual environments and simulation
MSAT - Multi-purpose supporting arms trainer
MTO - Message to observer
MVC - Model-view-controller
O&M - Operations and maintenance
OP - Observation post
OT - Observer target
PC - Personal computer
PDSS - Post deployment software support
PEO-STRI - U.S. Army program executive office for simulation, training, & instrumentation
PLDR - Portable lightweight designator rangefinder
POSREP - Position report
RDT&E - Research development testing and evaluation
RFMSS - Range facility management support system
SAT-M - Supporting Arms Trainer-Mobile
SAVT - Supporting arms virtual trainer
SDS - Software delivery system
SME - Subject matter expert
SOP - Standard operating procedures
T&R - Training and readiness
T/O - Table of organization
TACP - Tactical air control party
TSFO - Training set fire observation
TTECG - Tactical training exercise control group
TTPs - Tactics techniques and procedures
UAV - Unmanned aerial vehicle
USMC - United States Marine Corps
VE - Virtual environment
W2W - Window to the world
WP - White phosphorus
ACKNOWLEDGMENTS
First and foremost, we would like to thank our spouses for their support
and patience during this process. We would also like to acknowledge the Navy’s
Modeling and Simulation Office, who provided key financial support to the
project. To our thesis advisor Dr. Joseph Sullivan and our second reader Erik
Johnson, thank you for your insights and encouragement. And a special thank
you to the visual simulation and game-based technology team, without whom we
never would have been able to prove the concept.
I. INTRODUCTION
A. PROBLEM STATEMENT
Military simulation training is not what it should be. It is often slow to adopt new technology, and when innovations are adopted, they are frequently shoehorned into old paradigms, failing to maximize their advantages. As a result, military simulation training is less effective than it could be: training opportunities are lost, and expensive older simulators are run when a better, cheaper option should be available.
In the beginning of 2010, the first mass produced touch screen tablet
computers became commercially available. In less than three and a half years
over 100 million have been sold (Associated Press, 2012). iPads released in the
fall of 2012 have a 1.4 GHz dual-core processor (Shimpi, 2012), making them as powerful as laptops were just a few years earlier. It is often hard to tell when a technology is mature enough that it merits being adopted. The authors will
demonstrate that tablet computers have achieved the necessary user base and
maturity needed to become viable platforms for military virtual environment
training.
Call for fire (CFF) is an ideal mission set for virtual environment (VE)
training. There is a large demand signal for it in the Marine Corps, as nearly every Marine would benefit from some exposure to it. CFF is a perishable skill that requires frequent currency training. Additionally, live indirect fire training is both expensive and time consuming. VE training will never replace sending live rounds down range. However, it greatly increases training opportunities and, when used in conjunction with live fire training, helps ensure that the training event is maximized.
To demonstrate the capacity and potential of VE training using the tablet platform, a CFF VE training simulation was developed. We followed a process that emphasized the reuse of previous design work performed to produce CFF VE simulations for desktop / laptop systems. The development path was tailored towards the unique features and capabilities of tablet systems. The authors felt it was paramount that advances in technology be incorporated into the design process. After a functional prototype was created, we devised an experiment that compared the tablet system to an existing desktop / laptop CFF VE training system. Due to time constraints, the objective was not to make an exact copy of the desktop CFF simulator but to create a tablet simulator as a proof of concept.
The focus of this thesis is not on CFF training but on VE training on
tablets. The authors show that it is possible to provide high quality VE training on
a tablet and that there is a desire to have this capability in the fleet. We will also
explore some of the strengths and weaknesses of tablet VE training using CFF
as our vehicle.
B. MOTIVATION
In the Marine Corps, there is an adage: “killing time kills Marines.” On any
given day Marines spend a great deal of time waiting for the next training event
or being transported from one place to another. To take advantage of this idle
time, small unit leaders often have “hip pocket” classes that discuss pertinent
tactics techniques and procedures (TTPs). The small unit leaders’ training fidelity
is limited by the resources they have available. Situationally, expedient training
methods such as throwing little rocks at bigger rocks to simulate CFF provide some instruction, but they do not provide the same quality as that received in a
simulation-training center. Coordinating and executing joint forward observer
(JFO) training in a simulation-training center is an option, but this requires a block
of time that is not always available to a unit in a full pre-deployment training
workup cycle. Existing simulation systems provide little opportunity for
spontaneity or executing training in battalion spaces with high fidelity training
systems. This situation is less than optimal, especially when considering the new
era of fiscal constraint. In order to maintain proficiency, innovators within the
Department of Defense (DoD) must look towards easily distributed simulation as
a viable alternative to live fire training (Deputy Commandant for Combat
Development and Integration, 2012).
The United States Marine Corps (USMC) 2012 Science and Technology Plan
identifies a critical training and education gap in training science and technology
objective number six: Warrior simulation:
Marines need to train as they would fight as small units, particularly
for dismounted operations. However, live training resources,
facilities, ranges and training areas are limited. Simulation
capabilities are needed to provide real-time effects and realistically
engage the senses during challenging, rapidly reconfigurable
scenarios to increase small units’ opportunities to train when they
do not have access to live resources. Develop capabilities to
realistically simulate munitions (friendly and enemy) effects within
live, virtual, and constructive training environments. Develop the
ability to stimulate operational equipment used in live training
environments from virtual or constructive environments, to improve
the capability of simulations to augment and enhance live training
opportunities and to reinforce realistic training using actual
equipment as often as possible in conjunction with simulators and
simulations. (Deputy Commandant for Combat Development and
Integration, 2012)
Live fire training has always been constrained by the fiscal environment.
History shows that military budgets ebb and flow in a relatively unpredictable
manner (Walker, 2013). Current political and economic conditions indicate that
DoD will be facing significant budget cuts (American Forces Press Service,
2013). These cuts will affect major acquisitions programs as well as every day
unit operations and maintenance budgets. A reduction in funding creates fewer
opportunities for required forward observer live fire proficiency training.
Even if budgets are unconstrained, operational tempo often limits a unit’s
ability to train. For example, over the last 10 years units in combat have been
conducting operations that may not be in line with their primary mission (i.e.,
artillery units acting as provisional infantry) (Kroemer, 2006). During these
combat tours there is little time or opportunity for the units to maintain their call
for fire skills. Upon completion of combat tours, where units are tasked to perform
missions outside their primary skill sets, servicemen return with significant
atrophy in their skills. Existing CFF training simulations include deployable
options, which have been proven to adequately address cognitive skill retention
in CFF tasks (McDonough & Strom, 2005). These deployable options have not
been effective in addressing psychomotor and sensory perceptual categories of
human ability requirements (HARs) assessment (McDonough & Strom, 2005).
One potential solution is to bring the simulation-training center to the
service member. Supporting arms trainer-mobile (SAT-M) will be a suite of
software programs that can be downloaded by the users to their personal tablet
devices.
This research investigates the iterative process of training simulation
development leveraging the rapid advances in commercial off-the-shelf (COTS)
and government off-the-shelf (GOTS) hardware as it progresses to desktop /
laptops and finally evolves to mobile tablet systems. Personal computer (PC)
based training has been previously validated for a variety of purposes; this study
focuses on the differences required for tablet based simulation. We believe that
by leveraging the native technology contained in most tablet devices, a one-to-one mapping between action in the real world and the virtual environment can be accomplished. For example, a user conducting virtual training on a tablet will have to turn his body around to survey his surroundings, much as he would in the real world. Figure 1 exemplifies the "window to the world" (W2W) concept, which may add an additional element of realism by the very nature of the physical muscle movements that are required by the system.
Figure 1. Depiction of "window to the world" using SAT-M
When one adds a virtual environment with high fidelity graphics and
sound, we believe it will deliver affordable, portable, and quality simulation
training. Throughout this effort, we will strive to outline and define the way
forward for future development in the area of realistic virtual training.
C. RESEARCH QUESTIONS
The work of Brannon and Villandre provides evidence that CFF can be trained on a personal computer through the creation of the forward observer personal computer simulator (FOPCSim) (Brannon & Villandre, 2002).
Subsequent research investigated the training effectiveness of the system
(McDonough & Strom, 2005). With rapid advances in technology, how can DoD
best exploit these advances while simultaneously leveraging the existing proven
bodies of work to provide virtual training that is both accessible and effective?
Our work seeks to further previous investigations of virtual environment training.
This analysis provides a framework for the progression of virtual training
over the spectrum of desktop, laptop, and mobile tablet PC devices. In the
process we answer the following questions:
1. Is a VE trainer on a tablet possible?
2. Is the "Window to the world" paradigm seen as a valuable addition to VE training?
3. Would military officers trained in CFF see a value in VE tablet CFF training?
4. Would military officers untrained in CFF see a value in VE tablet CFF training?
5. What is gained and lost when CFF is executed on a tablet versus a desktop / laptop?
6. How does a VE tablet training program need to be different from a desktop / laptop VE training program?
D. ORGANIZATION OF THE THESIS
This thesis is organized in the following chapters:
Chapter I, introduction, provides an overview of the work contained in this thesis and the problem the authors are trying to solve.
Chapter II, background, provides a historical background on past and current Joint Forward Observer trainers.
Chapter III, task analysis, provides an analysis of the tasks that an individual performs as they execute a CFF and how those actions map to SAT-M.
Chapter IV, requirements, looks at the requirements for a tablet based VE CFF trainer. It specifically details how those requirements differ from the requirements for a desktop / laptop VE CFF trainer. It provides use cases for SAT-M and the two dominant Marine Corps CFF VE trainers. This chapter answers research questions five and six.
Chapter V, system development, describes the process followed to create SAT-M.
Chapter VI, experiment, outlines the methods and research process followed.
Chapter VII, results, answers research questions one through four.
Chapter VIII, conclusions, describes the authors' findings.
Chapter IX, future work, gives an overview of the way ahead for follow-on research and development.
II. BACKGROUND
A. INTRODUCTION
In 2010, Ben Brown conducted an in-depth review of simulation-training
lineage, which he traced back to the very earliest versions of “serious games”.
Examples of these early simulations include chess, Wei Ch’I, and Chaturanga.
Brown’s discussion establishes the storied history of the relationship between
simulations and military training (Brown, 2010). Since those days of yore, the fidelity and range of applications of military simulation have improved, and there is a direct link between these improvements and technology. Today's technology
makes it possible to train in a fully immersive virtual environment. The range,
depth, and application of current military training simulations vary greatly. The
background information contained in this thesis is limited to the investigation of
simulation-training systems and technology specifically for the purpose of
conducting call for fire procedural training.
B. CALL FOR FIRE PROCEDURE TRAINERS TIMELINE
By its nature CFF training in a live fire environment is costly in terms of
manpower, coordination and funding. CFF training simulation is needed to offset
some of the constraints associated with live fire training. By using simulation
Marines are able to train when resources are restricted.
Current and past CFF procedural training simulations fall into three broad
categories: outdoor simulated firing ranges, indoor permanently fixed classroom
facilities, and portable/deployable configurations. Each of these groups has
distinct advantages and disadvantages.
Outdoor simulated firing ranges provide a robust experience that mimics
the challenges of actual live fire; their primary disadvantage is that they are
resource intensive. Indoor permanently fixed classroom facilities leverage
dedicated computer resources and space to create fully integrated scenarios;
however, the facilities require coordination, scheduling, and recurring maintenance. Portable / deployable simulations are found on laptop and portable devices; their greatest advantage is availability and convenience for the user, but
without support staff they are only as good as the software running on them. The
following are examples of these different CFF training technologies. They are
listed in chronological order based on earliest found reference to their use in the
DoD.
1. M32 Sub-caliber Mortar Trainer, ca. 1960
The M32 simulation utilized a CO2 powered pneumatic sleeve inserted into
an 81mm mortar tube. The device would fire a large 25mm training projectile
onto a miniature range. This training device required a significant amount of
logistical support, including a large outdoor range area, instructor personnel who
must establish a “to scale” range with maps, and extensive specialized
equipment maintenance (Headquarters Department of the Army, 1960).
2. M31 14.5 mm Field Artillery Trainer, ca. 1976
The M31 simulation utilized a single fire rifle barreled assembly to simulate
the fires of artillery. It required a miniature range with a special map. Designed for outdoor use, the system was intended to be a low-cost alternative for artillery
units to train all of the personnel involved with call for fire conduct and execution.
This system also required a robust support system to include the range setup,
maintenance of equipment, and procurement of ammunition. This was not a
simulation that lent itself to individual training proficiency (Headquarters Department of the Army, 1976).
3. Training set Fire Observation (TSFO), ca. 1982
TSFO was a classroom artillery fire simulation where slides of terrain and
weapons effects were projected onto a screen as seen in Figure 2. One of the
first indoor simulated CFF trainers, it was used extensively for many years within
DoD. The simulation required a large support system consisting of classroom
facilities, information technology services and contractor support (Headquarters
Department of the Army, 1991). It is unknown if any of these systems are
presently in use.
Figure 2. Photo of students using TSFO to practice CFF procedures (From United States Army Field Artillery School, 1989)
4. MiniTSFO, ca. 1985
MiniTSFO was a DOS-based PC simulation developed by Captain Bill
Erwin as a research project that was then incorporated at West Point for cadet
artillery fires training. The software is one of the earliest documented attempts to
provide computer based CFF training (United States Army Field Artillery School,
1989).
5. Indoor Simulated Marksmanship Trainer-Enhanced, ca. 1998
ISMT-E is a marksmanship training simulation. It is normally installed in a
permanent facility; using video projection the environment is displayed for the
trainees to practice CFF procedures. It requires a trained operator and an
instructor versed in CFF (if the operator is not). The ISMT uses actual equipment
integrated with the computer simulation for training. Personnel are also required
to maintain and manage the equipment (Program Manager Training Systems, 2013).
6. Forward Observer Training Simulator, ca. 1998
A computer based classroom training simulator, FOTS is primarily used by
the Navy and Marine Corps for introductory school house CFF training. The
system requires instructor support as well as facilities and personnel to maintain
it (Naval Air Systems Command, Training Systems Division, 1998).
7. Forward Observer Personal Computer Simulator, ca. 2002
Created in 2002 by Brannon and Villandre as a MOVES Institute Master’s
thesis research project, the FOPCSim was originally intended to be prototype
software that could fill the CFF training proficiency gap caused by limited
resources. As live fire training is expensive, the idea at the time was to leverage
improvements in computer technology to create a simulation robust enough to
provide CFF procedure training for forward observers who were already qualified.
It was originally developed for desktop computers. FOPCSim (see Figure 3) used a proprietary 3D game engine to run the simulation, and the licensing costs were
prohibitive to widespread fielding (Brannon & Villandre, 2002).
Figure 3. FOPCSim screen capture
8. Joint Fires and Effects Trainer System, ca. 2003
JFETS is an immersive training simulation for the training and rehearsal of
nearly all aspects of indirect fire control procedures. According to General
Maples, the chief of field artillery in 2003, JFETS introduces highly realistic
conditions and situations that add realism to the virtual environment. It is quite
large and requires permanent facilities and contractor support (Maples, 2003).
9. Guard Unit Armory Device Full-crew Interactive Simulation Trainer II, ca. 2003
This simulation provided CFF VE training by integrating the actual devices
used by joint forward observers (JFOs) into the training scenario. GUARDFIST II
was scalable from one to 30 trainees. It required classroom space, instructors,
and maintenance personnel (U.S. Army Program Executive Office for Simulation,
Training, & Instrumentation [PEO-STRI], 2003).
10. Call for Fire Trainer, ca. 2005
CFFT replaced GUARDFIST as the primary Army CFF simulation-training
system. It is a classroom installed system that incorporates many of the tools
used by joint forward observers. As many as 30 students can train on the system
simultaneously. The design of the system necessitates a classroom environment
with facilities and personnel (Mitchell, 2005).
11. FOPCSim 2, ca. 2005
FOPCSim 2 is a continuation of the research conducted at the MOVES
institute by Brannon and Villandre. In 2005, McDonough and Strom extended the
work of the original authors by creating a more robust version that was freely
distributable to all Marines for personal use. This virtual environment CFF
procedure trainer could be loaded on any Microsoft compatible personal
computer. It ran on an open source engine, avoiding vendor lock-in, thus making
it free to distribute. FOPCSim 2 was widely used throughout the Marine Corps
(McDonough & Strom, 2005).
12. Deployable Virtual Training Environment, ca. 2005
DVTE is a computer based simulation software suite that provides a
multitude of virtual environment training options with the primary focus on
combined arms Marine Air Ground Task Force integration and rehearsal training.
Current revisions of this particular system include copies of FOPCSim as well as
ObserverSim within the combined arms network (CAN) software package (DVTE
Development Team, 2010). The intent of DVTE was to provide a deployable
virtual training solution for Marine forces as a means to maintain proficiency in
their skills while forward deployed. The software is maintained as a program of
record within the Marine Corps and updates are provided annually via portable
hard drives. This method of software maintenance requires the end-user to
perform upgrades on the system, which implies that the receiving units must
have some technological understanding and the time to upgrade the suite; no
small task as one suite consists of 32 laptops (Grain, 2012).
13. Supporting Arms Virtual Trainer, ca. 2009
The SAVT began as a non-program of record in the Marine Corps and
Navy with the name Multi-purpose Supporting Arms Trainer (MSAT). In Figure 4
we see joint terminal attack controllers (JTAC) and JFOs training on SAVT
utilizing its fully integrated real world equipment suite in an immersive virtual
environment. The environment is projected onto a 15’ high by 10’ radius dome.
This simulation system requires permanent facilities, maintenance, operators,
and instructors (Bilbruck, 2009).
Figure 4. Marines using SAVT (From Bilbruck, 2009)
14. Observer Simulator, ca. 2010
ObserverSim, included in the CAN software on DVTE, is the next iteration
of FOPCSim. As stated in the ObserverSim User's Guide, it is "based on the original
simulation created by the MOVES Institute of the Naval Postgraduate School”
(DVTE Development Team, 2010). ObserverSim improves on the original design.
The above examples of virtual environment CFF procedure trainers are
not meant to be all inclusive. There are many other examples of current
technology either in use or in development. The Army Program Executive Office
for Simulation Training and Instrumentation 2013 catalog lists four simulations
that could be used for CFF procedural training and rehearsal.
Presently, all virtual environment CFF procedure trainers fall within a spectrum that ranges from high-end, large-scale classroom facilities requiring significant additional resources to deployable laptop-based simulations requiring a small degree of additional resources. Figure 5 depicts this concept, where the top of the diagram indicates CFF simulations requiring the highest amount of additional resources. Resources include maintenance, facilities, instructors, funds, and any other item required to run the simulation outside of the trainees themselves.
Figure 5. Simulator resource requirement over time
C. PREVIOUS WORK
In 2002, David Brannon and Michael Villandre investigated the potential
for a computer based CFF procedures trainer. Their efforts led to the
development of FOPCSim. As previously described, FOPCSim was a proof of
concept which showed a computer simulation could effectively reproduce the
tasks required of a JFO. A thorough cognitive task analysis was conducted and
the work established that many aspects of CFF procedure training can be trained
inside a PC VE. After an experiment with the prototype software and experienced
JFOs, “the results obtained indicate individuals trained in the forward observer
task can use the FOPCSIM to maintain and improve proficiency for a skill set that
is perishable without regular practice” (Brannon & Villandre, 2002). It is important
to note that at the time the Marine Corps had few CFF VE resources available
outside of the schoolhouse environment.
In 2005, James McDonough and Mark Strom conducted follow up work
with FOPCSim. The research was intended to extend Brannon and Villandre’s
previous work, transitioning FOPCSim from a prototype to a complete simulation
that could run on existing computer equipment already in the Marine Corps'
inventory. McDonough and Strom began with the cognitive task analysis
conducted by Brannon and Villandre, then applied a human ability requirements
assessment to determine the degree to which FOPCSim tasks map to the
execution of the real world task. They found 27 skills required to complete basic
CFF in the real world: 12 cognitive tasks, 10 sensory-perceptual, three
psychomotor, and two that require special knowledge or skill. During further
analysis, it was determined that cognitive tasks matched well between simulation
and the real world; however, psychomotor and sensory-perception related tasks
did not. Subsequently, software was developed and then tested. Based on the
results of this experiment, McDonough and Strom determined that FOPCSim, when used as a training tool, performed as well as, and in some cases better
than, the legacy training method used in the control group (2005).
A significant finding of this work showed that a VE training simulation
could be used to maintain certain perishable skills. Both sets of researchers
identified a training shortfall resulting from the lack of available simulation
capability. Their proposed solutions and designs were based on the technologies
of their times.
III. TASK ANALYSIS
Our task analysis begins where McDonough and Strom’s ended in 2005.
As previously discussed, they conducted a HARs absence / presence assessment
as part of their research. Their assessment revealed that psychomotor and
sensory perceptual tasks are not well replicated within a desktop / laptop VE CFF
simulation (McDonough & Strom, 2005).
A HARs assessment compares the execution of a real world task to the
execution of that task in a VE. The HARs assessment tool was developed in
2003 by Cockayne and Darken. It is summarized by the following quote:
The chapter begins with a discussion of taxonomic science and
classification as related to the development of the Human Ability
Requirements (HARs) taxonomy for human performance
evaluation. It discusses the extension of real-world taxonomy
method and tools into VEs and how these can be used to extend
and complement conventional task analyses. It is the linking of
human abilities as required by task components to interaction
techniques and devices that is of concern. Our research was based
on the need to understand how humans perform physical tasks in
the real world in order to guide the design and implementation of
interaction techniques and devices to support these tasks in VEs.
(Cockayne & Darken, 2003)
As desktop PCs typically use a mouse and a keyboard as the primary
human computer interface and tablet devices use a multi-touch touchscreen in
conjunction with accelerometers and gyroscopes as the primary human interface,
comparing the two systems based on input modalities is nontrivial. A HARs
assessment creates a framework that allows for the comparison of the two. The
difference in input modalities, particularly as they relate to simulated training
tasks, is the focus for our investigation.
A. HUMAN ABILITY REQUIREMENTS REVIEW
In 2005, McDonough and Strom used the HARs taxonomy to identify 27
skills required to perform CFF tasks. Of the 27 skills, 12 were identified as
cognitive skills and are listed in the top portion of Table 1.
Cognitive skills: Oral comprehension; Deductive reasoning; Oral expression; Information ordering; Memorization; Spatial orientation; Problem sensitivity; Visualization; Mathematical reasoning; Perceptual speed; Number faculty; Time sharing
Specific knowledge / skills: Map reading; Electronic knowledge
Table 1. Cognitive and specific knowledge / skills needed to perform CFF tasks (After McDonough & Strom, 2005)
After their analysis, which is detailed on page 20 of The Forward Observer Personal Computer Simulator (FOPCSIM) 2, they concluded that the CFF tasks simulated in the FOPCSim software that relate to cognitive abilities and specific knowledge / skills mapped well to the corresponding real world CFF tasks. The results supported similar findings by Brannon and Villandre, who in 2002 concluded that:
… the FOPCSim user must perform the same steps to determine
target location and formulate the call for fire as they would in the
real world. FOPCSim maintains cognitive fidelity to the real task,
but sacrifices physical fidelity. The performance differences are due
to the physical interface and not the cognitive element. (Brannon &
Villandre, 2002)
These results establish that CFF cognitive tasks can be effectively incorporated
into a training simulation. Both previous versions of this software were developed
for, and ran on, desktop or laptop devices that were typical of that period. In
2005, the year McDonough and Strom developed FOPCSim 2, Apple released
the PowerBook G4, a laptop with a 1.67 GHz G4 processor and 512 MB of RAM
(Norr, 2006). In late 2012 Apple released the 4th generation iPad, which shipped
with a 1.4 GHz dual-core A6X processor and 1 GB of RAM, making the tablet at least as powerful as the PowerBook G4 (Shimpi, 2012). If it was possible to
create a CFF VE training simulation that runs on the PowerBook G4, then it is
certainly possible to create one that will run on the current generation of tablet
computers. Therefore, we conclude that the cognitive component of the CFF VE
can continue to be replicated on modern tablet computers while maintaining the
same level of training efficacy.
The psychomotor and sensory perceptual skills required to conduct CFF,
as identified by McDonough and Strom, are listed in Table 2.
Sensory / perceptual: Near vision; Hearing sensitivity; Far vision; Auditory attention; Night vision; Sound localization; Depth perception; Speech recognition; Glare sensitivity; Speech clarity
Psychomotor: Control precision; Arm / hand steadiness; Reaction time
Table 2. Psychomotor and sensory perceptual abilities needed to perform CFF tasks (After McDonough & Strom, 2005)
Sensory / perceptual abilities are particularly difficult to replicate in a VE.
For instance, we can simulate a night environment on a display, but unless the
simulator is blacked out, it does not exercise true night vision. The illusion of
distant objects can be easily simulated in a VE using 3D rendering techniques;
however this simulation of a distant object does not actually require the use of a
human’s far vision ability. Other sensory / perceptual abilities that present unique
challenges to a VE simulation include hearing sensitivity, sound localization,
speech recognition, and speech clarity. The aforementioned abilities can be
effectively recreated in a VE with special equipment and simulator configuration.
However, our research is limited to desktop / laptop and tablet devices and the
simulation of most sensory / perceptual abilities is outside the capabilities of the
hardware.
We were able to narrow the field of human abilities as they relate to
simulated task mapping differences between desktop / laptop systems and tablet
systems by focusing on psychomotor tasks. Our analysis focuses on the modality
differences between desktop / laptop and tablet systems, and their ability to train
psychomotor skills. Tablet devices have a unique input control methodology,
using accelerometers and gyroscopes to capture movement. A VE can use this
input control methodology to change perspective, which maps to how a human
being observes the real world. The software simulations used in our analysis are
oriented from a first person perspective. With the tablet system the user must
physically move their body, head, and eyes in order to change their view in the
VE. In the desktop / laptop system the user’s head and eyes are always looking
forward at the stationary monitor, and mouse movements control changes
in perspective. W2W, the authors’ concept for the VE, exploits the strength of the
tablet system: the VE is all around the user, not locked in a stationary monitor.
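To make the W2W mapping concrete, the following is a minimal, illustrative
Python sketch, not taken from the SAT-M source, of how gyroscope-derived yaw
and pitch angles could drive the VE view direction; the function name and angle
conventions are assumptions.

    import math

    def device_orientation_to_view(yaw_deg, pitch_deg):
        """Convert device yaw / pitch (degrees) into a unit view vector.
        The W2W idea: the VE camera mirrors the device's physical
        orientation, so the user pans by turning their body and
        pitches by tilting the tablet, rather than moving a mouse."""
        yaw = math.radians(yaw_deg)
        pitch = math.radians(pitch_deg)
        # Spherical-to-Cartesian mapping: x east, y north, z up.
        x = math.cos(pitch) * math.sin(yaw)
        y = math.cos(pitch) * math.cos(yaw)
        z = math.sin(pitch)
        return (x, y, z)

    # Example: the user faces due east (090) and tilts the tablet 10 degrees up.
    print(device_orientation_to_view(90.0, 10.0))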
Only those factors which diverge due to hardware differences between
desktop / laptop and tablet systems were included in our analysis. These were
determined by narrowing and validating the scope of the HARs absence /
presence assessment. The psychomotor skills area had the greatest divergence
and these were mapped by McDonough and Strom to Brannon and Villandre’s
cognitive task analysis, listed in Table 3.
Table 3.
HARs comparison between real world and FOPCSim (After
McDonough & Strom, 2005)
[CFF Human Ability Requirements assessment comparison (absence / presence
test): a matrix mapping the CFF task analysis tasks (1.1.1 Utilize GPS; 1.1.2
Utilize map and compass; 1.1.3 Utilize available tank sights or laser range
equipment for resection; 3.2.3.2 Polar / laser polar; 3.2.3.3 Shift from known
point; 3.2.6 Send method of fire and control; 3.2.12 Conduct spottings; 3.2.12.1
Height of burst; 3.2.12.2 Range; 3.2.12.3 Deviation) against psychomotor human
abilities. Legend: x = modeled in real world, not in FOPCSim; m = modeled in
both.]
Table 3 only displays the task to ability mapping for psychomotor skills
when the comparison chart in McDonough and Strom indicates there is a match
for the task in the simulation or real world. Non-psychomotor tasks are excluded
from the table. Using the same process, the tablet system was compared with
the human abilities (Table 4).
Table 4.
HARs comparison between real world and a CFF tablet system (After
McDonough & Strom, 2005)
[The same absence / presence matrix applied to a tablet system: for every listed
task (1.1.1 Utilize GPS; 1.1.2 Utilize map and compass; 1.1.3 Utilize available
tank sights or laser range equipment for resection; 3.2.3.2 Polar / laser polar;
3.2.3.3 Shift from known point; 3.2.6 Send method of fire and control; 3.2.12
Conduct spottings; 3.2.12.1 Height of burst; 3.2.12.2 Range; 3.2.12.3 Deviation),
every assessed task-to-ability cell is marked m. Legend: x = modeled in real
world and not in a tablet system; m = modeled in both.]
In Table 4, every CFF task that requires psychomotor skills in the real world
maps more closely to the task executed in simulation when a tablet system is
used for CFF VE training. This is explained by the tablet's unique input
technology (accelerometers / gyroscopes) and by W2W, which rotates the device
itself as a window into the VE. All of the CFF tasks identified are highly
dependent on psychomotor skills in the real world, and the tablet can replicate
these skills in a manner that is nearly analogous to real world action.
B. COGNITIVE TASK ANALYSIS
During their previous investigation Brannon and Villandre completed a
thorough cognitive task analysis which can be found in The Forward Observer
Personal Computer Simulator (FOPCSIM), chapter III. They conducted their
analysis using the goals, operators, methods, and selection rules (GOMS) model.
The source for their CTA was Field Manual (FM) 6–30, Tactics, Techniques, and
Procedures for Observed Fire. FM 6–30 was superseded by Army Technical
Publication (ATP) 3–09.30, Techniques for Observed Fire, in August 2013. We
completed a crosswalk of the CFF tasks in the two publications to ensure that
there had not been any significant revisions that would alter the previous task
analysis or the HARs assessment. This comparison was limited to those tasks
requiring psychomotor abilities as we have previously established that the
remaining human abilities can be easily replicated (cognitive and specific
knowledge / skills) or require special equipment (sensory / perceptual). The
authors’ review of FM 6–30 and ATP 3–09.30 revealed that the revisions to the
CFF publication did not change the basic tasks required of a JFO. In particular,
we can validate that a JFO must still have the basic skills to determine self-location (via GPS or map and compass) and target location (via target designation
device or map study), and an understanding of the required elements of the CFF
brief. Therefore the CTA performed by Brannon and Villandre is still valid for the
purposes of studying CFF procedures on a tablet system.
IV. REQUIREMENTS
A. OVERVIEW
Before the requirements for SAT-M can be derived there needs to be an
understanding of how it fits into the overall CFF simulation-training continuum. As
discussed in Chapter II, currently the two most commonly used CFF simulators in
the USMC are the SAVT and the software suite on the DVTE. In order to place
the simulators within a greater context, use cases for the three systems are
presented. These demonstrate the niches that each of the three simulators fill.
Only once the niche for SAT-M is understood can the requirements be derived. It
is important to remember that the requirements are driven not only by the need
for the software to provide certain fidelity and functionality, but also by users'
expectations of, and the limitations and capabilities of, tablet systems.
B. USE CASE SCENARIOS AND SYSTEM CHARACTERISTICS
1. SAT-M
a. Case 1
12th Marine Regiment has just departed Okinawa heading to Camp
Fuji Combined Arms Training Center to execute artillery training. They embarked
on the high speed vessel (HSV) and the trip will take over 24 hours. While on the
HSV the artillery liaison officer brings together the JFOs to conduct CFF training.
A few of the JFOs have tablet computers, which have previously been loaded
with SAT-M. During the next few hours the artillery liaison officer conducts high
quality VE CFF training, utilizing what otherwise might have been dead time.
Before embarking on the HSV, to ensure they had the most current build of
SAT-M, all the tablets were connected to the internet to download any updates.
During the training the artillery liaison officer maintains his tablet in
“instructor mode.” This allows him to observe what the JFOs are doing in real
time and dynamically adapt the scenario based on the JFOs’ performances,
increasing or decreasing difficulty as appropriate. He is also able to recall their
past missions to look for trends, allowing him to focus the training on those areas
the JFOs find most difficult.
b. Case 2
Corporal Doe has been selected by his battalion to go to the JFO
Course. In preparation for the course he has taken the MarineNet classes on
CFF, received face-to-face instruction from one of the battalion’s JTACs, and
downloaded SAT-M onto his roommate’s tablet. In the evenings he spends a few
minutes in his barracks running through CFF scenarios, building an
understanding of the fundamentals of CFF. He does not always train alone as a
fellow squad mate has also been selected for the course. They often link their
tablets via Bluetooth and train together in the same virtual environment.
2. Characteristics of SAT-M and Tablets
a. User Expectations
It is important to note that users’ expectations differ when using
tablets versus desktops / laptops. These range from the obvious (smaller
screens, lighter weight, and no keyboard) to the less obvious: software that is
easy to use, the simplicity of interconnecting devices, and use of the “cloud” for
distribution and data storage. When running a tablet application, users expect
that a reference manual will not be needed: a button’s function will be expressed
through its icon, and the user will not be lost in layers of menus. It is also
expected that tablets will seamlessly join a network; there is no need to set IP
addresses and subnet masks.
b. Device Input
Tablets are meant to be used on the go and as such have
additional hardware not found in desktops or laptops. The current generation of
tablets has built-in GPS, accelerometers, gyroscopes, and in some cases a
magnetic compass. They also come with microphones, speakers, cameras, and
of course multi-touch enabled touchscreens. This allows entirely new paradigms
for interacting with the device. While SAT-M does not take a revolutionary
approach to the user-device interface, it does try to take advantage of some of
these inputs. Physically moving the tablet to change one’s view, as described in
the W2W paradigm, is one way. Modern multi-touch enabled touchscreens allow
for parallel inputs: more than one icon can be activated at a time. A mouse
allows interaction with only one icon, resulting in serial inputs. There are some
workarounds with keyboard shortcuts, but they are in general unwieldy. Real
world devices follow a parallel input paradigm.
c. Limitations and Feedback
With the exception of sharing media content, tablets are designed
for individual use. Combined with W2W, it would be very hard for an instructor to
evaluate the performance of more than a few individuals. Device portability also
creates the expectation that it can be used without an instructor. It is therefore
critical that the system be able to provide useful feedback; feedback of the sort
that extends beyond reporting number of corrections, round accuracy, and
mission execution time.
With its small screen, a tablet is not an ideal platform on which to
conduct mission planning. However, doing so allows the software to infer far
more about the user’s thought process and skill level than if only the CFF
mission were executed on it. For example, the device will know the user’s
accuracy in plotting their own position and the target’s location. Useful feedback
can then be provided using tablet-specific hardware and software: Was the user
looking at the target when the rounds impacted? Did the user double-check their
position on the map, or just report their position straight from the defense
advanced GPS receiver (DAGR)?
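As an illustration of the first check, the following minimal Python sketch, under a
flat-grid assumption and with hypothetical names, tests whether the target's
bearing fell inside the user's field of view at the moment of impact; the 45-degree
field of view matches the requirement stated in Chapter IV.

    import math

    FOV_DEG = 45.0  # field of view assumed from the SAT-M requirements

    def bearing_deg(observer, target):
        """Compass bearing (degrees) from observer to target on a flat grid."""
        dx = target[0] - observer[0]  # easting difference
        dy = target[1] - observer[1]  # northing difference
        return math.degrees(math.atan2(dx, dy)) % 360.0

    def was_looking_at_target(view_heading_deg, observer, target):
        """True if the target bearing fell inside the user's field of view
        at the moment of round impact."""
        offset = (bearing_deg(observer, target)
                  - view_heading_deg + 180.0) % 360.0 - 180.0
        return abs(offset) <= FOV_DEG / 2.0

    # Example: observer at the grid origin, target due east, user looking at 100.
    print(was_looking_at_target(100.0, (0.0, 0.0), (1000.0, 0.0)))  # True: 090 within 100 +/- 22.5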
d. Centralized Distribution
The Apple App Store and Android Market have set the precedent
for software distribution on mobile devices. Users expect to go to a centralized
hub to download their applications. They also expect the hub to indicate when
their software is out of date, and to provide those updates when prompted. This
follows a centralized control structure, making it easy to push out software
changes to the user. Where it falls short is in the distribution of custom-made
scenarios and environments. Some tablet operating systems make it extremely
difficult to transfer custom-made content from one tablet to another.
3. DVTE
a. Case 1
The battalion air officer (AO) has gathered a group of prospective
JTACs at the simulation center where he is conducting close air support training
using the combined arms network (CAN). He chose to execute the training at the
simulation center because he can get the support of a dedicated simulation
center operator who will run the joint semi-autonomous force (JSAF) server for
him. The JSAF server allows DVTE laptops to network together, putting the
participants in the same virtual environment.
The AO knows that one of the obstacles to non-aviators
successfully completing the course is having an appreciation for the aircraft’s
perspective, especially when trying to talk the aircraft onto a target. The AO
creates a scenario to help the Marines get an understanding of how the same set
of roads and buildings looks drastically different depending on the position of the
observer. The scenario contains two perspectives: one where the trainee is
observing from the ground, and another where a different trainee is observing the
battlefield from an overhead unmanned aerial vehicle (UAV) feed. Both trainees
are attempting to talk an aircraft onto a target. During the course of the training,
no virtual rounds were fired, but a great deal of learning occurred.
b. Case 2
A JTAC assigned directly to a company has been running squad-size classes on CFF. The lecture part of the training is over and the JTAC wants
the Marines to get practice in CFF mission planning and execution. He has set
up a classroom in battalion spaces with a squad’s worth of DVTE laptops. The
Marines are running a scenario set in Twentynine Palms. They could be
executing the mission planning on the laptops, but in this case the JTAC wants
them to do it on real maps, using mapping pens and protractors, as they would
if called on to conduct the mission in combat. As he is the only instructor,
the JTAC created a simple scenario. He spends the training time answering
questions and reviewing the Marines’ mission planning paperwork.
4. Characteristics of the DVTE and Laptops
As can be seen in the above two examples, the CAN allows multiple
people to be trained in a classroom setting. The ratio of instructors to students
depends on proficiency and the complexity of the scenario. The CAN also has
some unique features that the other systems do not: a user can fly an aircraft in
the VE or, as in Case 1, observe the battlefield through a UAV feed.
One of the drawbacks to the CAN and DVTE is the dissemination of
software changes. Not only do the owning units need to get the changes, they
then have to install them; a non-trivial task given that 32 laptops are in a DVTE
suite. In July 2013, when project manager (PJM) DVTE, John M. Gralin, was
asked about software updates to the DVTE, he said:
The software is incrementally developed with efforts occurring each
year up to the planned FOC of Sep 2017. Some of these are
developmental efforts funded with RDT&E funding and other efforts
are considered software maintenance (a.k.a. Post Deployment
Software Support [PDSS]) funded with O&M funding. Software
upgrades are provided once per year to the fleet primarily through
the MEF Battle Simulation Centers. These software updates are
provide via the DVTE Software Delivery System (SDS), which is
contained on an external hard drive. Once the SDS is plugged into
the "Suite", the software will push across the whole suite in
approximately 8 hours. Updates to the CAN are included on the
SDS and the CAN is currently up to v1.8. Not all versions (v1.0
through 1.8) of the CAN were released to the fleet as some were
superseded by later versions prior to the annual update. (J. Gralin,
email to author, July 29, 2013)
There are other barriers to the use of the DVTE, especially in battalion
spaces. It is time consuming to set up and network the laptops, they take up a
decent amount of space, and they are vulnerable to theft.
5. Physical Interaction with the DVTE / CAN
There are some general assumptions about how the DVTE laptops are
operated. Though it is possible to set up the laptops in an austere environment,
they are typically used in a sanitized classroom with a desk and good lighting; the
user has the space to take notes and execute mission planning right at their
workstation. The software provides a robust set of mission planning tools.
However, as digitized mission planning bears very little resemblance to how it is
done in the real world, and as the system provides no direct user feedback based
on the accuracy of the mission planning, these tools are rarely used.
a. Input
The DVTE uses a mouse and a keyboard for user input and most
users have a great deal of experience with the two. Unfortunately they do a poor
job of capturing the activities one would have to perform when executing the
mission. A user sits in a chair staring at the screen, only moving their hands.
Cognitively, the trainee might be conducting the correct activities but they are not
getting the muscle memory from conducting the psychomotor task of physically
moving in order to gain a new perspective.
6. SAVT / MSAT
a. Case 1
At the tactical air control party (TACP) course a graded event is
underway. Three students, who completed their mission planning the night
before, are playing roles in a Fire Support Team (FiST): one is controlling the
aircraft, another is laser designating the target with the portable lightweight
designator rangefinder (PLDR), and a third is suppressing the enemy’s surface-to-air threat via indirect fire. A dedicated simulation operator is both running the
SAVT and playing the role of the aircraft. A TACP instructor is taking notes and
evaluating the students’ performances. The three students cycle through the
positions, executing slightly different missions each time.
b. Case 2
In July 2013, Jack Gavin, SAVT operator at Marine Corps Base
Twentynine Palms, provided the following case study.
In the SAVT at Twentynine Palms, as part of Infantry Training
Exercise (ITX Program), Battalion Fire Support Teams (FiST) under
the tutelage of the Tactical Training Exercise Control Group
(TTECG) conduct rehearsal exercises prior to executing live fire
training aboard MCAGCC. The complete battalion Fire Support
apparatus is exercised and coached with regard to safety and
efficiency. In attendance are; the unit company FiSTs, The
Battalion Fire Support Control Center (FSCC), pilots role playing
simulated tactical aircraft from the supporting Aircraft Squadrons,
and Coyotes from the TTECG. The simulator itself is operated by a
single operator, who is a former naval aviator, or in the case of
other SAVT sites, a qualified JTAC. At Twentynine Palms, it is a
former AH-1W pilot with 3800 flight hours. In the absence of Role
Players from supporting squadrons, the operator will assume the
role of the tactical aircraft.
All tools available to the FiST team are integrated into the
simulation system, to include; PLDR, IZLID, StrikeLink, and Vector
21b linked to a computer simulated DAGR. The training enables the
Battalion Fire Support apparatus to integrate supporting and
organic assets to include Aviation, Artillery, Mortars and Naval
Surface Fires. As they are conducting training as they might do in
combat, the mission planning is ad hoc, they develop target
locations, check in aircraft, and execute commander’s guidance as
to priority of fires. The objective of the evaluation is not on the
Marines ability to draw laser baskets and plot coordinates on a map
but to control and deconflict multiple assets at once. The location of
the simulation exercise closely matches the actual locations they
will use during the live fire portion of their training. (J. Gavin, email
to author, July 27, 2013).
7. Characteristics of the SAVT
The SAVT is the state-of-the-art USMC close air support (CAS) / CFF
trainer. It puts the participants on the observation post (OP), with a fully
integrated real-world equipment suite. There are only nine SAVTs in the DoD.
They require a highly trained operator, one who is not only technically proficient
with running the simulator but also tactically proficient with the mission sets.
The SAVT is scheduled in the range facility management support system
(RFMSS) at least 96 hours prior to the training event. The 96-hour scheduling
requirement inhibits spontaneity; units that have a sudden opening in their
schedule cannot take advantage of it.
SAVT’s instrumented copies of the equipment used by a FiST lead to
excellent transfer of training. If the FiST equipment suite changes, then the
simulator needs to be changed to properly reflect the new equipment. According
to Tony "Phu" DiBenedetto, MSAT operator at EWTGPAC, in July 2013:
The most recent tech refresh was 2010 when the GLTD was
exchanged for a PLDR, and Strikelink and Video Scout capabilities
added. Software upgrades were received with the tech refresh as
well. Ideally the next tech refresh will be around the 2015 time
frame, which may include additional capability such as the JTAC
handheld LTD and a thermal site. (A. DiBenedetto, email to author,
July 30, 2013)
C. SUMMARIZATION OF THE SIMULATORS
Table 5 summarizes the data relating to the three simulators.
Location
  MSAT/SAVT: Only nine in the DoD
  DVTE: Battalion spaces / simulation center
  SAT-M: Anywhere
Personnel requirements
  MSAT/SAVT: Dedicated operator with extensive tactical experience, either a
  former naval aviator or JTAC
  DVTE: If the laptops are networked together there is usually a need for
  technical support, typically provided by the battle sim centers
  SAT-M: User
Availability
  MSAT/SAVT: Scheduled at least 96 hours in advance
  DVTE: 156 suites fully distributed across Marine active duty units
  SAT-M: If the user has a Joint Knowledge Online account they can download
  it; available to all active duty and reservists
Mobility
  MSAT/SAVT: No
  DVTE: Yes, but difficult
  SAT-M: Yes, no challenges
Networkability
  MSAT/SAVT: Once, in 2012, as a proof of concept
  DVTE: Yes
  SAT-M: Yes
Feedback / tutoring
  MSAT/SAVT: Provided by operator
  DVTE: Limited
  SAT-M: Yes, but it is recommended that a trained instructor provide
  occasional feedback
Mission planning
  MSAT/SAVT: Included equipment has mission planning capabilities
  DVTE: Usually done out of the system
  SAT-M: Best done in the system, can be done outside of the system
Input
  MSAT/SAVT: Instrumented copies of actual equipment used by FiST
  DVTE: Mouse and keyboard
  SAT-M: Touchscreen, gyroscope, accelerometer, compass
System updates
  MSAT/SAVT: When equipment updates during tech refresh
  DVTE: Done yearly and pushed through the battle sim centers
  SAT-M: From "the cloud," conducted when a change is deemed necessary
Custom scenarios
  MSAT/SAVT: Yes
  DVTE: Yes
  SAT-M: Yes, but hard to share
Table 5.
Summary of current simulation tools described in use case scenarios
(After A. DiBenedetto, email to author, July 30, 2013; J. Gavin, email
to author, July 27, 2013; J. Gralin, email to author, July 29, 2013)
D. SUMMARY OF CAPABILITIES
Conceptually, SAT-M will provide much of the same functionality as
FOPCSim 2. It will require a robust mission planning capability, embrace the
inputs available on a tablet, and provide as much feedback as can be usefully
incorporated without inhibiting the training process. Ideally it will allow an
untrained Marine to learn how to execute CFF on their own while not developing
improper habit patterns. Table 6 outlines key software capabilities.
Supporting features and their benefit:
Self-location: USMC performance standard, improves user competence
Target-location: USMC performance standard, improves user competence
CFF procedure: USMC performance standard, improves user competence
Utilization of all T/O equipment: USMC performance standard, improves user
competence
Table 6.
SAT-M capabilities
E. FUNCTIONAL REQUIREMENTS
The following requirements were developed under the assumption that
SAT-M will provide only basic CFF training. In some cases they are carried over
verbatim from, and in others they are an extension and modification of, the
exhaustive lists generated by Brannon and Villandre for FOPCSim 1 and by
McDonough and Strom for FOPCSim 2, both of which were VE CFF trainers.
SAT-M is currently in a proof-of-concept stage, with only enough functionality
embedded to allow for execution of the experiment found in Chapter VI.
1.
SAT-M shall provide the capability to monitor, score, and evaluate
trainee's performance using EWTGPAC standards as a template.
2.
SAT-M shall allow the initialization and activation of the simulator
into individual training scenarios as well as higher level training
scenarios using high level architecture (HLA) connectivity.
3.
SAT-M shall provide emulated (i.e., computer generated) forces
capable of reacting to indirect fire.
4.
The SAT-M simulation shall replicate both enemy and friendly
forces including tanks, trucks, personnel carriers, command and
control vehicles, reconnaissance vehicles, forward area air
defense weapons, dismounted infantry with their associated
weapons, mortars, artillery and rockets.
5.
SAT-M shall permit users to design new scenarios and revise
existing scenarios.
6.
SAT-M shall provide the capability to generate new scenarios for
the ultimate purpose of mission rehearsal.
7.
SAT-M shall provide the capability to place targets and friendly
units at specified coordinates on the simulated terrain. Input
screen allows user to enter number, type, location of targets,
whether they are moving or not, whether they are displayed
sequentially or all at once.
8.
SAT-M’s simulated terrain and environment shall be provided with
the following:
a.
SAT-M shall use the same terrain database as used in the
DVTE CAN (threshold). The SAT-M shall allow the user to
download imagery and topographic information from
commonly used internet mapping sites, for example Google
Earth. SAT-M will then incorporate the mapping data into a
scenario so the user can train with it (objective).
b.
The following image quality requirements shall apply as a
total contribution to the complete integrated visual system
(terrain database, image generation system and visual
system). Provide the full spectrum of day and night visibility
to include sunlight and moonlight effects on terrain. Visual
resolution of the simulated terrain shall ensure a true
perspective is maintained when distance to an object
increases or decreases. The visual system shall be capable
of displaying personnel, vehicles, and weapon effects.
Objects shall appear in proper size with distinguishing
characteristics for the indicated range as viewed through the
replicated sighting devices. Terrain feature clarity shall be
sufficient to provide appropriate depth perception and distant
vision.
9.
The SAT-M system shall train and evaluate joint forward observers.
The SAT-M will also provide the capability to exercise combined
arms to train fire support teams (objective using HLA).
10.
The SAT-M will be used to train tasks/events listed in NAVMC
3500.7, Artillery Training and Readiness Manual dated 15 March
2007, and NAVMC 3500.42A, Tactical Air Control Party Training and
Readiness Manual dated 8 October 2008.
11.
The SAT-M shall replicate laser range finder / designator
equipment (e.g., GLTD, PLDR) to include target observation, fixed
and moving target tracking skills.
12.
The SAT-M shall simulate shell bursts to include sound effects of
the required projectiles, anywhere in the target area with an
observer-target distance of six kilometers (threshold) or 12
kilometers (objective).
13.
The SAT-M shall simulate subsequent bursts, specified adjustment
correction data given by the forward observer, until a fire for effect
or target kill is achieved. Adjustments shall accommodate single
gun, single round missions through multiple guns / multiple rounds /
multiple (projectile type / fuse type) missions with a threshold of up
to six guns.
14.
The SAT-M shall measure and record the call for fire and the distance
between the target and the impact point of the round(s).
15.
Forward observer calls for fire and the adjustment of fires shall be
entered as keyboard and dropdown menu inputs to replicate voice
procedures (threshold). SAT-M will also allow CFF and adjustments to
be executed via voice recognition (objective).
16.
The SAT-M shall incorporate center gun and adjustment for final
protective fire missions.
17.
The SAT-M shall simulate smoke screens drifting in a manner
appropriate for a 0-20 mph wind and for variable winds to cover all
directions (360 degrees).
18.
The SAT-M shall simulate illumination and coordinated illumination
missions drifting in a manner appropriate for steady and variable
winds up to 20 mph.
19.
The SAT-M shall determine when rounds or moving targets shall be
sensed as unobserved or lost due to the effect of terrain elevation
features or obscured visibility.
20.
The SAT-M shall provide height of burst (HOB) variations and the
ability to adjust HOB for smoke, illumination, and area adjust fire
missions and high explosive/mechanical time (HE/MT). Variable
HOB to include simulation of air burst without ground effect, air
burst with ground effect and mixed bursts of both air and ground
effects to include any direction and speed.
21.
The SAT-M shall provide simulated air, graze, and mixed bursts
accurate to scale and size with respect to the observer-target
range.
22.
The SAT-M shall delay the distribution of rounds by 10 seconds
between subsequent volleys for multiple round missions.
23.
The SAT-M shall simulate time of flight of both low and high angle
fire missions. The user may select a compressed time of flight
option upon scenario selection.
24.
The SAT-M will include full function simulation of the following
equipment with the latest technology: binoculars, compass with mils
and degrees, PLDR, IZLID, thermals, DAGR, Vector 21b and PRC-117. As new equipment hits the fleet it will become available to train
with in SAT-M.
25.
The field of view shall be 45 degrees. The user will have the ability
to rotate their field of view laterally to achieve 360 degrees of
visibility. The user will also be able to rotate their field of view 90
degrees up and down to achieve 180 degrees vertical field of view.
26.
The SAT-M shall replicate massing of fires at the battery level.
27.
The SAT-M shall provide immediate after action review for a given
training session (threshold) and archive training data for all
students as historical data to focus future training (objective).
28.
The SAT-M shall provide mission replay in which all rounds fired
can be recalled and repeated.
29.
The SAT-M shall provide an instructor tutorial guide/demonstration
program.
30.
The SAT-M shall provide an instructor mode where one tablet can
be set as the instructor, and used to view and manipulate what is
happening in the student’s tablets.
a.
The instructor shall be able to damage units.
b.
The instructor shall be able to regenerate damaged units.
c.
The instructor shall be able to set unit behavior and assign
movement paths to units (i.e. enemy unit is hit by indirect
fire, and responds by running, seeking cover, or returning
fire).
d.
The instructor shall be able to add or remove equipment
from the student’s kit.
e.
The instructor shall be able to add, remove, and move
indirect fire assets.
f.
The instructor shall be able to add or remove enemy,
neutral, and friendly units.
g.
The instructor shall be able to control the day night cycle,
weather and environmental effects.
h.
The instructor shall be able to observe the student’s current
and past missions, as well as pertinent data such as round
accuracy, transmission errors, and recommendations.
31.
The SAT-M shall compute "did-hit" grid location and HOB for each
weapon and mean point of impact and HOB for each fire mission.
32.
The SAT-M shall perform all known and future types of fire
missions.
33.
The SAT-M shall provide the functions needed to initialize and
control the training exercise. The user will have the ability to re-enter
incorrect data.
34.
The SAT-M shall record data with a time-stamp in order to identify
significant points during the playback to highlight and illustrate
lessons learned.
35.
The SAT-M shall provide a means to initiate and terminate the
training exercise.
36.
Degraded modes will be selectable by the SAT-M at initialization
and during any part of the exercise. Examples include ammunition
status, navigation malfunctions, communications problems, no
binoculars, etc.
37.
SAT-M shall provide robust mission planning tools.
a.
SAT-M shall enable the user to plot positions using a virtual
protractor laid over a digitized 1:50,000 or 1:100,000 map.
b.
SAT-M will provide a palette of operational terms and
graphics to mark the map with battlefield control measures,
friendly, neutral, unknown and enemy units. The user will be
able to select an appropriate color when marking the map.
c.
SAT-M will enable the user to place notes and comments on
and beside the marks placed on the map.
d.
SAT-M will have “under the finger” magnification to
compensate for touchscreen inaccuracy.
e.
SAT-M will correlate user map markings to the location of
the virtual units to check on the accuracy of the markings.
The system will then provide feedback to the user based on
their accuracy.
f.
SAT-M will have virtual note paper, enabling the user to write
on the paper with a popup keyboard, but also draw on it with
a selection of pen widths and colors.
g. SAT-M will have a virtual clipboard where the user can
construct and record their CFF missions, mark round
impacts, and target numbers.
38. SAT-M will provide mission feedback.
a.
SAT-M will provide real-time prompting, dependent on user
set tutoring level, to assist users who are having trouble with
mission execution.
b.
SAT-M will give end-of-mission feedback based on analysis
of expert behavior, and what they would have done in a
similar situation.
F. NONFUNCTIONAL REQUIREMENTS
1. Usability
a.
The SAT-M shall train and evaluate joint forward observers.
b.
The SAT-M shall provide the capability to exercise combined
arms to train fire support teams using HLA connectivity.
c.
Employment tactics. SAT-M shall be operational in any
environment in which a tablet computer can operate, to
include garrison and field environments, classroom
environments, and aboard amphibious ships. This will make
SAT-M available at all locations throughout the world where
Marines are stationed.
d.
Employment prerequisites. SAT-M shall not require special
support requirements such as site preparation, storage
facilities or changes to other items of equipment at the time
of initial operational capability (IOC).
e.
Distribution. SAT-M shall be distributed according to tablet
operating system’s paradigm. For iOS it will be the App
Store, for Android it will be Google Play.
f.
SAT-M will be downloaded through the Joint Knowledge
Online gateway. This is to include the baseline program and
additional scenarios and environments.
g.
Control. SAT-M shall be controlled via the Army Knowledge
Online App Store and Google Play gateway.
2. Reliability
a. SAT-M shall be reliable, available and maintainable.
3. Performance
a. SAT-M shall be able to operate in a stand-alone mode.
b.
SAT-M shall replicate operational equipment platforms when
practical to provide training simulation.
c.
In accordance with DoD Directive 5000.59 all systems
currently under development shall be compliant with HLA.
d.
SAT-M shall realistically replicate all subsystem sound
effects, as well as inter-subsystem communication.
e.
Subsystem sound effects shall be in proportion to that of the
actual weapon operations.
f.
SAT-M shall simulate the required sensors and controls for
each subsystem platform to support required training tasks
and tactical exercises.
g.
The training system's sensors and controls shall represent
the physical appearance and replicate the performance of
each platform's sensors and controls.
4. Supportability
a.
SAT-M shall be designed for ease of preventive
maintenance, repair maintenance, and servicing.
b.
SAT-M will not require new Marine Corps resources or
personnel.
c.
SAT-M will run on Android and iOS tablets.
G. PRODUCT FEATURES
1. The final product shall include interactive 3D graphics with
simulated representation of actual terrain; digitized 1:50,000 and
1:100,000 maps with a robust mission planning capability; standard
JFO equipment virtualized and usable; user configurable system
feedback; and an instructor mode to monitor students and adjust
the scenario on the fly.
2. Inputs
a. SAT-M will use device gyroscopes and accelerometers to
enable the user to adjust their view in the VE by physically
moving the tablet, as if it were a window into the VE.
b. SAT-M will enable users to adjust the view in the VE by
using the touch screen to pan and swipe the view.
c. SAT-M will provide a virtual keyboard that can be stowed
when not in use, for entry of text as needed during mission
execution.
3. Voice input for user action (future)
4. Graphical user interface (GUI) input for user action
H. CONFIGURATION MODULE
1. Specify types, sizes, and location of targets
2. Stationary and moving targets (future)
3. Choose different terrain sets
4. Choose different observation post locations
5. Choose lensatic or M2 compass (degrees or mils)
6. Allow entry to configuration module during run time
I. VIEW MANAGER MODULE
1. Binocular view
2. M2 or lensatic compass view
3. Target designator view
4. Thermal view
5. Naked eye view
6. Night vision device (NVD) view
J. USER ACTIONS FIRE MISSION PROCEDURE
1. Choose type of fire mission
a. Adjust fire
b. Fire for effect
c. Immediate suppression
d. Immediate smoke
2. Choose target location method
a. Grid
b. Polar
c. Shift from known point
d. Laser polar
3. Input target description (drop down list to pick from)
4. Choose method of engagement
a. High explosive (HE) / Quick
b. HE / Time
c. HE / Variable time
d. White phosphorus (WP)
e. WP M825
f. Improved conventional munitions (ICM)
g. Illumination
5. Enter subsequent corrections (see the sketch following this list)
a. Left
b. Right
c. Add
d. Drop
e. Up
f. Down
6. Enter observer-target (OT) direction
7. End the current mission
8. Enter refinements
9. Establish known points
10. Utilize standard operating procedures (SOP) for immediate
missions
11. Allow for sequential viewing of targets
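A minimal Python sketch, illustrative only and not drawn from the SAT-M source,
of how the subsequent corrections above could be combined with the OT
direction to shift the simulated aim point on a flat grid:

    import math

    def apply_correction(aim_point, ot_direction_deg, lateral_m=0.0, range_m=0.0):
        """Shift an aim point by observer corrections.
        lateral_m: positive = right, negative = left (perpendicular to the OT line).
        range_m:   positive = add,   negative = drop (along the OT line).
        ot_direction_deg is the observer-target direction as a compass bearing."""
        ot = math.radians(ot_direction_deg)
        along = (math.sin(ot), math.cos(ot))    # unit vector along the OT direction
        right = (math.cos(ot), -math.sin(ot))   # unit vector 90 degrees right of OT
        east = aim_point[0] + range_m * along[0] + lateral_m * right[0]
        north = aim_point[1] + range_m * along[1] + lateral_m * right[1]
        return (east, north)

    # "Right 200, drop 100" with an OT direction of 045:
    print(apply_correction((5000.0, 5000.0), 45.0, lateral_m=200.0, range_m=-100.0))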
K. AFTER ACTION REVIEW
1. Immediate playback of last mission
a. Playback controls: FF, pause, and rewind control bar
b. Show grid location and error for target and each impact
c. Provide recommendations for order of mission execution if
user deviated from subject matter expert (SME) order.
d. Advise user when they skipped a step, did not appropriately
calculate a value, did not double check a plot or calculated
value, or failed to observe round impacts.
2. Save results for later review or print out based on user’s name.
a. Compile results for user.
V. SYSTEM DEVELOPMENT
A. BACKGROUND
Examining SAT-M through the lens of the model-view-controller (MVC)
design pattern was the first step in developing our application. We used the
design pattern to explore the differences and similarities between desktop /
laptop and tablet VE trainers. This helped us determine where to focus our
limited resources and therefore maximize development efforts. Throughout the
process we leveraged validated CFF VEs, primarily ObserverSim.
1. Model-View-Controller
MVC assigns the software objects that make up a program “one of three
roles: model, view, or controller” (Apple Inc., 2013). Conceptually each software
object is an isolated entity that does not require knowledge of how other objects
work. The objects interact by providing information when requested and asking
for information when needed.
We used the MVC pattern to examine our development efforts based on
the roles that need to be fulfilled by each object, rather than in a strict object
oriented programming sense. We abstracted the concept and applied it to the
entire program, splitting it into the roles of model, view or controller.
The model portion of the software “encapsulate[s] the data specific to an
application and define[s] the logic and computation that manipulate and process
that data” (Apple Inc., 2013). The view portion of the software knows how to
display, and might allow users to edit, the data from the application’s model
(Apple Inc., 2013). The controller portion of the software acts as an intermediary
between the view portion and the model portions of the program. It is a conduit
through which the view learns about changes in the model and vice versa (Apple
Inc., 2013).
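A minimal Python sketch of this role separation; the classes and names are
illustrative, not SAT-M's actual code:

    class TargetModel:
        """Model: holds target data and the logic that manipulates it."""
        def __init__(self, grid):
            self.grid = grid
            self._subscribers = []

        def subscribe(self, callback):
            self._subscribers.append(callback)

        def move(self, new_grid):
            self.grid = new_grid
            for callback in self._subscribers:  # notify interested parties
                callback(self.grid)

    class TargetView:
        """View: knows only how to display the data it is handed."""
        def render(self, grid):
            print("Target displayed at grid " + grid)

    class TargetController:
        """Controller: the conduit; model and view never reference each other."""
        def __init__(self, model, view):
            self.model = model
            self.view = view
            model.subscribe(view.render)

        def user_moved_target(self, new_grid):
            self.model.move(new_grid)

    controller = TargetController(TargetModel("123 456"), TargetView())
    controller.user_moved_target("124 455")  # view re-renders via the controller's wiring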
SAT-M was initially conceived to run on both Android and iOS devices,
thereby including both tablets and smartphones. Utilizing the MVC design pattern
allows for code reuse, as only the portion of the software that interacts directly
with the system would need to be changed.
What follows is a discussion of the model and view aspects of the MVC
pattern as it pertained to our development effort. As we were unable to obtain the
source code or design documents for ObserverSim, there is little we could infer
about its implementation of the controller. We were therefore unable to leverage
ObserverSim’s controller in our development efforts.
Incidentally, MVC is the design pattern driving the Cocoa and Cocoa
Touch frameworks used by Apple in their iOS software development kit.
a. Model
The model portion of a VE CFF program is comprised of the data
necessary for it to run, as well as the associated logic. This includes textures,
models and terrain data. It also includes the functionality of that data. For
example, the virtualized Vector 21b is composed of both screen display
information and a program that governs the Vector 21b’s response behavior
when it is used.
When developing SAT-M we knew that there would be almost no
difference between ObserverSim’s and SAT-M’s models; the data for the
Twentynine Palms terrain and a virtualized Vector 21b are the same regardless
of whether they are displayed on a desktop or a tablet system.
b. View
The view portion of a VE CFF program consists of how information
is presented to, and how the program accepts inputs from, the user. Due to the
differences in both input modalities and screen size between laptop / desktop
and tablet computers, SAT-M and ObserverSim diverge the most in this area,
making it our primary focus of development effort.
Our effort started by mapping the mouse and keyboard inputs of
ObserverSim to one of the input modalities of the tablets. At our disposal were
the multi-touch enabled touchscreen, accelerometers, and gyroscopes. From
conception we knew that we wanted to have the user’s perspective controlled by
the accelerometers and gyroscopes. Adding a single finger swiping interface to
control the view was discussed but never implemented. At one point we had the
program automatically switch to mission planning mode when the tablet was
turned horizontally. The idea was conceptually interesting, but impractical as it
caused issues when users put the tablet down to take notes and the display
unexpectedly changed.
Once it was established that accelerometers and gyroscopes would
control perspective, the remaining inputs were either mapped to the multi-touch
enabled touchscreen or eliminated. Translating the mouse input of ObserverSim
to the tablet was relatively easy: instead of pointing and clicking, the user
touches the desired button. There are some challenges with this methodology,
as a finger occludes the screen, is less precise, and is larger than a mouse
pointer. Whenever possible, to avoid difficulty when a user has only one hand
available, we made the buttons large and placed them near the edge of the
screen, thereby allowing the user to hold the tablet and press the buttons with
their thumb. Figure 6 is a screen shot of SAT-M in the Vector 21b view.
screen shot of SAT-M in the vector 21b view.
Buttons for selecting devices are arrayed on the left side of the
screen, permitting easy use when the tablet is held in the left hand. Future
developments include giving the user the option of choosing which side to place
the button bar.
Figure 6.
Screen capture of SAT-M’s Vector 21b view
In only one instance does ObserverSim use the right mouse button,
and that is for gathering range information with the Vector 21b. In ObserverSim,
if both the left and right mouse buttons are pressed while in Vector 21b mode,
the heading and distance to the object under the “pointing circle” are displayed.
In Figure 6 the “pointing circle” is just below the technical vehicle. This rare use
of the right mouse button caused problems for some of the research participants
in the experiment (see Chapter VI). Some of the participants who had difficulties
had to be told to use the right button, as they would try to left-click on the range
button and only get heading. The multi-touch enabled touchscreen allows SAT-M
to avoid this confusion: the user can press both buttons at the same time, or one
at a time, as they see fit. In Figure 6, the two aforementioned buttons are to the
left and right of the Vector 21b view; the direction button appears as a plus sign
in a circle and the range button as a white arrow.
Mapping the keyboard from ObserverSim to SAT-M was more
difficult than mapping the mouse. In the interest of limited development time, we
chose to use drop-down menus rather than include a fully functional keyboard.
This resulted in the elimination of user-controlled walking motion. In
ObserverSim, pressing W, A, S or D moves the user forward, left, backwards or
right, respectively. Just as in FOPCSim, we chose to have the user stationary;
with one less input to map we were able to keep SAT-M’s controls simple.
ObserverSim not only uses the keyboard to move around the VE
world, but also uses it for entering mission data. In SAT-M we chose to either
auto-populate the mission data or use drop-down menus. Auto-populating the
data required additional logic to ensure that the user had collected the data that
was to be auto-populated. We ensured that the drop-down menus had the
pertinent options given the missions the user could execute. Using drop-down
menus limits the flexibility of the software but allowed us to avoid the
implementation of a virtual keyboard and the underlying logic for parsing the
inputs. In the experiment (see Chapter VI), a few of the research participants had
an issue with the way we mapped ObserverSim’s (Figure 7) keyboard inputs to
SAT-M’s touchscreen.
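A minimal Python sketch of the gating idea behind auto-population, with
hypothetical field names: only data the user has actually collected is filled in,
and any remaining field falls back to a drop-down entry.

    collected = {}  # filled in as the user employs the virtual equipment

    def record_measurement(field, value):
        collected[field] = value

    def build_cff_brief():
        """Auto-populate only fields backed by a real measurement."""
        brief = {}
        for field in ("observer_grid", "target_direction", "target_range"):
            if field in collected:
                brief[field] = collected[field]       # auto-populate
            else:
                brief[field] = "<select from menu>"   # fall back to drop-down
        return brief

    record_measurement("observer_grid", "11S NV 1234 5678")  # from the DAGR view
    record_measurement("target_direction", 4730)             # mils, from the Vector 21b
    print(build_cff_brief())  # target_range still needs a menu entry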
The Dell Precision M6300 laptops that come with the DVTE suite
have 17” displays, roughly three times the area of the Asus Transformer Pad
Infinity’s 10.1” display. To compensate for the smaller screen size, the mapping
of icons and viewable area was altered. Figures 7 and 8 are screen captures
from ObserverSim and SAT-M, respectively, both taken in the naked eye view.
Differences between the two include the placement of the JFO tool icons, the
relative size of the icons, and the decision to have SAT-M’s icon tool bar occlude
the background.
In SAT-M the JFO tool icons were placed in an occluding bar on the
side of the screen to facilitate the touchscreen interface. As mentioned earlier,
having the icons on the side of the screen makes them easier to select when the
user is holding the tablet. Having the toolbar occlude the background creates a
region of the screen where the user’s only interaction is tool selection. This
prevents the user from accidentally sending another command if they miss the
desired tool. For example, if the interface allowed finger swiping to change
perspective, the system might interpret a missed tool touch as a finger swipe,
changing the viewer’s perspective and potentially disorienting them.
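This design choice can be illustrated with a short Python sketch; the toolbar
width and button layout are hypothetical:

    TOOLBAR_WIDTH_PX = 120  # assumed width of the occluding tool bar

    def route_touch(x_px, y_px, buttons):
        """Inside the tool bar a touch can only select a tool or be ignored;
        it is never interpreted as a view-changing gesture."""
        if x_px <= TOOLBAR_WIDTH_PX:
            for name, (x0, y0, x1, y1) in buttons.items():
                if x0 <= x_px <= x1 and y0 <= y_px <= y1:
                    return ("tool", name)
            return ("ignored", None)      # missed button press is swallowed
        return ("world", None)            # outside the bar: normal VE input

    buttons = {"compass": (10, 10, 110, 110), "vector21b": (10, 130, 110, 230)}
    print(route_touch(60, 60, buttons))    # ('tool', 'compass')
    print(route_touch(60, 120, buttons))   # ('ignored', None)
    print(route_touch(400, 300, buttons))  # ('world', None)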
Figure 7.
Screen capture of ObserverSim’s naked eye view
SAT-M’s icons are larger (Figure 8), relative to the screen, than
ObserverSim’s icons (Figure 7). If they had maintained the same size ratio they
would be difficult to select, inhibiting ease of use. A nice side effect of the larger
icons is that they do not need extra text describing each icon’s function, as can
be seen in ObserverSim’s screen capture (Figure 7).
Figure 8.
Screen capture of SAT-M’s naked eye view
B. INTERFACE DESIGN STUDY
An interface design study was performed to facilitate user interface
development. The intent was to create an effective user interface for SAT-M,
including scenario selection, mission execution and mission planning. Mock-up
screens were created in HTML. These allowed a user to flow through a mission,
starting with the creation of a user profile. Though much of this work did not end
up in the current build of SAT-M, it did facilitate SAT-M development by providing
the development team with conceptual images and storyboards. Appendix A
includes the complete interface design study.
C. OPERATING SYSTEM AND HARDWARE SELECTION
At the time SAT-M development began, the two dominant mobile
operating systems were iOS and Android. To ensure our program would reach
the widest audience, we decided to develop software for both.
SAT-M is not dependent on cellular network access, which simplified our
platform choices. A 3rd generation iPad with 16 GB of internal flash storage, the
least expensive and most up-to-date model available at the time, represented the
iOS platform. A comparable Android device, the Asus Transformer Pad Infinity,
model TF700T, was chosen for its similar performance, screen size (10.1” versus
9.7” on the iPad) and inputs, which include gyroscopes, accelerometers, and a
multi-touch enabled touchscreen.
D. BACKEND LIBRARY SELECTION
Due to our requirement that SAT-M run on both Android and iOS, the Unity
3D game engine was chosen as the backend library. Unity 3D allows the
developer to build the software application once and “compile” it for different
target platforms, making it relatively easy to port from one platform to another. An
additional advantage of Unity 3D is that a developer license is relatively
inexpensive and is charged per developer; once “compiled,” the runtime
application can be distributed without additional license fees (Unity
Technologies, 2013).
E. SOFTWARE PRODUCTION
The software was not written by the authors; we used the visual simulation
and game-based technology team located at NPS. As the team supports the
MOVES Institute, the authors had ample access to the developers, allowing for
quick turnaround on any issues or needs for clarification.
The basic application premise was discussed with the software developers
early in the project. Because using accelerometers and gyroscopes to control the
perspective (vis-à-vis W2W) in a CFF VE had not been done before, numerous
tech demonstrations were created to validate the idea. Once both the Asus and
the iPad satisfactorily demonstrated W2W, the user interface was discussed and
planned.
The HTML interface designed and tested previously was demonstrated to
the development team, who then implemented the button graphics and logic. The
devices to be simulated in the software were discussed with the team as well as
the general CFF process. This gave the team enough information to be able to
implement a simplistic simulation of each device required by the software.
The simulated 3D terrain was built using real-world data, with subsequent
modifications to increase the graphical fidelity. The 3D models were taken from
an in-house library and customized to support the application. Audio assets came
from a commercially licensed audio library and were adjusted to provide adequate
audio feedback to the user.
Regular meetings with the development team allowed for frequent
feedback. This process ensured the limited resources for the development of the
software were efficiently used.
F. LIMITATIONS
As mentioned earlier, during the development of SAT-M there was an
attempt to have it mirror ObserverSim as much as possible. That effort was
restricted by the limited amount of time and development resources available for
the project. The version of SAT-M used in the experiment (see Chapter VI) had a
number of key differences. In some cases the authors specifically wanted SAT-M
to be different from ObserverSim.
• SAT-M did not have a virtual keyboard. When the CFF and position
report (POSREP) are generated on ObserverSim, the user inputs
much of the data via the keyboard. To get around this, SAT-M either
auto-populated the information when prompted by the user or used
drop-down menus.
• ObserverSim had a fully functional DAGR. The buttons on the
virtual DAGR functioned as they would on a real DAGR. SAT-M
used a static screen shot of the DAGR’s present position screen.
• In developing the scenario for the experiment in ObserverSim, it
was not possible for the authors to precisely select the set of
equipment they wanted to have available to the user. As SAT-M
was developed from scratch, only what was appropriate to the
experiment’s tasks was presented to the user. Some of the
extraneous equipment in ObserverSim included the clipboard, and
the NVGs and StrikeLink interconnect in the Vector 21b view.
Figure 7 is a screen capture from ObserverSim running the
experiment’s scenario; the extraneous clipboard icon is just below
the compass icon.
• In ObserverSim there was no need for the user to echo back either
the message to observer (MTO) or the shot call. Failure to respond
to the shot call from the fire direction center (FDC) will not
jeopardize the fire mission, but failing to respond to the MTO will. It
was the authors’ opinion that not including the required MTO
response resulted in negative training, so we added this call to
SAT-M.
G. CONCLUSION
The process was completed when a tablet version of CFF VE software
was created that would allow the authors to satisfactorily compare it to similar
software on a desktop / laptop PC. Key functionality of the software was
comparable, which allowed the experiment to focus on the disparate input
modalities.
VI. EXPERIMENT
A. BACKGROUND
McDonough and Strom, in their work on FOPCSim 2, showed that a PC-based VE CFF trainer can improve performance. Though their results were
constrained by not being able to conduct a graded live fire event, there was
enough evidence to show that the software they developed did indeed improve
student performance. In essence, SAT-M is an updated version of FOPCSim: it
brings the simulator to tablet systems while updating it to reflect eight years of
technology advancement. The experiment was designed to see how viable the
input modalities of the tablet system are when compared to the existing standard
set by desktop / laptop systems. The focus was not on whether SAT-M could
improve training but on whether window to the world (W2W) is a viable way to
conduct CFF training, and to try to discover why or why not.
B. HYPOTHESIS
H0: Users have no preference between using a laptop based VE CFF
trainer and using a tablet based VE CFF trainer.
H1: Users will prefer to use one of the devices over the other.
This is the overall hypothesis of the study. However, additional data were
collected to help elucidate why the participants may or may not prefer one
system to the other.
C. METHOD
1. Participants
A total of 32 active duty personnel participated in the study. They varied in
rank from O-1 to O-4; one was female and 31 were male. The participants were
drawn from two populations: those trained in CFF and those not trained in CFF.
An individual was considered trained if they had been to a school dedicated to
combined arms training (i.e., Field Artillery School, JFO course, or TACP course)
or had been designated by their commanding officer (CO) to conduct CFF. There
were exceptions to the classification. In a number of cases USMC weapons
platoon commanders were classified as trained; a weapons platoon commander
is the leader of a company’s FiST and would have extensive on-the-job training.
In another case, an individual who had had extensive CFF experience over a
decade ago, and none since, was classified as untrained.
2. Apparatus and Location
a. Equipment
Equipment included a standard USMC-issued Deployed Virtual
Environment Training (DVTE) suite running the ObserverSim software.
ObserverSim is a PC CFF simulation developed for the Marine Corps and based
on FOPCSim. The tablet PC chosen was the ASUS Transformer Pad Infinity,
model TF700T, which ran the SAT-M tablet-based simulation. Additional
equipment included stopwatches, clipboards, writing materials, video recording
equipment, and two laboratory spaces.
b. Location
The experiment was conducted at the Naval Postgraduate School,
in the Watkins Hall MOVES Laboratory and the Glasgow Hall Human Systems
Integration Laboratory.
3. Scenario
Despite the differences in graphical representation and fidelity, the
authors attempted to make the scenarios on the two devices as similar as
possible. The overall scenario placed the user in Twentynine Palms with two
targets: an open-backed pickup that represented a technical, and a T-72 Russian
main battle tank. The targets were placed such that the user would not be able to
see both at once. They were outside of danger close and within unaided visual
range. The scenario was set in the day. The indirect fire unit was “kilo” battery,
consisting of 155 mm howitzers. Additional differences between SAT-M and
ObserverSim are discussed in Chapter V, Section F.
4. Procedures
a. Tasks
Tasks conducted were derived from the Brannon and Villandre CTA
produced in 2002 (the full text of the CTA is on pages 17 through 42 of Brannon
and Villandre). They include the use of GPS for self-location, use of a compass
to determine bearing to a target, use of a Vector 21b common laser range finder
(CLRF) to determine bearing and distance to a target, and the use of the
software to generate, send, and then execute a CFF mission. These are common
tasks that a JFO would complete in order to build and execute a fire mission.
The experiment went as follows:
• Obtain consent
• Complete proficiency questionnaire
• Execute Protocol “A,” using either the laptop or tablet; device order
was semi-randomly selected. Protocol “A” is:
  • Three minutes of exposure and system familiarization
  • Task #1—Determine self-location
  • Task #2—Determine bearing to target A with compass
  • Task #3—Determine bearing and distance to target B with
  CLRF
  • Task #4—Describe icon used to transmit CFF
  • Task #5—Generate and execute CFF brief
• Complete Likert scale and open ended questionnaire.
• Execute Protocol “B,” which is identical to Protocol “A” except that
the participant switches devices, and the Likert scale and open
ended questionnaire has four additional questions that directly ask
the participant about device preference, as well as a final open
ended question.
• Complete a demographic questionnaire.
Examples of the protocols and questionnaires can be found in appendix B,
experimental design details.
b.   Conditions

The experiment was a two by two cross-over design, as shown in Table 7. This allowed the authors to control for the possibility that a participant might prefer one device over the other due to the order in which the devices were presented.
                                  Experience
Device            Trained                           Untrained
Tablet            Trained observer using tablet     Untrained observer using tablet
Desktop PC        Trained observer using PC         Untrained observer using PC

Table 7.   Two by two cross-over design
VII.  RESULTS

A.   GENERAL
The overarching goal of the experiment was to determine how viable the
input modalities of the tablet system are when compared to the existing standard
set by desktop / laptop systems. However, as the two platforms were running
different software it is possible that platform preference was due to the software
and not the hardware. As mentioned in Chapter V, to reduce participant bias
there was a concerted effort to have SAT-M’s interface and functionality mirror
that of ObserverSim’s.
B.   LIKERT SCALE QUESTIONS

Two sets of 10 identical Likert scale questions were asked during the course of the experiment. Of the 10 questions, six had to do with the interface, two pertained to the system's effectiveness as a CFF trainer, and the last two related to the system's ability to mimic the real world physical activity and motion required to execute the tasks. Each question set pertained to one of the systems, either laptop or tablet. The Likert scale questions were analyzed using a Wilcoxon Signed-Rank test. A two-tailed α of 0.05 was used. Table 8 contains the results of this analysis. Five out of the 10 questions had a statistically significant difference between the participants' answers at the 0.05 threshold. In the five cases where there was statistical significance, the participants preferred the tablet system to the laptop system.
Two additional Wilcoxon Signed-Rank tests were evaluated. The first was on a summation of all 10 Likert questions. The test was run to see if there was an overall device preference. In the second test, as some of the Likert questions were very similar to each other, the average scores of these questions were used. This was done to eliminate the possibility that the same sort of question was overly influencing the results.
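As a concrete illustration of the kind of test applied here, the following is a minimal sketch of a two-tailed paired Wilcoxon signed-rank test in R (the tool named in the Analysis Tools section below); the response vectors are hypothetical placeholders, not the study's data.

    # Illustrative only: hypothetical Likert responses for one question,
    # paired by participant (tablet vs. laptop); not the study's actual data.
    tablet <- c(5, 4, 5, 3, 4, 5, 4, 2, 5, 4)
    laptop <- c(3, 4, 3, 3, 2, 4, 3, 2, 4, 3)

    # Two-tailed paired Wilcoxon signed-rank test at alpha = 0.05.
    # Zero differences are dropped, and exact = FALSE requests the normal
    # approximation, since ties and zeros prevent an exact p-value.
    wilcox.test(tablet, laptop, paired = TRUE,
                alternative = "two.sided", exact = FALSE)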
1.   Analysis of Likert Questions

a.   Question 1: Training with this Device on a Regular Basis Will Improve My Ability to Conduct CFF in the Field
With a p-value of less than 0.002, the participants' responses provided a greater indication that the tablet system will improve their ability to conduct CFF in the field when compared to the laptop system.
b.   Question 2: It Was Difficult Navigating through the Device to Find the Appropriate Information While Completing the Tasks
With a two-tailed p-value of 0.0794, there is no indication of system
preference.
c.   Question 3: The Real-World Physical Actions and Conducting a Task in the Virtual Environment Are the Same

With a two-tailed p-value of less than 0.001, the participants' responses provided a greater indication that the actions conducted in the physical world and the actions conducted in the tablet system's VE are more similar than is the case for the physical world to laptop system comparison.
d.   Question 4: The Button Icons Provide Intuitive Inference of What Would Happen When They Are Pressed
With a two-tailed p-value of 0.44, there is no indication of system
preference.
e.   Question 5: It is Easy to Move through the Screens without Losing One's Place
With a two-tailed p-value of 0.22, there is no indication of system
preference.
f.   Question 6: Having This Software Available at My Unit Would Improve My Unit's Ability to Perform Their Mission
With a two-tailed p-value of 0.051, there is no indication of system
preference.
g.   Question 7: It Was Hard to Understand what the Buttons Did
With a two-tailed p-value of 0.24, there is no indication of system
preference.
h.   Question 8: The 3D View Interface Was Intuitive

With a two-tailed p-value of less than 0.04, the participants' responses provided a greater indication that the tablet system's 3D view interface is more intuitive than the laptop system's 3D interface.
i.   Question 9: The Device Accurately Represents the Real World Physical Motion Required to Conduct the Task

With a two-tailed p-value of less than 0.002, the participants' responses provided a greater indication that the tablet system more accurately represents the real world physical motion required to conduct the task than the laptop system.
j.   Question 10: The Overall Interface is Intuitive

With a two-tailed p-value of less than 0.009, the participants' responses provided a greater indication that the tablet system's interface is more intuitive than the laptop system's interface.
k.   Summation of All 10 Likert Question Answers
With a two-tailed p-value of less than 0.002, the participants’
responses indicated an overall preference for the tablet system over the laptop
system.
l.   Summation of Likert Questions, Eliminating Redundancy
Four sets of two Likert scale questions were similar to each other. For example, Q8: The 3D view interface was intuitive, and Q10: The overall interface is intuitive, are in essence asking the same thing. To prevent these and similar redundant questions from overly influencing the results, the averages of the redundant questions were used in the calculation. The similar questions are Q2 and Q5, Q3 and Q9, Q4 and Q7, and Q8 and Q10. Questions Q1 and Q6 were unique and their values were added as is. The resulting two-tailed p-value was less than 0.001; when redundancy was eliminated, the participants showed an overall preference for the tablet system over the laptop system.
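A minimal sketch of this redundancy-elimination step in R, assuming hypothetical response tables named tablet and laptop with one column per question (the placeholder matrices below are not the study data):

    # Illustrative only: collapse the similar question pairs before the
    # overall test (Q2/Q5, Q3/Q9, Q4/Q7, Q8/Q10 averaged; Q1 and Q6 kept as is).
    set.seed(1)
    tablet <- as.data.frame(matrix(sample(3:5, 32 * 10, replace = TRUE), ncol = 10,
                                   dimnames = list(NULL, paste0("Q", 1:10))))
    laptop <- as.data.frame(matrix(sample(1:5, 32 * 10, replace = TRUE), ncol = 10,
                                   dimnames = list(NULL, paste0("Q", 1:10))))

    # Per-participant composite score with redundant pairs averaged.
    composite <- function(d) {
      with(d, Q1 + Q6 + (Q2 + Q5) / 2 + (Q3 + Q9) / 2 + (Q4 + Q7) / 2 + (Q8 + Q10) / 2)
    }

    # Paired Wilcoxon signed-rank test on the composite scores.
    wilcox.test(composite(tablet), composite(laptop),
                paired = TRUE, alternative = "two.sided", exact = FALSE)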
2.   Summary of Results
Table 8 is a summary of the results of the tests. The rows correspond to the Likert scale questions or the aggregated results of the Likert scale questions. The columns provide insight into the results of the Signed-Rank test. The second column, "n," is the number of non-zero values after the difference between the paired data was calculated. The third column is the summed signed ranks for the difference of the paired data when the participant preferred the tablet system. The fourth column is the summed signed ranks for the difference of the paired data when the participant preferred the laptop system. The two-tailed p-value is the probability of the values in the third and fourth columns appearing if the means of the answers were equal.
                                      Summed signed ranks
Question                        n     Tablet system   Laptop system   2-tailed p-value
Q1                              13    91              0               0.0013
Q2                              22    180.5           72.5            0.0794
Q3                              21    215.5           15.5            0.0005
Q4                              18    103.5           67.5            0.4362
Q5                              22    164.5           88.5            0.2168
Q6                              11    55              11              0.0511
Q7                              24    191             109             0.2388
Q8                              21    173.5           57.5            0.0392
Q9                              23    241             35              0.0015
Q10                             23    223.5           52.5            0.0083
All                             30    383.5           81.5            0.0019
Eliminate redundant questions   29    371             64              0.0009

Table 8.   Wilcoxon signed-rank test results for Likert scale questions asked post experiment

3.   Analysis Tools
The data was analyzed in R using the wilcox.test function. Histograms were generated in JMP to determine symmetry around the median. Ideally, when conducting a Wilcoxon Signed-Rank test, the data will have no zeros, there will be no ties, and the data will be symmetric around the median. In the case of this data there were ties and zeros, and in some cases the data was not perfectly symmetric. However, when the test indicated that the results were significant, the p-values were all less than 0.01, except for Q8. Q8 was symmetric around the median, and the authors feel comfortable stating that there is a significant difference between the medians of the participants' answers as it pertains to this question.
C.   DIRECT QUESTIONS

After completing the second protocol and answering the associated 10 Likert scale questions, the participants were asked four direct questions, numbered 11 through 14. The direct questions had the participant specifically state a preference between the laptop and the tablet systems. The questions allowed the authors to directly ask for a preference and, in the case of questions 12 and 13, to control to a limited degree for differences between the two systems.
The answers were analyzed using a sign test. Table 9 is a summary of the
results and analysis.
1.   Analysis of Direct Questions

a.   Question 11: Which device was more intuitive to use?
With a p-value of less than 0.004 the participants thought the tablet
system was more intuitive to use than the laptop system.
b.   Question 12: If the software on both devices were about equivalent I would prefer to use?
With a p-value of less than 0.0006 the participants would prefer to
use the tablet system instead of the laptop system if the software on the devices
were about equivalent.
c.   Question 13: If each device had the same feature set I would prefer to use?
With a p-value of less than 0.0002 the participants would prefer to
use the tablet system instead of the laptop system if the devices had about the
same features.
d.   Question 14: This device is more convenient to train with?
With a p-value of less than 0.021 the participants thought the tablet
system was more convenient to train with than the laptop system.
2.   Summary of Results
Table 9 is a summary of the results of the tests. The rows correspond to the questions. The second and third columns are the number of participants who answered tablet or laptop to the question. The rightmost column, p-value, is the probability of the values in the second and third columns if the chance of either being chosen is 50 percent. In Question 11, one of the participants had no preference; hence the sum of the laptop system and tablet system columns is 31 instead of 32.
Question    Tablet system    Laptop system    p-value
11          24               7                0.003327
12          26               6                0.000535
13          27               5                0.000113
14          23               9                0.020062

Table 9.   Direct Question Sign Test Results

3.   Analysis Tools
The data was analyzed in Excel using the cumulative binomial distribution with a probability of 0.50. The number of trials was the number of participants, 32, except for Question 11, where one of the participants wrote in the response "same"; there it was 31. The resulting probability was doubled to account for a two-tailed p-value.
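The same two-tailed sign test can be expressed with R's base binom.test function; the sketch below reproduces the Question 11 computation (24 of 31 expressed preferences for the tablet) under a 50/50 null, which is equivalent to the doubled cumulative binomial used in Excel.

    # Sign test for a direct preference question, e.g., Question 11:
    # 24 of the 31 participants who expressed a preference chose the tablet.
    # The two-sided exact binomial test under p = 0.5 returns ~0.0033,
    # matching the doubled cumulative binomial probability from Excel.
    binom.test(x = 24, n = 31, p = 0.5, alternative = "two.sided")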
D.   TRAINING AND ORDER

An evaluation was conducted to investigate whether system use order or training had an effect on the data collected. Two-sample t-tests were run on the difference values between the Likert scale questions in the two protocols. The tests were run to determine whether the mean values for trained and untrained participants, and the mean values for laptop-first and tablet-first participants, were different.
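A minimal sketch of such a two-sample t-test in R, using a hypothetical data frame of per-participant difference scores; the values and column names are illustrative assumptions, not the study data.

    # Illustrative only: hypothetical laptop-minus-tablet difference scores
    # for one Likert question, grouped by training status.
    scores <- data.frame(
      diff    = c(-1, 0, -2, -1, -2, -1, 0, -3, -1, -2, -2, -1),
      trained = rep(c("trained", "untrained"), each = 6)
    )

    # Two-sample t-test: do trained and untrained participants differ in
    # their mean difference score? The same form applies to device order.
    t.test(diff ~ trained, data = scores)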
1.   Summary of Results
A summary of the results of the tests is found in Table 10. In nearly all cases the training level of the participants and the order in which the devices were used demonstrated no statistically significant difference. However, a difference was found with regard to Q3, "The real world physical actions and conducting a task in the virtual environment are the same." Both training and order have a statistically significant effect on the participants' answers to Q3. If the participant was untrained or used the laptop first, they gave the tablet system a higher score than the laptop system.
                        Two-tailed p-values
                        Training    Order
Q1                      0.5320      1.0000
Q2                      0.6675      0.6674
Q3                      0.0491      0.0469
Q4                      0.1763      0.8246
Q5                      0.6111      0.1797
Q6                      0.0694      0.4800
Q7                      0.1559      0.6399
Q8                      0.6004      0.2913
Q9                      0.2458      0.6457
Q10                     0.1560      0.727
Summed                  0.2909      0.8768
Redundancy removed      0.1291      0.8387

Table 10.   Two-sample t-test results for training and device order

2.   Further Analysis
A one-way ANOVA, using JMP, was run on the four-group data subset pertaining to Q3. The test returned a probability of 0.0377, showing significance at α = 0.05. Table 11 has the means and the lower and upper 95% confidence intervals.
                              n    Mean of difference between     Lower 95%    Upper 95%
                                   laptop and tablet answers
Trained and laptop first      8    -1.375                         -2.387       -0.363
Trained and tablet first      8     0.000                         -1.012        1.012
Untrained and laptop first    8    -2.125                         -3.137       -1.113
Untrained and tablet first    8    -1.375                         -2.387       -0.363

Table 11.   Results of one-way ANOVA on Q3
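The ANOVA itself was run in JMP; an equivalent computation can be sketched in R as below, with a placeholder data frame standing in for the Q3 difference scores of the four training-by-order groups.

    # Illustrative only: placeholder Q3 difference scores for the four
    # training-by-order groups (8 participants each); not the study data.
    set.seed(2)
    q3 <- data.frame(
      diff  = rnorm(32, mean = -1),
      group = rep(c("trained_laptop_first", "trained_tablet_first",
                    "untrained_laptop_first", "untrained_tablet_first"), each = 8)
    )

    # One-way ANOVA across the four groups, then per-group means with
    # 95% confidence intervals (the quantities reported in Table 11).
    fit <- aov(diff ~ group, data = q3)
    summary(fit)
    confint(lm(diff ~ group - 1, data = q3))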
E.   OPEN ENDED QUESTIONS
The open-ended questions were phrased to allow participants the
opportunity to express what they felt was most pertinent from their experience
with the systems. As expected there were a variety of answers. Some related
directly to certain features of the software, for example “Compass should have
metal filament that lined over radial direction to aid in giving accurate report”.
Such statements are interesting in terms of how accurately digitized equipment represents real world equipment, but they do not bear directly on the authors' research questions.
Fortunately, many of the remarks not only confirmed the results of the Likert
scale and direct questions but also provided some surprising insights.
The most popular subject of comment involved the physical motion
required by the tablet system. These ranged from simple statements, such as
“Tablet has more realistic feel due to physical activity required as in the real
world,” to more thoughtful ones, such as noticing increased opportunities for
training. Some of the more nuanced comments related to the differences
between using the Vector 21b and compass on the tablet system and using the
two virtual devices on the laptop system. It is time consuming to ensure that the
“pointing circle,” the laser reticle, is over the target when using a physical Vector
21b. This task frequently requires more than one “squirt”, a colloquialism for
ranging the target, and multiple confirmation “squirts” to ensure that one has the
right distance and heading. In the laptop system the 3D view is controlled with a
mouse. When using this input modality the “pointing circle” of the Vector 21b
stays exactly where it is placed and perfectly still. This creates a condition where
determining the heading and distance becomes unrealistically easy, and there is
no need to confirm with a second or third “squirt”.
To a limited degree the aforementioned condition occurs when using a
floating dial compass virtual device on the laptop system. When using a physical
compass it takes time for the floating dial to come to a rest and requires a steady
hand to ensure the reading is accurate. When using the laptop system, the compass always gives a perfect bearing to whatever is lined up with the sighting wire. To get a good bearing with the tablet system the user needs to steady the system's gyroscopes and accelerometers. Although the Vector 21b and compass are not particularly challenging to use, using them is harder than the laptop system makes it appear, whereas the tablet system replicates some of the real world motor skills required to conduct the task. A comment from one of the participants summed this notion up nicely: “The laptop was easier to manage in terms of pointing and clicking, but the tablet better approximates holding up the vector”.
Analysis of Likert scale questions three and nine show that participants
believed the tablet system was more representative of real world physical action
and motions required for task execution. A number of statements supported this
finding. “I liked the tablet a little more b/c it did a little better mimicking actual use
of hands and some of the physical motion of looking around & up/down”.
Participants also liked the physical motion that the tablet system requires because it helped them maintain their orientation within the 3D world: “It was much easier to locate TGTs and not get disoriented when using the tablet”. Other comments
related to the way the physical motion helped maintain participant attention,
“Tablet was generally better in that it kept my attention through the requirement
of movement”.
The authors expected the participants to make comments similar to the first, but were surprised by how W2W helped participants maintain their orientation in the VE and increased the participants' attention. This is especially
interesting when considering how the tablet system was significantly less refined
than the laptop system, with a crude interface and simple and repetitive 3D
terrain.
A number of remarks related to the advantages of training with a tablet system over a laptop system. Others commented on the mobility and ease of access of a tablet system: “Very easy to use. Small & portable—convenience factor is huge”. One participant remarked on how the mobility of the tablet allows the trainee to get out of the classroom and practice in more realistic conditions: “You could take it outside put soldiers in full body armor & simulate a CFF w/out the range”.
A significant number of comments discussed how to make both the tablet
system and laptop system better. Voice recognition was the number one desired
feature for both devices, allowing the VE user to speak the CFF, as they would in
the real world, instead of filling out forms. The second most desired feature
pertains to a limitation of the SAT-M development effort. Respondents wished the user could manually enter data into the CFF instead of having that data auto-populate. Numerous participants were concerned with the possible negative training effects of auto-population. One commented, “The laptop was better only
because it had less preformatted response information, which forced me to do
like I would in real life and remember, write down, or reference tools to complete
CFF”.
F.   DISCUSSION

Information collected during the experiment can be generally characterized as either a direct or evaluated comparison of the desktop / laptop and tablet systems. The former was collated from the direct questions and the latter from the Likert scale results and answers to the open ended questions. Participants' opinions about the software or their opinions about the specific hardware were not the focus of the authors' investigation. We concentrated instead on what participants thought about the more generic concept of VE training simulation as designed for tablet devices. That is, the focus was not on SAT-M running on an ASUS Transformer Pad Infinity, or ObserverSim running on a DELL Precision M6300 laptop, but rather on the holistic concept of VE training software running on different devices.
The results of the experiment were used to answer research questions one through four:
1. Is a VE trainer on a tablet possible?
2. Is the “Window to the World” paradigm seen as a valuable addition to VE training?
3. Would military officers trained in CFF see a value in VE tablet CFF training?
4. Would military officers untrained in CFF see a value in VE tablet CFF training?
1.   Is a VE trainer on a tablet possible?
The development effort shows that it is possible to create a tablet VE training simulation. Five of the ten Likert scale questions (Table 8) and all four of the direct questions (Table 9) demonstrate participant preference for the tablet system over the laptop system. This indicates not only that a tablet VE training simulation can be created, but that the participants feel it is a superior platform for CFF VE training when compared to desktop / laptop systems.
Most surprising is that this was in no way a fair comparison. The SAT-M software was in an immature stage of development, with a far rougher VE when compared to ObserverSim, the result of a multimillion-dollar procurement.
2.   Is the “window to the world” paradigm seen as a valuable addition to VE training?
Almost all comments to the open ended questions indicated a positive
response to the W2W paradigm. W2W on the tablet system makes the simulator
more than just a cognitive skill and specific knowledge trainer. As discussed in
Chapter III, the system has the potential to train those physical activities
necessary to execute a CFF mission.
The developers of the DVTE suite recognized that negative training could occur when the user stares at a stationary monitor. Each DVTE suite includes a head mounted display (HMD). With the HMD the user moves their head to look around in the VE as they would in the real world. This places the user directly into the VE. Unfortunately, this makes it hard to see anything in the real world, including the CFF they carefully wrote down and the keyboard for typing instructions. Due to these limitations, the HMD is only worn in the final portion of the DVTE mission, when there is little need to double check notes and finger placement. W2W does not have any of these issues; the multi-touch screen is both the user's view of the world and the interface.
3.   Would military officers both trained and untrained in CFF see a value in VE tablet CFF training?
As there is little response difference between the trained and untrained
participants, it appears that they both see potential in tablet VE CFF training. The
simplicity of the interface allows for the untrained to quickly grasp how the
devices are used, whereas W2W lets the trained work on both cognitive and
psychomotor skills.
4.   Further Discussion
Cognitive load theory describes three categories: intrinsic load, germane load, and extraneous load. Intrinsic load is “the mental work imposed by the complexity of the content in your lessons and is primarily determined by your instructional goals” (Clark, Nguyen, & Sweller, 2006). In CFF, intrinsic load is the baseline mental tasking that results from using the equipment and planning and executing the mission. Germane load is “mental work imposed by instructional activities that benefit the instructional goal” (Clark et al.). In CFF training this is represented by the specifics of the scenario, designed to make the CFF either simpler or more complex depending on the learning objective. Extraneous load “imposes mental work that is irrelevant to the learning goal and consequently wastes limited mental resources” (Clark et al.). In simulated CFF training, extraneous load is any effort the user spends figuring out how to use the interface, navigate the system's screens, and orient themselves in the VE. Observations of the 32 participants executing the same mission on both the tablet and laptop systems lead the authors to conclude two important points:
•  System preference had nothing to do with software or system fidelity
•  System preference was influenced by how the tablet system reduces extraneous cognitive load, allowing the participants to focus their mental efforts on executing the mission and not fighting the interface
VIII. CONCLUSION
A.   GENERAL OBSERVATIONS
There were three areas that the authors explored in this research. We looked at the software differences between SAT-M and ObserverSim, the use of multi-touch touchscreens as an input device versus a mouse and keyboard, and the use of W2W as a way to train psychomotor skills. In this effort we reused and validated a previously developed CTA, applied a HARs assessment to tablet systems to validate real world to VE action mapping, assessed how the tablet system would be used, developed an experiment, and analyzed the results. By incorporating new technology into the process and leveraging existing work, our feeling is that “the whole is more than the sum of its parts”. From the outset of this process we expected that the participants would find using the multi-touch enabled touchscreen to be more intuitive, and that they would also find that W2W allowed them to train psychomotor skills.
B.   SUCCESS
The work in this thesis establishes a precedent for early adoption of new technologies and a design process that leverages preexisting methodologies with an emphasis on reuse of prior work. The inherent potential for quality VE training on tablet systems is exhibited throughout the process the authors followed. We successfully created a VE trainer on a tablet, and it was considered a viable way to train CFF by both experienced and inexperienced research participants. Further, the participants reported that the W2W paradigm was a better way to train than using a traditional mouse and keyboard. Perhaps the most unexpected finding, as well as the most rewarding, was the potential for W2W to increase a participant's attention and interest in the training and reduce the cognitive “overhead” that results from training in a VE. Key takeaways of this research from the authors' perspective included:
•  The W2W paradigm creates a new area for improving training in VEs (Figure 9)
•  Reuse of previous design process reaps positive results when incorporating new technology
•  Tablet training creates new opportunities for end-to-end software delivery and updates
•  Additional work in the area of training and simulation design for emerging technologies can produce unexpected advantages, as compared to maintaining the status quo
•  Potential of tablet-laptop hybrids to reuse existing software, with minimal refactoring, to provide rapid deployment of tablet system CFF trainers, see the end of Chapter IX
•  VEs need to be appropriately aligned with devices to produce desired outcomes, and the use of HARs assessments can aid in the design process

Figure 9.   Improvement builds as new technology is adopted into the training system design process
C.   LIMITATIONS
SAT-M as built is not production ready, and it will take significant effort to make it so. As reported in Chapter VII, there is the possibility of confounding in the experiment. As with any military training, improper instruction leads to negative training; SAT-M needs to be able to provide proper instruction for those times it is used by an untrained user away from an instructor. Tablet systems are not an appropriate solution for every training situation.
IX.  FUTURE WORK

A.   IMPROVING SAT-M TRAINING SOFTWARE
1.   CFF
SAT-M requires further development before it can be introduced for
training. Currently it has enough functionality to execute the experiment
described in this thesis and some of this apparent functionality is just a façade.
For example, when the user brings up the DAGR, a still image appears with a
hard coded current location. If the user could move around in the virtual world the
DAGR would soon give an invalid location.
The following is a breakdown of what the authors deem necessary for SAT-M to become a functional trainer, as described in the requirements documents and use case scenarios in Chapter IV. There are three tiers. The first tier is the 'need to haves,' that which is necessary in order for the software to provide training without the user being at risk of developing incorrect live CFF habit patterns. The second tier encompasses the functionality needed for SAT-M to become a viable CFF trainer when used in the presence of a trained instructor. The software would not fulfill all the requirements from Chapter IV, but it would start to realize the potential of tablet VE training. The third tier comprises the 'nice to haves,' those features that would make SAT-M a fully functional VE trainer for both the expert and novice user.
a.   Tier One, The Need to Haves
•  SAT-M's virtualized equipment should have a greater level of functionality. This does not have to encompass everything that real equipment does, but should include the functions expected during the course of a fire mission. For example, the DAGR does not need to have all of its troubleshooting screens, but it does need to provide present location and allow the user to see how many satellites are being tracked.
•  The mission planning should forgo the drop down menus and auto completion, requiring the user to remember or record the pertinent information required to create a six part CFF transmission. The mission planning should also allow the user to execute both grid and polar missions, and make adjustments.
•  At this level of development SAT-M does not require more than one scenario, as long as that scenario provides enough diversity to allow for multiple training missions. The scenario needs a range of targets in diverse terrain and at varying distances. SAT-M's current scenario is so trivial that the target set only requires rudimentary skills.
b.   Tier Two, Viable Trainer
•  The virtualized equipment needs to have all the functionality of the real equipment, including idiosyncrasies. For example, the user should be able to set the magnetic variance into the Vector 21b and enter waypoints into the DAGR.
•  To assist in mission planning and overall situation awareness, SAT-M needs digitized 1:50,000 and 1:100,000 maps of the mission area. Along with this map, SAT-M should include a virtual protractor and a palette of appropriate operational graphics, as well as the ability to easily plot the user's present position, target locations, fire control measures, and friendly and enemy forces.
•  A single high quality scenario can be the backdrop to a diverse set of missions, but eventually the user will become too familiar with it and the training will not be as effective. At this tier SAT-M requires a range of training missions along with the ability to modify them. A scenario run at dusk or night from a different location provides new challenges and training opportunities.
•  The mission planning capability should encompass the complete set of CFF missions, to include continuous illumination, immediate suppression, and suppression of enemy air defenses.
c.   Tier Three, Individual Training
•  The intelligent tutoring system is critical for allowing SAT-M to operate as a standalone VE trainer. A great deal of work needs to be done in this area to ensure that the correct information is being collected and relayed back to the user in a useful format. Extraneous information is almost as bad as withholding useful information, as a novice user will have difficulty determining what is important.
•  A fully functional scenario creator will allow the user to train in a wide range of missions in diverse environments. It will also enable the instructor to create specific scenarios, allowing the stage to be set for optimal training and the improvement of user weak points.
•  Networking is the final component of tier three. This permits users to share scenarios and allows multiple users to execute missions in the same VE. It is a prerequisite for the instructor mode, where the instructor can get a feed from the student's tablet, observing them in real time.
2.   New Features

The following two features would greatly increase SAT-M's ability to provide high quality training.
a.   Voice Recognition
As the number one improvement asked for by the experiment participants, voice recognition would improve the quality of training provided by SAT-M. Speaking the mission, as one does when talking to the fire direction center (FDC), greatly increases the immersion and transfer of training. Learning to think before speaking and proper communications cadence are skills all Marines must master.
b.   Map Data Downloaded off the Internet
Multiple technology companies provide high quality satellite
imagery and elevation data over the Internet. Google and Apple are examples of
two such companies. If SAT-M were able to obtain the licensing required, it could
hook into this data and users could create custom scenarios set in almost any
location. This has the potential to change SAT-M from a training tool to a mission
rehearsal tool.
3.   Other Applications
Just as the DVTE provides training in both CFF and close air support (CAS), so should SAT-M. The addition of CAS will require including aircraft and airborne ordnance along with the appropriate mission planning tools.
There are multiple websites offering land navigation courses. They require that the user sit at a computer. There are currently no land navigation courses available for mobile devices. By taking advantage of the GPS in tablets, SAT-M could change this. Instead of executing land navigation training in a classroom, SAT-M could provide excellent training in the field, giving real time feedback and advice.
B.   ADDITIONAL EXPERIMENTS
It is the authors’ perception that W2W changes the way the user interacts
with the training system. Additional research can be done to determine if that is
true and to exactly what degree.
•  Does W2W reduce extraneous cognitive load by a measurable amount, and if so by how much?
•  Does standing and using W2W keep the user's attention for longer than sitting and doing the same tasks with a mouse and keyboard?
•  Does W2W appreciably improve a user's psychomotor skills? Is there a measurable difference between using W2W as an input and using a mouse and a keyboard?
C.   NEW PLATFORM
The idea behind developing VE training applications for tablets was conceived two years prior to the completion of this thesis, when the authors first arrived at NPS in the summer of 2011. Since that time laptop / tablet hybrid computers have become available. One example is the Intel Ultrabook. The Ultrabook is not technically a product; it is a standard that vendors can follow, allowing them to market under the Ultrabook name. Accelerometers and gyroscopes are not currently required by the Ultrabook standard; they are, however, recommended (Pinola, 2012).
Ultrabooks run Windows 8, so they should be able to run ObserverSim
and FOPCSim. It is possible to add W2W to both programs.
By placing W2W in ObserverSim or FOPCSim, the experiment presented
in this thesis could be redone, controlling for both software and device. The
participants would perform the exact same scenario using the hybrid computer in
laptop mode, and then in tablet mode, or in the opposite order. Any difference in
preference would be solely due to input modalities. It should also be noted that a
W2W enabled ObserverSim or FOPCSim would get the technology to the fleet
faster than building SAT-M from the ground up.
APPENDIX A. INTERFACE DESIGN TESTING

A.   BACKGROUND
SAT-M was heavily influenced by the validated processes used to produce earlier CFF VEs. The unique input modalities and user expectations of tablet systems were taken into account during the design process. We started with the interface design due to how multi-touch enabled touchscreens change the way users interact with the device. The interface was initially designed using an HTML mock up and was evaluated in fulfillment of course requirements for CS3004 Human Computer Interfaces. Upon completion of the basic interface design, the authors worked with the MOVES Institute engineering support team responsible for the development of the Delta3D open source software.
B.   INTERFACE DESIGN STUDY
The intent was to create an effective user interface for SAT-M. SAT-M is
envisioned as a suite of software that brings the simulation-training center to the
Marine. It is an integrated and portable virtual training environment for JFOs and
JTACs. Creating a set of fires software that will run on a portable device will allow
small unit leaders to greatly increase the quality of the training that occurs in the
moments of daily down time. Two sets of instructions were created to
standardize the collection of data. The first set, packet A, is the handout provided
to the individual administering the usability test. The second set, packet B, is
given to the participant. The two sets of instructions work in concert, providing
specific instructions to each of the individuals. The objective of our usability
test was to capture qualitative information that provides indication of user
satisfaction, interface effectiveness, and interface suitability.
1.   Success Criteria
From our design project we established the information in Table A1 to be our criteria. However, three (1) of the criteria can only be evaluated with a fully functional system, two (2) through opinion without a fully functional system, and one (3) with our prototype. For the purposes of this project we have evaluated three of the six original success criteria.
The SAT-M interface will be successful if it achieves any two of the threshold criteria outlined in Table A1. The interface will be highly successful if it meets any two of the objective criteria included in Table A1.
Training Transfer
  Threshold:  Positive partial task training for JTAC or JFO mission sets as individuals. (2)
  Objective:  Full task training for JTAC or JFO mission sets as integrated team. (2)
Ease Of Use
  Threshold:  A qualified JFO or JTAC is able to utilize the software without requiring any assistance. (3)
  Objective:  A qualified JFO or JTAC is able to network multiple devices together and run a multi-person scenario without any assistance. (1)
Feedback
  Threshold:  Based on system feedback only, an untrained user is able to make correct adjustments to a CFF. (1)
  Objective:  Based on system feedback only, an untrained user is able to conduct a complete CFF. (1)

Table A1.   Interface design success criteria

2.   Method

a.   Target participant population
The intended users are military personnel and can be broken down into two broad categories: those who have been qualified for controlling Joint Fires, and everyone else. JTAC and JFO characteristics vary from service to service, so for the purposes of the study conducted we focus on United States Marine Corps (USMC) eligibility requirements.
•  JTAC. May be a winged aviator, ground combat arms officer, or combat arms staff non-commissioned officer (E-6 and above).
•  JFO. May be an officer or enlisted noncommissioned officer (E-3 and above), but must come from the Military Occupational Specialty (MOS) of the indirect fire support agency they will be observing. This means, if the individual is an Artillery JFO, they shall be an artillery officer or enlisted Marine who is in the artillery field.
b.   Proposed demographics
•  Age: 20–40.
•  Education: High School diploma—doctorate.
•  Gender: Male.
•  Cultural: U.S. citizen, though not necessarily naturally.
•  Winged naval aviator (no restriction on airframe).
•  Combat arms MOS designation (infantry, artillery, tanks).
c.   Actual demographics

As the system is meant for a trained user, the testing participants were asked three questions, which a JFO should be able to answer. All five of the participants were able to answer the three questions correctly. We believe that the test participants represent the target population well.
•  Age: 32–38.
•  Education: bachelors—masters.
•  Gender: male.
•  Cultural: U.S. citizen.
•  Winged naval aviators.
•  Combat arms MOS designations—artillery.

3.   Procedures

a.   Tasks
The testing participants were given five minutes to explore the
prototype. If they felt that five minutes was not enough time to get comfortable
with the system they were given five more minutes. None of the participants
desired the extra five minutes. After getting familiar with the system they were
instructed to complete four tasks. The tasks were chosen based on a task
analysis conducted during an earlier project for CS3004 coursework. They are
typical JFO/JTAC tasks performed during the execution of a mission. The tasks
are as follows:
1. Determine the bearing and distance to target.
2. Determine current radio frequency.
3. Determine present position.
4. Determine a 6 digit grid of a point plotted on the map.
Though the prototype is a static set of screens, the required information to accomplish each of the above tasks is embedded in the screens.
4.   Likert Survey Results

After completing the tasks, the participants filled out a survey consisting of 14 questions; 11 were Likert scale questions. A summary of the results is in Table A2. The specific survey questions can be found in packet B.
Responses are on a 1–5 scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree).

             Participant
Question     1      2      3      4      5          avg
Q1           5      4      4      5      5          4.6
Q2           5      5      5      5      5          5
Q3*          5      5      4      5      5          4.8
Q4           4      5      3      4      5          4.2
Q5           5      4      4      4      5          4.4
Q6           5      5      5      5      5          5
Q7           5      5      4      4      5          4.6
Q8           5      5      3      4      4          4.2
Q9*          5      4      4      5      5          4.6
Q10          5      5      4      5      5          4.8
Q11          4      5      2      4      4          3.8
avg          4.5    4.5    3.75   4.5    4.833333

* values have been converted to the other end of the Likert scale due to negative phrasing of the survey question.

Table A2.   Likert survey results of interface testing
After completing the Likert portion of the survey the participants answered
three open ended questions followed by a structured interview.
a.   Open Ended Questions
The questions asked what the participants liked the most
about the interface, what they liked the least about the interface and if they had
any ideas for improvements.
•  What did you like the most about the interface?
The participants liked that there were not too many buttons and that
it was easy to understand what the buttons do.
•  What did you like the least about the interface?
What the participants disliked had more to do with the actual content than the interface. One participant disliked the blurry map; others wanted the devices to be fully functional. Unfortunately, due to the nature of the prototype, device functionality could not be implemented.
•  Do you have any ideas for improvements?
The best suggestion was to include a notepad so one could manually write calculations and take notes. It was also recommended to include a calculator tool. In addition, the protractor was not the easiest to read.
b.   Structured Interview

The objective of the structured interview was to get the participants' creative input. By asking these questions in an interview process, it was the hope of the authors that they would get more imaginative results than if the participants just wrote out their ideas.
•  In the simulation, what features are missing that you think would improve the quality of training it can deliver?
Ideas to improve the quality of the training varied from using the
camera for augmented reality to adding more options for performing fire support.
•  Do you see any other potential uses for this sort of simulator beyond CFF and CAS training?
The participants came up with novel ways to use the system, from providing it to non-JFO/JTACs for device and map training to making a game out of it.
5.   Discussion
In Table A2, seven Likert scale questions (Q1, Q3, Q5, Q6, Q7, Q9, and Q10) focused on ease of use and the interface. These seven questions' average score of 4.6 indicates that in general the testing participants found the interface to be easy to use. This indicates success in the ease of use criteria category. The lowest scoring of these questions was Q5: The Map view interface was intuitive. In this portion of the study many participants had some navigational errors, which could explain the low rating for Q5.
Two questions, Q4 and Q11, were related to the training transfer success criteria. These two had the lowest scores, an average of 4.0. It is the opinion of the authors that this is due to the system being only a prototype. With a fully functional system we expect to have improved results. However, even an average of “agree” means the interface is heading in the right direction.
PACKET A
For the experimenter, ensure that the SAT-M prototype is running on the
computer at the top level screen. Then ask the participant the following
demographic questions.
Service:_______________________
Age:_______
Are/were you a qualified JTAC? _______ JFO? _______
FAC(A)?______
Are/were you an artillery officer, artillery man? ________
How long, in years and months, has it been since you last conducted Call For
Fire or Close Air Support?
________________
Hand the participant the training packet, which consists of 5 pages and instruct
them to read and complete the questions on the first page.
The participant will inform you when they have completed reading the pages.
Show them the prototype and inform them what the “Map View” button does and
that the “home” functionality has been enabled. Then give them 5 minutes to
explore the system.
After 5 minutes, ask them:
“Do you feel comfortable enough to take part in the rest of the study?”
If they answer yes, inform them to go to the next page of their packet and
complete each task in order.
If they answer no, give them an additional 5 minutes to explore the system and
note how much time they take. Extra time taken: __________
When they are executing each task you are to time how long it takes them to find the appropriate page, collect the appropriate information, note the number of navigational errors, and determine how accurate they were in collecting the
information.
After they have completed each task, or five minutes have elapsed, complete the
appropriate section on the next pages and have them move onto the next task.
Please do not let the participant see these sheets as the answer to the tasks can
be found here.
Task 1
When instructed to do so, please press the “home button” and then determine
the bearing and distance to target #12.
Screen: Vector 21b
Answer: Bearing 060, Distance 6000
Time to Vector 21b screen:__________
Time to determine Bearing and Distance: __________
Number of navigational errors:__________
Was bearing correct?_____ Distance correct?______
Task 2
When instructed to do so, please press the “home button” and then determine
what frequency the radio is currently set to.
Screen: Radio Handset
Answer: 036.625
Time to Radio Handset screen:__________
Time to determine Frequency: __________
Number of navigational errors:__________
Was the frequency correct?_____
Task 3
When instructed to do so, please press the “home button” and then determine
what your present position is in grid.
Screen: DAGR
Answer: 15T XG 11897E 53935N
Time to DAGR screen:__________
Time to determine location: __________
Number of navigational errors:__________
Was the location correct?_____
NOTE: Does not have to be in exact form; they can give just an 8 digit grid or something similar.
Task 4
When instructed to do so, please press the “home button” and then determine
the 6 digit grid of the point plotted on the map.
Screen: Active Pen
Answer: 845931
Time to Active Pen screen:__________
Time to determine grid: __________
Number of navigational errors:__________
Was the grid correct?_____
Once the participant has completed the tasks, inform them to complete the survey found on pages 3 and 4 of the training packet. Once they have completed the survey, if they got any of the task questions wrong, show them where and how to find the correct information. Then ask them the following questions:
1. In the simulation, what features are missing that you think would improve the quality of training it can deliver?
2. Do you see any other potential uses for this sort of simulator beyond CFF and CAS training?
After they have answered the questions, have them read the final paragraph in their training packet. Once they have read it, ask if they have any final questions and thank them for their participation.
PACKET B
Welcome to the Supporting Arms Trainer - Mobile (SAT-M) usability analysis.
During the next 15–30 minutes you will be asked to work with a prototype of the
training simulator. The purpose of the SAT-M is to bring the simulation center to
the Marine. We are looking to develop training software that will allow Marines to
conduct immersive Call For Fire (CFF) training on a mobile device. You will work
with a prototype of the interface. None of the major functionality has been
implemented yet. The prototype is a series of linked web pages designed to
reflect the program in various states. The data that appears in the various
screens and devices will give the appropriate current values.
The information collected during this evaluation is confidential. We are not testing you; we are testing the system. Any difficulties encountered are the system's fault; we need your help to find these problems. Finally, you can stop at any time.
Please answer the following questions which are typically known by a joint
forward observer.
(1)
How many mils are in a circle?
______________
(2)
Name two Methods of target location.
__________________________________
__________________________________
(3)
A 6 digit grid is accurate to how many meters?
____________________
Once you have answered the questions please notify the experimenter. You will
be instructed to spend five minutes getting familiar with the system. Once the five
minutes has passed the experimenter will ask you to conduct a series of short
tasks.
Task 1
When instructed to do so, please press the “home button” and then determine
the bearing and distance to target #12.
Bearing ________
Distance ________
Task 2
When instructed to do so, please press the “home button” and then determine
what frequency the radio is currently set to.
Frequency ____________
Task 3
When instructed to do so, please press the “home button” and then determine
what your present position is in grid.
Location ___________________
Task 4
When instructed to do so, please press the “home button” and then determine
the 6 digit grid of the point plotted on the map.
Grid ___________________
You have completed the last task. Thank you. On the following pages you will
find 14 survey questions; please take the time to answer them. If you would like,
you can refer to the prototype while answering the questions. Once you have
answered them please inform the experimenter.
1. The overall interface is intuitive.
strongly disagree     disagree     neutral     agree     strongly agree

3. It was difficult navigating through the device to find the appropriate information while completing the tasks.
strongly disagree     disagree     neutral     agree     strongly agree

4. A fully implemented system would provide high quality partial task training for a JFO.
strongly disagree     disagree     neutral     agree     strongly agree

6. The button icons provide intuitive inference of what would happen when they are pressed.
strongly disagree     disagree     neutral     agree     strongly agree

7. It is easy to move through the screens without losing one's place.
strongly disagree     disagree     neutral     agree     strongly agree
8. Having this software available at my unit would improve my unit's ability to perform their mission.
strongly disagree     disagree     neutral     agree     strongly agree

9. It was hard to understand what the buttons did.
strongly disagree     disagree     neutral     agree     strongly agree

10. The 3D view interface was intuitive.
strongly disagree     disagree     neutral     agree     strongly agree

11. Training with this device on a regular basis will improve my ability to conduct CFF in the field.

Does the device accurately represent the real world physical motion required to conduct the task?
12. What did you like the most about the interface?
13. What did you like the least about the interface?
14. Do you have any ideas for improvements?
Thank you for participating in the usability evaluation of Joint Forward Observer
Training Suite—Mobile. The time you have taken today will help ensure the
lethality and survivability of Marines tomorrow. Based on the valuable input gathered during this usability evaluation, we will redesign the user interface and make recommended changes. As the usability evaluation is ongoing, please do
not discuss this study with anyone else until Saturday, 10 June, 2012. If you have
any questions please ask the experimenter. Again, thank you for your time.
APPENDIX B. EXPERIMENTAL DOCUMENTATION

C.   RESEARCHERS GUIDE

1.   Chronological Task Listing
Recruitment—(To be completed one week prior to execution of
experiment). The researchers will begin the recruitment and selection process. E-mails will be distributed soliciting participation. Flyers will be disseminated
throughout the NPS campus. When potential participants contact the
researchers, they will be informally pre-screened for experience in CFF training.
This will enable the researchers to determine initial groupings for IV #1
experience (trained or untrained). (Task duration: ~10 to 15 hours, location: NPS)
Equipment setup—(To be completed prior to scheduled arrival of
participant) The researcher will prepare the equipment. A tablet device with
sufficient battery power will be placed on the laboratory table. Researcher will
launch SAT-M software by tapping the appropriate icon. A standard U.S. Marine
Corps DVTE laptop will be placed on a desk, a chair will be set in front of it. The
researcher should ensure power is being supplied to the laptop, and that a
mouse is plugged into the laptop. The researcher will then log into the DVTE and
launch the Combined Arms Network software, select and launch Observer
Simulator. (Task duration: ~10 minutes, location: NPS, MOVES Lab)
Consent (page 8–9 below)–Researcher will provide a hard copy of the
NPS consent to research form, participant shall be allowed to read the form, and
choose whether to participate or not participate. The participant and researcher
obtaining consent will sign the form, which shall be collected by the researcher.
(Task duration: ~5 minutes, location: NPS, MOVES Lab)
Initial exposure period (page 10 below)–Prior to the conduct of the
initial exposure to the device interface the participant will receive a three question
survey assessing basic Forward Observer knowledge. The researcher will
instruct the participant they are allowed 3 minutes of “freeplay” in order to
familiarize themselves with the interface. All participants will be allowed this
opportunity regardless of experience level with software. (Task duration: 5
minutes, location: NPS, MOVES Lab)
Scenario reset—The researcher will reinitialize the scenario for the
participant. On the tablet running SAT-M the researcher will tap the reset button.
On the laptop running Observer Simulator the researcher will navigate to the file
menu and select reset scenario. (Task duration: ~30 seconds, location: NPS,
MOVES Lab)
Protocol “A” (between subjects experimental design)—In this protocol participants will be evenly and randomly divided by our two independent variables, training and device. There are two phases: basic CFF process tasks, and execute CFF. The researcher will begin timing the session when the worksheet is provided. (Task duration: ~20 minutes, location: NPS, MOVES Lab)
Basic CFF tasks (Tasks will be guided by worksheet):
Task #1–Participant is instructed via worksheet to determine their current
location through the use of GPS and record that location on the worksheet.
SAT-M (Tablet device)—Using a finger the participant will tap the DAGR
icon and record the location information from the device display on their
worksheet.
Observer Simulator (Laptop PC)— Using the mouse the participant will
navigate to the DAGR icon, click on it and record the location information from
the device display on their worksheet.
Researcher will note the elapsed time to complete task, and count any
navigational errors made by participant during task execution.
Task #2—Participant is instructed to determine the bearing to target for
the “technical vehicle” using the lensatic compass and record the information
displayed from the virtual compass on their worksheet.
SAT-M (Tablet device)—Using a finger the participant will tap the lensatic
compass icon, and then rotate the tablet device until the “technical vehicle” is
acquired.
Observer Simulator (Laptop PC)—Using the mouse the participant will
navigate to the lensatic compass icon, click the icon and then using the mouse to
rotate the view, locate the “technical vehicle”.
Researcher will note the elapsed time to complete task, and count any
navigational errors made by participant during task execution.
Task #3—Participant is instructed to determine the bearing and distance to a second target, the “tank vehicle”, using the Vector-21b, and record the information displayed from the virtual device on their worksheet.
SAT-M (Tablet device)—Using a finger the participant will tap on the
Vector-21b icon, and then rotate the tablet device until the “tank vehicle” is
acquired. The participant will then use a finger to tap on the bearing and distance
icons to generate the data in the display.
Observer Simulator (Laptop PC)—Using the mouse the participant will
navigate to the Vector-21b icon, double click the icon and then using the mouse
to rotate the view, locate the “tank vehicle”. The participant will then click on the
bearing and distance icons to generate the data in the display.
Researcher will note the elapsed time to complete task, and count any
navigational errors made by participant during task execution.
Task #4—Locate and identify the icon used for generating the CFF 6-line
brief.
SAT-M (Tablet device)—Using a finger the participant will visually locate
the icon used to generate and send the 6-line CFF, the participant will activate
the icon and the researcher will observe that they are complete.
Observer Simulator (Laptop PC)—Using the mouse the participant will
navigate to the icon used to generate and send the 6-line CFF, the participant will
click the icon and the researcher will observe that they are complete.
The researcher will note the elapsed time to complete the task and count any
navigational errors made by the participant during task execution. The researcher
will then reinitialize the scenario for the participant. On the tablet running SAT-M
the researcher will tap the reset button. On the laptop running Observer Simulator
the researcher will navigate to the file menu and select reset scenario.
Task #5—Execute CFF. Using tasks #1, #3, and #4 the participant will
generate all required information for a polar, fire-for-effect fire mission and enter
it into the CFF mission generation interface.
SAT-M (Tablet device)—Participants will repeat task #1, with the addition
of sending the POSREP from the GPS screen by tapping the send POSREP
icon. Participants will then repeat task #3. Upon completion of this task
participants will repeat task #4, with the addition of entering the 6-line brief. The
participant will fill and send line one of the CFF by selecting “Fire For Effect” in
the warning order drop-down dialog box, then “Polar” from the location method
drop-down box. This message will be sent when the participant taps the
“checkmark” icon. The participant will then send the direction and distance
acquired during task #3 by tapping the “checkmark”. Next, the participant will
enter the target description, method of engagement, and method of control
information. This is accomplished by selecting the quantity of targets, target
identification (tank, technical, etc.), level of protection (in open, dug in, etc.), fuse
type, and fire command. All tasks are completed by selecting from a drop-down
dialog under each of the informational areas. When all informational fields are
filled, the participant will tap the “checkmark” to send the information. After the
message is sent, the 6-line CFF is received by the firing agency, which will
respond with a “message to observer”. This message to observer is then “read
back” by the participant selecting the correct call sign, number of rounds, and
target identification number from drop-down dialog boxes. The participant then
sends this information back to the firing agency by tapping the “checkmark” box.
The firing agency responds when shots are fired, and the participant
acknowledges this by tapping the “shot out” icon. After rounds impact, the
participant ends the mission by tapping the “end of mission” icon. This concludes
the protocol.
Observer Simulator (Laptop PC)—Participants will repeat task #1, with the
addition of sending the POSREP from the radio screen by navigating with the
mouse to the POSREP entry box and typing the coordinates. Participants will
then repeat task #3. Upon completion of this task participants will repeat task #4,
with the addition of entering the 6-line brief. The participant will fill and send line
one of the CFF by selecting “Fire For Effect” in the warning order drop-down
dialog box, then “Polar” from the location method drop-down box. This message
will be sent when the participant clicks the “K” icon. The participant will then send
the direction and distance acquired during task #3 by clicking in the box for each,
filling in the information with the keyboard, and then clicking the “K” icon. Next,
the participant will enter the target description, method of engagement, and
method of control information. This is accomplished by selecting the quantity of
targets, target identification (tank, technical, etc.), level of protection (in open,
dug in, etc.), fuse type, and fire command. All tasks are completed by selecting
from a drop-down dialog under each of the informational areas. When all
informational fields are filled, the participant will click the “K” to send the
information. After the message is sent, the 6-line CFF is received by the firing
agency, which will respond with a “message to observer”. This message to
observer is then “read back” by the participant clicking the “K” icon. After rounds
impact, the participant ends the mission by clicking the “end of mission” icon.
This concludes the protocol.
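Both walkthroughs assemble the same 6-line brief from drop-down selections.
Purely as an illustration of the information involved (the class and field names
are assumptions, not the actual SAT-M or Observer Simulator data model):

    from dataclasses import dataclass

    @dataclass
    class CallForFireBrief:
        # Transmission 1: warning order and location method
        warning_order: str = "Fire For Effect"
        location_method: str = "Polar"
        # Transmission 2: polar direction and distance measured during task #3
        direction_mils: int = 0
        distance_meters: int = 0
        # Transmission 3: target description, method of engagement, method of control
        target_quantity: int = 1
        target_identification: str = "tank"       # tank, technical, etc.
        level_of_protection: str = "in the open"  # in open, dug in, etc.
        fuse_type: str = "HE/Quick"
        fire_command: str = "when ready"

    # The mission generated in task #5, before it is sent field by field:
    brief = CallForFireBrief(direction_mils=1600, distance_meters=2000)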
Between-subjects survey—Participants will be provided a short
questionnaire that will survey their subjective opinions about the software and
device that they have just used to complete the requested tasks. (Task duration:
~5 minutes)
Protocol “B”—In this protocol participants will repeat the list of tasks from
protocol “A”, but the device will be swapped for the one not previously used
(i.e., if a tablet was used in protocol “A”, then the participant will use the laptop
in protocol “B”). (Task duration: ~15 minutes, location: NPS, MOVES Lab)
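A minimal sketch of how this device-order counterbalancing could be generated,
assuming participants are identified by subject number (the function name and
the fixed seed are illustrative, not part of the protocol):

    import random

    def assign_device_order(n_subjects: int, seed: int = 0):
        # Evenly and randomly split participants by the device used in
        # protocol "A"; each then switches devices for protocol "B".
        half = n_subjects // 2
        first_devices = ["tablet"] * half + ["laptop"] * (n_subjects - half)
        random.Random(seed).shuffle(first_devices)
        return [
            (subject, first, "laptop" if first == "tablet" else "tablet")
            for subject, first in enumerate(first_devices, start=1)
        ]

    # Each tuple is (subject number, protocol "A" device, protocol "B" device).
    for row in assign_device_order(8):
        print(row)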
Final survey—Participants will be provided a short questionnaire that will
survey their subjective opinions about the software and device that they have just
used to complete the requested tasks. It will also solicit comparisons between the
participant’s experiences with the initial device and the other device. (Task
duration: ~5 minutes, location: NPS, MOVES Lab)
Post-experimental tasks—These consist primarily of data analysis. We
expect to use a two-way ANOVA to analyze the results of the quantitative testing.
The qualitative measures will be described through simpler summaries, such as
mean values. Table 7 in Chapter VI displays the design. (Task duration: ~10
hours, location: NPS)
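As a sketch of the planned quantitative analysis, a two-way ANOVA on task
completion time with device and training as factors could be run as follows. The
data below are randomly generated placeholders, not experimental results, and
the column names are assumptions:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(0)
    n_per_cell = 8  # 2 x 2 design with 32 participants in total

    data = pd.DataFrame({
        "device": np.repeat(["tablet", "laptop"], 2 * n_per_cell),
        "trained": np.tile(np.repeat(["yes", "no"], n_per_cell), 2),
        "time_sec": rng.normal(loc=120, scale=20, size=4 * n_per_cell),
    })

    # Two-way ANOVA: main effects of device and training plus their interaction.
    model = ols("time_sec ~ C(device) * C(trained)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))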
D. RESEARCHER’S PACKET
Virtual environment training experiment
(Researcher)
READ FIRST
If the participant has no knowledge of CFF, provide the correct answers to the
questions below.
SUBJECT Number _____
Call for fire knowledge:
Please answer the following questions, which are typically known by a Joint Forward
Observer.
(1) How many mils are in a circle? ____6400_____
(2) Name two methods of target location.
____Grid (method using a grid coordinate for the location of the target)____
____Polar (method of using direction and distance from the known observer’s
location to the target)____
(3) A six-digit grid coordinate is accurate to how many meters? ____100
meters____
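For readers unfamiliar with the arithmetic behind these answers, the sketch
below is illustrative only (the function and the example coordinates are
assumptions): it shows how the polar method offsets a known observer location
by a direction in mils and a distance, and why a six-digit grid resolves to 100
meters:

    import math

    MILS_PER_CIRCLE = 6400  # answer to question (1)

    def polar_to_grid(obs_easting_m, obs_northing_m, bearing_mils, distance_m):
        # Polar method: offset the observer's location by direction and distance,
        # with the bearing measured clockwise from grid north.
        angle = bearing_mils * 2 * math.pi / MILS_PER_CIRCLE
        return (obs_easting_m + distance_m * math.sin(angle),
                obs_northing_m + distance_m * math.cos(angle))

    # Observer at easting 123450 m, northing 678900 m; target at 1600 mils
    # (due east) and 2000 m.
    east, north = polar_to_grid(123450, 678900, 1600, 2000)

    # Question (3): a six-digit grid keeps three digits per axis, placing the
    # target within a 100 m square; a ten-digit grid (five digits per axis, as
    # in the POSREP) resolves to 1 m.
    print(f"{int(east // 100) % 1000:03d}{int(north // 100) % 1000:03d}")  # 254789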
Protocol “A”
READ FIRST (RESEARCHER)
Researcher’s guide: Prior to having the participant begin the protocol using the
participant worksheet, allow them three minutes of interface familiarization (free play).
There is no time limit for Protocol “A”.
Researcher’s guide: After the participant has completed the Virtual environment
training experiment sheet and is ready to execute Protocol “A”, make sure they are
seated in front of the DVTE or standing in front of the bench with the tablet, as
appropriate. Provide them with the Protocol “A” sheet.
SUBJECT Number _____
Protocol “A”:
Basic CFF tasks:
Task #1—Determine current location. Start time:_______ Finish time: _______
The researcher will note the start and finish times for the task and count any
navigational errors made by the participant during task execution.
Navigational Errors: ____________
Task #2—Determine the bearing to target for the “technical vehicle” using the lensatic
compass. Start time:_______ Finish time: _______
The researcher will note the start and finish times for the task and count any
navigational errors made by the participant during task execution.
Navigational Errors: ____________
Task #3—Determine the bearing and distance to the second target, the “tank vehicle”,
using the Vector-21b’s. Start time:_______ Finish time: _______
The researcher will note the start and finish times for the task and count any
navigational errors made by the participant during task execution.
Navigational Errors: ____________
Task #4—Locate and activate the icon used for transmitting the CFF brief.
Start time:_______ Finish time: _______
The researcher will note the elapsed time to complete the task and count any
navigational errors made by the participant during task execution. The researcher
will reinitialize the scenario for the participant. On the tablet running SAT-M the
researcher will tap the reset button. On the laptop running Observer Simulator the
researcher will navigate to the file menu and select reset scenario.
Elapsed time: ___________ Navigational Errors: ____________
Execute CFF brief:
Task #5—Execute CFF
POSREP (use self-location): (10-digit grid coordinate) _______________.
Start time:_______ Finish time: _______
Navigational Errors: ____________
Transmission 1: Method of engagement. Start time:_______ Finish time: _______
Navigational Errors: ____________
Transmission 2: Target Location. Start time:_______ Finish time: _______
Navigational Errors: ____________
Transmission 3: Description of target, method of engagement, and method of fire and
control.
Start time:_______ Finish time: _______
Navigational Errors: ____________
MTO: Read back and acknowledge the message to observer (MTO)
Start time:_______ Finish time: _______
Navigational Errors: ____________
Shot Over—For tablet only: Navigational Errors: ____________
Select “end of mission”—Finish time: _______
Navigational Errors: _______________ Were rounds ‘on target?’ Yes / No
Protocol “A”
SUBJECT Number _____
Inform the participant that protocol “A” is complete and have them complete the
questionnaire for protocol “A.” While they are completing the questionnaire, please note
any specific challenges the participant had with the system and anything unusual
or interesting the participant did while executing the tasks above.
Once you have completed your note taking and the participant has completed the
questionnaire for protocol “A” provide them with the paperwork for protocol “B” and have
them switch devices.
Protocol “B”
READ FIRST (RESEARCHER)
Researcher’s guide: Prior to having the participant begin the protocol using the
participant worksheet, allow them three minutes of interface familiarization (free play).
There is no time limit for Protocol “B”.
Researcher’s guide: After the participant has completed the Protocol “A” qualitative
survey and is ready to execute Protocol “B”, make sure they are seated in front of the
DVTE or standing in front of the bench with the tablet, as appropriate. Provide them with
the Protocol “B” sheet.
SUBJECT Number _____ What device did you use during protocol “A”? Laptop /
Tablet
Protocol “B”:
Basic CFF tasks:
Task #1—Determine current location. Start time:_______ Finish time: _______
The researcher will note the start and finish times for the task and count any
navigational errors made by the participant during task execution.
Navigational Errors: ____________
Task #2—Determine the bearing to target for the “technical vehicle” using the lensatic
compass. Start time:_______ Finish time: _______
The researcher will note the start and finish times for the task and count any
navigational errors made by the participant during task execution.
Navigational Errors: ____________
Task #3—Determine the bearing and distance to the second target, the “tank vehicle”,
using the Vector-21b’s. Start time:_______ Finish time: _______
The researcher will note the start and finish times for the task and count any
navigational errors made by the participant during task execution.
Navigational Errors: ____________
Task #4—Locate and activate the icon used for transmitting the CFF brief.
Start time:_______ Finish time: _______
The researcher will note the elapsed time to complete the task and count any
navigational errors made by the participant during task execution. The researcher
will reinitialize the scenario for the participant. On the tablet running SAT-M the
researcher will tap the reset button. On the laptop running Observer Simulator the
researcher will navigate to the file menu and select reset scenario.
Elapsed time: ___________ Navigational Errors: ____________
Execute CFF brief:
Task #5—Execute CFF. Start time:______
POSREP (use self-location): (10-digit grid coordinate) _______________.
Start time:_______ Finish time: _______
Navigational Errors: ____________
Transmission 1: Method of engagement. Start time:_______ Finish time: _______
Navigational Errors: ____________
Transmission 2: Target Location. Start time:_______ Finish time: _______
Navigational Errors: ____________
Transmission 3: Description of target, method of engagement, and method of fire and
control.
Start time:_______ Finish time: _______
Navigational Errors: ____________
MTO: Read back and acknowledge the message to observer (MTO)
Start time:_______ Finish time: _______
Navigational Errors: ____________
Shot Over—For tablet only: Navigational Errors: ____________
Select “end of mission”—Finish time: _______
Navigational Errors: _______________ Were rounds ‘on target?’ Yes / No
Protocol “B”
SUBJECT Number _____
Inform the participant that protocol “B” is complete and have them complete the
questionnaire for protocol “B.” While they are completing the questionnaire, please note
any specific challenges the participant had with the system and anything unusual
or interesting the participant did while executing the tasks above.
Once the participant has completed the questionnaire for protocol “B” provide them with
the Post-experiment Demographic Questionnaire.
E. PARTICIPANT PACKET
Naval Postgraduate School Consent to Participate in Research
Introduction. You are invited to participate in a research study entitled Virtual
Environment Training on Mobile Devices, Supporting Arms Trainer-Mobile. The United
States Marine Corps 2012 Science and Technology Plan identifies a critical Training and
Education gap in T&E STO-6, Warrior Simulation: “Marines need to train as they would
fight as small units, particularly for dismounted operations. However, live training
resources, facilities, ranges and training areas are limited. Simulation capabilities are
needed to provide real-time effects and realistically engage the senses during
challenging, rapidly reconfigurable scenarios to increase small units’ opportunities to
train when they do not have access to live resources. Develop capabilities to realistically
simulate munitions (friendly and enemy) effects within live, virtual, and constructive
training environments. Develop the ability to stimulate operational equipment used in live
training environments from virtual or constructive environments, to improve the capability
of simulations to augment and enhance live training opportunities and to reinforce
realistic training using actual equipment as often as possible in conjunction with
simulators and simulations”. The purpose of the research is to investigate mobile
devices as a platform for training simulations as it aligns with the above outlined science
and technology objective.
Procedures.
− Consent will be solicited.
− Experimental procedures will include standard Call For Fire (CFF) tasks, such as
determine self-location, determine bearing and distance to a target, and generate a
standard CFF brief.
− The expected duration in total is approximately 45 minutes:
  − Consent (five minutes)
  − CFF knowledge test (five minutes)
  − Protocol A (15 minutes)
  − Survey (five minutes)
  − Protocol B (10 minutes)
  − Final questionnaire and debrief (five minutes)
− Participants will be video recorded to ensure accurate data collection.
− We expect a minimum of 32 participants in the research, and anticipate as many as
64.
− All subjects will be exposed to the same experimental conditions.
Location. The interview/survey/experiment will take place at the MOVES Institute, Naval
Postgraduate School in the laboratory.
Cost. There is no cost to participate in this research study.
Voluntary Nature of the Study. Your participation in this study is strictly voluntary. If you
choose to participate you can change your mind at any time and withdraw from the study.
You will not be penalized in any way or lose any benefits to which you would otherwise be
entitled if you choose not to participate in this study or to withdraw. The alternative to
participating in the research is to not participate in the research.
Potential Risks and Discomforts. The potential risks of participating in this study are:
possibility of eye, hand, and arm strain typically associated with normal laptop or tablet
use. There is a potential for breach of confidentiality.
Anticipated Benefits. Anticipated benefits from this study include advances in virtual
training environments. This will enable DoD to provide unique and innovative new
interfaces for the user (military trainee) as well as new methods for training and
educational material delivery. You will not directly benefit from your participation in this
research.
Compensation for Participation. No tangible compensation will be given.
Confidentiality & Privacy Act. Any information that is obtained during this study will be
kept confidential to the full extent permitted by law. All efforts, within reason, will be
made to keep your personal information in your research record confidential, but total
confidentiality cannot be guaranteed. All records will be stored securely at the MOVES
Institute in a locked storage container. Access to records will be allowed only to the
Principal Investigator and student researchers who have completed the required CITI
training. All personally identifiable information will be cleansed, and all participants will
remain anonymous. All data and consent will be forwarded to the IRB for long term
storage.
Points of Contact. If you have any questions or comments about the research, or you
experience an injury or have questions about any discomforts that you experience while
taking part in this study please contact the Principal Investigator, Dr. Joseph Sullivan,
831–656–7562, sullivan@nps.edu. Questions about your rights as a research subject or
any other concerns may be addressed to the Navy Postgraduate School IRB Chair, Dr.
Larry Shattuck, 831–656–2473, lgshattu@nps.edu.
Statement of Consent. I have read the information provided above. I have been given
the opportunity to ask questions and all the questions have been answered to my
satisfaction. I have been provided a copy of this form for my records and I agree to
participate in this study. I understand that by agreeing to participate in this research and
signing this form, I do not waive any of my legal rights.
Participant’s Signature
Date
Researcher’s Signature
Date
Virtual environment training experiment
(Participant)
READ FIRST
The following experiment and questionnaire are completely confidential. Nothing
you do or answer will be related back to you in any manner. Thank you for your
participation. Please answer all of the questions below and hand this sheet to the proctor
when you reach “STOP HERE.” You may ask the proctor questions at any time. There is no time
limit.
SUBJECT Number _____
Have you ever conducted Call for fire (real or simulated)? Yes / No
Have you ever attended a school dedicated to CFF or combined arms? Yes / No
Call for fire knowledge:
Please answer the following questions, which are typically known by a Joint Forward
Observer.
(1) How many mils are in a circle? ______________
(2) Name two methods of target location.
__________________________________
__________________________________
(3) A six-digit grid coordinate is accurate to how many meters?
____________________
Once you have answered the questions please notify the experimenter. You will be
allowed three minutes to get familiar with the system. Upon completion of this
familiarization period the proctor will provide a series of short tasks.
"STOP HERE" Please get the Proctor's attention to continue
Protocol “A”
READ FIRST (PARTICIPANT)
SUBJECT Number _____
The following experiment is confidential. Nothing you do or answer will be related
back to you in any manner. Thank you for your participation. There is no time
limit.
Please spend the next three minutes getting familiar with the device and
software. The proctor will inform you when three minutes has expired.
Basic CFF tasks:
Task #1—Determine your current location using GPS and record the location.
Your current location: ________________________________.
Task #2—Locate the “technical vehicle” (pickup-style truck) and determine the
bearing to it using the lensatic compass. Record the information displayed on the
virtual compass below.
Bearing to “technical vehicle”: ________________________________.
Task #3—Locate and determine the bearing and distance to the second target,
the “tank vehicle”, using the Vector-21b’s, sometimes labeled as ‘rangefinders,’
and record the information on display.
[Icon for bearing]    [Icon for range]
Bearing to “tank vehicle” ___________ Distance to “tank vehicle” __________
Task #4—Locate and activate the icon used for transmitting the CFF brief.
Briefly describe the icon: _______________________________
Please turn this sheet over and follow the instructions on the other side.
Execute CFF brief:
You will now generate and execute a CFF; the target is the tank you located in
task #3.
Task #5—Execute CFF.
First: Transmit a POSREP (Position Report) to the FDC (Fire Direction Center)
using self-location.
Once the POSREP has been transmitted you are ready to create and transmit
the three transmissions for the CFF.
Transmission 1: (if applicable to your device) select - Agency: “kilo btry”, Name:
“Obs”, Warning Order: “fire-for-effect”, Location Method: “polar”
Transmit (“checkmark” icon or K)
Transmission 2: Fill in the Polar Direction and Distance to the target, skip the
U/D dialog.
Transmit (“checkmark” icon or K)
Transmission 3: Select the target quantity, type and cover (I/O stands for “in
the open”). Then select the following:
Method of engagement (select “HE/Quick”)
Method of Control (select “when ready”)
Transmit (“checkmark” icon or K)
MTO: You will receive a message to observer (MTO) from the FDC. You will
need to ‘read it back’ precisely as they sent it to you. Fill in your response
appropriately.
Transmit (“checkmark” icon or K)
Shot Out: You may be asked to respond to this radio call with: Shot Over
Observe target for rounds impact
Select “end of mission”
READ FIRST
The following experiment and questionnaire are completely confidential. Nothing
you do or answer will be related back to you in any manner. Thank you for your
participation. Please answer all of the questions below and hand this sheet to the proctor
when you reach “STOP HERE.” You may ask the proctor questions at any time. There is no time
limit.
SUBJECT Number _____
Protocol “A” qualitative questionnaire: (a “4” means no strong opinion)
1. Training with this device on a regular basis will improve my ability to conduct CFF in
the field.
strongly disagree   1   2   3   4   5   6   7   strongly agree
2. It was difficult navigating through the device to find the appropriate information while
completing the tasks.
strongly disagree   1   2   3   4   5   6   7   strongly agree
3. The real world physical actions and conducting a task in the virtual environment are
the same.
strongly disagree   1   2   3   4   5   6   7   strongly agree
4. The button icons provide intuitive inference of what would happen when they are
pressed.
strongly disagree   1   2   3   4   5   6   7   strongly agree
5. It is easy to move through the screens without losing one’s place.
strongly disagree   1   2   3   4   5   6   7   strongly agree
6. Having this software available at my unit would improve my unit’s ability to perform
its mission.
strongly disagree   1   2   3   4   5   6   7   strongly agree
7. It was hard to understand what the buttons did.
strongly disagree   1   2   3   4   5   6   7   strongly agree
8. The 3D view interface was intuitive.
strongly disagree   1   2   3   4   5   6   7   strongly agree
9. The device accurately represents the real world physical motion required to conduct
the task.
strongly disagree   1   2   3   4   5   6   7   strongly agree
10. The overall interface is intuitive.
strongly disagree   1   2   3   4   5   6   7   strongly agree
11. Please provide any additional comments about your experience with the device here:

"STOP HERE" Please get the Proctor's attention to continue
Protocol “B”
SUBJECT Number _____
READ FIRST (PARTICIPANT)
What device did you use during protocol “A”? Laptop / Tablet
Please spend the next three minutes getting familiar with the device and
software. The proctor will inform you when three minutes has expired.
Basic CFF tasks:
Task #1—Determine your current location using GPS and record the location.
Your current location: ________________________________.
Task #2—Locate the “technical vehicle” (pickup-style truck) and determine the
bearing to it using the lensatic compass. Record the information displayed on the
virtual compass below.
Bearing to “technical vehicle”: ________________________________.
Task #3—Locate and determine the bearing and distance to the second target,
the “tank vehicle”, using the Vector-21b’s, sometimes labeled as ‘rangefinders,’
and record the information displayed.
[Icon for bearing]    [Icon for range]
Bearing to “tank vehicle” ___________ Distance to “tank vehicle” __________
Task #4—Locate and activate the icon used for transmitting the CFF brief.
Briefly describe the icon: _______________________________
Please turn this sheet over and follow the instructions on the other side.
Execute CFF brief:
You will now generate and execute a CFF; the target is the tank you located in
task #3.
Task #5—Execute CFF.
First: Transmit a POSREP (Position Report) to the FDC (Fire Direction Center)
using self-location.
Once the POSREP has been transmitted you are ready to create and transmit
the three transmissions for the CFF.
Transmission 1: (if applicable to your device) select - Agency: “kilo btry”, Name:
“Obs”, Warning Order: “fire-for-effect”, Location Method: “polar”
Transmit (“checkmark” icon or K)
Transmission 2: Fill in the Polar Direction and Distance to the target, skip the
U/D dialog.
Transmit (“checkmark” icon or K)
Transmission 3: Select the target quantity, type and cover (I/O stands for “in
the open”). Then select the following:
Method of engagement (select “HE/Quick”)
Method of Control (select “when ready”)
Transmit (“checkmark” icon or K)
MTO: You will receive a message to observer (MTO) from the FDC. You will
need to ‘read it back’ precisely as they sent it to you. Fill in your response
appropriately.
Transmit (“checkmark” icon or K)
Shot Out: You may be asked to respond to this radio call with: Shot Over
Observe target for rounds impact
Select “end of mission”
READ FIRST
The following experiment and questionnaire are completely confidential. Nothing
you do or answer will be related back to you in any manner. Thank you for your
participation. Please answer all of the questions below and hand this sheet to the proctor
when you reach “STOP HERE.” You may ask the proctor questions at any time. There is no time
limit.
SUBJECT Number _____
PART III:
Protocol “B” qualitative questionnaire: (a “4” means no strong opinion)
1. Training with this device on a regular basis will improve my ability to conduct CFF in
the field.
strongly disagree   1   2   3   4   5   6   7   strongly agree
2. It was difficult navigating through the device to find the appropriate information while
completing the tasks.
strongly disagree   1   2   3   4   5   6   7   strongly agree
3. The real world physical actions and conducting a task in the virtual environment are
the same.
strongly disagree   1   2   3   4   5   6   7   strongly agree
4. The button icons provide intuitive inference of what would happen when they are
pressed.
strongly disagree   1   2   3   4   5   6   7   strongly agree
5. It is easy to move through the screens without losing one’s place.
strongly disagree   1   2   3   4   5   6   7   strongly agree
6. Having this software available at my unit would improve my unit’s ability to perform
its mission.
strongly disagree   1   2   3   4   5   6   7   strongly agree
7. It was hard to understand what the buttons did.
strongly disagree   1   2   3   4   5   6   7   strongly agree
8. The 3D view interface was intuitive.
strongly disagree   1   2   3   4   5   6   7   strongly agree
9. The device accurately represents the real world physical motion required to conduct
the task.
strongly disagree   1   2   3   4   5   6   7   strongly agree
10. The overall interface is intuitive.
strongly disagree   1   2   3   4   5   6   7   strongly agree
Protocol “B” qualitative questionnaire continued
Circle one:
11. Which device was more intuitive to use:
Laptop / Tablet
12. If the software on both devices were about equivalent I would prefer to use:
Laptop / Tablet
13. If each device had the same feature set I would prefer to use:
Laptop / Tablet
14. This device is more convenient to train with:
Laptop / Tablet
15. Please provide any additional comments that you think would be useful to
researchers about your experience with the devices here:
"STOP HERE" Please get the Proctor's attention to continue
READ FIRST
The following experiment and questionnaire are completely confidential. Nothing
you do or answer will be related back to you in any manner. Thank you for your
participation. Please answer all of the questions below and hand this sheet to the proctor
when you reach “STOP HERE.” You may ask the proctor questions at any time. There is no time
limit.
SUBJECT Number _____
PART IV:
Post-experiment Demographic Questions:
1. What is your primary military specialty? (Provide name of specialty)
________________
2. Have you been school-trained in conducting artillery call for fire (CFF)?
YES
NO
3. Have you held the billet of or performed the duties of a forward observer?
YES
NO
4. Have you held the billet of or performed the duties of an Artillery Liaison Officer?
YES
NO
5. Have you conducted artillery call for fire with live rounds?
YES
NO
5a. If so, approximately how long has it been since the last time you conducted
live CFF?
_______________________________________________________________
6. For how many hours do you use a computer on a daily basis?
_______________________
7. For how many hours do you use a tablet device on a daily basis?
___________________
9. Have you ever used a virtual environment for training or entertainment (e.g., first-
person shooter games, VBS2, America’s Army, etc.)?
YES NO
10. Have you ever used a virtual environment for forward observer training (e.g., TSFO,
FOPC, CAN, etc.)?
YES NO
a. What was the name(s) of the virtual environment(s)?
_____________________
_____________________
_____________________
11. When you were at your most proficient with CFF, how would you rate that
proficiency?
Untrained    Novice    Average    Advanced    Expert
12. Given that many duties of a forward observer are perishable, how would you rate
your current proficiency in call-for-fire?
Untrained    Novice    Average    Advanced    Expert
13. During the course of your military career, while you were deployed or in any other
field environment:
a. Did you or your unit have a computer available for general use?
YES    NO
b. Did you or your unit have a tablet device (iPad or Android) available for use?
YES    NO
"STOP HERE" Please get the Proctor's attention to continue
LIST OF REFERENCES
American Forces Press Service. (2013). Secretary details results of
sequestration uncertainty. Retrieved August 23, 2013, from
http://www.defense.gov/news/newsarticle.aspx?id=119421
Apple Inc. (2013). iOS developer library. Retrieved September 15, 2013, from
https://developer.apple.com/library/ios/documentation/general/conceptual/
devpedia-cocoacore/MVC.html
Associated Press. (2012). Number of iPads sold by Apple by quarter. Retrieved
July 22, 2013, from http://finance.yahoo.com/news/number-ipads-sold-apple-quarter-201153619.html
Bilbruck, J. (2009). Supporting arms virtual trainer (SAVT). Orlando, FL: Marine
Corps Systems Command PM TRASYS Individual APM.
Brannon, D., & Villandre, M. (2002). The forward observer personal computer
simulator (FOPCSIM). Master's thesis, Naval Postgraduate School,
Monterey, CA.
Brown, B. (2010). A training transfer study of simulation games. Master's thesis,
Naval Postgraduate School, Monterey, CA.
Clark, R. C., Nguyen, F., & Sweller, J. (2006). Efficiency in learning: Evidence-
based guidelines to manage cognitive load. San Francisco: Jossey-Bass.
Cockayne, W., & Darken, R. (2003). The application of human ability
requirements to virtual environment interface design and evaluation. In
Handbook of Task Analysis for Human-Computer Interaction. Mahwah,
New Jersey: Lawrence Erlbaum Associates.
Deputy Commandant for Combat Development and Integration. (2012). Marine
Corps science & technology strategic plan. Washington, DC:
Headquarters United States Marine Corps.
DVTE Development Team. (2010). ObserverSim user’s guide. Orlando, FL:
United States Marine Corps.
Grain, J. (2012). Deployable virtual training environment (DVTE) flyer.
Unpublished material.
Headquarters Department of the Army. (1960). Subcaliber Mortar Trainer M32
With 25-MM Training Projectile M379 (technical manual no. 9-6920-212-
14). Washington, DC: Department of the Army.
Headquarters Department of the Army. (1976). Operator, organizational, and
direct support maintenance manual (including repair parts and special
tools list) field artillery trainer kits (with field artillery trainer M31) (technical
manual no. 9-6920-361-13&P). Washington, DC: Department of the Army.
Headquarters Department of the Army. (1991). Field Manual (FM) 6-30, Tactics,
techniques, and procedures for observed fire. Washington, DC:
Department of the Army.
Kroemer, J. (2006). Artillery soldiers adapt to infantry role in Iraq. Retrieved July
22, 2013, from http://www.defense.gov/News/newsarticle.aspx?id=14659
Maples, M. (2003). Relevant and ready: The FA now and in the future. Field
Artillery, (6), 1–5.
McDonough, J., & Strom, M. (2005). The forward observer personal computer
simulator (FOPCSIM) 2. Monterey, CA: Naval Postgraduate School.
Mitchell, S. (2005). Call-for-fire trainer and the joint fires observer. FA Journal,
(March/April), 16–17.
Naval Air Systems Command, Training Systems Division. (1998). Summary of
forward observer training system (FOTS). Orlando, FL: United States
Navy.
Norr, H. (2006). PowerBook G4s. Macworld, 23(1), 36.
Pinola, M. (2012). Windows 8 ultrabooks and tablets to feature new sensors.
PCWorld.
Program Manager Training Systems. (2013). Product and Services Catalog.
Orlando, Florida: United States Marine Corps.
Shimpi, A. L. (2012). iPad 4 GPU performance analyzed: PowerVR SGX
554MP4 under the hood. Retrieved July 22, 2013, from
http://www.anandtech.com/show/6426/ipad-4-gpu-performance-analyzed-powervr-sgx-554mp4-under-the-hood
Unity Technologies. (2013). Unity license comparisons. Retrieved September
10, 2013, from http://unity3d.com/unity/licenses.html
U.S. Army Program Executive Office for Simulation, Training, & Instrumentation.
(2003). Guard unit armory device full-crew interactive simulation trainer
(GUARDFIST II). Retrieved August 8, 2013, from
http://web.archive.org/web/20040308184317/http://www.peostri.army.mil/PRODUCTS/GUARDFISTII/
United States Army Field Artillery School. (1989). Field artillery training devices,
software and special texts. Field Artillery, (August).
Walker, D. (2013). Trends in U.S. military spending. New York, New York:
Council on Foreign Relations.
INITIAL DISTRIBUTION LIST
1. Defense Technical Information Center
   Ft. Belvoir, Virginia
2. Dudley Knox Library
   Naval Postgraduate School
   Monterey, California