A Top Domain Ontology For Software Testing

Ahmad Asman
Rakesh Maurani Srikanth
EXAM WORK 2015
INFORMATICS
This exam work has been carried out at the School of Engineering in Jönköping
in the subject area Informatics, specialization Information Engineering and
Management. The thesis is part of the university's two-year Master of
Science programme.
The authors take full responsibility for opinions, conclusions and findings
presented.
Examiner: Anders Adlemo
Supervisor: He Tan
Scope: 30 credits (second cycle)
Date: November 2015
Abstract
In the software testing process, a large amount of information is required and
generated. This information can be stored as knowledge that needs to be
managed and maintained using the principles of knowledge management.
Ontologies can act as a bridge by representing this testing knowledge in an
accessible and understandable way.
The purpose of this master thesis is to develop a top-domain ontology (TDO)
which represents general software testing knowledge. This is achieved by
unifying the domain vocabularies that are used in software testing. This
top-domain ontology can be used to link existing software testing ontologies.
It can act as an interface between top-level and domain ontologies and guide
the development of new software testing ontologies.
The standards of ISTQB were chosen, after careful consideration, as the main
source of knowledge; other sources, such as existing software testing ontologies,
were also used to develop the ontology. The available ontologies for software
testing were collected and evaluated against a list of evaluation criteria. The
study shows that the available software testing ontologies do not fulfil the
purpose of a TDO. In this work, we developed a TDO by using a combination
of two ontology development methods: Ontology 101 and Methontology. The
resources used for gaining knowledge, and the reuse of concepts from available
ontologies, made it possible for this TDO to have a better coverage of the field
of software testing. The ontology was evaluated using two methods:
competency questions and evaluation by ontology experts. The evaluation based
on competency questions focuses on the structure of the ontology and shows
that the ontology is well formed and delivers the expected results. The evaluation
by ontology experts was done against a set of quality criteria which represent
the quality and coverage of the ontology. The results show that the ontology
developed can be used as a TDO after addressing some comments from the
evaluators. The evaluators agree that the ontology can be adapted to different
applications of software testing and that it fulfils the main purpose of a
top-domain ontology.
The developed ontology could be further improved by evaluating and reusing the
ontologies that are not published (e.g. STOWS). Ontology maintenance is an
ongoing process; the ontology needs to be updated with the new knowledge of
software testing that emerges with research.
Acknowledgements
We would like to thank our supervisor, He Tan, for her guidance and valuable
suggestions throughout the course of our thesis. Your guidance helped us find
the way out when we were stuck. You shared your thoughts and knowledge
throughout the process of writing the thesis and developing the ontology,
and this made the completion of our thesis possible.
Our acknowledgement would not be complete without thanking Vladimir
Tarasov for his valuable suggestions at different stages of the thesis.
We owe our gratitude to the evaluators of our ontology for the time they took
to complete the evaluation. Their comments and results helped us analyse and
verify our work.
Last but not least, we would like to thank our family and friends for their
support.
-
Ahmad Asman and Rakesh Maurani Srikanth
Keywords
Top Domain Ontology (TDO), Top-level Ontology, Ontology, Ontology
Development, software testing, domain, software testing ontology, ontology
reuse, ontology evaluation
Contents
1 Introduction
  1.1 Background
  1.2 Purpose and research questions
  1.3 Delimitations
  1.4 Outline
2 Theoretical background
  2.1 Software Testing
  2.2 Ontology
    2.2.1 Top-level Ontology and Top-Domain Ontology
    2.2.2 Domain Ontologies for Software Testing
  2.3 Ontology Development Methods
    2.3.1 Methontology
    2.3.2 Ontology 101
    2.3.3 eXtreme Design with Content Ontology Design Patterns (XD)
    2.3.4 Ontology Reuse
    2.3.5 Tools used
    2.3.6 Language used
3 Method and Implementation
  3.1 Research Method
    3.1.1 Information Collection
    3.1.2 Design Science Research Methodology
  3.2 Ontology Development Method
    3.2.1 Mixed method for TDO development
    3.2.2 Method for Evaluation of Ontology developed
4 Findings and analysis
  4.1 RQ1. What are the existing software testing ontologies or frameworks, their purpose and its evaluation?
    4.1.1 Ontology for Software testing
    4.1.2 Available Ontology evaluation
    4.1.3 Evaluation results
  4.2 RQ2. What are the relevant concepts, relations and constraints that are needed to describe knowledge of software testing in general?
  4.3 Top domain ontology for software testing
    4.3.1 Classes in TDO
  4.4 Evaluation
    4.4.1 Competency Questions
    4.4.2 Validation by ontology experts
5 Discussion and conclusions
  5.1 Discussion of method
    5.1.1 Literature review
    5.1.2 Mixed method for TDO development
    5.1.3 Mixed method for Evaluation
  5.2 Discussion of findings
  5.3 Conclusions
  5.4 Future scope of the project
6 References
7 Appendices
  7.1 Appendix 1: Classes and relations used in 'A Top Domain Ontology for Software Testing'
    7.1.1 Defects
    7.1.2 Formal review steps
    7.1.3 Human involved in testing
    7.1.4 Static testing techniques
    7.1.5 Test design techniques
    7.1.6 Test methods
    7.1.7 Test objects
    7.1.8 Test strategies
    7.1.9 Testing artifacts
    7.1.10 Requirements
    7.1.11 Testing teams
    7.1.12 Tools for testing
List of Figures
Figure 1 A model of the software testing process
Figure 2 Ontology layer pyramid
Figure 3 Timeline for ontology development methods
Figure 4 Ontology development process of Methontology
Figure 5 Stages of ontology development 101 method
Figure 6 The XD iterative workflow
Figure 7 Research design for the thesis
Figure 8 Literature review process
Figure 9 Design Science Research Methodology process model
Figure 10 DSRM process for A Top Domain Ontology for Software Testing
Figure 11 Ontology life cycle for mixed method
Figure 12 Ontology metrics as shown on Protégé
Figure 13 High level ontology model for software testing
Figure 14 Example of documentation leaf and its relations
Figure 15 SPARQL query example
Figure 16 Defects
Figure 17 Formal review steps
Figure 18 Relations for formal review steps
Figure 19 Humans involved in testing
Figure 20 Relations for human involved in testing
Figure 21 Static testing techniques
Figure 22 Relations for static testing techniques
Figure 23 Test design techniques
Figure 24 Relations for test design techniques
Figure 25 Test methods
Figure 26 Relations for test methods
Figure 27 Relations for test objects
Figure 28 Test strategies
Figure 29 Relations for test strategies
Figure 30 Testing artifacts
Figure 31 Relations for testing artifacts
Figure 32 Relations for documentation
Figure 33 Relations for requirements
Figure 34 Relations for test case
Figure 35 Relations for test condition
Figure 36 Relations for testing team
Figure 37 Tools for testing
Figure 38 Relations for tools for testing
List of Tables
Table 1 Domain ontologies for software testing
Table 2 Quality criteria evaluation of available software testing ontologies
Table 3 Ontology evaluation by experts
List of Abbreviations
KBS: Knowledge base systems
KM: Knowledge Management
GT: Glossary of Terms
DS research: Design Science Research
DSRM: Design Science Research Methodology
TDO: Top-Domain Ontology
1 Introduction
This chapter describes the importance of the research, its purpose, the research
problem, its background, and the delimitations of our research. The focus here is
to state the purpose of the research, explain the problem and formulate the
research questions that aim to solve the problems encountered.
1.1 Background
The software engineering life cycle consists of many steps, one of which
comprises the discipline of testing, which is devoted to preventing
malfunctions and checking the performance of the software against its
requirements. Software products today are far more complex than a decade ago,
and the competition to deliver a quality software product on time is intense.
This has pushed software testing to new heights; due to the demand for
high quality, software organizations are trying to find new ways to come up
with a faster and better testing process. Testing is usually performed at different
levels like unit testing, integration testing and system testing. The testing
process consists of stages like test planning, test case design, test execution and
test result analysis. Software testing requires knowledge of the requirement
specification document, the design documents and the application domain. This
knowledge could be in different formats like structured documents, semi-structured documents, tables and diagrams.
A lot of knowledge is required for software testing, and a lot of
information is generated during the testing process. Therefore, computer support
becomes important to manage this testing knowledge for reuse. In this
context, testing knowledge should be captured and represented in an accessible
and understandable way, and therefore the principles of Knowledge Management
(KM) can be applied [1, p. 71]. Ontologies can act as a bridge in representing
testing knowledge. Ontologies are about describing the types of objects,
properties of objects, and relations between objects that are possible in a
specified domain of knowledge [2, p. 1]. Ontologies are best used for sharing
information as they are mostly used in specifying the vocabulary and
relationships of concepts in a domain of interest [3, p. 1]. Ontologies are used for
three general purposes in KM systems [4]: “(i) to support knowledge search,
retrieval, and personalization; (ii) to serve as basis for knowledge gathering,
integration, and organization; and (iii) to support knowledge visualization”.
“Ontologies can be used for establishing a common conceptualization to be
used in the KM system in order to facilitate communication, integration,
search, storage and representation of knowledge” [5, p. 58]. With this as the
main criterion, there have been several initiatives to implement the use of
ontologies in testing, as ontologies can represent information in a machine-understandable way [6].
A top-domain ontology is an ontology that contains general concepts that are
the same across different domains. Its main function is to provide semantic
integration of domain ontologies and also to guide the development of new
ontologies [7]. For example, BioTopLite [8] is a top-domain ontology that
covers a broad range of categories relevant for applications in the life science
domain. Its main goal is to provide the different ontologies of the life sciences
domain with an ontologically sound layer that helps in integrating and linking
these ontologies. There are multiple advantages if ontologies are integrated and
linked, including saving time and human effort. The reused ontologies are
already tested and evaluated, which supports their quality. Over time,
development and research in the field of ontology engineering has been
increasing. Currently there are multiple levels of ontology developed in the field
of life sciences: top-level, top-domain and domain ontologies [9]. There are
multiple domain-specific ontologies developed for software testing, like the
software testing ontology [10] used for test case reuse and MND-TMM [11] for
weapon software system development. They do not provide integration with
other domains. There is a need for a top-domain ontology in software testing
that acts as a general ontology covering the whole domain and its knowledge.
Moreover, the software testing domain is very complex, and one of the main
problems in the software testing literature is that there is no uniformity in the
vocabulary used [1]. “In several cases, authors create and recreate concepts,
using different terms” [1]. To alleviate this problem, top-domain ontologies can
act as a guide for developers, providing a sound framework they can rely on and
reuse [8].
1.2 Purpose and research questions
The motivation for ontology development generally comes from a particular
problem that the developers are attempting to solve [12, p. 80]. As software
testing processes generate a large volume of information [1, p. 71], ontologies
can be used to manage this information. “Ontologies can be used for establishing
a common conceptualization to be used in the KM system in order to facilitate
communication, integration, search, storage and representation of knowledge”
[5, p. 58].
The goal of this thesis is to develop a Top-Domain Ontology (TDO) for
software testing by unifying the vocabularies that are used in the testing
domain. This TDO can be used to:
1. link existing software testing domain ontologies,
2. act as an interface between top-level and domain ontologies (section 2.2.1), and
3. guide the development of new software testing ontologies.
Research Questions
We have two research questions for our thesis:
RQ1: What are the existing software testing ontologies or frameworks, their
purposes and their quality?
Here we evaluate the existing software testing ontologies/frameworks that are
available and reach a conclusion about what has been done in the area of
ontologies in software testing.
RQ2: What are the relevant concepts, relations and constraints that are needed
to describe knowledge of software testing in general?
This task mainly deals with knowledge acquisition, where different sources of
software testing knowledge are searched, studied and evaluated to be used as
standards, which are then used to develop the ontology.
After knowledge acquisition we build a top-domain ontology for software
testing based on the standards of the International Software Testing
Qualifications Board (ISTQB) [13]. Some existing ontologies for software
testing are also used as resources. After completing this task, an evaluation of
the ontology is carried out to test its quality.
1.3 Delimitations
The scope of the thesis is limited to the area of software testing and how to
represent the knowledge needed for software testing in an ontology. We have
used standards from ISTQB to build a general ontology and also reused some
parts of other software testing ontologies. This work does not go into the details
of any specific domain, unlike the Software Test Ontology (SWTO) [14], which
is aimed at testing on the Linux platform. This work focuses on finding software
testing concepts and their relations that can act as a top-domain ontology, that
is, an ontology that represents the knowledge of software testing in general.
1.4 Outline
In the current chapter we provided the background of the subject to familiarize
the reader with the topic of the thesis. The purpose of the thesis was defined,
followed by the formulation of the research questions, and the delimitations of
the research conducted were discussed.
In order to help the reader understand the structure of the thesis and follow its
contents, an overall explanation of each chapter in the report is presented below.
Chapter 2 “Theoretical background” provides the reader with theories related to
the research problem and its intended solution. It covers the topics which act as
the knowledge base for the implementation, evaluation, and motivation of this
thesis work.
Chapter 3 “Method and Implementation” explains how the research study has
been designed in terms of methods and how they are implemented. This chapter
also motivates the choice of methods and their usage in terms of the current
research problem.
Chapter 4 “Findings and Analysis” delivers the findings following the research
tasks that are based on the research questions. This chapter answers the research
tasks and presents the results of the ontology evaluation mentioned in Chapter 3.
Chapter 5 “Discussion and conclusions” discusses the methods and findings
with regard to their pros and cons. We also provide a conclusion of our work
along with recommendations for future research.
2 Theoretical background
This chapter introduces the terms and knowledge used to solve the research
questions discussed in Section 1.2. It introduces the reader to the domain of
software testing and its process, to ontologies and their usage so far in this
domain, familiarizes the reader with top-level and top-domain ontologies and the
need for them, and then describes ontology development methods. These
theories are used as tools to understand the problem, are used in Chapter 3 to
develop and discuss the methodology, and in Chapter 4 to help evaluate and
discuss the findings.
2.1 Software Testing
“In a software product “Verification and Validation” (V&V) activities intend to
ensure the satisfaction of the intended user needs by conformance of the system
with its specification” [15]. “The general aim of testing is to affirm the quality
of software systems by systematically exercising the software in carefully
controlled circumstances” [16].
The testing process can be divided into three stages: system components are
tested, the integrated system is tested and, finally, the system is tested with the
customer’s data. Defects in the components are discovered early, interface
problems are found when the system is integrated, and further component errors
may be discovered during system testing. As the problems need to be debugged
and tested again, this may require earlier stages of testing to be repeated.
The software testing process has two distinct goals [17]:
1. Demonstrating to the developer and the customer that the software built or
being built meets its requirements.
2. Discovering faults or defects in the software, in terms of incorrect or
undesirable behaviour or non-conformance to its specification.
Figure 1 shows a general model of the testing process.
Figure 1 A model of the software testing process [17]
Testing is an expensive and laborious process, and as a solution to this, testing
tools were developed [17]. These tools are referred to as “test automation”: an
integrated set of tools that support the testing process, and with the facilities
they provide, the cost of testing can be reduced.
A significant volume of information is generated during the testing process.
“Such information may turn into useful knowledge to potentially benefit future
projects from experiences gained from past projects” [1]. In this context,
Knowledge Management (KM) has emerged as one of the important means to
manage software testing knowledge, but one of the main problems is how to
represent this knowledge. “KM system must minimize ambiguity and
imprecision in interpreting shared information. This can be achieved by
representing the shared information using ontologies.” [18] As pointed out in
[5], “Ontologies are particularly important for KM. They bind KM activities
together, allowing a content-oriented view of KM”. “Ontologies can be used
for establishing a common conceptualization to be used in the KM system in
order to facilitate communication, integration, search, storage and
representation of knowledge” [5, p. 58]. Therefore, there is a need for software
testing ontology for managing software testing knowledge.
2.2 Ontology
“An ontology is a formal, explicit specification of a shared conceptualization”
[19, p. 1]. “Ontologies have been widely recognized as an important instrument
for supporting Knowledge Management” [1, p. 1].
An ontology defines a common vocabulary to share information in a domain by
defining the basic concepts of the domain and the relations among them in a
machine-interpretable format. The importance of ontologies can be seen in many
disciplines where standardized ontologies have been developed and can be
used by domain experts to share knowledge in their field [20].
Adapted from [20], some purposes for developing our ontology are:
• to help general understanding of content or information among software
experts and lay people,
• to reuse knowledge from a particular domain,
• to allow domain knowledge to be analyzed from the declared terms.
2.2.1 Top-level Ontology and Top-Domain Ontology
Top-level ontologies contain general categories which are applicable across
multiple domains. “Their main purpose is to facilitate the semantic integration
of domain ontologies and guide the development of new ontologies” [7].
Ontologies have been distinguished into three basic types in [9] [21] (shown in
Figure 2) in terms of the biomedical domain, and we take them as a reference for
expressing the absence of a TDO in the field of software testing.
• Top-level ontology: This ontology provides semantics for very general
terms and contains a very small and restricted set of high-level, general
classes which are not related to any domain, like “Continuant”,
“Function” or “Object”. Examples of this kind of ontology are BFO
[22] and DOLCE [23].
• Top-domain ontology: The main purpose of this ontology is to integrate
with both top-level and domain ontologies, and thus it contains core
domain classes [21]. “A top-domain ontology can also include more
specific relations and further expand or restrict the applicability of
relations introduced by the top ontology” [9]. An example is BioTop
[24] for the biomedical domain; this ontology type is absent in software
testing.
• Domain ontology: This ontology is used to describe a specific domain
by providing semantics for the terminology used, and represents specific
classes and concepts. An example in the biomedical case is the Gene
Ontology [25], and for software testing STOWS [26].
Figure 2 Ontology Layer Pyramid (adopted from [19])
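To make the three-layer distinction concrete, the following is a minimal sketch in Python with rdflib of how classes from the three layers can be linked by subclass relations. The namespaces and class names (a BFO-style "Process" at the top level, a TDO class "TestingActivity" and a domain class "LinuxUnitTest") are illustrative assumptions only, not the actual identifiers of BFO, BioTop or the ontology developed in this thesis.

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

# Hypothetical namespaces for each ontology layer (illustrative only).
TOP = Namespace("http://example.org/top-level#")     # a BFO-like top-level ontology
TDO = Namespace("http://example.org/tdo#")           # our top-domain ontology
DOM = Namespace("http://example.org/linux-testing#") # a domain ontology

g = Graph()
for prefix, ns in [("top", TOP), ("tdo", TDO), ("dom", DOM)]:
    g.bind(prefix, ns)

# Top-level: a very general class, not tied to any domain.
g.add((TOP.Process, RDF.type, OWL.Class))

# Top-domain: a core software testing class, subsumed by the top-level class.
g.add((TDO.TestingActivity, RDF.type, OWL.Class))
g.add((TDO.TestingActivity, RDFS.subClassOf, TOP.Process))

# Domain: a specific class from a domain ontology, hooked under the TDO class.
g.add((DOM.LinuxUnitTest, RDF.type, OWL.Class))
g.add((DOM.LinuxUnitTest, RDFS.subClassOf, TDO.TestingActivity))

print(g.serialize(format="turtle"))

In this way the top-domain layer provides the hook through which domain ontologies can be aligned with a top-level ontology and with each other.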
Why do we need a top-domain ontology?
According to [8], ontology engineers should possess in-depth knowledge of the
domain as well as master the representation formalism; they need to be skilled
and follow design specifications for building and maintaining modular software
artefacts. Top-domain ontologies can act as a guide for developers and provide
them with a sound framework which they can rely on and reuse. TDOs have a
standardizing nature as they present domain knowledge in a general way and
hence can guarantee real interoperability of ontologies at the class and relation
levels.
2.2.2 Domain Ontologies for Software Testing
Ontologies for software testing have been developed based on the domain or the
project from which the knowledge could be extracted. After a careful literature
review (e.g. STOWS [26], SWTO [14], TaaS [27]) we concluded that no quality
ontology had been developed to describe the knowledge of software testing in
general. Though there were two ontologies, SWTOI and OntoTest [28], that
described the knowledge of software testing, they either miss out some important
concepts or do not follow the quality standards for ontology development. This
is discussed in detail in section 4.1.1. In our work we intend to overcome these
shortcomings and represent the knowledge according to the available resources.
2.3 Ontology Development Methods
“The ontology development process refers to what activities are carried out
when building ontologies” [29]. There is no one “correct” way or methodology
for developing ontologies.
There is a growing number of methodologies that specifically address the issue
of the development and maintenance of ontologies. Some of them are
mentioned here:
Ontolingua [30] [31] [32], Tove [33] [34] [35] [36], PLINIUS [37],
CommonKADS and KACTUS [38] [39], Guarino et al. [40], MENELAS [41]
[42], PHYSSYS [43] [44], Enterprise model approach [45] [46] [47],
Mikrokosmos [48] [49], ONIONS [50] [51], SENSUS [52], KBSI IDEFS [53],
Methontology [54] [29], Ontology 101 [20], and XD [55].
Figure 3 Timeline for ontology development methods
After a brief study of different ontology development methodologies, we have
chosen two relevant methods for building the TDO in this project: Ontology 101
and Methontology. We have chosen these two methods because Ontology 101
provides the most practical and detailed guidelines for building an ontology,
while Methontology provides good guidelines for organizing the activities
during ontology development. We also briefly discuss the eXtreme Design (XD)
[55] method, the most recent ontology development method, to cover the latest
trend in ontology development. However, XD is not used in our research, as it
deals with many activities that do not fit the context of developing an ontology
from available resources. After that we discuss ‘ontology reuse’ [56] [57],
which is an activity within ontology development methods, and its application
to our ontology.
2.3.1 Methontology
Methontology is a well-structured methodology used for the development of
ontologies from scratch. In the development of the TDO we implement this
methodology, as we had to develop our ontology using the resources found
through our literature review. This methodology was developed within the
Laboratory of Artificial Intelligence at the Polytechnic University of Madrid.
It constructs ontologies at the knowledge level and includes a life cycle model
of the ontology, an ontology development process and techniques for each of
its activities. According to [29] the following activities are involved; Figure 4
shows them as the process of ontology development.
1. Specification: The purpose of this step is to produce a specification
document in natural language. It must include information like the purpose
of the ontology, including its intended use, users, etc.; the level of formality
needed; and the scope of the ontology, which includes the set of terms to be
represented, their characteristics and the required granularity.
2. Knowledge acquisition: This is an independent activity which uses the
sources of knowledge to elicit knowledge using techniques like
interviews, etc. It is mostly used in the specification phase, when the
‘requirement document’ is constructed, but it is also used in other phases.
3. Conceptualization: Domain knowledge is structured into a conceptual
model in this activity. A Glossary of Terms (GT) is built which is
grouped into concepts and verbs. A conceptual model is produced as the
outcome; this helps in a) determining whether the ontology is useful and
usable for the application without inspecting its source code, and b)
comparing the scope and completeness of ontologies by analysing the
knowledge expressed by the GT.
4. Integration: This activity has the goal of speeding up the construction of
ontologies by reusing already built ontologies. It also provides some
uniformity across ontologies.
5. Implementation: In this phase the ontology is codified in a formal language
(Prolog, C++ or any other language) by using an ontology development
environment.
6. Evaluation: This activity ensures the correctness of the ontology and
is carried out with respect to the requirement specification document.
Certain guidelines are mentioned in [58] for carrying out validation and
verification by looking for incompleteness, inconsistencies and
redundancies.
7. Documentation: This step includes documentation as an activity to be
performed over the different phases of the ontology development process. It
produces documents after each activity, such as the requirements specification
document, knowledge acquisition document, conceptual model document,
formalization document, integration document, implementation document,
and evaluation document.
The life cycle of the ontology identifies which stages the ontology moves
through during its lifetime and defines the activities related to each stage.
According to [59] these can be divided into three categories:
a. Project planning: This category includes planning, control and quality
assurance.
b. Development-oriented activities: These include specification,
conceptualization, formalization and implementation.
c. Support activities: These include knowledge acquisition, evaluation,
integration, documentation and configuration.
Figure 4 Ontology development process of Methontology [27]
2.3.2 Ontology 101
Ontology 101 [20] is an iterative method of ontology development. Since the
development of our ontology had to start from the resources available through
our literature review, it was necessary to keep it an iterative process for it to
reach its final stage. Another important reason to implement this methodology
is the need for reuse. Though no ontology was reused entirely, we had to reuse
some concepts and relations from the existing ontologies, and the reuse phases
are clearly described in this methodology.
The steps in which the ontology has to be shaped are more detailed and clearer
than in Methontology. Ontology 101 can be divided into seven steps that help
in building our ontology from scratch.
Figure 5 Stages of ontology development 101 method [18]
1. Determine the domain and scope of the ontology: Determining the
scope of the ontology is the first step in creating an ontology. We
have to know the domain it focuses on, the use of the ontology, the answers it
provides for the competency questions and the end users of the
ontology.
2. Consider reuse of existing ontologies: It is always best to check if there
is already an existing ontology that can be refined and extended for a
particular domain. There are libraries of ontologies online that can be
reused based on the domain, such as the Ontolingua ontology
library [60] or the DAML ontology library [61].
3. Enumerate important terms in the ontology: The next part is to
write down all the important terms that could be part of the ontology. More
suitable terms may well be added at a later stage, and this can be a topic of
further development of the ontology, but it is necessary to find suitable terms at
the beginning for a better understanding of the classes and their properties.
4. Define the classes and class hierarchy: The next step is the
approach by which these terms are implemented in the ontology. As
explained in [20] there are three kinds of approaches.
Top-down: This approach starts with developing the general concepts
in the domain and later specializing them. The concepts here are defined
as classes which can be further divided into subclasses.
Bottom-up: The specific classes are defined first and the relevant terms or
leaves of the hierarchy are developed, after which the classes are grouped under
specific concepts. This is done for every proposed concept that is defined
as a class.
Combination: This involves development based on both the top-down and
bottom-up approaches. This kind of approach is mostly used by ontology
developers in recent times.
5. Determine the properties of classes (slots): After defining the classes it
is necessary to define the properties of each class, which help in
answering the competency questions (section 4.4.1). For example, to
define a tiger we need its properties like its weight, its category, the food it
consumes, etc. These properties are adequate when they answer all the
competency questions.
6. Define the facets of the slots: The usage of slots can be seen in Protégé
[62] but is not used in tools like TopBraid Composer. Slots define the
value types of different facets. The slots have to be filled in with suitable
value types like string, number, Boolean, enumerated and instance type.
This classification helps in defining the facets of the slots.
7. Create instances: The final step is creating the instances.
Instances define the actual meaning of the entire ontology and are
obtained from the requirement specification documents. These documents
may be text, UML, semi-structured documents, etc. The important part
is to decode the information from the document and create
instances from the results.
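As a rough illustration of steps 4 to 7, the sketch below (Python with rdflib) defines a small class hierarchy, one slot with its facets (domain and value type), and an instance. The names TestingArtifact, TestCase and hasDescription follow the general vocabulary of this thesis but are assumptions here, not the final identifiers of the developed ontology.

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL, XSD

TDO = Namespace("http://example.org/tdo#")  # hypothetical namespace
g = Graph()
g.bind("tdo", TDO)

# Step 4: define classes and the class hierarchy (top-down here).
g.add((TDO.TestingArtifact, RDF.type, OWL.Class))
g.add((TDO.TestCase, RDF.type, OWL.Class))
g.add((TDO.TestCase, RDFS.subClassOf, TDO.TestingArtifact))

# Steps 5-6: define a slot (property) and its facets (domain and value type).
g.add((TDO.hasDescription, RDF.type, OWL.DatatypeProperty))
g.add((TDO.hasDescription, RDFS.domain, TDO.TestCase))
g.add((TDO.hasDescription, RDFS.range, XSD.string))

# Step 7: create an instance of a class and fill its slot.
g.add((TDO.LoginTestCase, RDF.type, TDO.TestCase))
g.add((TDO.LoginTestCase, TDO.hasDescription,
       Literal("Verify login with valid credentials", datatype=XSD.string)))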
2.3.3 eXtreme Design with Content Ontology Design Patterns (XD)
We mention this ontology development method because it is the latest one. It is
not used in this project, as it does not suit our needs, but we want the reader to
know something about the latest ontology development method.
“Ontology design patterns (ODPs) [63] are an emerging technology that favors
the reuse of encoded experiences and good practices.” [55, p. 83]
“XD is partly inspired by software engineering eXtreme Programming (XP)
[64], and experience factory [65]” [55]. XP is an agile software development
methodology which aims to minimize the impact of changes at different
stages of development and to produce incremental releases based on
customer requirements and their prioritization. “Experience factory is an
organizational and process approach for improving life cycles and products
based on the exploitation of past experience know-how” [55].
While designing the ontology, the project can be divided into two parts: the
ontology project’s solution space and its problem space [55]. Generic Use Cases
(GUCs) compose the ontology project’s solution space, which contains the main
knowledge source for addressing ontology design. Local Use Cases (LUCs)
compose the ontology project’s problem space, which provides descriptions of
the actual issues.
Assuming that GUCs and LUCs are represented in a compatible way, we
compare LUCs to GUCs. If there is a match, “the ODPs associated with the
matching GUCs are selected and reused for building the final solution” [55].
Figure 6 The XD iterative workflow [55]
Figure 6 shows the workflow of XD with Content ODPs. In XD the designers are
organized in pairs and work in parallel.
2.3.4 Ontology Reuse
Ontology reuse is a part of most ontology development methods, including
Methontology and Ontology 101. Reuse plays a vital role in our ontology
development: for the development of the TDO we reuse concepts and axioms
from other software testing ontologies. The application of reuse to this TDO
will, in turn, be to provide guidance for the development of domain ontologies.
Therefore, this section explains more about the concept of ontology reuse and
its benefits.
Creating ontologies from scratch is costly and time-consuming, and reusing
ontologies can relieve us of both these problems. In the literature, ontology reuse
[57] has been defined according to two different points of view on building
ontologies:
1. By activities like assembling, extending, specializing and adapting other
ontologies which become parts of the resulting ontology.
2. By merging different ontologies of the same or a similar subject into
one that unifies all of them.
In the literature [56], several advantages of reusing ontologies have been
identified:
a. It reduces human labour, as the ontology need not be constructed from
scratch.
b. As the existing ontology is tested, the reused components increase the
quality of the new ontology, and mapping between the two ontologies
becomes simpler as mappings between their shared components are
trivial.
c. Ontology maintenance overhead is reduced, as multiple ontologies can
be simultaneously updated by updating their commonly shared
components.
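A minimal sketch of the first kind of reuse, assembling parts of an existing ontology into a new one, is given below in Python with rdflib. The file name, namespaces and class names are assumptions for illustration; in our work the reused concepts come from the published software testing ontologies discussed in chapter 4.

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

TDO = Namespace("http://example.org/tdo#")          # our ontology (hypothetical IRI)
EXT = Namespace("http://example.org/existing-sto#") # an existing testing ontology

# Load an already published ontology (hypothetical file name).
existing = Graph()
existing.parse("existing_software_testing_ontology.owl", format="xml")

# Start our new top-domain ontology and copy the existing axioms into it,
# so that tested, evaluated components are reused rather than rebuilt.
tdo = Graph()
tdo += existing

# Extend the reused concepts: align an existing class under our own class.
tdo.add((TDO.TestingArtifact, RDF.type, OWL.Class))
tdo.add((EXT.TestPlan, RDFS.subClassOf, TDO.TestingArtifact))

Because the shared triples are identical in both graphs, the mapping between the reused ontology and the new one is trivial, which is exactly the second advantage listed above.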
2.3.5 Tools used
Protégé: “Protégé is a free, open-source platform that provides a growing user
community with a suite of tools to construct domain models and knowledge-based
applications with ontologies” [62]. We have used Protégé in our work as it is
freely available over the internet and is widely used for the development of
ontologies in research groups.
TopBraid Composer: “It is the premier tool in the world of semantic web and
linked data development for working with RDF schemas, OWL ontologies,
RDF data and SPARQL queries. It is built on the popular Eclipse IDE so it
follows all Eclipse conventions like resizing, maximizing and customizing the
views of the IDE” [66].
2.3.6 Language used
OWL: “The Web Ontology Language OWL is a semantic markup language for
publishing and sharing ontologies on the World Wide Web. OWL is developed
as a vocabulary extension of RDF (the Resource Description Framework) and
is derived from the DAML+OIL Web Ontology Language” [67]. “The OWL
ontology describes the hierarchical organization of ideas in a domain, in a way
that can be parsed and understood by software. OWL has more facilities for
expressing meaning and semantics than XML, RDF, and RDF-S, and thus
OWL goes beyond these languages in its ability to represent machine
interpretable content on the Web” [68].
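As a small illustration of this extra expressiveness, the sketch below (Python with rdflib) states two OWL axioms that plain RDF(S) cannot express: that two classes are disjoint, and that every member of one class must be related to at least one member of another class. The class and property names are hypothetical.

from rdflib import Graph, Namespace, BNode
from rdflib.namespace import RDF, RDFS, OWL

TDO = Namespace("http://example.org/tdo#")  # hypothetical namespace
g = Graph()
g.bind("tdo", TDO)

for cls in (TDO.StaticTesting, TDO.DynamicTesting, TDO.TestCase, TDO.TestCondition):
    g.add((cls, RDF.type, OWL.Class))
g.add((TDO.covers, RDF.type, OWL.ObjectProperty))

# Disjointness: nothing can be both a static and a dynamic testing technique.
g.add((TDO.StaticTesting, OWL.disjointWith, TDO.DynamicTesting))

# Existential restriction: every TestCase covers at least one TestCondition.
restriction = BNode()
g.add((restriction, RDF.type, OWL.Restriction))
g.add((restriction, OWL.onProperty, TDO.covers))
g.add((restriction, OWL.someValuesFrom, TDO.TestCondition))
g.add((TDO.TestCase, RDFS.subClassOf, restriction))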
3 Method and Implementation
A good research method is very important for carrying out good research work.
As mentioned in [69], research can be defined, in the context of the current
work, as:
a. “Research is a systematic investigation to find answers to a problem”
[70, p. 1]
b. “Research is an organized, systematic, data-based, critical, scientific
inquiry or investigation into a specific problem, undertaken with an
objective of finding answers or solution to it” [71, p. 4].
A research method can be seen as a process of collecting information and data
that are used to solve the problem and produce a possible result. It describes how
the research was carried out and its intended outcome.
This section has been divided into two segments:
a. Research method, which defines and discusses the research
methodologies used to answer the research questions proposed in this thesis.
b. Ontology development method, which partly uses the knowledge
gained from the research methodologies and implements it in developing the
TDO.
Figure 7 Research design for the thesis
3.1 Research Method
There are many research methods mentioned in [69], like survey, case study,
experimental, action research, ethnography, historical and the Delphi method.
The methods used in this research depend on the topic and the research problem
itself. The first research method, for information collection, uses literature
review, as the aim of this thesis is to develop a TDO for the software testing
domain, which requires a lot of study as well as comparative study, and this is
best provided by a literature review.
The second method was chosen for implementing the knowledge gained from
the literature review to develop an ontology. After careful consideration of the
methods mentioned above, we shortlisted the two best suited research methods
to choose from: a. system development research [72], and b. design science
research [73]. Both of these methods favour development as research. System
development was suitable as it follows and supports the iterative development
plan of the ontology and its maintenance, but the development of the ontology
would have had to be considered as system/prototype development, which is not
feasible. Design science research was chosen as it favours a research direction
that draws inspiration from other domains where a similar artifact is present. We
did not use system development research as it was not able to justify ontology
development as an artifact, and it also lacked the inspiration factor from other
domains.
3.1.1 Information Collection
The technique used for collecting information here is literature review. This
method is used to acquire knowledge of the topic, previous work on it, how it
was researched and what the key issues in the topic are.
The review is done on peer-reviewed academic work like research papers,
theses, journal articles, etc. It has no specific methodology or specified steps.
This is a repetitive process: whenever a new question or problem arises and
more information is needed, the procedure described in Figure 8 is repeated
until reliable data are obtained.
Figure 8 Literature review process
The steps involved in this process are as follows:
General idea: This is about what the topic of the research is, or the related area
of the topic. The idea could be developed at this point, or may need to be
refined later in the process.
Search: This step defines the search that is to be carried out by giving the
keywords. Keywords can be defined with and/or criteria, e.g. software and
ontology, software or ontology. The first one will only give papers that contain
both words, while the second one returns papers that contain either of them.
- Google Scholar: This is a web search engine especially for scholarly
literature, such as research papers, journal articles, etc.
- Google search engine: Google is a web search engine used on the World
Wide Web to search for all the information available publicly. The
search can be done in a general way, in the form of a sentence or words, to get
information. The retrieved information is ranked based on relevance.
Selection criteria: This step results in the selection of the information that is to
be used. It decides whether the information found is relevant or not, by
identifying the usage of the knowledge for the domain. For scholarly literature,
the introduction and conclusion can be studied to find out if the paper has
relevant information. For a general search engine, the content has to be
thoroughly studied to find out if the information is relevant or not.
Documentation: The information that is relevant is well documented so that it
can be found easily under relevant topics, headings, sub-headings or separate
folders.
Use in thesis: This process uses the documented information for carrying on
the research work.
Refined questions or more information needed: After having developed an
overview of the topic, there is a refined set of questions that can be investigated
further, or some more queries are needed in a related area; to answer these we
have to go through the whole process again. This step is basically for gaining
more information where needed.
In this thesis we have done a literature review for the following purposes:
1. Get familiar with the research problem.
2. Identify gaps in knowledge.
3. Formulate the research questions.
The knowledge gained through this literature review has been combined to
produce a more reliable result and to fill the gap in the domain of software testing.
3.1.2 Design Science Research Methodology
DS research in Information Systems is still evolving, but we now have a good
understanding of what it means: “Design science…creates and evaluates IT
artifacts intended to solve identified organizational problems” [74, p. 77].
Its focus is to design artifacts which can solve observed problems, to make
research contributions, to evaluate the designs, and to communicate the results
to appropriate audiences [74].
A methodology for DS research was developed by introducing a DS process
model that fulfils three objectives [73, p. 8]: it should (1) provide a
nominal process for the conduct of design science research; (2) build upon
prior literature about design science in IS and reference disciplines; and (3)
provide researchers with a mental model or template for a structure for research
outputs.
3.1.2.1 Methodology (Design of the DSRM process)
The DSRM process model consists of six activities that are described below and
shown graphically in Figure 9. These activities are adapted from common process
elements in seven representative papers [75] [76] [77] [72] [78] [79] [74].
The DSRM process is presented in a sequential order, but in reality researchers
can start at almost any step. This is greatly influenced by the type of research
that follows. There are four possible entry points into the DSRM process, which
are mentioned below [73]:
Problem-centered initiation starts with activity one. This is mostly inspired
by suggested future research or by observation of the problem.
Objective-centered solution starts with activity two. This generally arises from
an industry or research need for which an artifact is developed.
Design and development-centered initiation starts with activity three. “It
would result from the existence of an artifact that has not yet been formally
thought through as a solution for the explicit problem domain in which it will be
used. Such an artifact might have come from another research domain, it might
have already been used to solve a different problem, or it might have appeared
as an analogical idea.” [73, p. 14]
Client/context-initiated approach starts with activity four. It may be based on
consulting experience where practical solutions that worked were observed.
Figure 9 Design Science Research Methodology Process Model [65, p. 44]
The activities carried out in DSRM are as follows:
Activity 1: Problem identification and motivation
This activity focuses on defining the research problem by capturing the
complexity of the problem and also by justifying the value of the solution.
Justifying the value of a solution has two advantages: a) It motivates the
researcher and the audience of the research to pursue the solution and to accept
the results. b) It helps to understand the reasoning associated with the
researcher’s understanding of the problem.
Activity 2: Define the objectives for a solution
Infer the objectives of a solution from the problem definition and knowledge of
what is possible and feasible. The objectives can be quantitative, e.g., terms in
which a desirable solution would be better than current ones, or qualitative,
e.g., a description of how a new artifact is expected to support solutions to
problems not addressed.
Activity 3: Design and development
This activity focuses on creation of the artifact by determining its desired
functionality and architecture. Conceptually, a design research artifact can be
any designed object in which a research contribution is embedded in the design.
Activity 4: Demonstration
This activity demonstrates the use of the artifact to solve the problem.
Activity 5: Evaluation
Observe and measure how well the artifact supports a solution to the problem.
This activity involves comparing the objectives of a solution to actual observed
results from use of the artifact in the demonstration. Depending on the nature of
the problem and the artifact, evaluation could take many forms ranging from
comparison of the artifact’s functionality with the solution objectives to client
feedback. At the end of the activity, iteration back to step three can be made to
try and improve the effectiveness of the artifact.
Activity 6: Communication
The focus of this activity is to communicate the problem and its importance, the artifact, its utility and novelty, the rigor of its design, and its effectiveness to researchers and other relevant audiences, when appropriate. The structure of this process may also be used to structure scholarly research publications.
3.1.2.2 Entry point for the project
We have chosen the Design and development-centered initiation approach for this research, as our research was inspired by another research domain to develop a top domain ontology.
Building an ontology based on the standards supported by ISTQB and on other available ontologies, the project aims at developing a Top-domain ontology for software testing that can unify the vocabulary and remove ambiguity in domain ontologies already developed and yet to be developed. In terms of design science research, the artifact was to be developed and published so that it can be used openly by anyone.
Problem Identification and Motivation
The problem was inspired by the biological domain, where the ontology representation and layers are much more developed and maintained than in other disciplines. Top-domain ontologies act as a bridge between a top-level ontology and domain ontologies and also guide the development of new domain ontologies. There is no Top-domain ontology present in the software testing domain.
Objectives of the solution
The objective of the solution is to provide a precise and complete description of software testing with a top-domain ontology that contains the fundamental entities of the domain. The aim is to provide semantic standardization across the domain, guide the development of new domain ontologies, and act as an aid for aligning or improving existing ones.
Design and Development
This process followed the design and development plan of an IS development research project. It started with the idea to fill the gap between a top-level ontology and domain ontologies. A mixture of two ontology development methods, Ontology 101 and Methontology (the mixed method for TDO development), was followed to develop the top domain ontology. The supporting knowledge for building the concepts of the ontology was taken from the ISTQB standards, and some concepts were reused from the available domain ontologies.
Demonstration
The implemented artifact is a top-domain ontology with an unambiguous set of entities that can unify the domain of software testing and act as a top domain ontology for software testing.
Evaluation
The evaluation of the ontology was divided into two parts. First, an internal evaluation was performed by the developers through a set of competency questions, which were run as SPARQL queries to demonstrate that the structure of the ontology is well formed. Secondly, the ontology was evaluated by ontology experts against certain quality criteria covering the requirements of an ontology as well as of a top domain ontology.
Communication
The outcome of this work is the first version of “A Top Domain Ontology for Software Testing”, which will be published as a Master thesis at Jönköping University, Sweden.
Contribution
The research results in an artifact that enhances the software testing domain by forming a top domain ontology layer, which can align and improve the quality of already existing ontologies and guide the development of new ontologies, as it has a complete and precise description of the fundamental entities of this domain.
Figure 10 DSRM process for A Top Domain Ontology for Software Testing
3.2 Ontology Development Method
There are many research methodologies proposed for developing ontologies, as discussed in section 2.3. A few research groups have also proposed series of steps and methodologies for the development of ontologies, but each group employs its own methodology, mainly because Ontological Engineering is still a relatively immature discipline.
“At present the construction of ontologies is very much an art rather than a
science. This situation needs to be changed, and will be changed only through
an understanding of how to go about constructing ontologies.” [80]
None of the methodologies is fully mature, as they do not have a proper evaluation to check their reliability. However, during the literature review on available ontologies we concluded that most ontologies were developed with either Methontology or Ontology 101. Methontology is the most mature; however, some activities and techniques, such as the stages of defining classes and properties, should be specified in more detail.
3.2.1 Mixed method for TDO development
This ontology development method, as the name suggests, is mixed and adopted from two methodologies, Methontology and Ontology 101, which were discussed in Section 2.3. It is used to develop a Top-domain ontology for software testing.
Figure 11 Ontology life cycle for mixed method [18] [27]
Methontology has the most promising structure for developing an ontology from scratch. It includes all the steps, such as documentation of the ontology during the development process, but it does not give any details about the development of the ontology itself, for which we have adopted Ontology 101. That method includes steps that are defined for the development of the ontology itself. Since we need an iterative way of developing our ontology, this method suits our requirements and gives completeness to the whole mixed method of ontology development.
Figure 11 is adapted from Methontology, which is shown in Figure 4. Here we placed the detailed ontology development method of Ontology 101, shown in Figure 5, inside Methontology during the technical phase, where it covers the ontology development phases described in Ontology 101.
3.2.2 Method for Evaluation of Ontology developed
Two methods have been adopted for the validation of the developed ontology, to ensure its quality and that all the requirements are met. They are described below:
Competency Questions
The evaluation of the ontology can be performed by running SPARQL queries that answer the competency questions against the developed ontology. The competency questions are designed to determine the scope of the ontology; the knowledge base of the ontology should be able to answer them [35, p. 3]. A sketch of how such a query can be run programmatically is given after the list of questions below.
1. Decision testing comes under which testing technique?
2. What are the specifications based test design techniques?
3. What are the categories for static testing techniques?
4. What are the tools that support Test Management?
5. Which tools provide support for System testing?
6. What are the levels of testing (test strategies) in software testing?
7. What human involvement can a test team have?
8. Documentation is maintained for which process/artifact?
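As an illustration only, the following is a minimal sketch of how one of these competency questions could be run outside Protégé with the Python library rdflib. The file name tdo.owl, the namespace http://example.org/tdo# and the class identifier TestStrategies are assumptions made for this example; the actual identifiers in the published ontology may differ.

# Minimal sketch under the assumptions stated above: the TDO is exported
# from Protege as RDF/XML to "tdo.owl" and the class for testing levels
# is named "TestStrategies".
from rdflib import Graph

g = Graph()
g.parse("tdo.owl", format="xml")   # load the exported ontology

# Competency question 6: what are the levels of testing (test strategies)?
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX tdo:  <http://example.org/tdo#>

SELECT ?level WHERE {
    ?level rdfs:subClassOf tdo:TestStrategies .
}
"""

for row in g.query(query):
    print(row.level)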
Validation by ontology experts
This evaluation activity is performed by ontology experts who have a good understanding of software testing. The experts were handed a set of questions, listed in Table 3, which represent the quality criteria against which the ontology was evaluated.
4 Findings and analysis
This chapter presents the findings and their analysis, obtained by using the ontology development method discussed in the previous chapter. The chapter is structured so that the solution for research question 1 is presented first, followed by the solution for research question 2, which presents the TDO.
4.1 RQ1. What are the existing software testing ontologies
or frameworks, their purpose and its evaluation?
In this section we first present and analyse software testing ontologies and their implementations, and then select four of these ontologies to evaluate against a set of quality criteria.
4.1.1 Ontology for Software testing
Table 1 shows a list of ontologies developed for the purpose of software testing. Some ontologies were used in real projects, while most of them were given as frameworks and are still under development. Most of the ontologies that are mentioned have gone through a process of modular development based on their field of implementation. Modular development of an ontology means that a concept can be developed as a separate ontology and later be integrated with another ontology that matches the same domain and purpose. This process of modular development can be applied when developing any ontology.
Based on this analysis of the software testing domain, we come to the conclusion that there is a need to develop a Top domain ontology. The ontology should unify most of the concepts and vocabularies in the domain of software testing, and should not target any specific sub-domain. The main idea of our research is to include the concepts and relations that are absent in the ontologies mentioned in Table 1. For example, most of the ontologies in Table 1 are domain specific; we suggest a more general ontology that can cover most of the sub-domains of software testing. Some of this work provides guidelines, in the form of a framework or UML, for creating more complete software testing ontologies; our goal is to implement these guidelines and build a Top-domain ontology for the software testing domain. As the language used in these ontologies is mostly OWL, which is also the most widely accepted language in the field of ontology development, we intend to use it in our project.
1. STOWS. Paper: Ontology for Service Oriented Testing of Web Services [26]. Implementation in projects: web services. Ontology available? Not available. Language used: OWL-S. Content of ontology: general knowledge of software testing.
2. SWTOI. Paper: SwtoI (Software Test Ontology Integrated) and its Application in Linux Test [14]. Implementation in projects: Linux Test Project (LTP). Ontology available? Yes. Language used: OWL. Content of ontology: general knowledge of software testing.
3. Testing ontology. Paper: Software Test Case Reuse Based on Ontology [10]. Implementation in projects: guideline for construction of a software testing ontology based on SWEBOK [81] and its classification based on a software quality model. Ontology available? Not available. Language used: N/M. Content of ontology: software testing with a focus on test case reuse.
4. TaaS. Paper: A Framework of Testing as a Service [27]. Implementation in projects: framework to improve the efficiency of software quality assurance. Ontology available? Not available. Content of ontology: four layers of ontology for the TaaS framework.
5. MND-TMM. Paper: A Strategic Test Process Improvement Approach Using an Ontological Description for MND-TMM [11]. Implementation in projects: weapon software system development. Ontology available? Not available. Language used: OWL. Content of ontology: software testing according to military standards.
6. OntoTest. Paper: Towards the Establishment of an Ontology of Software Testing [28]. Implementation in projects: built to support acquisition, organization, reuse and sharing of testing knowledge. Ontology available? Yes. Language used: OWL. Content of ontology: general knowledge of software testing.
7. N/M. Paper: Development of Ontology-based Intelligent System For Software Testing [82]. Implementation in projects: classification of programming and testing using Protégé. Ontology available? Not available. Language used: OWL. Content of ontology: using ontology to teach software testing.
8. N/M. Paper: An Ontology Based Approach for Test Scenario Management [83]. Implementation in projects: perform billing, create purchase order etc. Ontology available? Not available. Language used: UML. Content of ontology: test scenario management.
Table 1. Domain ontologies for software testing
N/M: Not mentioned in the paper.
UML: Unified Modelling language.
OWL: Web Ontology language.
Yes: Available in public domain.
4.1.2 Available Ontology evaluation
4.1.2.1 Evaluation criteria
The evaluation criteria described below cover the main concepts described in [84]. We also cover a few more criteria, taken from [85], which evaluate some characteristics of a quality ontology.
1. Reuse of ontologies: Reuse of ontologies means that the developers may use any available ontologies that are relevant to their domain or suit the purpose of their ontology. This is commonly done by ontology developers to produce a good quality ontology. This criterion checks whether ontology reuse has been done without any overlapping or misunderstanding of concepts.
2. Design Pattern: Some ontologies are developed based on certain patterns. Some patterns that concern the software process domain belong to SPOPL (Software Process Ontology Pattern Language), which includes PAE (Process and Activity Execution), WPPA (Work Product Participation) and 30 other patterns [86]. This criterion checks if the ontology follows any such patterns.
3. Evaluation methodology: Ontology evaluation can be done against different criteria; for example, consistency can be checked to make sure that there are no repeated concepts, or the ontology can be checked for completeness.
4. Modular Development: An ontology can either be built from scratch or be built by reusing other ontologies. This reuse includes implementing some of their concepts or the entire ontology. Modular development is defined as the layered development of an ontology by reusing already existing ontologies that match its domain and purpose.
5. Domain coverage: This criterion checks if the ontology covers the relevant knowledge of the specified domain.
6. Implementation of international standards: The development of an ontology comes with the implementation of standards. For example, in software testing the standards implemented are mostly from ISTQB and/or SWEBOK [81]. This criterion checks if such standards have been implemented based on the target domain.
7. Usage of axioms: This criterion checks if the axioms used help in describing the formal relations between ontology entities or their classes [87].
8. Naming Convention: The naming conventions describe certain word usage; for example, whether terms are used in plural form, and whether meaningful URIs, underscores or CamelCase are used, can be checked with this criterion.
4.1.2.2 Ontologies in the evaluation
From the available ontologies described in Table 1, we have filtered out four ontologies for our evaluation. The reason for choosing these four is that they mostly describe general knowledge of software testing, with certain flaws that are explained under each heading below. Ontologies like SWTOI and OntoTest were available in the public domain, so we were able to perform an in-depth analysis using our evaluation criteria, whereas ontologies like STOWS and TaaS were not available in the public domain, so our analysis of them was based on the research papers collected during our literature review. The results of these analyses are described in Table 2 under section 4.1.3.
1. STOWS:
The ontology was developed based on the concepts from [88]. STOWS [26] is a set of taxonomies that define the basic concepts in software testing. It does not provide the important relations that would allow us to reuse most of the concepts; for instance, testing phases such as component testing, system testing, integration testing and acceptance testing are not mentioned. This makes it unfit for our purpose, as it is incomplete or described too generally, depending upon the environment or domain in which it is used.
2. SWTOI:
SWTOI is one of several ontologies that have Linux testing as their target domain [14]. The ontology was developed based on BLO (Basic Linux Ontology) and SWEBOK [81] (the Software Engineering Body of Knowledge) [14]. “SWTO Integrated” is the latest of the three versions that were developed; the first version was OSOnto (Operating System Ontology) and the second version was SWTO. We have chosen the latest version for analysis as it contains all the upgrades from the previous versions.
The reuse was from BLO, which is a formal ontology, and from SWEBOK [81], which on the other hand is an informal ontology. The knowledge acquired from these was reused in the form of classes, properties and instances. The research papers that we found during our literature review do not provide information on the development pattern followed by “SWTO Integrated”. It is formally rigorous with regard to the concepts of software testing, even though it was developed with Linux testing as its target domain. The evaluation of SWTOI was based on two kinds of criteria: quantitative (concepts, instances and attributes) and qualitative (consistency, completeness and conciseness). SWTOI was developed in a modular way, as it covers individual concepts such as test activity and test techniques. Though SWTOI covers most of the concepts in the software testing domain, it can be considered incomplete due to certain flaws; for example, naming conventions were not followed and annotations for some classes were missing.
3. TaaS:
TaaS [27] is a framework that is represented in UML diagrams. Like STOWS, TaaS does not specify many relations that are crucial for representing testing knowledge as a whole. The core concepts are modelled only for certain notions; for example, a Test Task is composed of TestActivity, TestType, TargetUnderTest, TestEnvironment and TestSchedule [86].
4. OntoTest:
OntoTest has been built for the purpose of acquisition, organization, reuse and sharing of testing knowledge [28]. It has been built on the basis of the ISO/IEC 12207 standard and most of the testing concepts in Huo’s work [89], and it is a modular ontology. OntoTest could be considered a top-domain ontology, as it covers general knowledge of software testing, but it is not complete. Though the ontology covers major concepts in software testing, it does not comply with the international standards of ontology development [42]: it misses a lot of relations (e.g., the class “TestTeam” could have been linked to the class “Testing phases”), and the annotations in the classes and properties that should explain the concepts are missing. These relations are very important to make the software testing conceptualization explicit. Some important concepts, such as testing tools, were not included in the ontology, although research was done on testing tool development [28].
4.1.3 Evaluation results
Table 2 below shows the quality criteria evaluation for each ontology. It was difficult to assess them, as for some criteria certain ontologies did not provide enough information.
Table 2. Quality criteria evaluation of available software testing ontologies.
Reuse of ontology. SWTOI: BLO (formal ontology) and SWEBOK [81] (informal ontology); OntoTest: most of the testing concepts in Huo’s work [89]; STOWS: reuse of concepts from [88]; TaaS: not mentioned.
Design Pattern. SWTOI: not mentioned; OntoTest: not mentioned; STOWS: not mentioned; TaaS: not mentioned.
Evaluation methodology. SWTOI: quantitative and qualitative; OntoTest: not mentioned; STOWS: not mentioned; TaaS: not mentioned.
Is it modular. SWTOI: modular development; OntoTest: not mentioned; STOWS: modular development; TaaS: not mentioned.
Domain coverage. SWTOI: Linux test; OntoTest: software testing; STOWS: software testing; TaaS: software testing.
Implementation of standards. SWTOI: BLO (Basic Linux Ontology) and SWEBOK [81] (Software Engineering Body of Knowledge); OntoTest: ISO/IEC 12207; STOWS: not mentioned; TaaS: uses SWRL (Semantic Web Rule Language) to represent test services [27].
Usage of axioms. SWTOI: not mentioned; OntoTest: not mentioned; STOWS: not mentioned; TaaS: not mentioned.
Naming conventions. SWTOI: follows the standards mentioned in [87]; OntoTest: does not follow the standards mentioned in [87]; STOWS: not mentioned; TaaS: not mentioned.
4.2 RQ2. What are the relevant concepts, relations and
constraints that are needed to describe knowledge of
software testing in general?
The research was carried out to obtain the knowledge that is needed for testing, such as test case design, testing types, testing documentation, testing levels and the relations between different levels of testing. The study started by analysing software testing ontologies to get an idea of what knowledge already exists in the form of ontologies. The study of several research papers and some systematic literature reviews shows that many ontologies have taken international standards into account. The most appropriate standards found were the “Standard glossary of terms used in Software Testing” of the ISTQB and the Guide to the Software Engineering Body of Knowledge (SWEBOK guide). After carefully assessing both of them, we came to the conclusion that, for top domain ontology construction, the “ISTQB foundation syllabus” can provide the main concepts, and the information provided there can also help to extract the relations between them. Attempts were also made to find software testing ontologies to reuse, but only two of them, “SWTOI” and “OntoTest”, were found in the public domain; these were examined and some concepts were reused.
4.3 Top domain ontology for software testing
In this section we introduce our top domain ontology for software testing (TDO), in which we describe different classes and also define different properties that describe the relations between them, for the user’s understanding.
The proposed ontology can act as a top domain ontology for software testing; it can be a solution for linking the ontologies of software testing and can also guide the development of new ontologies.
Since the ontology we developed cannot be discussed in detail in this report, we have provided a figure with the metrics of our TDO. Figure 12 below shows a screenshot from Protégé with the metrics, i.e. counts for the different elements of the ontology.
Figure 12 Ontology metrics as shown on Protégé
4.3.1 Classes in TDO
The first level classes in TDO and their relations are shown in Figure 13. The first level classes involved in this model are:
1. Human involved in testing: Indicates the humans (manpower) involved in the testing process.
2. Static testing techniques: Testing techniques that improve the quality of the software product and assist the developers in identifying defects at an early stage.
3. Test design techniques: Procedures used to derive and/or select test cases. One of the aims of testing is to reveal as much potential for failure as possible.
4. Test methods: There are four types of testing methods, which are discussed later in section 7.1.6. Depending on what the objectives are, different test efforts will be organised differently.
5. Test objects: The individual element to be tested. There is usually one test object and many test items.
6. Test strategies: This class describes the different levels of testing.
7. Testing artifacts: One of the many tangible by-products produced during the development of software.
8. Testing teams: Teams composed of members from the project, involved or not involved in software construction.
9. Tools for testing: Software products that support one or more test activities.
10. Defects: A defect is an error, or a bug, in the application.
11. Formal review steps: The steps followed in a formal review process.
Only nine of these classes appear in Figure 13, as Defects and Formal review steps do not have any relations to the other first level classes; their relations are with the subclasses of these first level classes and are therefore not part of the figure. A small sketch of how such classes and relations can be encoded is given after Figure 13.
Figure 13 High level ontology model for Software Testing
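As a small illustration of how such first-level classes and relations can be encoded, the sketch below declares a few of the classes listed above and the isPartOf relation described later in section 7.1.3, using the Python library rdflib. The namespace and the exact identifiers are assumptions made for the example and are not necessarily those used in the published TDO.

# Sketch only: namespace and identifiers are illustrative assumptions.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

TDO = Namespace("http://example.org/tdo#")
g = Graph()
g.bind("tdo", TDO)

# a few of the first-level classes listed above
for cls in (TDO.HumanInvolvedInTesting, TDO.TestingTeams, TDO.TestStrategies):
    g.add((cls, RDF.type, OWL.Class))

# one relation between first-level classes: humans are part of testing teams
g.add((TDO.isPartOf, RDF.type, OWL.ObjectProperty))
g.add((TDO.isPartOf, RDFS.domain, TDO.HumanInvolvedInTesting))
g.add((TDO.isPartOf, RDFS.range, TDO.TestingTeams))

print(g.serialize(format="turtle"))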
Example:
Below we give an example of the documentation process in our ontology. Figure 14 shows how the documentation of test case, test condition and formal review is maintained. There can be different documents for each concept mentioned, and these are maintained separately.
Figure 14 Example of documentation leaf and its relations
4.4 Evaluation
The focus of the evaluation is to assess the structure of the ontology and that the ontology covers the knowledge domain of software testing. The structure is evaluated with the help of competency questions. In addition, an expert evaluation is carried out according to a set of quality criteria that assess the full ontology with respect to its content, structure and quality.
4.4.1 Competency Questions
The competency questions are designed to exercise the structure of the ontology, i.e. the subclasses and the relations between classes. We have used SPARQL queries to answer the competency questions. An example of a competency question is shown below, and the rest of the questions are listed in section 3.2.2.
Query: What are the categories for static testing techniques?
Answer:
Figure 15 SPARQL query example
The scope of the query is to check which testing techniques come under ‘static testing techniques’. The query retrieves the expected result, i.e. the subclasses of static testing techniques (the categories of static testing techniques are shown in section 7.1.4).
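The query in Figure 15 is shown only as a screenshot; the following is a hedged sketch of what such a query looks like when run with the rdflib setup from section 3.2.2, assuming the class is identified as StaticTestingTechniques in the ontology.

# Sketch only: file name and class IRI are assumptions.
from rdflib import Graph

g = Graph()
g.parse("tdo.owl", format="xml")   # assumed export of the TDO

categories_query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX tdo:  <http://example.org/tdo#>

SELECT ?technique WHERE {
    ?technique rdfs:subClassOf tdo:StaticTestingTechniques .
}
"""

# expected to list the subclasses, e.g. formal review, informal review, inspection
for row in g.query(categories_query):
    print(row.technique)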
4.4.2 Validation by ontology experts
As discussed in section 3.2.2, the evaluation was carried out by ontology experts who also have knowledge about software testing. They answered a set of questions based on quality criteria and gave their views. The results are marked on a 1 to 5 scale, where 1 corresponds to 'totally disagree' and 5 to 'fully agree'.
In the evaluation we considered six ontology quality criteria: accuracy, consistency, completeness, clarity, adaptability and expandability [87]. These criteria assess context coverage, semantic richness, level of detail, popularity and relevance. They are defined below:
Accuracy: “is a criteria that states if the axioms of the ontology comply to the
knowledge of the stakeholders about the domain. A higher accuracy comes
from correct definitions and descriptions of classes, properties, and
individuals.”
Consistency: “capturing both the logical consistency (i.e. no contradictions can
be inferred) and the consistency between the formal and the informal
descriptions (i.e. the comments and the formal descriptions match)”
Completeness: “All the knowledge that is expected to be in the ontology is
either explicitly stated or can be inferred from the ontology.”
Clarity: “An ontology should effectively communicate the intended meaning
of defined terms.”
Adaptability: “It measures how far the ontology anticipates its uses. An
ontology should offer the conceptual foundation for a range of anticipated
tasks.”
Expandability: “Refers to the required effort to add new definitions without
altering the already stated semantics.”
Table 3 Ontology evaluation by experts
The scores from Evaluator 1 and Evaluator 2 are given after each question.
Accuracy
1. Are [owl:Class]s well-structured for a top domain ontology and do they properly represent the entities? (Evaluator 1: 3, Evaluator 2: 3)
2. Are [owl:Properties] well-structured for a top domain ontology and do they properly represent the entities? (Evaluator 1: 3, Evaluator 2: 4)
3. Do the axioms follow the annotations for the software testing domain? (Evaluator 1: 3, Evaluator 2: 4)
Consistency
1. Do the axioms used deliver the intended meaning of the concepts? (Evaluator 1: 4, Evaluator 2: 2)
Completeness
1. Does the ontology cover essential concepts needed for the software testing process? (can be compared with own knowledge or with the ISTQB foundation syllabus) (Evaluator 1: 4, Evaluator 2: 4)
2. Can the ontology act as an interface between top-domain and domain ontologies? (Evaluator 1: 3, Evaluator 2: 4)
3. Can the ontology be linked to the existing domain software testing ontologies? (Evaluator 1: 3, Evaluator 2: 3)
4. Can the ontology guide the development of new software testing ontologies? (Evaluator 1: 3, Evaluator 2: 3)
Clarity
1. Are the annotations of classes sufficient? (Evaluator 1: 4, Evaluator 2: 4)
2. Are the annotations of classes unambiguous? (Evaluator 1: 4, Evaluator 2: 4)
3. Are the annotations of properties sufficient? (Evaluator 1: 4, Evaluator 2: 4)
4. Are the annotations of properties unambiguous? (Evaluator 1: 4, Evaluator 2: 4)
Adaptability
1. Can the ontology be used for different applications (e.g. medical applications, banking applications, mobile applications etc.) of software testing? (Evaluator 1: 3, Evaluator 2: 4)
2. Does it allow for methodologies for extension, integration and adaption? (Evaluator 1: 3, Evaluator 2: 4)
Expandability
1. Can the ontology be extended by introducing new terms without altering the existing axioms? (Evaluator 1: 4, Evaluator 2: 4)
Expert’s Assessment:
The assessment shows that the experts have different views on the consistency of the ontology. One gave a score of 2 and the other a score of 4; this can be explained by the experts’ different views on ontologies and on software testing. One expert thinks the overall ontology is consistent but that there is still room for improvement, which led to a score of 4, whereas the other expert provided specific comments on classes and relations where the ontology shows inconsistencies and gave a score of 2. With regard to accuracy the scores from the experts vary, but they show that the accuracy criterion is satisfied. According to one expert, different concepts could be merged into groups for easier understanding. The experts comment that the clarity of the annotations is good, but that the overall clarity is not, and that names of concepts should be singular rather than plural. The scores vary for adaptability: one expert thinks that some parts of the ontology can be used in other ontologies, while the other expert did not provide any comments, but his score shows that the ontology can be used for different applications. For expandability the experts agree that it is possible to extend the ontology without altering the axioms, but they also suggest that more axioms should be included in the class definitions. For completeness, both experts think the ontology covers most of the concepts of ISTQB, but the ontology still cannot be termed fully complete, as software testing is a large domain and new concepts are always emerging. However, the main purposes of a top domain ontology, as mentioned in section 1.2, can be fulfilled by this ontology.
5 Discussion and conclusions
This chapter presents a discussion of the method used in the research and of its findings. The discussion focuses on the strengths and weaknesses of the process from the personal perspective of the authors. It also presents the final conclusions of the project and discusses future research.
5.1 Discussion of method
5.1.1 Literature review
After reviewing many research papers in the area of software testing ontologies and other ontology publications, we were able to find the gap in the software testing domain that needed to be addressed, and this we adopted as our research. The literature review was focused only on reliable sources of published scientific work, such as research papers, journals, conference papers and some books. As the focus of the research shifted after finding the gap, we were able to narrow down the research area of the thesis during the pre-study period, in which we collected information about the research and the methods (research method and ontology development method). The literature review was, however, not limited to the pre-study period; for developing the ontology we also had to review literature about the software testing process. This method has been the backbone of the thesis, from finding the gap to the development of the ontology.
5.1.2 Mixed method for TDO development
The method that we have implemented in our project is an integration of two other ontology development methods, namely Methontology and Ontology 101. These were chosen because Methontology is a standard method for ontology development from scratch, but its detailed steps are not discussed in the method itself. Ontology 101, on the other hand, has a step-by-step procedure for the development of an ontology, and it also supports iterative development, which was very much required in our project. The combination of the two ensured that we had a firm method for TDO development.
We believe the combined method is effective and precise for ontology development; however, there is no evaluation method available for assessing the quality or productiveness of an ontology development method.
5.1.3 Mixed method for Evaluation
The evaluation of the ontology was done by two methods which, combined, provided the results on the quality of the ontology. The first method was chosen to validate the structure and semantics of the ontology by running SPARQL queries for a set of competency questions. The competency questions were thought out and discussed to determine which questions could justify and validate the class structure and its relations. The queries produced the expected answers and verified the structure. This method is suitable for performing the evaluation from the developers’ side, as they also hold the knowledge of the ontology structure.
The second method was to form a set of questions based on certain quality criteria. Two ontology experts who also hold knowledge of software testing performed the ontology evaluation and gave their results on a 1 to 5 scale, where 1 corresponds to 'totally disagree' and 5 to 'fully agree'. They also provided comments that can help us improve the ontology before publishing it. This method was good for getting an outside, expert perspective on the ontology, to verify the results and also to improve the ontology.
5.2 Discussion of findings
The literature review resulted in finding most of the ontologies in the domain of software testing. Though there are not many software testing ontologies, the available ones are limited to the projects in which they were implemented. In our project we reused some of the concepts from OntoTest [28] and SWTOI [14], which are the more general ontologies for software testing. The rest of the ontologies were not available in the public domain, and the papers related to them describe their usage only within the projects in which they were used. Ontologies like STOWS [26] and the one in [83] could have been very useful for our purpose, as they describe their implementation in the software testing domain. Some ontologies, like TaaS [27], were described as a framework, and the software testing ontology in [83] was described in UML notation. We have tried to implement some of their concepts, as they were relevant to our purpose.
In this project the ontology was developed without any target implementation; instead, it can be implemented in any software testing domain. The evaluation of the ontology suggests that the ontology structure is well formed and covers most of the concepts of software testing; the experts suggest that further improvements are possible, and these are discussed later in section 5.4.
Implementation of TDO
Software testing includes a testing process to rectify bugs and make the software better in quality. TDO can be used for storing information about this software testing process, irrespective of the domain in which the software will be implemented. For example, if a static testing technique is performed on a piece of software, then this information can be stored in TDO under the class “static testing technique”. If any references are required, the information can be retrieved by using SPARQL queries, as discussed in section 4.4.1.
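A hedged sketch of this usage, under the same naming assumptions as in the earlier examples: an individual recording that an informal review was performed on some software is stored under the corresponding class, and later retrieved with a SPARQL query. The individual and class names are illustrative only.

# Sketch only: namespace, class and individual names are assumptions.
from rdflib import Graph, Namespace, RDF

TDO = Namespace("http://example.org/tdo#")
g = Graph()
g.parse("tdo.owl", format="xml")            # assumed export of the TDO

# store: an informal review performed on a hypothetical billing system
g.add((TDO.BillingSystemReview1, RDF.type, TDO.InformalReview))

# retrieve: all recorded activities of that static testing technique
results = g.query("""
    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX tdo: <http://example.org/tdo#>
    SELECT ?activity WHERE { ?activity rdf:type tdo:InformalReview . }
""")
for row in results:
    print(row.activity)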
Evaluation of ontology
The goal of the thesis is to develop a Top domain software testing ontology, which contains knowledge of software testing in general. Its purposes are to unify the domain vocabularies so that already existing ontologies can be linked to it, and to guide the development of domain ontologies. The focus of our evaluation is therefore to verify and validate the structure and semantics of the ontology, and to verify that the knowledge of software testing in general is delivered in the ontology.
The first evaluation, which validates the structure and semantics of the ontology, is performed by forming competency questions and running them as SPARQL queries. Before running the queries we checked the ontology by running the automated reasoners in Protégé, to check the inferences and fix any problems. The results of the queries in section 4.4.1 show that the ontology does not lead to any contradictions between classes and relations, i.e. the desired results are retrieved.
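Outside Protégé, a similar consistency check can be sketched with the Python library owlready2, which invokes the HermiT reasoner on the loaded ontology. The file path below is a placeholder, and running the sketch requires a Java runtime; it illustrates the kind of check performed, not the exact procedure we used in Protégé.

# Sketch only: the ontology path is a placeholder; owlready2 runs the
# HermiT reasoner in the background and therefore needs Java installed.
from owlready2 import get_ontology, sync_reasoner, default_world

onto = get_ontology("file:///home/user/tdo.owl").load()

with onto:
    sync_reasoner()                      # classify the ontology and check it

# any unsatisfiable classes would indicate a contradiction in the axioms
print(list(default_world.inconsistent_classes()))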
The second method used is the evaluation performed by domain experts against a set of criteria. In the evaluation we considered six ontology quality criteria: accuracy, consistency, completeness, clarity, adaptability and expandability. The results were marked on a 1 to 5 scale, where 1 corresponds to 'totally disagree' and 5 to 'fully agree', and are shown in Table 3. ‘A top domain ontology for software testing’ was handed to the experts for the evaluation.
The experts agree that the ontology is good for a first version and that its standard can be improved with minor improvements in naming and annotations. However, they state that more concepts and relations can still be added to the ontology, and that the class hierarchy can be made more complete.
We agree with the experts’ comments, and as this is the first version of the ontology we did not want to make it too complex. For this version we wanted to express the main concepts and relations by studying the domain ontologies and other knowledge sources. Judging from the evaluation and the comments from the experts, we conclude that this ontology, after addressing some of the comments and suggestions, is fit to act as a top domain ontology for software testing.
5.3 Conclusions
In this research we have studied research papers about software testing ontologies and evaluated the available ontologies. The study was focused on general knowledge of software testing ontologies, which gave rise to the idea of a Top Domain Ontology that contains general terms for the domain.
After considering many software testing ontologies, we concluded that most of the ontologies were developed for specific research projects. Some ontologies reflect general knowledge of software testing but lack the qualities that should be required of a top domain ontology. Most of the ontologies mentioned in research papers are part of some research project. These research papers mention how the ontology was developed or how it was used, but the ontology itself is not available in the public domain. None of the ontologies was developed for general use in software testing.
We identified four ontologies that come close to providing general software testing knowledge, but only two of them were available for evaluation. We evaluated all four ontologies based on structural and conceptual coverage, though the evaluation can be considered reliable only for SWTOI and OntoTest, due to their availability in the public domain. After analysing Table 1 and Table 2 we can conclude that SWTOI is the one closest to providing general knowledge of software testing.
We searched many sources to acquire general software testing knowledge and chose the ISTQB foundation syllabus as the standard to use in the development of the top domain ontology. Other sources, such as the available ontologies, were also considered for reuse in order to develop a better ontology. The developed ontology was evaluated by ontology experts, who gave their comments and evaluation results. The evaluation results suggest that the ontology provides general knowledge of software testing and can be used as a Top domain ontology. The evaluators also suggest that the ontology can be improved. Some corrections were made based on the experts’ comments before publishing the ontology, whereas others are left for future research and for the maintenance of the ontology.
5.4 Future scope of the project
The project was started with the aim of representing knowledge in the field of software testing using an ontology. For this we had to make use of available resources that are reliable and that could be used for reference during future development. The ontology developed in the project can be used for testing in any software domain, as it covers general concepts of software testing. This work could be made more complete by reusing the ontologies that were not available in the public domain. Each of these ontologies was specified for a particular project that included a particular set of methods, techniques or tools; by extracting the concepts from these ontologies we could represent more detailed software testing knowledge.
Since there is always room for improvement through new concepts in software testing, this ontology can be reused for representing that knowledge. The future scope of this project depends upon the improvements in techniques, tools, methods etc. in the fields of software testing and ontology development. Each of the software testing concepts will be created with regard to the domain in which the software will be implemented. We believe this will prove to be a good basis for reuse and implementation of all the developments and findings.
6 References
[1] É. F. Souza, R. A. Falbo and V. NL, “Ontologies in Software Testing: A
Systematic Literature Review,” in VI Seminar on Ontology Research in Brazil,
Citeseer, 2013, p. 71.
[2] B. Chandrasekaran, J. R. Josephson and V. R. Benjamins, “What are ontologies,
and why do we need them?,” IEEE Intelligent systems., vol. 14, no. 1, pp. 20--26,
1999.
[3] D. Dou, D. McDermott and P. Qi, “Ontology translation on the semantic web.,”
in On the Move to Meaningful Internet Systems 2003: CoopIS, DOA, and
ODBASE, Springer, 2003, pp. 952--969.
[4] A. Abecker and L. van Elst, “Ontologies for knowledge management,” in
Handbook on ontologies, Springer, 2009, pp. 713--734.
[5] D. E. O'Leary, “Enterprise knowledge management,” Computer, vol. 31, no. 3,
pp. 54--61, 1998.
[6] H.-J. Happel and S. Seedorf, “Applications of ontologies in software
engineering.,” in Proc. of Workshop on Sematic Web Enabled Software
Engineering (SWESE) on the ISWC, 2006.
[7] R. Hoehndorf, “What is an upper level ontology?,” 2010. [Online]. Available:
http://ontogenesis.knowledgeblog.org/740.
[8] S. Schulz and M. Boeker, “BioTopLite: An Upper Level Ontology for the Life
Sciences: Evolution, Design and Application.,” in GI-Jahrestagung, 2013, pp.
1889--1899.
[9] H. Stenzhorn, E. Beibwanger, S. Schulz and others, “Towards a Top-Domain
Ontology for Linking Biomedical Ontologies,” 2007.
[10] L. Cai, W. Tong, Z. Liu and J. Zhang, “Test case reuse based on ontology,” in
Dependable Computing, 2009. PRDC'09. 15th IEEE Pacific Rim International
Symposium on, 2009, pp. 103--108.
[11] H. Ryu, D.-K. Ryu and J. Baik, “A strategic test process improvement approach
using an ontological description for mnd-tmm,” in Computer and Information
Science, 2008. ICIS 08. Seventh IEEE/ACIS International Conference on, 2008,
pp. 561--566.
[12] A. T. McCray, “An upper-level ontology for the biomedical domain,”
Comparative and Functional Genomics, vol. 4, pp. 80--84, 2003.
[13] [Online]. Available: http://www.istqb.org/. [Accessed 9 Feburary 2016].
[14] D. Bezerra, A. Costa and K. Okada, “swtoI (Software Test Ontology Integrated)
and its Application in Linux Test,” in International Workshop on Ontology,
Conceptualization and Epistemology for Information Systems, Software
Engineering and Service Science, 2009, pp. 25--36.
[15] I. S. 1012, IEEE Standard for Software Verification and Validation., New York,
NY, USA., 2004.
[16] E. Miller, “Introduction to software testing technology,” Tutorial: Software
Testing \& Validation Techniques, Second Edition, IEEE Catalog No. EHO, pp.
180--0, 1981.
[17] I. Sommerville, Software Engineering: (Update) (8th Edition) (International
Computer Science), Boston, MA, USA: Addison-Wesley Longman Publishing
Co., Inc., 2006.
[18] H. M. Kim, “Developing ontologies to enable knowledge management:
integrating business process and data driven approaches,” in AAAI Workshop on
Bringing Knowledge to Business Processes, 2000, p. 72.
[19] T. R. Gruber, “A translation approach to portable ontology specifications,”
Knowledge acquisition, vol. 5, no. 2, pp. 199--220, 1993.
[20] N. F. Noy and D. L. McGuinness, “Ontology Development 101: A Guide to
Creating Your First Ontology,” Stanford knowledge systems laboratory technical
report KSL-01-05 and Stanford medical informatics technical report SMI-2001-0880, 2001.
[21] H. Stenzhorn, S. Schulz, E. Beißwanger, U. Hahn, L. Van Den Hoek, E. M. van
Mulligen and others, “BioTop and ChemTop-Top-Domain Ontologies for
Biology and Chemistry.,” in International Semantic Web Conference (Posters &
Demos), 2008.
[22] B. Smith and P. Grenon, “Basic formal ontology,” Draft. Downloadable at
http://ontology. buffalo. edu/bfo, 2002.
[23] A. Gangemi, N. Guarino, C. Masolo, A. Oltramari and L. Schneider,
“Sweetening ontologies with DOLCE,” in Knowledge engineering and
knowledge management: Ontologies and the semantic Web, Springer, 2002, pp.
166--181.
[24] E. Beisswanger, S. Schulz, H. Stenzhorn and U. Hahn, “BioTop: An upper
domain ontology for the life sciences,” Applied Ontology, vol. 3, no. 4, pp. 205-212, 2008.
[25] M. Ashburner, C. A. Ball, J. A. Blake, D. Botstein, H. Butler, J. M. Cherry, A. P.
Davis, K. Dolinski, S. S. Dwight, J. T. Eppig, M. A. Harris, D. P. Hill and others, “Gene Ontology: tool for the unification of biology,” Nature genetics, vol. 25, pp. 25-29, 2000.
[26] Y. Zhang and H. Zhu, “Ontology for service oriented testing of web services,” in
Service-Oriented System Engineering, 2008. SOSE'08. IEEE International
Symposium on, IEEE, 2008, pp. 129--134.
[27] L. Yu, L. Zhang, H. Xiang, Y. Su, W. Zhao and J. Zhu, “A Framework of
Testing as a Service,” in Proceedings of the Conference of Information System
Management, 2009.
[28] E. F. Barbosa, E. Y. Nakagawa and J. C. Maldonado, “Towards the
Establishment of an Ontology of Software Testing.,” in SEKE, 2006, pp. 522-525.
[29] M. Fernández, A. Gómez-Pérez and N. Juristo, “Methontology: from ontological
art towards ontological engineering,” AAAI, 1997.
[30] A. Farquhar, R. Fikes, W. Pratt and J. Rice, “Collaborative ontology construction
for information integration,” Technical Report KSL-95-63, Stanford University
Knowledge Systems Laboratory, 1995.
[31] A. Farquhar, R. Fikes and J. Rice, “The ontolingua server: A tool for
collaborative ontology construction,” International journal of human-computer
studies, vol. 46, no. 6, pp. 707--727, 1997.
[32] A. Farquhar, R. Fikes and J. Rice, “Tools for assembling modular ontologies in
Ontolingua,” in Proc. of Fourteenth American Association for Artificial
Intelligence Conference (AAAI-97), 1997, pp. 436--441.
[33] M. Gruninger and M. S. Fox, “The design and evaluation of ontologies for
enterprise engineering,” in Workshop on Implemented Ontologies, European
Workshop on Artificial Intelligence, Amsterdam, The Netherlands, 1994.
[34] M. Grüninger and M. S. Fox, “The role of competency questions in enterprise
engineering,” in Benchmarking—Theory and Practice, Springer, 1995, pp. 22-31.
[35] M. Gruninger and M. S. Fox, “Methodology for the Design and Evaluation of
Ontologies,” Citeseer, 1995.
[36] M. Uschold and M. Gruninger, “Ontologies: Principles, methods and
applications,” The knowledge engineering review, vol. 11, no. 02, pp. 93-136,
1996.
[37] N. J. Mars, W. Ter Stal, H. de Jong, P. E. van der Vet and P.-H. Speel, “Semi-automatic knowledge acquisition in Plinius: an engineering approach,” in Proc.
8th Banff Knowledge Acquisition for Knowledge-based Systems Workshop, 1994,
pp. 4--1.
[38] G. Schreiber, B. Wielinga and W. Jansweijer, “The KACTUS view on the
‘O’word,” in IJCAI workshop on basic ontological issues in knowledge sharing,
1995, pp. 159--168.
[39] B. Wielinga, A. Schreiber, W. Jansweijer, A. Anjewierden and F. van Harmelen,
Framework and Formalism for Expressing Ontologies (version 1). ESPRIT
Project 8145 KACTUS, Free University of Amsterdam deliverable, DO1b. 1,
1994.
[40] N. Guarino, M. Carrara and P. Giaretta, “An Ontology of Meta-Level
Categories.,” KR, vol. 94, pp. 270-280, 1994.
[41] J. Bouaud, B. Bachimont, J. Charlet and P. Zweigenbaum, “Acquisition and
structuring of an ontology within conceptual graphs,” in Proceedings of ICCS,
1994, pp. 1--25.
[42] J. Bouaud, B. Bachimont, J. Charlet and P. Zweigenbaum, “Methodological
principles for structuring an “ontology”,” in Proceedings of the IJCAI’95
Workshop on “Basic Ontological Issues in Knowledge Sharing, 1995, pp. 19--25.
[43] P. Borst, J. Benjamin, B. Wielinga and H. Akkermans, “An application of
ontology construction,” in Proc. of ECAI-96 Workshop on Ontological
Engineering. Budapest, 1996.
[44] P. Borst and H. Akkermans, “An ontology approach to product disassembly,” in
Knowledge Acquisition, Modeling and Management, Springer, 1997, pp. 33--48.
[45] M. Uschold, “Converting an informal ontology into ontolingua: Some
experiences,” TECHNICAL REPORT-UNIVERSITY OF EDINBURGH
ARTIFICIAL INTELLIGENCE APPLICATIONS INSTITUTE AIAI TR, 1996.
[46] M. Uschold, “Building ontologies: Towards a unified methodology,”
TECHNICAL REPORT-UNIVERSITY OF EDINBURGH ARTIFICIAL
INTELLIGENCE APPLICATIONS INSTITUTE AIAI TR, 1996.
[47] M. Uschold and M. King, Towards a methodology for building ontologies,
Citeseer, 1995.
[48] K. Mahesh, Ontology development for machine translation: Ideology and
methodology, Citesheer, 1996.
[49] K. Mahesh and S. Nirenburg, “A situated ontology for practical NLP,” in
Proceedings of the IJCAI-95 Workshop on Basic Ontological Issues in
Knowledge Sharing, 1995, p. 21.
[50] A. Gangemi, G. Steve and F. Giacomelli, “ONIONS: An ontological
methodology for taxonomic knowledge integration,” in ECAI-96 Workshop on
Ontological Engineering, Budapest, 1996.
[51] G. Steve and A. Gangemi, “ONIONS methodology and the ontological
commitment of medical ontology ON8. 5,” in Proceedings of Knowledge
Acquisition Workshop, 1996.
[52] B. Swartout, R. Patil, K. Knight and T. Russ, “Toward distributed use of large-scale ontologies,” in Proc. of the Tenth Workshop on Knowledge Acquisition for
Knowledge-Based Systems, 1996, pp. 138-148.
[53] KBSI, “The IDEF5 Ontology Description Capture Method Overview,” KBSI
report, Texas, 1994.
[54] A. Gómez-Pérez, M. Fernández and A. d. Vicente., Towards a method to
conceptualize domain ontologies, European Coordinating Committee for
Artificial Intelligence (ECCAI), 1996.
[55] V. Presutti, E. Daga, A. Gangemi and E. Blomqvist, “eXtreme design with
content ontology design patterns,” in Proc. Workshop on Ontology Patterns,
Washington, DC, USA, 2009.
[56] D. Lonsdale, D. W. Embley, Y. Ding, L. Xu and M. Hepp, “Reusing ontologies
and language components for ontology generation,” Data & Knowledge
Engineering, vol. 69, no. 4, pp. 318--330, 2010.
[57] H. S. Pinto and J. Martins, “Reusing ontologies,” in AAAI 2000 Spring
Symposium on Bringing Knowledge to Business Processes, vol. 2, AAAI, 2000,
p. 7.
[58] A. Gómez-Pérez, “A Framework to Verify Knowledge Sharing Technology,” in Expert Systems with Applications, 1997, pp. 519-529.
[59] M. Fernández López, Overview of methodologies for building ontologies, CEUR
Publications , 1999.
[60] [Online]. Available: http://www.ksl.stanford.edu/software/ontolingua/.
[61] [Online]. Available: http://www.daml.org/ontologies/.
[62] [Online]. Available: http://protege.stanford.edu/.
[63] A. Gangemi and V. Presutti, “Ontology design patterns,” in Handbook on
ontologies, Springer, 2009, pp. 221--243.
[64] J. Shore and others, The art of agile development, O’Reilly Media, Inc., 2007.
[65] C. G. von Wangenheim, K.-D. Althoff and R. M. Barcia, “Goal-oriented and similarity-based retrieval of software engineering experienceware,” Lecture notes in computer science, pp. 118--141, 2000.
[66] “www.w3.org,”
9 February 2016. [Online]. Available: https://www.w3.org/2001/sw/wiki/TopBraid.
[67] S. Bechhofer, “OWL: Web ontology language,” in Encyclopedia of Database
Systems, Springer, 2009.
[68] [Online]. Available: http://www.webopedia.com/TERM/O/Ontology_Web_Language.html.
[69] K. Williamson, Research methods for students, academics and professionals:
Information management and systems, Elsevier, 2002.
[70] R. B. Burns, “Introduction to research methods in education,” Longman
Cheshire, 1990.
[71] U. Sekeran, Research methods for business. 2nd edn., New York: John Wiley &
Sons, 1992.
[72] J. Nunamaker, M. Chen and T. Purdin, “System development in information
system research,” Journal of Management Information Systems, vol. 7(3), pp. 89-106, 1990-1991.
[73] K. Peffers, T. Tuunanen, M. A. Rothenberger and S. Chatterjee, “A Design
Science Research Methodology for Information Systems Research,” Journal of
Management Information Systems, vol. 24, no. 3, pp. 45-78, Winter 2007-8.
[74] R. H. von Alan, S. T. March, J. Park and S. Ram, “Design science in information
systems research,” MIS quarterly, vol. 28, no. 28, pp. 75--105, 2004.
[75] L. Archer, “Systematic method for designers,” Developments in design
methodology, pp. 57-82, 1984.
[76] H. Takeda, P. Veerkamp and H. Yoshikawa, “Modeling design process,” AI
magazine, vol. 11, no. 4, p. 37, 1990.
[77] J. Eekels and N. F. Roozenburg, “A methodological comparison of the structures
of scientific research and engineering design: their similarities and differences,”
Design Studies, vol. 12, no. 4, pp. 197 - 203, 1991.
[78] J. G. Walls, G. R. Widmeyer and O. A. El Sawy, “Building an information
system design theory for vigilant EIS,” Information systems research, vol. 3, no.
1, pp. 36-59, 1992.
[79] M. Rossi and M. Sein, “Design Research Workshop: A Proactive Research
Approach. Presentation delivered at IRIS 26, August 9-12 (2003),” 2003.
[80] D. Jones, T. Bench-Capon and P. Visser, “Methodologies for ontology
development,” 1998.
[81] P. Bourque, R. E. Fairley and others, Guide to the software engineering body of
knowledge (SWEBOK (R)): Version 3.0, IEEE Computer Society Press, 2014.
[82] A. Anandaraj, P. Kalaivani and V. Rameshkumar, “Development of Ontology-Based Intelligent System For Software Testing,” International Journal of
Communication, Computation and Innovation., vol. 2, no. 2, 2011.
[83] P. Sapna and H. Mohanty, “An Ontology Based Approach for Test Scenario
Management,” in Information Intelligence, Systems, Technology and
Management, Springer, 2011, pp. 91--100.
[84] M. d'Aquin and A. Gangemi, “Is there beauty in ontologies?,” Applied Ontology,
vol. 6, pp. 165--175, 2011.
[85] E. Montiel-Ponsoda, D. V. Suero, B. Villaz, G. Dunsire and E. Escolano, “Style
guidelines for naming and labeling ontologies in the multilingual web.,”
Informatica, 2011.
[86] E. F. Souza, R. Falbo and N. L. Vijaykumar, “Using Ontology Patterns for
Building a Reference Software Testing Ontology,” in Enterprise Distributed
Object Computing Conference Workshops (EDOCW), 2013 17th IEEE
International, IEEE, 2013, pp. 21--30.
[87] D. Vrandecic, “Ontology Evaluation,” Phd. Thesis, 2010.
[88] H. Zhu, “A framework for service-oriented testing of web services,” in Computer
Software and Applications Conference, 2006. COMPSAC'06. 30th Annual
International, vol. 2, IEEE, 2006, pp. 145--150.
[89] Q. Huo, H. Zhu and S. Greenwood, “A multi-agent software engineering
environment for testing Web-based applications,” in Computer Software and
Applications Conference, 2003. COMPSAC 2003. Proceedings. 27th Annual
International, IEEE, 2003, pp. 210--215.
7 Appendices
7.1 Appendix 1: Classes and relations used in ‘A Top
Domain Ontology for software Testing’
This section presents a detailed version of the developed ontology. For each of the classes mentioned in section 4.3.1, we describe the class under ‘Concept’ and its relations under ‘Implementation’. The definitions and content used to describe the concepts are mostly taken from ISTQB [13].
7.1.1 Defects
Concept
A defect is an error, or a bug, in the application. A programmer can make mistakes or errors while designing and building the software. These mistakes or errors mean that there are flaws in the software. We have identified two types of defects:
1. Bug: A software bug is an error in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways.
2. Failure: If, under a certain environment and situation, defects in the application or product get executed, then the system will produce wrong results, causing a failure.
Figure 16 Defects
7.1.2 Formal review steps
Concept
The following are the steps followed in a formal review process.
1. Kickoff: The document(s) are distributed with a clear understanding of
the objectives, process and time scales. Entry criteria are checked for the
more formal review.
2. Individual preparation: In this step people prepare by individually checking the documents before the review meeting. They make notes
about defects, questions, comments and process improvements. The
preparation or individual checking is usually where the greatest value is
gained from a review process. Each person spends time on the review
document (and related documents), becoming familiar with it and/or
looking for defects.
3. Planning: This is an important part of the review process, where people are selected and roles are assigned, and entry and exit criteria are set for a more formal type of review such as an inspection. The various parts of the document (or all of it) are also selected for review.
4. Review Meeting: In this step the meeting can take various forms, from a discussion meeting to a more formal logging meeting. Typically it is a physical meeting of the reviewers, but it can also be achieved with electronic communication (a virtual meeting). Participants may document defects
found, make recommendations or make decisions as to the best way
forward. Record issues, defects, decisions and recommendations made.
5. Follow up: Follow-up is the final stage of the process, where checks are
made as to whether the defects found and logged have been adequately
addressed. The more formal review techniques collect metrics on cost
(time spent) and benefits achieved, and exit criteria are checked. They
also include follow-up of the defects or issues found to ensure that
action has been taken on everything raised.
6. Rework: In rework the author (typically) makes changes to the original
document based on the comments and defects found.
Figure 17 Formal Review Steps
Implementation
The properties related to this class are as follows:
1. hasSteps: This relation states that a formal review has formal review
steps.
2. isStepOf: This is the inverse property of hasSteps.
Figure 18 Relations for formal review steps
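As an illustration, the hasSteps/isStepOf pair could be declared as an object property and its inverse. The sketch below again uses owlready2 with a placeholder IRI; the class and property names are taken from the text.

# Sketch: the hasSteps / isStepOf pair as an object property and its inverse.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/testing-tdo.owl")

with onto:
    class FormalReview(Thing): pass
    class FormalReviewStep(Thing): pass

    class hasSteps(ObjectProperty):
        # A formal review has formal review steps.
        domain = [FormalReview]
        range = [FormalReviewStep]

    class isStepOf(ObjectProperty):
        # Declared as the inverse of hasSteps.
        inverse_property = hasSteps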
7.1.3 Humans involved in testing
Concept
This class describes the range of human involvement in software testing
and their roles and responsibilities [13].
1. Customers: The customers here usually refer to the stakeholders
or the end users.
2. Developers: A developer is an individual that builds and creates software
and applications. He or she writes, debugs and executes the source code
of a software application. A developer is also known as a software
developer, computer programmer, programmer, software coder or
software engineer.
3. Test authors: The person who is responsible for writing down the
procedures involved in testing.
4. Test moderators: Test moderators need to play several roles, such as
ensuring the technology is working properly, ensuring the session runs
to schedule and liaising with the client. However, the moderator’s most
important role is to ensure that the participant is heard.
5. Test reviewers: The professionals that work on the review process are
test reviewers.
6. Testers: A Software tester (software test engineer) should be capable of
designing test suites and should have the ability to understand usability
issues. Such a tester is expected to have sound knowledge of software
test design and test execution methodologies. It is very important for a
software tester to have good communication skills so that they can interact
with the development team efficiently.
7. Testing managers: The person responsible for project management of
testing activities and resources, and evaluation of a test object. The
individual who directs, controls, administers, plans and regulates the
evaluation of a test object.
Figure 19 Humans involved in Testing
Implementation
The properties related to this class are as follows:
1. isPerformed: This relation states that static testing techniques are
performed by humans.
2. isInvolvedIn: This relation states that humans are involved in
performing test strategies.
3. isPartOf: This relation states that humans are part of testing teams.
Figure 20 Relations for human involved in testing
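A possible rendering of this class and its three relations is sketched below. The class names (e.g. HumanInvolvedInTesting) are our own readable spellings of the concepts in the text, and the domains and ranges follow the descriptions given in this subsection.

# Sketch: humans involved in testing and the three relations described above.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/testing-tdo.owl")

with onto:
    class HumanInvolvedInTesting(Thing): pass
    # The seven roles from the text, modelled as subclasses.
    class Customer(HumanInvolvedInTesting): pass
    class Developer(HumanInvolvedInTesting): pass
    class TestAuthor(HumanInvolvedInTesting): pass
    class TestModerator(HumanInvolvedInTesting): pass
    class TestReviewer(HumanInvolvedInTesting): pass
    class Tester(HumanInvolvedInTesting): pass
    class TestManager(HumanInvolvedInTesting): pass

    class StaticTestingTechnique(Thing): pass
    class TestStrategy(Thing): pass
    class TestingTeam(Thing): pass

    class isPerformed(ObjectProperty):
        # Humans perform static testing techniques (as described in this subsection).
        domain = [HumanInvolvedInTesting]
        range = [StaticTestingTechnique]

    class isInvolvedIn(ObjectProperty):
        # Humans are involved in performing test strategies.
        domain = [HumanInvolvedInTesting]
        range = [TestStrategy]

    class isPartOf(ObjectProperty):
        # Humans are part of testing teams.
        domain = [HumanInvolvedInTesting]
        range = [TestingTeam]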
7.1.4 Static testing techniques
Concept
These techniques perform testing of software development artifacts, e.g.,
requirements, design or code, without executing these artifacts.
They are identified as the following types:
1. Formal review: A review characterized by documented procedures and
requirements, e.g., inspection.
2. Informal review: A review not based on a formal (documented)
procedure. This is very much an ad hoc process. Normally it simply
consists of someone giving their document to someone else and asking
them to look it over. A document may be distributed to a number of
people, and the author of the document would hope to receive back
some helpful comments.
3. Inspection: A type of peer review that relies on visual examination of
documents to detect defects, e.g., violations of development standards
and non-conformance to higher level documentation. An Inspection is
the most formal of the formal review techniques. There are strict entry
and exit criteria to the inspection process, it is led by a trained leader or
moderator (not the author), and there are defined roles for searching for
defects based on defined rules and checklists.
4. Technical review: A peer group discussion activity that focuses on
achieving consensus on the technical approach to be taken. A technical
review may have varying degrees of formality. This type of review does
focus on technical issues and technical documents. A technical review
should exclude senior managers because of the adverse effect they may
have on some people (some may keep quiet, some may become too
vocal; either way, this is undesirable).
5. Walkthrough: A step-by-step presentation by the author of a document
in order to gather information and to establish a common understanding
of its content. A walkthrough is typically led by the author of a
document, for the purpose of educating the participants about the
content so that everyone understands the same thing, finding defects is
also an objective - but could be classed as a secondary objective. A
walkthrough may include "dry runs" of business scenarios to show how
the system would handle certain specific situations. For technical
documents, it is often a peer group technique. It tends to be an open-ended session and may vary in formality.
Figure 21 Static testing techniques
Implementation
The properties related to this class are as follows:
1. isPerformedOn: This relation states that static testing techniques are
performed on testing artifacts.
2. isPerformedBy: This relation states that static testing techniques are
performed by humans.
Figure 22 Relations for static testing techniques
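The two relations above can be sketched with explicit domains and ranges as follows; again the identifiers mirror the text and are not necessarily those of the published ontology.

# Sketch: static testing techniques and their two relations.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/testing-tdo.owl")

with onto:
    class StaticTestingTechnique(Thing): pass
    class FormalReview(StaticTestingTechnique): pass
    class InformalReview(StaticTestingTechnique): pass
    class Inspection(StaticTestingTechnique): pass
    class TechnicalReview(StaticTestingTechnique): pass
    class Walkthrough(StaticTestingTechnique): pass

    class TestingArtifact(Thing): pass
    class HumanInvolvedInTesting(Thing): pass

    class isPerformedOn(ObjectProperty):
        # Static testing techniques are performed on testing artifacts.
        domain = [StaticTestingTechnique]
        range = [TestingArtifact]

    class isPerformedBy(ObjectProperty):
        # Static testing techniques are performed by humans.
        domain = [StaticTestingTechnique]
        range = [HumanInvolvedInTesting]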
7.1.5 Test design techniques
Concept
These are procedures used to derive and/or select test cases. One of the aims of
testing is to reveal as much potential for failure as possible, and many
techniques have been developed to do this. The leading principle underlying
such techniques is to be as systematic as possible in identifying a representative
set of program behaviours. Sometimes, test techniques are confused with test
objectives. Test techniques are to be viewed as aids which help to ensure the
achievement of test objectives.
The types of test design techniques are as follows:
1. Experience based techniques: Experience-based testing is where tests
are derived from the tester's skill and intuition and their experience with
similar applications and technologies.
2. Specification based black box testing: Testing, either functional or
non-functional, without reference to the internal structure of the
component or system.
3. Structure based white box testing: Structure-based or white-box
testing is based on an analysis of the internal structure of the software or
the system.
Figure 23 Test design techniques
Implementation
The properties related to this class are as follows:
1. canBeCombinedWith: This relation states that test design technique
can be combined with test strategies.
2. hasTestDesignTechnique: An equivalent relation to hasTestApproach.
3. hasTestApproach: This relation states that test strategies have a test
approach/test design technique.
Figure 24 Relations for test design techniques
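The sketch below illustrates the relations of this class. The equivalence between hasTestDesignTechnique and hasTestApproach is only noted in a comment here; in practice it would be asserted as an owl:equivalentProperty axiom in the ontology editor.

# Sketch: test design techniques and their relations to test strategies.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/testing-tdo.owl")

with onto:
    class TestDesignTechnique(Thing): pass
    class ExperienceBasedTechnique(TestDesignTechnique): pass
    class SpecificationBasedBlackBoxTesting(TestDesignTechnique): pass
    class StructureBasedWhiteBoxTesting(TestDesignTechnique): pass

    class TestStrategy(Thing): pass

    class canBeCombinedWith(ObjectProperty):
        # A test design technique can be combined with a test strategy.
        domain = [TestDesignTechnique]
        range = [TestStrategy]

    class hasTestApproach(ObjectProperty):
        # A test strategy has a test approach / test design technique.
        domain = [TestStrategy]
        range = [TestDesignTechnique]

    class hasTestDesignTechnique(ObjectProperty):
        # Described in the text as equivalent to hasTestApproach; the
        # owl:equivalentProperty axiom would be added in the ontology editor.
        domain = [TestStrategy]
        range = [TestDesignTechnique]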
7.1.6 Test methods
Concept
There are four types of objectives for testing; these are described as test types
or targets of testing. Depending on what the objectives are, the test efforts
will be organised differently.
1. Functional testing: Functional tests are based on functions and features
(described in documents or understood by the testers) and their
interoperability with specific systems, and may be performed at all test
levels (e.g., tests for components may be based on a component
specification).
2. Non-functional testing: Non-functional testing includes, but is not
limited to, performance testing, load testing, stress testing, usability
testing, maintainability testing, reliability testing and portability testing.
It is the testing of "how" the system works.
3. Structural testing: Structural testing is often referred to as "white box"
or "glass box" because we are interested in what is happening "inside the
box". It is most often used as a way of measuring the thoroughness of
testing through the coverage of a set of structural elements or coverage
items. Structural testing can occur at any test level.
4. Testing related to changes: This category is slightly different to the
others because if you have made a change to the software, you will have
changed the way it functions, the way it performs (or both) and its
structure. However, we are looking here at the specific types of tests
relating to changes, even though they may include all of the other test
types.
Figure 25 Test Methods
Implementation
The properties related to this class are as follows:
1. haveTestMethod: This relation links the two classes and states that
test strategies have a test method.
2. isTestMethodOf: This is the inverse property of haveTestMethod.
Figure 26 Relations for test methods
7.1.7 Test objects
Concept
The individual element to be tested. There is usually one test object and many
test items.
Implementation
The properties related to this class are as follows:
1. hasTestObjects: This relation states that test strategies have test
objects.
2. isPerformedOn: This relation states that static testing techniques are
performed on test objects.
Figure 27 Relations for test objects
7.1.8 Test strategies
Concept
This class describes the different levels of testing: major objectives, typical objects of
testing, typical targets of testing and related work products, people who test, and the
types of defects and failures to be identified.
They are as follows:
1. Component testing (also known as unit, module or program testing)
searches for defects in, and verifies the functioning of, software
modules, programs, objects, classes, etc., that are separately testable. It
may be done in isolation from the rest of the system, depending on the
context of the development life cycle and the system. Stubs, drivers and
simulators may be used.
2. Integration testing tests interfaces between components, interactions
with different parts of a system, such as the operating system, file
system and hardware, and interfaces between systems.
3. System testing: Testing an integrated system to verify that it meets
specified requirements.
4. Acceptance testing: Formal testing with respect to user needs,
requirements, and business processes conducted to determine whether or
not a system satisfies the acceptance criteria and to enable the user,
customers or other authorized entity to determine whether or not to
accept the system.
Figure 28 Test Strategies
Implementation
The properties related to this class are as follows:
1. haveTestMethod: This relation links the two classes and states that
test strategies have a test method.
2. isTestMethodOf: This is the inverse property of haveTestMethod.
3. hasAnalysisTechnique: This relation states that test strategies have
analysis techniques from static testing techniques.
4. canBeCombinedWith: This relation states that test design technique
can be combined with test strategies.
5. hasTestDesignTechnique: An equivalent relation to hasTestApproach.
6. hasTestApproach: This relation states that test strategies have a test
approach/test design technique.
7. isInvolvedIn: This relation states that testing teams are involved in
performing test strategies.
8. hasGeneralToolSupport: This relation states the use of the tool support
generally provided to the test technique.
9. providesGeneralToolSupport: This is the inverse property of
hasGeneralToolSupport.
10. hasSpecificToolSupport: This relation states the use of the tool support
specifically provided to a certain test technique.
11. providesSpecificToolSupport: This is the inverse property of
hasSpecificToolSupport.
12. has_objective: This relation states that each test strategy has a test
objective.
13. hasTestObjects: This relation states that test strategies have test objects.
14. hasTestBasis: This relation states that test strategies have their test basis
in testing artifacts.
15. findsDefectsIn: This relation states that test strategies find defects in
testing artifacts.
Figure 29 Relations for test strategies
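Because test strategies participate in many relations, the sketch below shows only a subset of them, together with a simple competency-question style check that lists the four strategies. As before, the IRI is a placeholder and the identifiers mirror the text.

# Sketch: the four test strategies and a subset of their relations.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/testing-tdo.owl")

with onto:
    class TestStrategy(Thing): pass
    class ComponentTesting(TestStrategy): pass
    class IntegrationTesting(TestStrategy): pass
    class SystemTesting(TestStrategy): pass
    class AcceptanceTesting(TestStrategy): pass

    class TestMethod(Thing): pass
    class TestObject(Thing): pass
    class TestingArtifact(Thing): pass

    class haveTestMethod(ObjectProperty):
        domain = [TestStrategy]; range = [TestMethod]
    class isTestMethodOf(ObjectProperty):
        inverse_property = haveTestMethod
    class hasTestObjects(ObjectProperty):
        domain = [TestStrategy]; range = [TestObject]
    class hasTestBasis(ObjectProperty):
        domain = [TestStrategy]; range = [TestingArtifact]
    class findsDefectsIn(ObjectProperty):
        domain = [TestStrategy]; range = [TestingArtifact]

# Competency-question style check: which test strategies does the sketch contain?
print([c.name for c in TestStrategy.subclasses()])
# e.g. ['ComponentTesting', 'IntegrationTesting', 'SystemTesting', 'AcceptanceTesting']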
7.1.9 Testing artifacts
Concept
An artifact is one of many kinds of tangible by-products produced during the
development of software. Some artifacts (e.g., use cases, class diagrams, and
other Unified Modelling Language (UML) models, requirements and design
documents) help describe the function, architecture, and design of software.
Other artifacts are concerned with the process of development itself, such as
project plans, business cases, and risk assessments.
The types of artifacts identified are as follows:
1. Code: Computer instructions and data definitions expressed in a
programming language or in a form output by an assembler, compiler or
other translator.
2. Documentation: Testing documentation involves the documentation of
artifacts that should be developed before or during the testing of
Software.
3. Requirements: Software requirements specification (SRS) is a
description of a software system to be developed, laying out functional
and non-functional requirements, and may include a set of use cases that
describe interactions the users will have with the software.
4. Test case: A set of input values, execution preconditions, expected
results and execution post conditions, developed for a particular
objective or test condition, such as to exercise a particular program path
or to verify compliance with a specific requirement.
5. Test condition: An item or event of a component or system that could
be verified by one or more test cases, e.g., a function, transaction,
feature, quality attribute, or structural element.
6. Test plan: A document describing the scope, approach, resources and
schedule of intended test activities. It identifies amongst others test
items, the features to be tested, the testing tasks, who will do each task,
degree of tester independence, the test environment, the test design
techniques and entry and exit criteria to be used, and the rationale for
their choice, and any risks requiring contingency planning. It is a record
of the test planning process.
7. Test schedule: A list of activities, tasks or events of the test process,
identifying their intended start and finish dates and/or times, and
interdependencies.
Figure 30 Testing Artifacts
Implementation
1. hasTestBasis: This relation states that test strategies have their test basis
in testing artifacts.
2. findsDefectsIn: This relation states that test strategies find defects in
testing artifacts.
3. isPerformedOn: This relation states that static testing techniques are
performed on testing artifacts.
Figure 31 Relations for testing artifacts
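The artifact hierarchy can be sketched as below. As an aside, the natural-language definitions given in this subsection can be attached to the classes as rdfs:comment annotations so that they travel with the ontology; the two annotations shown are abridged from the text.

# Sketch: the testing artifact hierarchy, with definitions as rdfs:comment annotations.
from owlready2 import get_ontology, Thing

onto = get_ontology("http://example.org/testing-tdo.owl")

with onto:
    class TestingArtifact(Thing): pass
    class Code(TestingArtifact): pass
    class Documentation(TestingArtifact): pass
    class Requirements(TestingArtifact): pass
    class TestCase(TestingArtifact): pass
    class TestCondition(TestingArtifact): pass
    class TestPlan(TestingArtifact): pass
    class TestSchedule(TestingArtifact): pass

# rdfs:comment annotations hold the natural-language definitions (abridged).
TestCase.comment.append(
    "A set of input values, execution preconditions, expected results and "
    "execution postconditions, developed for a particular objective or test condition."
)
TestPlan.comment.append(
    "A document describing the scope, approach, resources and schedule of intended test activities."
)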
7.1.9.1 Documentation
1. isDocumentedIn: This relation states that test cases, test conditions and
static testing techniques are documented in documentation.
2. isDocumentationOf: This is the inverse property of isDocumentedIn.
Figure 32 Relations for documentation
7.1.10 Requirements
1. hasTracibility: This relation states that test cases have their traceability
to requirements.
Figure 33 Relations for requirements
7.1.10.1 Test case
1. hasTracibility: This relation states that test cases have their traceability
to requirements.
2. isDevelopedFor: This relation states that test conditions are developed
for test cases.
3. isDocumentedIn: This relation states that test cases, test conditions and
static testing techniques are documented in documentation.
4. isDocumentationOf: This is the inverse property of isDocumentedIn.
5. hasTestID: This relation states that each test case has a unique test ID.
6. hasSeverity: This relation states that each test case has its severity.
7. hasPriority: This relation states that each test case has its priority.
Figure 34 Relations for test case
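The text does not state whether hasTestID, hasSeverity and hasPriority are object or data properties. The sketch below assumes, for illustration only, that they are data properties with string values and that hasTestID is functional because each test case has a unique test ID; the example individual and its values are made up.

# Sketch (assumption): hasTestID, hasSeverity and hasPriority as data properties.
from owlready2 import get_ontology, Thing, DataProperty, FunctionalProperty

onto = get_ontology("http://example.org/testing-tdo.owl")

with onto:
    class TestCase(Thing): pass

    class hasTestID(DataProperty, FunctionalProperty):
        # Functional: each test case has exactly one test ID.
        domain = [TestCase]
        range = [str]

    class hasSeverity(DataProperty):
        domain = [TestCase]
        range = [str]

    class hasPriority(DataProperty):
        domain = [TestCase]
        range = [str]

# Hypothetical example individual.
tc = TestCase("login_with_valid_credentials")
tc.hasTestID = "TC-001"     # functional property: a single value, not a list
tc.hasSeverity = ["major"]
tc.hasPriority = ["high"]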
7.1.10.2 Test condition
1. isDevelopedFor: This relation states that test conditions are developed
for test cases.
2. isDocumentationOf: This is the inverse property of isDocumentedIn.
3. isDocumentedIn: This relation states that test cases and test conditions
are documented in documents.
Figure 35 Relations for test condition
7.1.11 Testing teams
Concept
The test team can be composed of members that are on the project team, whether or
not they are involved in software construction.
Implementation
The implementation of testing teams is based on two relations:
1. isPartOf: This relation states that humans are part of testing teams.
2. isInvolvedIn: This relation states that testing teams are involved in
performing test strategies.
Figure 36 Relations for testing team
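The two relations can also be exercised at the individual level; in the sketch below the team, the tester and the test strategy instance are made-up examples used only to show how the relations connect instances.

# Sketch: a testing team with a member, expressed at the individual level.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/testing-tdo.owl")

with onto:
    class HumanInvolvedInTesting(Thing): pass
    class Tester(HumanInvolvedInTesting): pass
    class TestingTeam(Thing): pass
    class TestStrategy(Thing): pass
    class SystemTesting(TestStrategy): pass

    class isPartOf(ObjectProperty):
        # Humans are part of testing teams.
        domain = [HumanInvolvedInTesting]
        range = [TestingTeam]

    class isInvolvedIn(ObjectProperty):
        # Testing teams are involved in performing test strategies.
        domain = [TestingTeam]
        range = [TestStrategy]

# Hypothetical individuals illustrating the two relations.
team = TestingTeam("system_test_team")
alice = Tester("alice")
alice.isPartOf.append(team)
team.isInvolvedIn.append(SystemTesting("system_test_of_release_1"))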
7.1.12 Tools for testing
Concept
A software product that supports one or more test activities, such as planning
and control, specification, building initial files and data, test execution and test
analysis.
The types of tools are listed below:
1. Performance testing tools are able to provide a lot of very accurate
measures of response times, service times and the like. Other tools in
this category are able to simulate loads including multiple users, heavy
network traffic and database accesses.
2. Static analysis tools analyse source code without executing it. They are
a type of super compiler that will highlight a much wider range of real
or potential problems than compilers do. Static analysis tools can detect all
instances of certain types of fault much more effectively and cheaply than can be
achieved by any other means.
3. Test specification tools: These tools help to specify the tests in the form of
test case design, including the data needed for test case execution.
4. Tool support for management of testing: These tools are used to store
information on how testing is to be done.
5. Tool support for test execution: A type of test tool that is able to
execute other software using an automated test script, e.g.,
capture/playback.
Figure 37 Tools for testing
Implementation
The properties related to this class are as follows:
1. hasGeneralToolSupport: This relation states the use of the tool support
generally provided to the test technique.
2. providesGeneralToolSupport: This is the inverse property of
hasGeneralToolSupport.
3. hasSpecificToolSupport: This relation states the use of the tool support
specifically provided to a certain test technique.
4. providesSpecificToolSupport: This is the inverse property of
hasSpecificToolSupport.
Figure 38 Relations for tools for testing
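Finally, the two pairs of tool-support relations can be sketched as two object properties with their inverses; the last line shows how such a sketch could be saved as RDF/XML. The domain given follows the list of properties in section 7.1.8, and the file name is a placeholder.

# Sketch: the general/specific tool support relations and saving the result.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/testing-tdo.owl")

with onto:
    class TestStrategy(Thing): pass
    class ToolForTesting(Thing): pass

    class hasGeneralToolSupport(ObjectProperty):
        # Tool support generally provided to a test technique.
        domain = [TestStrategy]
        range = [ToolForTesting]
    class providesGeneralToolSupport(ObjectProperty):
        inverse_property = hasGeneralToolSupport

    class hasSpecificToolSupport(ObjectProperty):
        # Tool support specifically provided to a certain test technique.
        domain = [TestStrategy]
        range = [ToolForTesting]
    class providesSpecificToolSupport(ObjectProperty):
        inverse_property = hasSpecificToolSupport

# Persist the sketch as RDF/XML; the file name is a placeholder.
onto.save(file="testing_tdo_sketch.owl", format="rdfxml")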