Research Collection
Doctoral Thesis
YOUR: Robot programming tools for architectural education
Author(s):
Lim, Jason
Publication Date:
2016
Permanent Link:
https://doi.org/10.3929/ethz-a-010748012
Rights / License:
In Copyright - Non-Commercial Use Permitted
DISS. ETH NO. 23626
YOUR: Robot programming tools for architectural education
A thesis submitted to attain the degree
DOCTOR OF SCIENCES of ETH ZURICH
(Dr. sc. ETH Zurich)
Presented by
JASON LIM
M.Eng. Stevens Institute of Technology
B.Arch. Cornell University
Born on 24.02.1980
Citizen of Singapore
Accepted on the recommendation of
Professor Matthias Kohler
Professor Fabio Gramazio
Dr. Robert Aish
2016
Abstract
The introduction of industrial robots to the architectural domain has catalysed the development of
novel approaches to design and production. However, these machines are difficult to control,
especially if they are to be applied for non-standard fabrication. Users need to have robotics domain
specific knowledge and be conversant in a formal language to program instructions for the robot.
Yet the vast majority of architects do not have this technical background. Unless robot programming
can be made more accessible to them, the use of such machines will be restricted to an expert
minority. Hence, this research addresses the question: how should a novice robot programming
system be developed for use in the architecture domain? Considering that the act of robot
programming is currently not well understood, it is unclear what the requirements of such a system
are beyond accessibility.
This research is conducted from an architectural perspective and embedded within a pedagogic
setting. It adopts an empirical, case-study based methodology. A custom robot programming
solution named YOUR was developed by the author for this research. It is designed to support a
hybrid visual-textual approach to robot programming and to be easily extensible by end-users. Two
cases—a Design Research Studio (DRS) and a workshop—were set up to study how architecture
students carried out fabrication-based robot programming tasks using YOUR. Data was collected
from several sources—interviews, observation and students’ computer programs. It provided the
basis for evaluating YOUR’s design and informed its further development. This research contributes,
first of all, a robot programming solution (YOUR) that can be readily deployed in architectural
education. It also fills a current research gap by providing detailed empirical studies of the robot
programming process. Finally, it discusses pedagogic issues involved in teaching robot programming
to architectural students, and identifies novel approaches to design and production for future study.
Zusammenfassung
The introduction of robots into architecture opens up entirely new possibilities for design and production in building. However, programming robots is difficult for inexperienced programmers to learn, particularly with regard to non-standardised fabrication processes and motion sequences. Users need domain-specific knowledge from robotics and must be fundamentally familiar with programming. Most architects and students, however, do not have this knowledge. Conversely, this means that only if robot programming becomes simpler and more intuitive can this technology be made accessible to a larger group of users. This thesis therefore raises the question of how an easily accessible programming environment for controlling industrial robots can be developed for architecture students, and what the fundamental requirements of such a system are.
The work thus relates essentially to pedagogic aspects of architectural education and focuses on several empirical case studies. For this purpose, a programming environment named YOUR was developed, aimed specifically at use by architecture students. YOUR is based on a combination of textual and visual programming components and can be extended and modified by users in a variety of ways. Concretely, the use of the YOUR programming environment, and of its components, was tested in two design studios and a seminar. The data for evaluating YOUR comes from comparative interviews and surveys, from observation of the students and their progress in working with YOUR, and finally from analysis of the modifications and extensions that the students made to the programming environment. The findings from these case studies also informed the targeted further development of the YOUR components. Beyond the development of a specific programming environment for controlling industrial robots, the thesis offers a detailed insight into the interplay between architectural design approaches and programming processes. Finally, central pedagogic questions concerning robot programming are discussed, and new approaches for digital design and fabrication methods are identified.
Acknowledgements
Many people have helped me immeasurably over the course of this PhD research.
First and foremost, I would like to thank my family: Asami for providing constant words of
encouragement, critiquing the thesis, and making sacrifices so that I could focus on the research; as
well as Aiko and Yuki for understanding why I was sometimes too busy to play with you, and
reminding me of what is truly important.
Next, I would like to thank my supervisors—Fabio and Matthias—for introducing me to the world of
robotic fabrication, giving me the space to develop the research, and providing intellectual guidance
throughout. I would also like to express my gratitude to Robert for advising me on this thesis, and for sharing his experiences of conducting software development related research.
I am also indebted to Jan for reading numerous drafts of the thesis, from the first proposal to the
final manuscript, and providing feedback that was always constructive and insightful. I would like to
thank Silke for also providing valuable feedback and much-needed encouragement during the
research proposal writing phase.
I would like to thank the members of module 2 at the Future Cities Laboratory in Singapore—
Michael, Willi, Norman, Raffael and Selen—for your insights and friendship, as well as for working
with me. I would also like to thank the extended team at the Chair of Architecture and Digital
Fabrication in Zurich—in particular Ammar, Ena, Luka, Michael, and Volker, who at one point or
another offered some feedback that made a difference, as well as Ralph, for providing the basis for
this work and being a source of inspiration.
Finally, I am extremely grateful to the students who were part of this research. I would like to thank
those of you from the Design Research Studios—Sebastian, Pascal, Patrick, Sylvius, Sven, Silvan,
Michael, Martin, Florence, Alvaro, Fabienne, and Tobias in the first year; as well as Petrus, Pun, Kai
Qi, Yuhang, David, Lijing, Ping, Jean-Marc, and Andre Wong in the second—for your input on the
programming tools and incredible work, which has been truly inspiring. I also want to thank the
assistants and participants of the workshop—Lennard, Clover, Xia Tian, Amanda Yeo, Eileen, Yiqian,
Amanda Mak, Clifford, Jiehao, Leon, and William—for your feedback, as well as infectious
enthusiasm for learning.
Contents
Abstract
Acknowledgements
Contents
1 Introduction
   1.1 Background
   1.2 Research question
   1.3 Scope
   1.4 Thesis structure
2 State of the Art
   2.1 Programming in architectural design
      2.1.1 Text programming systems
      2.1.2 Visual programming systems
      2.1.3 Summary: Programming in architectural design
   2.2 Industrial robot programming
      2.2.1 On-line robot programming
      2.2.2 Off-line robot programming
      2.2.3 Summary: Industrial robot programming
   2.3 Robot programming in architecture
      2.3.1 Combining existing solutions
      2.3.2 Custom visual based programming solutions
      2.3.3 Custom text based programming solutions
      2.3.4 Summary: Robot programming in architecture
3 Methodology
   3.1 Choice of approach
   3.2 Case studies
   3.3 Research instrument
   3.4 Data collection
   3.5 Data interpretation and representation
4 Case study: Design Research Studio
   4.1 Design Research Studio setup
   4.2 Robot programming setup: 2012 spring semester
   4.3 Results: 2012 spring semester
      4.3.1 Tiong Bahru Tower
      4.3.2 Lakeside Tower
      4.3.3 Rochor Tower
   4.4 Robot programming setup: 2012 fall semester
   4.5 Results: 2012 fall semester
      4.5.1 Nested Voids
      4.5.2 Bent Stratifications
      4.5.3 Undulating Terraces
   4.6 Interview: 2012 Design Research Studio
   4.7 Robot programming setup: 2013 spring semester
   4.8 Results: 2013 spring semester
   4.9 Robot programming setup: 2013 fall semester
   4.10 Results: 2013 fall semester
      4.10.1 Sequential Frames
      4.10.2 Mesh Towers
      4.10.3 Vertical Avenue
   4.11 Interview: 2013 Design Research Studio
   4.12 Pedagogic issues
5 Case study: Workshop
   5.1 Workshop setup
   5.2 Robot programming setup
   5.3 Data collection
   5.4 Results: Group 1
   5.5 Results: Group 2
   5.6 Results: Group 3
   5.7 Results: Group 4
   5.8 Interview results
   5.9 Pedagogic issues
6 Discussion
   6.1 A problem of scale
   6.2 Extending YOUR
   6.3 Challenging the dichotomy between text and visual programming
   6.4 Flipping the digital design to physical production chain
   6.5 The limits of automation and the promise of collaborative building
7 Conclusion
   7.1 Pedagogic issues
   7.2 Outlook
8 Bibliography
   8.1 Publications
   8.2 Online resources
9 Appendix
   9.1 Interview: 2012 Design Research Studio fall semester
   9.2 Interview: 2013 Design Research Studio fall semester
   9.3 Interview: Workshop
10 Project credits
11 List of figures
1 Introduction
1.1 Background
The industrial robot 1 was designed to be a general-purpose machine. It can freely position and orient
its tip in space. Instead of a hand, it has an end-effector which is replaceable; this allows it to
perform different physical tasks. Sensors may be wired to the robot’s controller, giving it the
capacity to sense the external world. Though originally developed for industrial purposes, robots
have been appropriated for use in other domains 2 by virtue of their versatility.
Architects emerged as a new class of robot end-user in the past decade. 3 They have experimented
with using robots to fabricate digitally designed artefacts ranging from models to building
components. Robotic fabrication offers multiple advantages. Architects can leverage the robot’s
ability to add, subtract or form material freely in space to realise geometrically complex designs 4;
develop custom end-effectors to explore an extended range of fabrication processes; and integrate
sensor feedback in order to work with unconventional material systems that exhibit dynamic
behaviour. 5 Perhaps most importantly, the use of such technology obliges architects to directly address constructive and material issues, which have arguably been marginalised, or at least de-emphasised, over the course of the digital movement in architecture. 6

1 The Robotics Industries Association (RIA) defines the industrial robot as a “reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialised devices through variable programmed motions for the performance of a variety of tasks”. Joseph Jablonowski and Jack Posey, “Robotics Terminology,” in Handbook of Industrial Robotics, 2nd edition, ed. Shimon Nof (New York: John Wiley & Sons, 1999), 1271.
2 For example, robots have also been appropriated for use in art (Diaz) and music (Bökesoy & Adler). Frederico Diaz, “Outside Itself: Interactive Installation Assembled by Robotic Machines Untouched by Human Hands,” in Robotic Fabrication in Architecture, Art and Design, ed. Sigrid Brell-Cokcan and Johannes Braumann (Vienna: Springer, 2012), 180–183. Sinan Bökesoy and Patrick Adler, “1city1001vibrations: Development of a interactive sound installation with robotic instrument performance,” in Proceedings of the International Conference on New Interfaces for Musical Expression, ed. Alexander Jensenius et al. (Oslo: University of Oslo and Norwegian Academy of Music, 2011), 52–55.
3 In 2005, the Chair of Architecture and Digital Fabrication at ETH Zurich built the “world’s first robotic laboratory for the research of architectural design and fabrication processes.” Fabio Gramazio et al., The Robotic Touch: How Robots Change Architecture (Zurich: Park Books, 2014), 10.
4 For example, the Spatial Aggregations elective courses, taught at the Chair of Architecture and Digital Fabrication, dealt with geometrically complex assemblies. “Spatial Aggregations 1”, accessed January 1st 2016, http://gramaziokohler.arch.ethz.ch/web/e/lehre/228.html
5 For example, the Procedural Landscapes elective courses, taught at the Chair of Architecture and Digital Fabrication, investigated sand as a reusable moulding material for concrete formwork. “Procedural Landscapes 1,” accessed January 1st 2016, http://gramaziokohler.arch.ethz.ch/web/e/lehre/208.html
Standard digital fabrication machines, such as Computer Numeric Controlled (CNC) mills and laser cutters, are designed to carry out specific processes. These processes are well-defined since their constraints and parameters are known in advance. 7 As a result, specific Computer Aided Manufacturing (CAM) software can be developed that allows users to set up the fabrication process through a simple user interface. The software automatically generates control instructions, thus freeing users from having to do so themselves. In comparison, the robot is designed to be a general-purpose machine. CAM solutions cannot be implemented for all robotic processes, since these are potentially unlimited. 8 Hence, users are responsible for authoring instructions.
Programming is one means of instructing the robot. It commonly involves writing instructions in a
text programming language—such as RAPID 9, KRL 10 and URScript 11—provided by the robot
manufacturers. Here, the end-user controls the robot by means of an abstract notation, rather than
by manually leading 12 it or using a teach pendant 13. In comparison to CAM software, programming
introduces a “level of indirection” 14 that separates users from the machine, thus making the process
of control less intuitive. However, it also introduces a level of generality that matches that of the robot, as well as different levels of abstraction. The latter allows users, for example, to control the machine more precisely by specifying individual low-level instructions, or more efficiently by developing higher-level constructs specific to their application.
There are two immediate challenges facing the novice robot programmer. The first is to become
conversant in a programming language. This involves learning what the rules are for assembling
primitives (syntax), and the meanings of the resultant constructs. 15 Beyond acquiring such knowledge, novices also have to learn strategies to design, generate and evaluate programs. 16 This is mainly gained through practical experience.

6 Jan Willmann et al., “Digital by Material: Envisioning an extended performative materiality in the digital age of architecture,” in Robotic Fabrication in Architecture, Art and Design, ed. Sigrid Brell-Cokcan and Johannes Braumann (Vienna: Springer, 2012), 12.
7 Christoph Schindler, “Ein architektonisches Periodisierungsmodell anhand fertigungstechnischer Kriterien, dargestellt am Beispiel des Holzbaus” (PhD diss., ETH Zurich, 2009).
8 Tobias Bonwetsch, “Robotic Assembly Processes as a Driver in Architectural Design,” Nexus Network Journal 14 no. 3 (2012): 484–485.
9 RAPID is a proprietary domain specific language for programming ABB robots.
10 KUKA Robot Language (KRL) is a proprietary domain specific language for programming KUKA robots.
11 URScript is a proprietary domain specific language for programming Universal Robots.
12 Lead-through programming is discussed in greater detail in Chapter 2.2.1.
13 A teach pendant is a handheld control terminal. Teach pendant programming is discussed in greater detail in Chapter 2.2.1.
14 Robert Aish, “From Intuition to Precision,” in Digital Design: The Quest for New Paradigms 23rd eCAADe Conference Proceedings, ed. José Duarte et al. (Lisbon: Technical University of Lisbon, 2005), 11.
15 Matthias Felleisen et al., How to Design Programs: An Introduction to Programming and Computing (Cambridge/MA: MIT Press, 2001), 97–114.
Figure 1-1 An example of a movement function in KUKA Robot Language (KRL).
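The figure itself is not reproduced in this text version. A representative KRL motion call of the kind the caption refers to might look as follows (an illustrative sketch; the exact expression shown in the original figure may differ):

```krl
LIN {X 500, Y 200, Z 400, A 0, B 90, C 0}
```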
Second, novices have to acquire domain specific knowledge. This can be illustrated through an
example. The expression shown in Figure 1-1 calls a standard motion procedure that accepts a
pose 17 as its first argument. To understand what the procedure refers to, novices have to learn
about the different types of robotic motion and principles of kinematics 18. To understand what a
pose means, they have to learn how position and orientation information is mathematically
represented. 19 Since motion-related procedures are the main building blocks of robot programs, it is important to understand the kinematics and mathematical concepts which underlie these abstractions.

16 Anthony Robins et al., “Learning and Teaching Programming: A Review and Discussion,” Computer Science Education 13 no. 2 (2003): 164.
17 A pose describes the position and orientation of the robot’s tip.
18 Kinematics concerns the motion of bodies in a robotic mechanism. It is fundamental to the control and simulation of robots. Kenneth Waldron and James Schmiedeler, “Kinematics,” in Springer Handbook of Robotics, ed. Bruno Siciliano and Oussama Khatib (Berlin: Springer, 2008), 9–33.
19 Orientation can be represented mathematically in several formats: rotation matrices, Euler angles, axis-angles and quaternions. James Verth and Lars Bishop, Essential Mathematics for Games and Interactive Applications: A Programmer’s Guide, 2nd edition (Burlington: Morgan Kaufmann, 2008), 173–202.
If the task of programming these machines is too difficult, a wider audience of architects 20 will be unable to utilise robots for fabrication. In recent years, though, software solutions have been developed that address this problem of accessibility. 21 The early results of their use have shown that robot programming is indeed feasible for architects and a promising field of research.
1.2 Research question
While research addressing robotic fabrication in architecture has advanced significantly in the past
decade, the act of robot programming itself has been a neglected subject of study. 22 To date, there
has been no empirical research conducted that focuses on how architects learn and carry out the
task of programming robotic fabrication processes; hence it remains poorly understood. As a
consequence of this gap, it is unclear what kind of robot programming system is most appropriate
for architects.
Yet a current trend is to develop solutions based on a visual programming paradigm. In the last four
years alone, five different robot programming plugins 23 were created for Grasshopper—a graphical editor for generative design. 24 Visual dataflow programming lets programmers express algorithmic logics by assembling graphical icons/components that represent data or operations. In text programming, the same logics have to be expressed as sequences of commands and abstract symbols. 25 It is commonly assumed that because of these simplifications, visual programming is easier for novices than text programming. 26 Using these solutions, architects can assemble standard Grasshopper components to create a design, and then connect them with the plugin components to set up the robotic fabrication process. Ostensibly, this is a promising approach for making robot programming accessible.

20 The Australian Institute of Architects defines architects as professionally trained designers who “combine creative design with a wide range of technical knowledge to provide integrated solutions for built and natural environments.” The Singapore Institute of Architects declares that an “architect combines the practical considerations of the site, the clients' needs and costs with a creative understanding of materials, aesthetics, and the cultural and physical contexts” in order to implement a design solution. “Becoming an architect,” Australian Institute of Architects, accessed January 1st 2016, http://www.architecture.com.au/architecture/national/becoming-an-architect. “What is an architect,” Singapore Institute of Architects, accessed January 1st 2016, http://www.sia.org.sg/who-is-an-architect.html
21 These solutions will be discussed in further detail in Chapter 2.3.
22 In contrast, researchers in the field of computer science education have studied the novice programming process extensively. The results of their studies have informed the development of programming systems oriented towards computer science education. Anthony Robins et al., “Learning and Teaching Programming,” 137–172.
23 These plugins are: KUKA|prc, HAL, Godzilla, Crane and Scorpion. They will be discussed in further detail in Chapter 2.
24 Grasshopper is a visual programming language and environment that is integrated into the Rhinoceros 3D modelling software. It was originally developed by David Rutten and is geared towards algorithmic design. “Grasshopper: Algorithmic modelling for Rhino,” Grasshopper, accessed January 1st 2016, http://www.grasshopper3d.com. “Rhinoceros,” Robert McNeel and Associates, accessed January 1st 2016, http://www.rhino3d.com
However, if the ultimate aim is to strengthen the impact of deploying robots for architectural production, 27 then accessibility may not be the only, nor indeed the most significant, requirement for the robot programming system. An immediate concern is that visual programs, while possibly easier
to create, are especially prone to the “scaling-up-problem.” 28 This means that when a visual program
is expanded in terms of size or applicability to different problems, 29 it becomes considerably less
comprehensible and more difficult to maintain. Another concern is that several of the visual
Grasshopper-based robot programming solutions provide a hard-coded library of components that
end-users cannot modify. These components represent a fixed vocabulary of robot programming
primitives that if not sufficiently rich, could hinder architects from experimenting with novel robotic
fabrication processes.
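The extensibility concern can be illustrated with a sketch. In a system whose component library end-users can extend (a hypothetical Python API, not the actual interface of YOUR or of any Grasshopper plugin), a new process-specific command can be composed from existing primitives rather than being limited to a fixed vocabulary.

```python
# Sketch of end-user extensibility (hypothetical API): a fixed library offers
# only primitive commands, but users can compose them into new commands
# tailored to a specific fabrication process.

class Command:
    """Base class for robot commands; compiles to low-level instructions."""
    def compile(self):
        raise NotImplementedError

class Move(Command):
    def __init__(self, pose):
        self.pose = pose
    def compile(self):
        return [("move", self.pose)]

class SetGripper(Command):
    def __init__(self, closed):
        self.closed = closed
    def compile(self):
        return [("gripper", self.closed)]

class PickBrick(Command):
    """A user-defined extension composed from the primitives above."""
    def __init__(self, pose):
        self.pose = pose
    def compile(self):
        return (SetGripper(False).compile()
                + Move(self.pose).compile()
                + SetGripper(True).compile())

instructions = PickBrick((0.5, 0.0, 0.2)).compile()
print(len(instructions))
```

With a hard-coded component library, a primitive missing from the vocabulary simply cannot be expressed; with composition of this kind, users can grow the vocabulary towards their own fabrication process.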
Consequently, this research addresses the question: how should a novice robot programming system
be developed for use in the architecture domain? Considering the lack of research literature
addressing the act of robot programming in architecture, it is unclear what the requirements of such
a system are beyond accessibility. If it is to be based on a visual programming paradigm, then how
can scalability and extensibility issues be addressed? Furthermore, if such a novice robot
programming system is successfully implemented, then what are the consequences of its use in
terms of impacting the “intellectual act of design and the material act of building” 30 respectively?
25 Margaret M. Burnett et al., “Scaling Up Visual Programming Languages,” in Computer 28 no. 3 (1995): 45.
26 The reasons why this might be the case are discussed in greater detail in Chapter 2.1.4.
27 This refers both to the production of final building components as well as models and prototypes.
28 Burnett et al., “Scaling Up Visual Programming Languages,” 46.
29 Burnett et al., “Scaling Up Visual Programming Languages,” 46–47.
30 Mario Carpo, “Revolutions: Some New Technologies in Search of an Author,” in Log 15 (2009): 51.
1.3 Scope
The research is embedded within the Chair of Architecture and Digital Fabrication at ETH Zurich 31,
which has been investigating the architectural implications and potentials of additive robotic
fabrication 32 over the last decade. To extend such investigations to the larger scale, the Chair set up
a specialised research module – Architecture and Digital Fabrication 33 at the Future Cities
Laboratory 34 in Singapore. The module investigates how robotic fabrication may be applied to the
design and construction of future high rises. This is done through three research projects, including
this PhD work, as well as a design research studio. The module’s resources include a laboratory
containing three customised robotic systems. 35 The setup allows a range of material experiments to
be carried out.
This research is conducted from an architectural perspective and embedded within a pedagogic
setting. It adopts a case study based approach, which is a form of qualitative inquiry. 36 A custom
robot programming solution named YOUR was developed by the author; it served as the vehicle for
carrying out the research work. Two cases—a Design Research Studio (DRS) and a workshop—were
set up to study how architecture students carried out fabrication-based robot programming 37 tasks
using YOUR. Empirical data was collected from multiple sources including programming artefacts,
interviews and direct observations. The decisions underlying YOUR’s design are evaluated according
to this data and new requirements derived from it. In addition, the data is further interpreted
through qualitative and quantitative metrics to obtain a rich, detailed description of students’ robot
programming process.
31 “Gramazio Kohler Research,” Gramazio Kohler Research, accessed January 1st 2016, www.gramaziokohler.arch.ethz.ch
32 Fabrication processes can generally be classified as additive, subtractive or formative. Additive fabrication involves the aggregation of materials into greater wholes and is comparatively less wasteful.
33 Module 2—Architecture and Digital Fabrication is led by Principal Investigators Professor Fabio Gramazio and Professor Matthias Kohler. The Singapore-based team includes Michael Budig, Selen Ercan, Norman Hack, David Jenny, Dr. Silke Langenberg, Willi Lauer, Jason Lim and Raffael Petrovic.
34 The Future Cities Laboratory (FCL) is a trans-disciplinary research centre focused on urban sustainability. It contains ten research modules that investigate a range of topics related to the future city. “Future Cities Laboratory,” Future Cities Laboratory, accessed January 1st 2016, http://www.futurecities.ethz.ch/
35 Each system consists of a Universal Robots UR5 robotic arm mounted to a Güdel linear axis system.
36 Creswell identifies five different qualitative approaches: narrative research, phenomenology, grounded theory, ethnography and case studies. John Creswell, Qualitative Inquiry and Research Design: Choosing Among Five Approaches, 2nd ed. (London: Sage Publications, 2007), 6–10.
37 In this thesis, the term “robot programming” is always used in relation to fabrication (i.e. the programming of robotic fabrication processes), even if this is not explicitly stated.
The first outcome of this research is the development of a robot programming solution (YOUR) that
makes robot control accessible. It has been empirically tested, and is ready to be deployed in a
pedagogic setting. The novelty of YOUR lies in its support for a hybrid visual-textual approach to
robot programming, and the fact that it is designed to be extended by end-users. The second
outcome is a detailed documentation of the process by which students design, implement and use
robot programs, which helps to fill a gap in current research literature; it leads to a discussion of
end-user programming issues especially with regards to scalability, maintenance and collaboration.
The third outcome is a deepened understanding of the pedagogic issues involved with teaching
robot programming to architecture students; and recommendations for how such instruction can be
carried out in studio and workshop contexts.
1.4 Thesis structure
Chapter 2 reviews the state of the art in programming systems that target first the architecture domain and then the robotics domain. It then reviews the existing solutions used by architects for programming robots, focusing on those that are based on visual programming.
Chapter 3 introduces the case study methodology that was applied in the research. It explains the choice of methodology and presents the two selected cases, as well as the primary research instrument—YOUR. The chapter subsequently describes the approach taken to develop YOUR and explains the methods for collecting, interpreting and representing data from the case studies.
Chapter 4 covers the first case study, which was a Design Research Studio (DRS) that was run twice.
It explains the setup of the studio and assignment given to student teams, which included the task of
programming robotic model fabrication processes. The chapter describes how each team used YOUR
to implement a robot program in accordance with their architectural design and model production
concepts, followed by the results of interviews conducted at the end of each studio. In addition, it
details the evolution of YOUR’s design as it was used and tested in the DRS. The chapter concludes
by discussing pedagogic issues involved in teaching students robot programming within a studio
context.
Chapter 5 covers the workshop case study. It explains the setup of the workshop and the task given
to each student team, which was to extend a robot program, prepared in advance, for controlling a
specific fabrication process. The chapter describes in detail how each team accomplished this task,
as well as the results of an interview conducted at the workshop’s end, where students evaluated
the usability of the robot programming tools. The chapter concludes by discussing pedagogic issues
that arose in the workshop and lessons that were learnt.
Chapter 6 synthesises the results from the two previous chapters and extracts five themes for
further discussion: the problem of scale in robot programming; the importance of end-user
extensibility in robot programming tools; the merits of combining visual and text programming; the
reversal of the digital design to physical production sequence; and the potentials of human-robot
collaborative building.
Finally, chapter 7 summarises the results and findings from the research and proposes new
directions in which it may be further developed.
2 State of the Art
This chapter surveys the state-of-the-art in programming systems 38 that target the domains of architecture, robotics and, finally, the intersection between the two. The design and implementation of these systems, with respect to both the programming language and environment, are described, and their impact is assessed in terms of how well they support end-users in achieving design- and/or production-related domain goals.
2.1 Programming in architectural design
The roots of programming in architecture can be traced back to the moment when computers
started to become widespread in academia and practice. 39 Even from this early stage, the potentials
of applying programming to architectural design were recognised and discussed. 40 Compared to
using Computer Aided Design (CAD) software, programming represents a deeper form of
engagement with the computer. 41 It involves learning about computational geometry, which is the
foundation of most design applications, 42 and gaining proficiency in a formal programming language.
Though it requires significant intellectual investment and introduces a level of indirection to the
design process, 43 programming offers multiple advantages. Architects may, for example, use
38 A programming system comprises a language and an environment for creating programs. While they can be separate, for example a Python program can be written in a number of different code editors, they may also be tightly integrated—this is usually the case with visual programming. Hence, this chapter will refer to programming systems in general.
39 This occurred in the mid to late 1980s when affordable 16 bit machines, such as the IBM PC and Apple Macintosh, were introduced and a mass market consequently established. William J. Mitchell, “Afterword: The Design Studio of the Future,” in The Electronic Design Studio, ed. Malcolm McCullough et al. (Cambridge/MA: MIT Press, 1990), 482.
40 For example, books such as The Art of Computer Graphics Programming and Microcomputer Aided Design for Architects and Designers were published around this time. They addressed the topic of programming and provided examples written in Pascal and AutoLisp respectively. William J. Mitchell et al., The Art of Computer Graphics Programming (New York: Van Nostrand Reinhold, 1987). Gerhard Schmitt, Microcomputer Aided Design for Architects and Designers (New York: John Wiley & Sons, 1988).
41 Mark Burry, Scripting Cultures: Architectural Design and Programming (West Sussex: John Wiley & Sons, 2011), 8.
42 Computational geometry is the “underlying abstraction embedded in most design applications”. Robert Aish, “The Ghost in the Machine,” in Architectural Review no. 1389 (2012): 20.
43 This is because the architect manipulates the notation that in turn generates the design representation. Robert Aish, “From Intuition to Precision,” in Digital Design: The Quest for New Paradigms 23rd eCAADe Conference Proceedings, ed. José Duarte et al. (Lisbon: Technical University of Lisbon, 2005), 11.
iteration 44 abstractions offered by the programming language to generate a large number of
elements that make up a high resolution design; change the parameters driving an algorithm to
quickly generate design variants and explore a large solution space; and utilise mathematical
formulae to describe complex geometries in a concise manner.
More significantly, programming offers architects an opportunity, as suggested by Aish, to define a
personal design vocabulary from first principles. 45 This is realised through creating procedural and
data abstractions to represent custom operations and primitives. For example, an architect may
develop a procedure to generate a mathematically defined surface, perhaps to represent a roof
structure, which cannot be modelled using the commands offered by the CAD program. He/she may
also define a class to represent such surfaces and can thereafter create multiple instances of it.
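This idea can be sketched in plain Python. The class name, the surface formula and the parameter below are all invented for illustration; a real implementation would call into the CAD program's geometry library rather than return plain tuples:

```python
from math import sin, cos

class SinusoidalRoof:
    """Hypothetical data abstraction for a mathematically defined
    roof surface: z = amplitude * sin(x) * cos(y)."""

    def __init__(self, amplitude):
        self.amplitude = amplitude  # parameter driving the surface shape

    def point_at(self, x, y):
        # Evaluate the surface formula at parameters (x, y).
        return (x, y, self.amplitude * sin(x) * cos(y))

# Multiple instances of the same abstraction, each a design variant.
low = SinusoidalRoof(amplitude=0.5)
high = SinusoidalRoof(amplitude=2.0)
```

Once such a class exists, the surface becomes part of the architect's personal vocabulary: variants are produced by instantiating it with different parameters rather than by remodelling.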
Architects who can program are therefore emancipated from the built-in paradigms of their CAD
tools 46, which impose a limited, hard-coded semantics of design. 47
However, it is only in recent years that a culture 48 has emerged around programming and
computational design, which has thus far been a fringe activity in the profession. On one hand, this is
because knowledge is now easily disseminated through the internet. On the other hand, new
generation programming systems are more accessible. The following chapters survey six of these
systems, which cover a spectrum of programming paradigms. To provide a common point of
reference, a procedure that generates a conic spiral, which was originally used by Leitão et al. 49 to
compare four of these languages, is reproduced in all the reviewed systems.
2.1.1 Text programming systems
The vast majority of architects write small programs to solve specific design problems, rather than
develop full-scale applications for a general audience. This form of end-user programming is usually
referred to as scripting. It is geared towards rapid prototyping of programs for exploratory design.
44 Iteration is the process of performing a series of steps repeatedly. International Organisation for Standardisation, ISO/IEC/IEEE 24765: Systems and software engineering—Vocabulary, 1st ed. (Geneva: ISO/IEC, 2010), 190.
45 Achim Menges, “Instrumental Geometry,” Architectural Design 76 no. 2 (2006): 44.
46 Fabio Gramazio and Matthias Kohler, Digital Materiality in Architecture (Baden: Lars Müller Publishers), 8.
47 Robert Aish refers to CAD software as imposing “hard-coded” architectural semantics in his discussion with Achim Menges. Menges, “Instrumental Geometry,” 44.
48 Burry, Scripting Cultures, 8–11.
49 Leitão et al. implemented a spiral generating procedure in RhinoScript, VisualScheme and Grasshopper. António Leitão et al., “Programming Languages for Generative Design: A Comparative Study,” in International Journal of Architectural Computing 10, no. 1 (2012): 139–162.
Here, four scripting languages are reviewed which target Rhinoceros 50 and AutoCAD 51—two CAD
platforms that are now well established in academia, research and professional practice. 52
RhinoScript 53 is a scripting tool for Rhinoceros that is based on Microsoft’s VBScript language. In terms
of syntax, RhinoScript makes liberal use of non-abbreviated keywords and avoids introducing too
many symbols. 54 The resultant code (Figure 2-1), while verbose, is more readable.
It offers over 800 procedures 55 with many relating to geometry creation and manipulation. Many of
these procedures correspond to standard Rhinoceros commands and are descriptively
named. For example, the AddPoint function in line 9 of the code fragment literally adds a point to
the document.
1   Sub ConicSpiral(Length, N)
2       Dim arrPoint(2)
3       Dim t, pi, i
4       For i = 0 To N - 1
5           t = i * Length / N
6           arrPoint(0) = t * Cos(5 * t)
7           arrPoint(1) = t * Sin(5 * t)
8           arrPoint(2) = t
9           Call Rhino.AddPoint(arrPoint)
10      Next
11  End Sub
Figure 2-1 Spiral sub-routine written in RhinoScript 56
RhinoPython targets the same Rhinoceros CAD platform as RhinoScript, 57 but differs from it in two
important ways. First, it allows users to script in the Python 58 language, which has a “simple, easy to
50 Rhinoceros is a product of Robert McNeel & Associates. “Rhinoceros,” Robert McNeel & Associates, accessed January 1st 2016, http://www.rhino3d.com/
51 AutoCAD Architecture is a product of Autodesk Inc. “AutoCAD Architecture,” Autodesk Inc., accessed January 1st 2016, http://www.autodesk.com/products/autocad-architecture/overview
52 In 2012, there were more than 455,000 people subscribed to the Rhino4 mailing list. This is a conservative estimate of the user base. “Rhino’s market share?” Grasshopper Algorithmic Modelling for Rhino, accessed January 1st 2016, http://www.grasshopper3d.com/forum/topics/rhino-s-market-share
53 “RhinoScript Wiki”, Robert McNeel & Associates, accessed January 1st 2016, http://wiki.mcneel.com/developer/rhinoscript
54 For example, it uses “And” in place of “&&” for the logical operator and avoids semi-colons and square brackets which are common in other languages.
55 An inspection of the RhinoScript module reveals that there are currently 861 functions.
56 The subroutine written here is based on an example from the RhinoScript 101 primer. David Rutten, RhinoScript 101 for Rhinoceros 4.0, (Seattle: Robert McNeel & Associates, 2007), 37.
learn syntax [that] emphasises readability” 59. Second, it provides access to the underlying
RhinoCommons Software Development Kit (SDK), and consequently Rhinoceros’s full object library,
as well as the extended .NET framework and Python standard library. At the same time, it also offers
functions corresponding to those in RhinoScript. Figure 2-2 shows Leitão’s implementation of the
spiral sub-routine in Python. 60 The script imports (line 2) the Point3d class from RhinoCommons SDK
and makes use of list comprehensions 61 (lines 8-9).
1   from math import sin, cos
2   from Rhino.Geometry import Point3d
3
4   def frange(l,n):
5       return [float(i)*l/n for i in range(n+1)]
6
7   def conic_spiral(length, n):
8       return [Point3d(t * cos(5*t), t * sin(5*t), t)
9               for t in frange(length,n)]
Figure 2-2 Spiral sub-routine written in Python.
Unlike the two previous languages, DesignScript 62 was developed specifically for use in the
architectural design domain. 63 Code written in DesignScript (Figure 2-3) is structured in blocks
delimited by curly braces and statements are terminated by semi-colons. In this way, the language
resembles C#; 64 however, DesignScript additionally offers syntactic abstractions such as range
expressions 65 (Figure 2-3: line 8) and optional typing, which makes the code terser and easier to
write. DesignScript provides an extensive vocabulary of geometric primitives and operations derived
57 RhinoPython was introduced in Rhinoceros version 5.
58 More specifically, it uses IronPython, which is an implementation of the Python programming language that is tightly integrated with the .NET framework. “IronPython,” accessed January 1st 2016, http://ironpython.net/
59 Guido Van Rossum, “What is Python? Executive Summary,” Python, accessed January 1st 2016, https://www.python.org/doc/essays/blurb/
60 Leitão et al., “Programming Languages for Generative Design: A Comparative Study,” 146.
61 A list comprehension is a syntactic construct that allows programmers to create lists in a concise way and originated from the functional Haskell language.
62 DesignScript is an Autodesk product. “DesignScript,” Autodesk Inc., accessed January 1st 2016, http://designscript.ning.com/
63 Robert Aish, “DesignScript: Origins, Explanation, Illustration,” in Computational Design Modelling: Proceedings of the Design Modelling Symposium Berlin 2011, ed. Christoph Gengnagel et al. (Berlin: Springer, 2012), 2.
64 C# is a general programming language developed by Microsoft. “C# Reference,” Microsoft Developer Network, accessed January 1st 2016, https://msdn.microsoft.com/en-us/library/618ayhy6.aspx
65 A range expression can be used to create numeric collections.
from the Autodesk Shape Manager Geometry library. Moreover, it offers access to the .NET
framework as well as any compiled Dynamic Linked Library (DLL). In the example (Figure 2-3), code is
written in an imperative block (lines 6–10) using a “for” loop, which is similar in style to the
RhinoScript version.
1   import("ProtoGeometry.dll");
2   import("Math.dll");
3
4   def conicSpiral(length, n)
5   {
6       return = [Imperative]
7       {
8           t = 0 .. n ..(length / n);
9           return = Point.ByCoordinates(t * Math.Cos(5 * t), t * Math.Sin(5 * t), t);
10      }
11  }
Figure 2-3 Spiral function written in DesignScript.
Rosetta 66 is an open source programming system geared towards generative design. It allows users
to write programs in a front-end language of choice and target different back-end CAD applications
without having to refactor the code. 67 As an example, the spiral procedure is written in Racket 68 for
Rhinoceros (Figure 2-4). In terms of syntax, Racket uses prefix notation and parentheses to delimit
expressions, 69 both of which have been criticised for affecting readability. The example
demonstrates a functional style of programming, whereby an anonymous function is mapped over
elements in a list to create the points (lines 5-6). 70
66 Rosetta was developed at the Instituto Superior Técnico, Technical University of Lisbon by António Leitão and José Lopes. “Rosetta-lang,” Rosetta, accessed January 1st 2016, https://code.google.com/p/rosetta-lang/
67 Rosetta supports Javascript, AutoLisp, Racket and Python as front-end languages, and targets AutoCAD and Rhinoceros as the back-end applications.
68 “Racket – A programmable programming language,” Racket, accessed January 1st 2016, http://racket-lang.org/
69 An in-fix style “1 + 2” is more familiar because it is used, for example, in arithmetic. The equivalent prefix notation is “+ 1 2”. Parentheses become difficult to read when there is excessive nesting. The term “Lots of Irritating Superfluous Parentheses (LISP)” has been used to deride the Lisp family of languages which Racket belongs to.
70 The anonymous function is created using a lambda expression. map is a high-order function that accepts another function as an argument and applies it to elements in a list.
1   (require (planet aml/rosetta))
2
3   (backend rhino5)
4   (define (conic-spiral a b length n)
5     (map (lambda (t) (point (cyl (* a t) t (* b t))))
6          (range length n)))
Figure 2-4 Spiral function written in Racket.
Rosetta’s purported advantage is its portability 71—the same program can target different CAD
platforms. However, it is unclear why this is an advantage, since architects are unlikely to switch
between CAD platforms during the design process. By comparison, DesignScript can be used across
Autodesk products, thus allowing architects, for example, to invoke functionality offered by
performance analysis software such as Ecotect. 72 Similarly, many software packages allow users to
script in Python.
RhinoPython and DesignScript allow end-users to access the underlying geometric libraries that the
professional developers of the CAD software use themselves. RhinoScript does not provide as
extensive a range of geometric primitives and operations, while Rosetta only provides those
common to its targeted CAD applications. 73 Python, Racket and DesignScript are multi-paradigmatic
languages; they allow users to “select which [programming] paradigms or combination of paradigms
are appropriate.” 74 For example, in Python, a user can write a program in an imperative style, yet
use functional-style abstractions such as list comprehensions 75 to create a collection of elements—a
common design operation—more easily. DesignScript is unique amongst the surveyed languages
though, in terms of its support for an associative programming paradigm, which is a form of graphbased dependency modelling. 76
71 José Lopes and António Leitão, “Portable Generative Design for CAD Applications,” in Integration through Computation: Proceedings of the 31st Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), ed. Joshua Taron et al. (Calgary/Banff: The University of Calgary, 2011), 196–203.
72 Robert Aish et al., “Progress towards Multi-Criteria Design Optimisation using DesignScript with SMART Form, Robot Structural Analysis and Ecotect Building Performance Analysis,” in Synthetic Digital Ecologies: Proceedings of the 32nd Annual Conference of the Association for Computer Aided Design in Architecture, ed. Jason Johnson et al. (San Francisco: California College of the Arts, 2012), 47–56.
73 Lopes and Leitão, “Portable Generative Design for CAD Applications,” 198.
74 Robert Aish, “DesignScript: Origins, Explanation, Illustration,” 2.
75 For example, the creation of repetitive elements is a common idiom in architectural design. List comprehensions provide a concise way to generate large collections of such elements.
76 Robert Aish, “DesignScript: Scalable Tools for Design Computation,” in Computation and Performance: Proceedings of the 31st eCAADe Conference, Volume 2, ed. Rudi Stouffs and Sevil Sariyildiz (Delft: Delft University of Technology, 2013), 91.
2.1.2 Visual programming systems
Visual programming is distinguished by its use of graphical rather than linguistic notations. 77 More
specifically, Nickerson 78 states that these graphical notations take the form of diagrams. Here, a
diagram is defined as a “figure drawn in such a manner that the geometrical [and topological]
relations between parts of the figure illustrate relations between other objects.” 79 Visual
programming systems have been extensively used in domains such as education 80, engineering 81 and
music synthesis 82. Their introduction to architecture, though, has been a recent development. The two
most established visual programming systems in architecture 83, Generative Components 84 and
Grasshopper 85, will be reviewed. Both were developed with architects as the target audience and
are dataflow 86 programming systems.
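The dataflow idea that both systems build on (operations as graph nodes, data flowing along the edges) can be illustrated with a minimal Python sketch; the class names here are invented for illustration and bear no relation to either system's actual implementation:

```python
class Node:
    """Minimal dataflow node: an operation plus its upstream nodes."""

    def __init__(self, op, *inputs):
        self.op = op          # the operation this node performs
        self.inputs = inputs  # upstream nodes (the graph's incoming edges)

    def value(self):
        # Pull evaluation: data flows from the inputs into the operation.
        return self.op(*(node.value() for node in self.inputs))


class Param(Node):
    """A parameter node that simply holds a value (no incoming edges)."""

    def __init__(self, v):
        self.v = v

    def value(self):
        return self.v


# A two-node graph: a radius parameter wired into a doubling operation.
radius = Param(3.0)
diameter = Node(lambda r: 2 * r, radius)

print(diameter.value())  # 6.0
radius.v = 5.0           # edit the parameter ...
print(diameter.value())  # 10.0  ... and the change propagates downstream
```

Editing a parameter and re-evaluating downstream nodes is the essence of the immediate feedback that dataflow systems such as GC and Grasshopper provide, although both use far richer update machinery than this pull-based sketch.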
Generative Components (GC) is an “object oriented, feature based modelling system and
development environment” 87 that is integrated with the MicroStation CAD platform 88. In GC, an
associative or parametric model is built in a series of steps known as transactions. Each transaction
can consist of more than one modelling operation, which usually involves either creating or editing
features. There are almost forty feature types, which can be geometry or data, alongside about 500
update methods for creating them 89; the latter constitute the nodes that can be added to the
dataflow graph. A relationship is established between two features by specifying one as an argument
77 Bonnie Nardi, A Small Matter of Programming (Cambridge/MA: MIT Press, 1993), 61.
78 Jeff Nickerson, “Visual Programming,” (PhD diss., New York University, 1994), 5.
79 Nickerson, “Visual Programming,” 6.
80 Scratch is a programming environment geared towards creating stories, games and animations. “Scratch,” Scratch, accessed January 1st 2016, http://scratch.mit.edu/ Squeak is a modern implementation of the Smalltalk programming language and environment. “Welcome to Squeak,” Squeak, accessed January 1st 2016, http://www.squeak.org/
81 LabVIEW (Laboratory Virtual Instrument Engineering Workbench) is a product of National Instruments. “NI LabVIEW,” National Instruments, accessed January 1st 2016, http://www.ni.com/labview/
82 Max/MSP is a product of Cycling 74. “MAX,” Cycling 74, accessed January 1st 2016, http://cycling74.com/products/max/
83 Architects also use other visual programming systems such as Dynamo for Revit, Houdini and Maya’s hypergraph.
84 Generative Components (GC) is a product of Bentley Systems. “Bentley,” Bentley Systems, accessed January 1st 2016, https://www.bentley.com/en/products/product-line/modeling-and-visualizationsoftware/generativecomponents
85 Grasshopper is a product of Robert McNeel & Associates. “Grasshopper,” Robert McNeel & Associates, accessed January 1st 2016, http://www.grasshopper3d.com/
86 Dataflow programming is based on the concept that a program “is a directed graph and that data flows between instructions, along its arc”. Wesley Johnston et al., “Advances in Dataflow Programming Languages,” in ACM Computing Surveys 36 no. 1 (2004): 1.
87 Menges, “Instrumental Geometry,” 44.
88 MicroStation is a 3d CAD Design and Modelling software and product of Bentley Systems. “MicroStation,” Bentley, accessed January 1st 2016, http://www.bentley.com/en-US/Products/MicroStation/
89 Generative Components v8i Quick Start Guide (Bentley Systems: 2008), 14.
in the other’s update method. GC also offers an abstraction mechanism called feature compilation
that allows a sub-graph to be collapsed into a single node, resulting in the creation of a new object
called a user defined feature. 90
Figure 2-5 A Generative Components model of the conic spiral is shown through multiple representations: a 3D
interactive window displays the geometry; a symbolic diagram represents relationships graphically; a
transaction window shows the sequence of modelling steps taken; and a feature window reveals the selected
feature’s (spiralPoint) properties.
Grasshopper is a visual programming language and environment for Rhinoceros. It claims to make
generative design using algorithmic techniques accessible to end-users without scripting
experience. 91 Grasshopper (version 0.0.60) offers about 590 components and parameters, which
represent operations and data respectively; they constitute the nodes of the graph. A wire can be
directly drawn from the output of one component/parameter to the input of another, adding an
edge in the graph. 92 The main abstraction mechanism offered by Grasshopper is the cluster. A subgraph can be collapsed or ‘clustered’ into a single component which is then manipulated as an entity.
The cluster can then be saved as a user object which can be reused in other Grasshopper programs.
90 Robert Aish and Robert Woodbury, “Multi-level Interaction in Parametric Design,” in Smart Graphics: Proceedings of the 5th International Symposium SG 2005, ed. Andreas Butz et al., (Berlin: Springer, 2005), 156.
91 “Grasshopper requires no knowledge of programming or scripting, but still allows designers to build form generators from the simple to the awe-inspiring”.
92 Invalid connections can also be made when the wrong data type is fed to an input, resulting in a raised exception.
Figure 2-6 A Grasshopper implementation of the conic spiral is a graph with 8 nodes and 9 edges. 93 The
terminal node in the graph (Pt) outputs the list of points on the spiral.
One difference between GC and Grasshopper is that the former favours, to some extent, a sequential
approach to modelling based on defining transactional steps, while the latter offers more freedom in
the order of activity, to the extent that invalid graphs can be created. A second difference regards
the mode of programming. In GC, users mainly create and edit features through a menu-based
interface which involves writing expressions; the graph is then automatically updated. In
Grasshopper, users directly manipulate components and wires in the graph.
2.1.3 Summary: Programming in architectural design
Of the few empirical studies conducted comparing text and visual programming in architectural
design, those by Leitão et al. 94 and Celani and Vaz 95 reported similar results, with the former group
stating:
“… VPLs [Visual Programming Languages] are more motivating for beginners,
allowing them to become productive sooner. On the other hand, TPLs [Text
93 António Leitão and Luis Santos, “Programming Languages for Generative Design: Visual or Textual,” in Respecting Fragile Places: 29th eCAADe Conference Proceedings, ed. Tadeja Zupancic et al. (Ljubljana: University of Ljubljana, 2011), 550.
94 Leitão and Santos, “Programming Languages for Generative Design: Visual or Textual,” 549–557; Leitão et al., “Programming Languages for Generative Design: A Comparative Study,” 139–162.
95 Gabriela Celani and Carlos Vaz, “CAD Scripting and Visual Programming Languages for Implementing Computational Design Concepts: A Comparison from a Pedagogical Point of View,” in International Journal of Architectural Computing 10, no. 1 (2012): 121–137.
Programming Languages] are considerably more productive for dealing with
complex problems …” 96
There are several reasons why visual programming languages may be more accessible. First, they can
offer a high level of abstraction so that users require less initial knowledge to begin programming. 97
Second, dataflow based visual languages provide immediate feedback when changes are made to
the graph. 98 Third, a more concrete programming process is offered whereby graphical objects, with
iconic representations, can be seen, explored and manipulated in place of abstract symbols. 99 This
helps to de-emphasise issues of syntax, which Kelleher and Pausch have identified to be a major
barrier to learning programming. 100
The acknowledged shortcoming of visual programming is its poor scalability. As Burnett et al. 101
emphasise, the implication is that visual programs cannot be expanded, in terms of size or general
applicability, without significant sacrifices in ease of maintenance, performance and
comprehensibility. This view is widely supported. Leitão and Santos identify poor maintainability as a
cause for the throwaway nature of Grasshopper programs 102; Aish states that “graph node
representation does not scale to more complex logic” 103; and Davis et al. highlight the problem of
graph diagrams being incomprehensible due to their “visual tangle of relationships.” 104 On the other
hand, textual code “is always amenable to a straight, serial reading” 105; and text programming
languages offer multiple abstraction mechanisms for hiding details and structuring code, which Davis
et al. emphasise, “enables designs that would be too complex otherwise.” 106
96 Leitão and Santos, “Programming Languages for Generative Design: Visual or Textual,” 550.
97 Margaret Burnett et al., “Scaling Up Visual Programming Languages,” in Computer 28 no. 3 (1995): 45.
98 Burnett et al., “Scaling Up Visual Programming Languages,” 45–46. Leitão et al., “Programming Languages for Generative Design: A Comparative Study,” 143.
99 Burnett et al., “Scaling Up Visual Programming Languages,” 45.
100 Caitlin Kelleher and Randy Pausch, “Lowering the Barriers to Programming: A Taxonomy of Programming Environments and Languages for Novice Programmers,” in ACM Computing Surveys 37 no. 2 (2005): 86.
101 Burnett et al., “Scaling Up Visual Programming Languages,” 45–54.
102 Leitão et al., “Programming Languages for Generative Design: A Comparative Study,” 144.
103 Aish, “DesignScript: Scalable Tools for Design Computation,” 88.
104 Daniel Davis et al., “Understanding Visual Scripts: Improving Collaboration through Modular Programming,” in International Journal of Architectural Computing 9 no. 4 (2011): 363.
105 Marian Petre and Thomas Green, “Learning to Read Graphics: Some Evidence that ‘Seeing’ an Information Display is an Acquired Skill,” in Journal of Visual Languages and Computing 4 (1995): 63.
106 Daniel Davis et al., “Untangling Parametric Schemata: Enhancing Collaboration through Modular Programming,” in Designing together—CAADFutures 2011: Proceedings of the 14th International Conference on Computer Aided Architectural Design, ed. Pierre Leclercq et al. (Liège: Les Éditions de l'Université de Liège, 2011), 66.
While the surveyed programming systems have been categorised as either textual or visual, some of
them can be more accurately described as hybrid systems 107 that privilege, to various degrees, one
form of notation over the other. For example, an alpha version of DesignScript Studio was released
in 2013 that supports visual graph-node diagramming in addition to textual scripting via code
blocks. 108 A graphical language front-end was planned for Rosetta, which has thus far only
accommodated textual languages. 109 Grasshopper, while primarily visual, also provides components
that allow users to script in Python, VB.NET or C# languages. Finally, GC offers a GCScript language
and an editor for writing expressions and scripts. 110
Hybrid systems potentially allow users to combine the merits of visual and text programming
languages. However, as Leitão points out, it is difficult to transition between the two as they are
based on different syntactic paradigms. 111 There have been recent efforts to tackle this problem.
DesignScript Studio has introduced a “node-to-code” feature that converts a region of the graph to
DesignScript code 112; such a feature helps to support novices in progressing from graph-node to
textual programming, which is less accessible but potentially more powerful once the user is
proficient in it. 113 In a related development, Rhinoceros introduced “Node in code” 114, which allows
programmers to call Grasshopper components as functions in a Python script. At the moment
though, due to an absence of empirical studies, it is unclear how effective these features are in
addressing this transition problem. Nonetheless, these developments represent a significant step
forward in resolving the dichotomy between visual and text programming languages.
107 Hybrid systems combine both visual and textual elements. Marat Boshernitsan and Michael Downes, Visual Programming Languages: A Survey, (Berkeley: University of California Berkeley, 2004), 2.
108 The DesignScript standalone editor was discontinued at the end of 2014 and instead integrated into the visual programming system Dynamo (version 7.0). Patrick Tierney, “DesignScript is now Dynamo,” Dynamo blog, accessed January 1st 2016, http://dynamobim.com/designscript-is-now-dynamo/
109 This language front-end is called RosettaFlow but has not been publicly released. Lopes and Leitão, “Portable Generative Design for CAD Applications,” 70–71.
110 Generative Components also provides such features as statement and expression builders to aid users in writing code.
111 António Leitão, “Teaching Computer Science for Architecture: A Proposal,” in Future Traditions: Rethinking Traditions and Envisioning the Future in Architecture through the Use of Digital Technologies, ed. José Sousa and João Xavier (Porto: FAUP, 2013), 96.
112 Aish, “DesignScript: Scalable Tools for Design Computation,” 88–89.
113 Unfortunately, the node-to-code functionality was not available for user trials at the time when the case studies were run. If this research were carried out again in the present timeframe, then such techniques may indeed offer significant advantages to the end-user programmer.
114 Steve Baer, “ghPython – New Component and parallel modules,” Steve Baer’s Notes, accessed January 1st 2016, http://stevebaer.wordpress.com/2013/12/11/ghpython-node-in-code/
2.2 Industrial robot programming
The concept of a programmable manipulator was originally proposed by George Devol in 1954 and
then subsequently realised in 1961 as the Unimate robot, which was immediately adopted for use by
General Motors and the Ford Motor Company. 115 Since then, robotic manipulators have found
widespread industrial application, for example in the automotive and electronics industries, 116
where they have been used to automate manufacturing processes. Common tasks carried out by
industrial robots include welding, spray-painting and material handling. Programming is the process
of instructing the robot how to perform a task. It involves choreographing a sequence of robotic
movements and actions, which results in the task’s completion. Industrial robot programming is
traditionally classified as on-line or off-line programming. 117
2.2.1 On-line robot programming
On-line programming uses “the actual robot in situ.” 118 It involves teach programming methods,
whereby the operator manually guides the robot through the process of completing a task, while
recording movement and other related data. This approach may be considered a form of
Programming by Example (PBE) 119, whereby the program is generated from a demonstration. Teach
programming can be further differentiated into lead-through 120 and teach-pendant programming 121
methods.
In lead-through programming, the operator guides the robot physically, usually via a handle
attached to its tip, through the entire motion. Meanwhile the controller 122 automatically records the
robot’s joint angles at fixed intervals. This stream of movement data can then be replayed later on to
115 Tobias Bonwetsch et al., “Towards a Bespoke Building Process,” in Manufacturing the Bespoke, ed. Bob Sheil (Chichester: Wiley and Sons, 2012), 78–82.
116 Martin Hägele et al., “Industrial Robotics,” in Springer Handbook of Robotics, ed. Bruno Siciliano and Oussama Khatib (Berlin: Springer), 964.
117 Hägele et al., “Industrial Robotics,” 977.
118 Hägele et al., “Industrial Robotics,” 977.
119 Nardi, A Small Matter of Programming, 71–77.
120 José Ceroni and Simon Nof, “Robotics Terminology,” in Handbook of Industrial Robotics, 2nd edition, ed. Shimon Nof (New York: John Wiley & Sons, 1999): 1286.
121 Ceroni and Nof, “Robotics Terminology,” 1293.
122 The controller is computer hardware, housed in a separate casing, which processes data for operating the robotic manipulator.
recreate the motion. This method is suited for programming processes such as spray-painting, which
involve continuous path movements.
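This record-and-replay cycle can be sketched in Python. All the function names here are hypothetical stand-ins for a controller's interface, not any real robot API:

```python
def record(sample_joints, n_samples):
    """Lead-through recording sketch: sample the robot's joint angles
    at fixed intervals while the operator guides the arm by hand.
    `sample_joints` is a hypothetical callback; a real controller
    would poll the servo feedback at a fixed clock rate instead."""
    return [sample_joints() for _ in range(n_samples)]


def replay(trajectory, move_to_joints):
    """Replay the recorded stream to recreate the continuous motion.
    `move_to_joints` stands in for the controller's drive command."""
    for joint_angles in trajectory:
        move_to_joints(joint_angles)


# Stand-in for a 6-axis arm being guided through two sampled instants.
samples = iter([[0, 10, 20, 0, 0, 0], [1, 11, 21, 0, 0, 0]])
trajectory = record(lambda: next(samples), 2)
replay(trajectory, print)  # 'print' stands in for the drive command
```

The essential property the sketch captures is that no explicit program text is written: the trajectory itself, sampled from the demonstration, is the program.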
In teach-pendant programming, the operator guides the robot via a tethered hand-held device called
a pendant, and uses it to enter instructions one at a time. The software on the pendant usually has a
menu-based interface with buttons mapping to available robot instructions. For example, to instruct
the robot to perform a point-to-point movement, the operator drives the robot to a target
position 123 and presses a button; a statement, which invokes a movement function with the
recorded target position as its argument, is automatically generated and added to a text program
shown on-screen. Subsequent movement and action-related instructions are appended to the
program in a similar way. The operator can also insert other forms of statements, for example
conditionals or loops, by using additional buttons that are available.
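The pendant's statement-generation mechanism described above can be sketched as follows; the class, the callback and the generated pseudo-DSL syntax are invented for illustration and do not correspond to any manufacturer's pendant software.

```python
# Sketch of teach-pendant programming: each button press records the current
# (jogged) robot position and appends a generated statement to a text program.
# The statement syntax is a generic pseudo-DSL, not a real robot language.

class Pendant:
    def __init__(self, read_position):
        self.read_position = read_position  # callback returning the jogged pose
        self.program = []                   # generated program, one statement per line

    def press_move_ptp(self):
        # Record the taught target and generate a point-to-point move statement.
        target = self.read_position()
        self.program.append("MOVE_PTP(%r)" % (target,))

    def press_set_output(self, pin, state):
        # Non-movement instructions are appended to the program the same way.
        self.program.append("SET_IO(%d, %s)" % (pin, state))

poses = iter([(600, -100, 800), (600, 100, 800)])
pendant = Pendant(lambda: next(poses))
pendant.press_move_ptp()
pendant.press_set_output(1, True)
pendant.press_move_ptp()
print("\n".join(pendant.program))
```

Each press contributes exactly one statement, which makes the approach accessible but also explains why teaching long programs is laborious.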
The generated text programs are written in proprietary programming languages provided by the
robot manufacturers. These languages can be described as Domain Specific Languages (DSL) because
of their limited expressivity, and some of them will be discussed shortly. The operator can run the
program, and save it for later use or editing in an off-line programming environment.
2.2.2 Off-line robot programming
An off-line program is developed “partially or completely without requiring the use of the robot
itself.” 124 It is created on a remote computer using software tools and then loaded into the robot
controller at a later stage. The manufacturing process does not have to be interrupted to program
the robot. This is advantageous from an economic perspective. General advances in computing
technology have also led to the development of increasingly powerful off-line programming
software tools. As a result, off-line programming has become the predominant approach. 125
Off-line programming software provided by robot manufacturers replicates the software found on
the robot controller, as well as the graphical interface on the teach pendant. Manufacturers also
provide simulation environments with a catalogue of virtual robots. Software users can load in CAD
geometry to represent the work-cell; apply teach programming methods, albeit to a virtual robot;
123 Depending on the type of pendant, the operator may move the robot using a joystick, physical buttons or virtual widgets shown on-screen. This process is also known as jogging.
124 José Ceroni and Simon Nof, “Robotics Terminology,” 1293.
125 Hägele et al., “Industrial Robotics,” 977.
and verify the program results through simulation. Examples of off-line programming software
include ABB’s RobotStudio 126 and KUKA’s OfficeLite/Sim Pro package 127.
An alternative off-line programming approach is to directly write the program in the appropriate DSL
within a text editor. Here, the DSLs for Universal Robots, which are the primary machines used in
this research, as well as for KUKA and ABB, which are leading robot manufacturers, are discussed
further. 128 A simple program that instructs the robot to draw a rectangle (Figure 2-7) is written in all
three languages to highlight their differences. 129
Figure 2-7 An example drawing program.
URScript is the programming language for Universal Robots. It is an imperative language that is
based on Python. It benefits from Python’s good readability and offers some of its standard control
abstractions such as for loops. Although URScript appears similar to Python, it lacks many of its features, such as support for dictionaries 130, recursion and object-oriented programming.
Moreover, programmers do not have access to the Python standard library.
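The gap between URScript and full Python can be made concrete with ordinary Python code that has no URScript equivalent. The snippet is illustrative only; the names are invented, and each feature shown (standard-library imports, dictionaries, recursion, classes) is one the text identifies as missing from URScript.

```python
# Standard Python features with no URScript counterpart (illustrative only).
import math  # URScript programs cannot import the Python standard library

# A dictionary mapping named targets to positions.
targets = {"home": (0.2, 0.0, 0.3), "pick": (0.6, -0.1, 0.5)}

# A recursive function; URScript does not support recursion.
def depth(n):
    return 0 if n == 0 else 1 + depth(n - 1)

# An object-oriented abstraction; URScript has no classes.
class Tool:
    def __init__(self, offset_z):
        self.offset_z = offset_z
    def tip(self, position):
        x, y, z = position
        return (x, y, z + self.offset_z)

pen = Tool(offset_z=0.25)
print(pen.tip(targets["pick"]))   # (0.6, -0.1, 0.75)
print(depth(4), math.pi > 3)      # 4 True
```

In URScript, the dictionary would have to be flattened into separate variables, the recursion rewritten as a loop, and the class replaced by free functions, which is one source of the scalability limits noted below.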
126 “RobotStudio,” ABB, accessed January 1st 2016, http://new.abb.com/products/robotics/robotstudio
127 “Simulation-Planning-Optimization software,” KUKA, accessed January 1st 2016, http://www.kukarobotics.com/en/products/software/simulation/
128 These languages are considered to be representative enough for this research as the majority of robots used in the context of architectural design are from these manufacturers. Fabio Gramazio et al., The Robotic Touch: How Robots Change Architecture (Zurich: Park Books, 2014), 484–487.
129 This program is based on an example from the ABB programming manual. ABB, Operating manual: Introduction to RAPID (2007), 26.
130 A dictionary is a built-in Python data type.
1   pen_pose = p[0.2, 0, 0.3, 0, 0, 0]
2   target1 = p[0.6, -0.1, 0.8, 3.142, 0, 0]
3   target2 = p[0.6, 0.1, 0.8, 3.142, 0, 0]
4   target3 = p[0.8, -0.1, 0.8, 3.142, 0, 0]
5   target4 = p[0.8, 0.1, 0.8, 3.142, 0, 0]
6
7   def main():
8       set_tcp(pen_pose)
9       movel(target1, 0.2)
10      movel(target2, 0.2)
11      movel(target3, 0.2)
12      movel(target4, 0.2)
13
14      movel(target1, 0.2)
15  end

Figure 2-8 Drawing program written in URScript.
URScript offers robotics specific functions organised in the following modules: motion, internals,
math and interfaces. It also introduces a specialised Pose data type to represent the position and
orientation of the robot tip (Figure 2-8: lines 1–5), as well as a thread control statement to handle concurrency. Users can define their own custom functions (Figure 2-8: lines 7–14), and this is the primary means of structuring a URScript program. However, they cannot define custom data abstractions and, because of the lack of import mechanisms, must ensure that programs are self-contained. Consequently, URScript programs are limited in terms of scalability.
The KUKA Robot Language (KRL) is “an imperative programming language similar to Pascal” 131 that is
used for programming KUKA robots. Several of KRL’s syntactic features adversely affect its
readability. The language introduces symbols such as “$” and “#” (Figure 2-9: lines 4 and 5), which are uncommon in other languages; requires text to be in upper-case; and uses abbreviated, non-descriptive names for procedures. For example, the role of the LIN procedure (Figure 2-9: lines 13–17), which refers to a linear movement, is difficult to infer from the name alone.
131 Henrik Mühe et al., “On Reverse-engineering the KUKA Robot Language,” (paper presented at the 1st International Workshop on Domain-Specific Languages and models for Robotic systems, Taipei, October 22nd, 2010).
1   DEF PROG()
2
3   PDAT_ACT = {VEL 50,ACC 100,APO_DIST 100}
4   FDAT_ACT = {TOOL_NO 13,BASE_NO 6,IPO_FRAME #BASE}
5   BAS (#PTP_PARAMS,50)
6
7   DECL POS P1, P2, P3, P4
8   P1 = {POS: X 600, Y -100, Z 800, A 180.0, B 0.0, C 0.0}
9   P2 = {POS: X 600, Y 100, Z 800, A 180.0, B 0.0, C 0.0}
10  P3 = {POS: X 800, Y -100, Z 800, A 180.0, B 0.0, C 0.0}
11  P4 = {POS: X 800, Y 100, Z 800, A 180.0, B 0.0, C 0.0}
12
13  LIN P1 C_DIS
14  LIN P2 C_DIS
15  LIN P3 C_DIS
16  LIN P4 C_DIS
17  LIN P1 C_DIS
18
19  END

Figure 2-9 Drawing program written in KUKA Robot Language (KRL).
KRL offers standard control structures for iteration and selection. It also offers standard primitive
data types as well as STRUC 132 and ENUM 133 types. KRL provides a library of robotics specific
procedures relating to motion, inputs/outputs, communication and concurrency; as well as robotics
specific data abstractions like POS (Figure 2-9: line 8) and AXIS, which are pre-defined STRUCs.
Besides creating new procedures, programmers can also define new data abstractions derived from
the STRUC type. Code may be structured using functions and sub-programs and data objects can be
imported. A KRL program is divided into a SRC file and a DAT file, with the former containing the
code and the latter a list of stored data and point coordinates. This organization introduces hidden
dependencies in the SRC file’s code, where for example, a variable defined in the DAT file is
referenced (Figure 2-9: line 4).
RAPID is an imperative programming language for ABB robots. Several of RAPID’s syntactic features
impact its readability. There are subtle distinctions in syntax, for example between colon and semicolon symbols, which may be confusing or easily overlooked 134; and inconsistent letter casing, where
132 A STRUC is a structure data type that is typically used to encapsulate small groups of related variables.
133 ENUM refers to an enumeration data type.
134 John Pane and Brad Myers, Usability Issues in the Design of Novice Programming Systems (Pittsburgh: Carnegie Mellon University, 1996), 11.
some keywords are in uppercase and others in lowercase. Moreover, RAPID requires users to
explicitly declare both variable and data types during assignment (Figure 2-10: lines 1–2), while
URScript for example does not.
1   PERS tooldata tPen := [TRUE, [[200, 0, 30], [1, 0, 0, 0]],
2       [0.8, [62, 0, 17], [1, 0, 0, 0], 0, 0, 0]];
3   CONST robtarget p1 := [[600, -100, 800], [0.707170, 0, 0.707170, 0],
4       [0, 0, 0, 0], [9E9, 9E9, 9E9, 9E9, 9E9, 9E9]];
5   CONST robtarget p2 := [[600, 100, 800], [0.707170, 0, 0.707170, 0],
6       [0, 0, 0, 0], [9E9, 9E9, 9E9, 9E9, 9E9, 9E9]];
7   CONST robtarget p3 := [[800, -100, 800], [0.707170, 0, 0.707170, 0],
8       [0, 0, 0, 0], [9E9, 9E9, 9E9, 9E9, 9E9, 9E9]];
9   CONST robtarget p4 := [[800, 100, 800], [0.707170, 0, 0.707170, 0],
10      [0, 0, 0, 0], [9E9, 9E9, 9E9, 9E9, 9E9, 9E9]];
11
12  PROC main()
13      MoveL p1, v200, fine, tPen;
14      MoveL p2, v200, fine, tPen;
15      MoveL p3, v200, fine, tPen;
16      MoveL p4, v200, fine, tPen;
17      MoveL p1, v200, fine, tPen;
18  ENDPROC

Figure 2-10 Drawing program written in RAPID.
RAPID offers standard control structures as well as primitive data types. It provides a set of robotics
specific procedures, termed instructions, which are organised into motion, input/output,
communication, interrupts, error recovery and math modules. Compared to URScript and KRL, it
also offers task specific instructions, which are at a higher process-centric level of abstraction 135,
through its spot-welding, arc-welding and glueware packages. RAPID also provides specialised pose,
position, orientation and target (Figure 2-10: line 3) data-types. Users can create new procedural
abstractions in the form of functions, routines and trap routines 136; as well as custom data
abstractions 137 that are compositions of atomic data types. A RAPID program can be composed of several modules, each further comprising multiple routines.
135 Hägele et al., “Industrial Robotics,” 979.
136 A trap routine is essentially an event handler, whereby the event is an INTERRUPT.
137 These composite data types are called records.
While the surveyed DSLs differ in terms of syntax, they offer similar types of data and procedural
abstractions: a pose or orientation data type, and procedures relating to motion, input/output (IO),
communication and concurrency. These abstractions constitute an essential vocabulary for robot
programming. Some manufacturers provide an Integrated Development Environment (IDE) to write
programs in. For example, RobotStudio includes a dedicated IDE for writing RAPID code. This
environment offers features such as syntax highlighting, code-folding and auto-completion, which ease the coding task substantially. However, other manufacturers, like Universal Robots, do not
offer equivalent IDEs.
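The shared vocabulary identified above, a pose data type plus motion and IO procedures, can be summarised in a language-neutral sketch. The class and method names below are illustrative and deliberately belong to none of the surveyed DSLs.

```python
# A language-neutral sketch of the essential robot-programming vocabulary
# common to the surveyed DSLs: a pose abstraction plus motion and IO
# procedures. All names are illustrative.
from collections import namedtuple

# Position (x, y, z) and orientation (rx, ry, rz) of the robot tip.
Pose = namedtuple("Pose", "x y z rx ry rz")

class Program:
    """Accumulates generic robot instructions as (opcode, arguments) tuples."""
    def __init__(self):
        self.instructions = []

    def move_linear(self, pose, speed):      # motion procedure
        self.instructions.append(("movel", pose, speed))

    def set_digital_out(self, pin, state):   # IO procedure
        self.instructions.append(("set_do", pin, state))

prog = Program()
prog.move_linear(Pose(0.6, -0.1, 0.8, 3.142, 0, 0), speed=0.2)
prog.set_digital_out(1, True)
print(len(prog.instructions))  # 2
```

Mapping this skeleton onto URScript, KRL or RAPID is mostly a matter of renaming: movel/LIN/MoveL for the motion procedure, and p[...]/POS/robtarget for the pose type.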
2.2.3 Summary: Industrial robot programming
Teach programming methods allow users to create a program by directly manipulating a physical
(on-line) or virtual (off-line) robot. This accounts for their accessibility. These methods offer a means
of robot control to operators who may have an implicit understanding of how a task should be
performed, but lack the knowledge to express their plan in a formal programming language. The
ease of lead-through programming is greater than that of teach-pendant programming. In the latter
case, the operator has to learn how to use the teach-pendant software. In general, teach
programming methods have limited scalability. An operator usually has to demonstrate the entire
process, with each step involving some degree of robot manipulation. As Yong and Bonney state, the
“time taken [with teach programming] quite often rises disproportionately with increasing
complexity of the task.” 138
Off-line programming approaches that involve coding in a textual DSL are less accessible than teach
programming methods. This is mainly because operators have to be knowledgeable about
programming in the DSL. Such knowledge may be more commonplace if the DSL is based on general
programming languages that are widely used today. This is somewhat the case for URScript, which
resembles the popular Python language. 139 However, DSLs such as KRL and RAPID, which were
introduced earlier, differ significantly from modern programming languages in terms of syntax, and
may therefore be unfamiliar even to experienced programmers. 140
138 Yong, Y. F. and Maurice Bonney, “Off-line Programming,” in Handbook of Industrial Robotics, 2nd edition, ed. Shimon Nof (New York: John Wiley & Sons, 1999), 353.
139 On January 1st 2016, Python was ranked the 5th most popular programming language on the TIOBE index. “TIOBE index for January 2016,” TIOBE, accessed January 31st 2016, http://www.tiobe.com/tiobe_index
140 ABB introduced RAPID alongside their S4 robot controllers in 1994. KUKA introduced the windows-based KR C1 controller in 1996 (the current version is KR C4). The evolution of these two DSLs was constrained by the
Compared to teach programming, though, off-line programming in a DSL is a more scalable approach. The waypoints of a motion path may be generated programmatically, which is more efficient than having to teach positions individually. This is especially so if there are numerous waypoints, as is the
case in a complex task. In addition, it is easier to edit and organise code directly using a keyboard
and text editor, rather than via a menu/button-based interface. This can facilitate a structured
approach to programming, based on adapting and re-using code modules to build more complex
programs.
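The scalability argument can be made concrete: a short loop generates arbitrarily many waypoints that, with teach programming, would each require individual jogging and recording. The generated command strings below only loosely mimic URScript syntax, and the path parameters are invented.

```python
# Sketch of why textual off-line programming scales: the waypoints of a
# motion path are computed in a loop instead of being taught one by one.
# The generated command strings only loosely mimic URScript.

def zigzag_waypoints(x0, y0, width, step, rows):
    """Generate the corner points of a zigzag path over a rectangular area."""
    points = []
    for row in range(rows):
        y = y0 + row * step
        # Alternate the sweep direction on every second row.
        xs = (x0, x0 + width) if row % 2 == 0 else (x0 + width, x0)
        points.append((xs[0], y))
        points.append((xs[1], y))
    return points

def to_commands(points, z=0.8, speed=0.2):
    return ["movel(p[%g, %g, %g, 3.142, 0, 0], %g)" % (x, y, z, speed)
            for x, y in points]

commands = to_commands(zigzag_waypoints(0.6, -0.1, 0.2, 0.05, rows=4))
print(len(commands))  # 8 waypoints from 5 input numbers
print(commands[0])
```

Doubling the number of rows doubles the waypoint count without any additional teaching effort, which is precisely where pendant-based methods break down.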
2.3 Robot programming in architecture
In 2005, the Chair of Architecture and Digital Fabrication at ETH Zurich introduced the industrial
robot to an architectural context. 141 Since then such machines have been used to fabricate bespoke
artefacts ranging from models to building structures.
In general, fabrication processes can be classified as subtractive, additive or formative. Each
category requires a different robot programming approach. For subtractive processes such as
milling 142, the end-effector has to move continuously over a surface. The main task is to specify a
motion path, which is most easily described through geometric entities such as curves. Additive
processes such as brick-laying 143 involve many discrete pick and place events. The primary task is to
specify motion targets and input/output (IO) actions, while the actual trajectory of the robot is
secondary. Formative processes such as metal folding 144 require the robot to exert force in sharp
movements. The chief task is to specify and coordinate localised motion, IO actions as well as
changes in speed.
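For an additive process, the programming task described above amounts to pairing motion targets with IO actions for each discrete placement event. The sketch below illustrates this for a running-bond brick wall; the pick position, brick dimensions, event encoding and gripper actions are all hypothetical.

```python
# Sketch of additive-process programming: each brick becomes a discrete
# pick-and-place event pairing motion targets with IO (gripper) actions.
# Pick position, brick dimensions and gripper signals are hypothetical.

BRICK_LENGTH, BRICK_HEIGHT = 0.24, 0.06
PICK = (1.0, 0.0, 0.1)  # fixed pick-up station

def brick_events(courses, bricks_per_course):
    events = []
    for course in range(courses):
        for i in range(bricks_per_course):
            # Running bond: every second course is shifted by half a brick.
            x = i * BRICK_LENGTH + (course % 2) * BRICK_LENGTH / 2
            place = (x, 0.0, course * BRICK_HEIGHT)
            events += [("move", PICK), ("io", "close_gripper"),
                       ("move", place), ("io", "open_gripper")]
    return events

events = brick_events(courses=2, bricks_per_course=3)
print(len(events))  # 6 bricks x 4 events each = 24
```

Note that the robot's trajectory between pick and place never appears in the event list, reflecting the point that for additive processes the targets and IO actions are primary and the trajectory secondary.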
140 (cont.) need to retain backwards compatibility with legacy robot controllers, which remain in use today. By contrast, Universal Robots only introduced the UR5 robotic arm in 2009. See: “ABB Robotics Historical Milestones,” ABB, accessed January 1st 2016, http://new.abb.com/products/robotics/home/about-us/historical-milestones; “History,” KUKA, accessed January 1st 2016, http://www.kukarobotics.com/en/company/group/milestones/1996.htm; “About Universal Robots—Our History,” Universal Robots, accessed January 1st 2016, http://www.universal-robots.com/about-universal-robots/our-history/
141 Gramazio and Kohler, Digital Materiality in Architecture: 49.
142 Clemens Neugebauer and Martin Kölldorfer, “Fabricating the Steel Bull of Spielberg,” in Robotic Fabrication in Architecture, Art and Design, ed. Sigrid Brell-Cokcan and Johannes Braumann (Vienna: Springer), 130–136.
143 Ralph Bärtschi et al., “Wiggled Brick Bond,” in Advances in Architectural Geometry, ed. Cristiano Ceccato et al. (Vienna: Springer, 2010), 137–148.
144 “Curved Folding,” Gramazio Kohler Research, accessed January 1st 2016, http://gramaziokohler.arch.ethz.ch/web/e/lehre/207.html
Programming systems currently used in architectural design cannot, in their present state, be applied to programming robots, because they offer the wrong abstractions. Meanwhile, on-line industrial robot
programming solutions are unlikely to be feasible because architectural construction processes have
little repetition. At the same time, commercial software developers have yet to develop new
products that target architects who wish to program robots for fabrication purposes. Architects
currently approach this problem by developing new workflows around the use of existing solutions,
or by taking the lead in developing custom solutions from scratch.
2.3.1 Combining existing solutions
One solution is to combine scripting in a CAD application with the use of off-line robot programming
software. A design is first generated in the CAD application and relevant geometric data is then
imported into the off-line programming software. For example, this workflow was applied in the
Shifted Frames 145 elective course at ETH Zurich; students wrote Python scripts to generate the
design of their structures in Rhinoceros, and instructors subsequently wrote the robot programs in
RAPID using RobotStudio. However, this approach is only valid if off-line programming software
exists for the particular robot used in the first place. 146
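The handoff in this workflow is essentially an exchange of geometric data between the two environments. The sketch below shows a design-side export and a robot-side import through plain text; the one-line-per-frame format and all names are invented for illustration, not taken from Rhinoceros or RobotStudio.

```python
# Sketch of the CAD-to-off-line-programming handoff: the design script
# exports target frames as plain text, and the robot programming side
# parses them back. The one-line-per-frame format is invented here.

def export_frames(frames):
    """Serialise (x, y, z) design targets, one per line."""
    return "\n".join("%g %g %g" % f for f in frames)

def import_frames(text):
    """Parse the exchanged text back into target tuples."""
    return [tuple(float(v) for v in line.split()) for line in text.splitlines()]

design_targets = [(600, -100, 800), (600, 100, 800), (800, -100, 800)]
handoff = export_frames(design_targets)      # written to file in practice
robot_targets = import_frames(handoff)       # read on the programming side
print(robot_targets == design_targets)
```

Every design change must be pushed through this export/import step again, which is one practical expression of the design-production separation criticised below.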
The main advantage with this approach is that each programming solution is optimised for its target
domain and provides the specialised abstractions essential for computational design or robotic
control. Furthermore, CAD software provides additional functionalities for visualising and documenting designs, while off-line programming software offers simulation capabilities.
Nonetheless, there are two major drawbacks.
First, architects are obliged to learn a new software and language for robot programming. There are
significant syntactic and semantic differences 147 between languages used for computational design
and robot programming. As a consequence of their mismatch, knowledge of either language is non-transferable. Second, this approach reinforces the separation between design and production 148,
145 “Shifted Frames,” Gramazio Kohler Research, accessed January 1st 2016, http://gramaziokohler.arch.ethz.ch/web/e/lehre/228.html
146 Therefore this approach cannot be used with Universal Robots.
147 For example, Python uses ‘=’ for assignment and ‘==’ for comparison, while RAPID uses ‘:=’ and ‘=’ respectively; the ‘{}’ symbol has different meanings; Python uses exceptions for error handling, RAPID uses trap routines.
148 Jan Willmann et al., “Digital by Material: Envisioning an extended performative materiality in the digital age of architecture,” in Robotic Fabrication in Architecture, Art and Design, ed. Sigrid Brell-Cokcan and Johannes Braumann (Vienna: Springer), 12.
both in practice 149 and conceptually. Designs are finalised in one software environment before robot
programming commences in another. This can lead to the belated recognition of construction
problems, and represents a missed opportunity to inform design decisions based on fabrication
constraints. Moreover, because the robot programming is always preceded by design, it is often
undervalued as a creative act in itself. 150
2.3.2 Custom visual based programming solutions
Another approach is to develop new programming solutions. Recent efforts in this direction have
focused on the visual dataflow programming paradigm. In the last four years alone, five robot
programming plugins were developed for Grasshopper and made publicly available. 151 They are, in
order of release: KUKA|PRC 152, HAL 153, Crane 154, Godzilla 155 and Scorpion 156. These plugins extend
Grasshopper with functionalities so that it can be used as an off-line robot programming
environment. They offer a set of robotics specific components that can be used in conjunction with
native Grasshopper components. As a consequence, the drawbacks of the previous approach are
addressed, because architects only need to learn one programming language and can author design
and robot control data concurrently from a single environment. To illustrate their features more
149 In the Shifted Frames elective courses, this separation was reflected in the delegation of tasks. Students focused on design before handing the model over to instructors who programmed the robot.
150 The constructive capabilities of a robotic fabrication process determine the limits of a design space. Since the process is itself open to design, via physical tooling and programming, these limits can be shifted. Michael Budig et al., “Integrating Robotic Fabrication in the Design Process,” in Architectural Design: Made by Robots, ed. Fabio Gramazio and Matthias Kohler (London: John Wiley & Sons, 2014), 31.
151 At the moment, plugins have not been developed for other visual programming environments such as Dynamo or DesignScript Studio.
152 KUKA|prc is developed by Johannes Braumann and Sigrid Brell-Cokcan. Association for Robots in Architecture, accessed January 1st 2016, http://www.robotsinarchitecture.org/kuka-prc
153 HAL robot programming and control was originally developed by Thibault Schwartz. HAL Robotics Ltd., accessed January 1st 2016, http://www.hal-robotics.com/
154 Crane was developed by Brian Harms in 2012. Crane Robotics, accessed January 1st 2016, http://cranerobotics.com/
155 Godzilla is a product of RoboFold Ltd. It was later renamed Robots.IO. Robots.IO, accessed January 1st 2016, http://robots.io/wp/
156 Scorpion was developed by Khaled Elashry and Vincent Huyghe under supervision from Sean Hanna and Ruairi Glynn at the Bartlett School of Architecture. Scorpion robotics, accessed January 1st 2016, http://scorpion-robotics.com/
clearly, the following plugins are discussed with respect to the example files provided by their
developers. 157
Figure 2-11 Example KUKA|prc Grasshopper program.
KUKA|prc is a commercial programming solution for KUKA robots. It has been used in several
educational workshops 158 as well as for commercial projects 159. It (version 2014-3-31) offers 42
components organised into 5 functionally distinct groups: core, toolpath, utilities, virtual robot and
virtual tools. The underlying concept is that components map directly to procedures in the KRL
library. For example, a LIN procedure has a corresponding component that generates a KRL
formatted linear movement command. With reference to Figure 2-11, the process of creating a program involves three main steps: first, relevant input data is extracted from geometry; second, a sequence of commands is generated by components that receive these inputs; and third, these commands, together with robot and tool 160 data, are fed into a Core component, which produces a robot simulation and saves a generated KRL program to file.
While KUKA|prc offers low level command components that can be combined to program additive
and formative processes, it is geared towards subtractive milling, which it was originally developed
for. 161 In addition, KUKA|prc does not expose the full functionalities offered by the KRL library. For
157 These example files are geared towards showing novices how to set up a basic robot program, as well as highlighting the plugin’s key features.
158 Sigrid Brell-Cokcan and Johannes Braumann, “Industrial Robots for Design Education: Robots as Open Interfaces beyond Fabrication,” in Global Design and Local Materialization: 15th International Conference, CAAD Futures 2013, ed. Jianlong Zhang and Chengyu Sun (Berlin: Springer, 2010), 109–117.
159 KUKA|prc was used to mill the formwork for casting a 23m high aluminium arch. Clemens Neugebauer and Martin Kölldorfer, “Fabricating the Steel Bull of Spielberg,” in Robotic Fabrication in Architecture, Art and Design, ed. Sigrid Brell-Cokcan and Johannes Braumann (Vienna: Springer), 130–136.
160 The robot and tool objects are generated from components in the Virtual Robot and Virtual Tool groups.
161 Sigrid Brell-Cokcan and Johannes Braumann, “A New Parametric Design Tool for Robot Milling,” in LIFE in:formation, On Responsive Information and Variations in Architecture: Proceedings of the 30th Annual
example, it does not provide a way to generate control flow statements, or interrupts and triggers, which are KRL’s event-handling mechanisms.
Figure 2-12 Example HAL Grasshopper program.
HAL is a commercial programming solution that targets multiple robot platforms 162. Similar to
KUKA|prc, it has been used in several teaching and research applications 163. HAL (version 5.0) offers 154 components—the most numerous amongst the surveyed Grasshopper plugins. It introduces a
key abstraction known as a Toolpath, which is a custom class whose properties include targets,
motion types and speeds. Users program the robot by creating Toolpath objects. The process of
creating a robot program, shown in Figure 2-13, involves 13 different components and 5 sequential
stages: 1) target generation, 2) toolpath creation, 3) simulation, 4) code generation and 5) export or
streaming.
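The Toolpath idea can be approximated in a few lines of code. This is a loose re-imagining of the concept for illustration, not HAL's actual class or API: the property names and the duration heuristic are invented.

```python
# Loose sketch of a Toolpath-style abstraction (not HAL's actual API):
# a toolpath bundles targets with per-target motion types and speeds.

class Toolpath:
    def __init__(self):
        self.segments = []  # (target, motion_type, speed) triples

    def add(self, target, motion="linear", speed=0.2):
        self.segments.append((target, motion, speed))

    def duration_estimate(self):
        # Crude estimate: straight-line distance divided by speed, per segment.
        total, prev = 0.0, None
        for target, _, speed in self.segments:
            if prev is not None:
                dist = sum((a - b) ** 2 for a, b in zip(prev, target)) ** 0.5
                total += dist / speed
            prev = target
        return total

path = Toolpath()
path.add((0.0, 0.0, 0.5))
path.add((0.4, 0.0, 0.5), speed=0.2)
print(round(path.duration_estimate(), 2))
```

The value of such an abstraction is that simulation, code generation and streaming can all consume the same object, which is how HAL's five-stage pipeline hangs together.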
In addition, HAL offers advanced IO programming packs, 164 multiple robot-programming support,
and specialised milling, hotwire cutting and assembly components. However, it is arguable whether
this extensive feature set comes at the expense of usability. HAL components constitute about a
quarter of the total number of standard Grasshopper components. According to Lewis and Olson, a
profusion of programming primitives creates a significant cognitive barrier; 165 in this case, users have
161 (cont.) Conference of the Association for Computer Aided Design in Architecture, ed. Aaron Sprecher et al. (New York: Cooper Union and Pratt Institute, 2010), 357–363.
162 HAL supports ABB, KUKA and Universal robots and provides 30 of them as predefined parameters.
163 The workshops have been run in schools such as Ecole supérieure des Beaux Arts du Mans and conferences like Smart Geometry. HAL has also been used for projects at the architectural practice EZCT. Thibault Schwartz, “HAL: Extension of a Visual Programming Language to Support Teaching and Research on Robotics Applied to Construction,” in Robotic Fabrication in Architecture, Art and Design, ed. Sigrid Brell-Cokcan and Johannes Braumann (Vienna: Springer), 98–101.
164 These components allow users to generate control statements in a specific robot programming language, including event-handling and threading related statements.
165 Clayton Lewis and Gary Olson, “Can principles of programming lower the barriers to programming?” in Empirical studies of programmers: second workshop, ed. Gary Olson et al. (New Jersey: Ablex Publishing, 1987), 248–263.
to learn what each component does and how they can be combined. Moreover, several components
are non-intuitive to use, for example the Pick And Place component requires 30 input parameters.
Such components also take up a sizable amount of screen space and therefore render other parts of
the graph less visible. 166
Figure 2-13 Example Crane Grasshopper program
Crane is a programming solution for Stäubli robots. It was developed to operate the Robot House at
the Southern California Institute of Architecture (SCI-Arc) and used in academic projects. In place of
compiled components, Crane offers 22 user objects that are primarily VB.NET scripting components
and clusters. There are four main categories of components: the first generates robot meshes for
visualisation purposes; the second provides convenience functions; the third interfaces with external
devices through User Datagram Protocol (UDP); and the final group provides kinematics and
program generation functionalities. Figure 2-13 illustrates the process of creating a program, which
has two main steps: 1) target planes are fed into an Inverse Kinematics (IK) solver that converts them
to robot joint angles; 2) these angles are then given as one of the 23 inputs to the VAL 167 project
creator component which generates the program.
166 Nardi considers inefficient use of screen space as a serious problem of visual notations. Nardi, A Small Matter of Programming, 65.
167 VAL is the proprietary robot language for the Stäubli robots.
Unlike the previous Grasshopper plugins, Crane allows users to access the sub-graphs encapsulated
in clusters as well as code within the VB.NET scripting components. These abstractions are
potentially open to end-user modification. However, in the case of the project creator component,
the sub-graph contains clusters that are sealed. Meanwhile, the VB.NET components encapsulate
relatively large scripts, with the Inverse Kinematic component having in excess of 500 lines of code.
The scripting components, being simplified text editors, are not the ideal environments for editing
such code. As a consequence, neither clusters nor components can be easily modified, which suggests
that end-user extensibility was not a key consideration in Crane’s design.
Figure 2-14 Example Godzilla Grasshopper program.
Godzilla (later renamed Robots.IO) is a commercial programming solution targeting multiple robot
platforms 168 and has been used to fabricate several installations 169. Godzilla (version 2.0) offers five
components: two for setting up a robot, two for setting up a tool and a key timeline component. To
create a program, a mesh that describes the physical workspace environment (Figure 2-14: 1), a
robot component (Figure 2-14: 2), and target planes (Figure 2-14: 3) are fed into the timeline
component (Figure 2-14: 4). Each target is represented on the timeline as a key-frame and additional
events such as IO actions can be manually added. Users can scroll through a simulation using the
timeline’s control buttons. The generated program is hidden from the user but can be sent to a
remote IP address. 170
The minimal number of components and use of an intuitive timeline metaphor 171 contribute
significantly to Godzilla’s accessibility. In addition, the multi-dimensionality of a visual programming
168 It currently supports ABB, KUKA, Universal Robots and Stäubli robots.
169 Godzilla was used to fabricate the Venice Architecture Biennale installation by Zaha Hadid Architects. “Venice Architecture Biennale,” Zaha Hadid Architects, accessed January 1st 2016, http://www.zahahadid.com/design/contribution-to-2012-venice-biennale-theme-‘common-ground’/
170 A separate plugin called MechaGodzilla is required in order to send the program to a robot.
171 The timeline and the related concepts of key-framing originated from animation software.
environment is exploited to convey more information. Several timelines can be arranged in rows in a single component (Figure 2-14); their juxtaposition helps users choreograph synchronised robotic
actions. However, Godzilla’s ease of use comes at the expense of programming functionality. Only
four operations are available—linear and joint movements, as well as digital and analogue IO actions.
In fact, Godzilla arguably functions more like a graphical user interface, since actions and motion
characteristics such as speed must be manually added or edited. This becomes a limitation when the
robotic process requires numerous actions or changes in motion types to be specified.
Figure 2-15 Example Scorpion Grasshopper program.
Scorpion is a programming solution targeting Universal Robots. It was developed at the University
College London, Bartlett School of Architecture and has been used in academic projects. 172 It offers
seven components 173 that are either Python scripting components or clusters saved as user objects.
As shown in Figure 2-15, there are four steps in the programming process: 1) target planes are
created from geometrical inputs, 2) they are fed into an IK solver that returns joint angles, 3)
components are used to generate movement commands based on these angle values; and 4) the
commands are formatted into a program that is sent to the robot. The key drawback of Scorpion is
that it only exposes two operations from the URScript library—linear movement and digital IO
commands.
172 Khaled Elashry and Ruairi Glynn, “An Approach to Automated Construction Using Adaptive Programming,” in Robotic Fabrication in Architecture, Art and Design 2014, ed. Wes McGee and Monica Ponce de Leon (New York: Springer, 2014), 51–66.
173 Two are for generating move or IO commands, two for visualising the robot and its end-effector, two for uploading and receiving data from the robot, and the last is a kinematics solver.
Like Crane, Scorpion allows users to view the sub-graphs of clusters and code within scripting
components. However, it too fails to capitalise on the potential of supporting end-user modification.
The IK solver is a cluster that, when opened, contains 214 components; six of them are scripting
components and another seven are clusters, which means that there is another deeper level of
abstraction. The cluster sub-graph suffers from readability issues because of its size and lack of
secondary notation 174. In addition, the feedback component (Figure 2-15: 5) encapsulates Python
code taken directly from a third party robot programming library 175. There are over 600 lines of code
introducing four custom classes and advanced concepts such as threading. To understand, let alone
modify, the clusters or feedback component, end-users already have to be experts in visual and text
programming respectively.
2.3.3 Custom text based programming solutions
UR 176 is a Python based robot programming solution originally developed at the Chair of Architecture
and Digital Fabrication for the robot-folding workshop 177. It consists of a package of Python modules
and classes that: wrap a subset of the URScript language; provide custom matrix and vector classes; handle socket communication with the robot; and offer access to the RhinoScript library. A Python
IDE is used as the main programming environment. By importing UR, architects can write a script
that generates designs using RhinoScript functions as well as the robot program using wrapped
URScript commands. Rhinoceros is used as a backend for visualising the results.
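A wrapper in this style might be sketched as follows. The class name URClient, its methods and the host address are invented for illustration and do not reproduce the actual UR package's API; only the movel command syntax and port 30002 (the controller's secondary interface) follow the documented Universal Robots interface.

```python
import socket

class URClient:
    """Accumulate wrapped URScript commands and send them to the controller."""

    def __init__(self, host="192.168.10.2", port=30002):
        self.host, self.port = host, port  # 30002: UR secondary interface
        self.lines = []

    def movel(self, pose, a=1.2, v=0.25):
        # pose is (x, y, z, rx, ry, rz); movel is a real URScript command
        self.lines.append("movel(p[{}], a={}, v={})".format(
            ", ".join(str(x) for x in pose), a, v))

    def program_text(self):
        # Assemble the buffered commands into a URScript program
        return "def prog():\n{}\nend\n".format(
            "\n".join("  " + line for line in self.lines))

    def send(self):
        # Transmit the assembled program to the robot over a TCP socket
        with socket.create_connection((self.host, self.port), timeout=5) as conn:
            conn.sendall(self.program_text().encode("utf-8"))

client = URClient()
client.movel((0.3, 0.0, 0.4, 0.0, 3.14, 0.0))
```

A design script could thus generate geometry with RhinoScript calls and, in the same loop, append the corresponding movement commands before calling send().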
S.A.N.S is a Python based robot programming solution with a strong focus on enabling interactive
control of the robot. 178 The underlying concept of S.A.N.S is that of a server which handles the
communication and data exchange between several clients. These clients can be robots, CAD
174 Scribble comments, groupings and colour are forms of secondary notation in Grasshopper. They act as perceptual cues that “convey extra meaning, above and beyond the ‘official’ semantics of the language.” Thomas Green and Marian Petre, “Usability Analysis of Visual Programming Environments: A ‘Cognitive Dimensions’ Framework,” Journal of Visual Languages and Computing 7, no. 2 (1996): 139.
175 Python-urx is a Python library for controlling Universal Robots developed by Olivier Roulet-Dubonnet. “GitHub python-urx,” accessed January 1st 2016, https://github.com/oroulet/python-urx
176 UR was originally developed by Dr. Ralph Bärtschi of ROB Technologies.
177 “Curved Folding,” Gramazio Kohler Research, accessed January 1st 2016, http://gramaziokohler.arch.ethz.ch/web/e/lehre/207.html
178 S.A.N.S was originally developed by Kathrin Dörfler and Romana Rust. “S.A.N.S Wiki,” S.A.N.S, accessed January 1st 2016, https://sites.google.com/site/sanswikipage/
applications and tablet devices. For the “Sensor and Actuator Networks” workshop, 179 participants
used the Eclipse IDE as the main programming environment from which to run the server application
as well as author design algorithms.
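The server-centred concept behind S.A.N.S can be illustrated with a minimal relay hub: clients connect over TCP and every message one client sends is forwarded to all the others. This is an assumption-laden sketch, not the actual S.A.N.S implementation; the handler class and the port number are invented.

```python
import socketserver

clients = []  # sockets of currently connected clients

class RelayHandler(socketserver.BaseRequestHandler):
    """Relay every message a client sends to all other connected clients."""

    def handle(self):
        clients.append(self.request)
        try:
            while True:
                data = self.request.recv(1024)
                if not data:  # client disconnected
                    break
                for other in clients:
                    if other is not self.request:
                        other.sendall(data)
        finally:
            clients.remove(self.request)

# To run the hub (robots, CAD applications and tablets would connect here):
# server = socketserver.ThreadingTCPServer(("0.0.0.0", 5005), RelayHandler)
# server.serve_forever()
```

In this arrangement a CAD application can publish updated target data, and a robot client receives it without either knowing about the other directly.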
A major advantage of UR and S.A.N.S over the previous solutions is their extensibility. For example,
in UR, end-users can add to the library of wrapped URScript functions, while in S.A.N.S, they can
create new clients. Consequently, users are able to include custom functionality as befits their needs,
so long as they have the required programming expertise. However, this highlights a main issue with
both solutions, which relates to accessibility. They offer a high abstraction barrier as end-users must
learn many new concepts, including advanced ones like classes, sockets and parsing, in order to
write a program. Unless suitable front-ends are developed, it is unlikely that novices can access the
considerable functionalities of these packages.
2.3.4 Summary: Robot programming in architecture
Beyond the additional learning costs involved, the disadvantage of using separate programming
systems for computational design and for programming robotic fabrication processes is that it
predisposes architects to think of the two as separate intellectual tasks. Consequently, the potential
to interweave design and fabrication logics is left unexplored. This divide can be addressed by the
earlier described custom visual and text based solutions, which allow architects to author design and
robot control data from one environment and using a single language.
However, to date, these solutions have not been compared in empirical studies. Nonetheless, the
comparative benefits and drawbacks of visual versus text programming systems as previously
discussed are applicable here. To varying degrees, the available Grasshopper plugins make robot
programming more accessible by: providing a high level of abstraction, for example by allowing
architects to work with curves rather than numbers; exploiting immediate feedback to provide
visualizations of robot states; and offering a concrete programming process whereby components
can be directly assembled or manipulated, as in the case of Godzilla’s timeline component, to
sequence control instructions.
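The “curves rather than numbers” abstraction can be illustrated with a small sketch: target positions are sampled along a parametric curve, so the user manipulates geometry while the toolpath coordinates are derived automatically. The curve here is a plain Python function standing in for a Rhino curve object, and the function name is invented.

```python
import math

def sample_curve(curve, n):
    """Sample n evenly spaced parameter values and return the curve points,
    stand-ins for the target planes a robot component would consume."""
    return [curve(i / (n - 1)) for i in range(n)]

# A helix standing in for a design curve drawn in the CAD environment.
helix = lambda t: (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t), t)

targets = sample_curve(helix, 10)  # ten target positions, no coordinates typed
```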
179 Kathrin Dörfler et al., “Interlacing: An Experimental Approach to Integrating Digital and Physical Design Methods,” in Robotic Fabrication in Architecture, Art and Design, ed. Sigrid Brell-Cokcan and Johannes Braumann (Vienna: Springer), 84.
Conversely, the problem of scalability that besets visual programming systems is likely to be
magnified. This is because the task is no longer limited to generating design solutions via programming, but extends to generating the corresponding robotic building instructions. Beyond this, there is an
additional problem with the Grasshopper solutions that relates to their lack of extensibility. The
commercial plugins essentially offer a hard-coded vocabulary of components with which the user
must be able to program all possible robotic processes. One solution, as exemplified by HAL, is to
offer a large library of components, but the probable trade-off is reduced usability. Meanwhile, the
text-based solutions like UR are extensible but only by very knowledgeable programmers because of
their high abstraction barrier.
The following chapter introduces YOUR—a custom robot programming solution that was developed
by the author for carrying out this research.
3 Methodology
This research adopts a case study based approach, which is a form of qualitative inquiry. 180 Two
cases were set up to study how architecture students carried out fabrication-related robot
programming tasks using a custom developed software solution named YOUR. Empirical data was
collected from multiple sources including robot programs, interviews and direct observations. It
provides the basis for evaluating decisions underlying YOUR’s design and for identifying new
requirements for it. In addition, the data is further interpreted through qualitative and quantitative
metrics, yielding a rich, detailed description of students’ programming process. Overall
themes from the case study results are then identified for further discussion.
3.1 Choice of approach
In general, qualitative research involves the collection of “open-ended, emerging data with the
primary intent of developing themes from [it].” 181 By contrast, quantitative research involves the
collection of “data in numerical rather than narrative form” 182 usually by setting up experiments with
controlled variables. While qualitative approaches are traditionally used in the social sciences, they
can be successfully applied in other disciplines, including as argued by Ko, 183 software engineering.
Creswell suggests that such approaches are appropriate when the topic is new and “needs to be
explored” 184; when “a complex, detailed understanding” 185 is required; and when “statistical
analyses simply do not fit the problem.” 186
The practice of programming robots for fabrication purposes is a recent one in the architecture
domain, and there is a lack of research literature that addresses it. A qualitative approach, which is
180 Creswell identifies five different qualitative approaches: narrative research, phenomenology, grounded theory, ethnography and case studies. John Creswell, Qualitative Inquiry and Research Design: Choosing Among Five Approaches, 2nd edition (London: Sage Publications, 2007), 6–10.
181 John Creswell, Research Design: Qualitative, Quantitative and Mixed Method Approaches, 2nd edition (London: Sage Publications, 2003), 18.
182 Robert Donmoyer, “Quantitative Research,” in The Sage Encyclopedia of Qualitative Research Methods, ed. Lisa Given (Thousand Oaks: Sage Publications, 2008), 713–718.
183 Andrew Ko, “Understanding Software Engineering through Qualitative Methods,” in Making Software: What Really Works, and Why We Believe It, ed. Andy Oram and Greg Wilson (Sebastopol: O’Reilly Media, 2011), 55–63.
184 Creswell, Research Design, 22.
185 Creswell, Qualitative Inquiry and Research Design, 40.
186 Creswell, Qualitative Inquiry and Research Design, 40.
exploratory in nature, is suitable in this case because it allows the requirements for a robot
programming solution to be discovered. Furthermore, the robot programming process cannot be
studied in isolation, because it is closely related to other tasks, such as computational design and
physical tooling, which contribute to its complexity. Finally, “numbers and statistics are of little
help” 187 when the aim is to understand, for example, how a robot programming solution is used.
3.2 Case studies
Two cases, 188 a Design Research Studio (DRS) and a workshop, were selected for study. The studio
was run in 2012, over a period of two semesters, and then repeated in 2013. The workshop was run
in 2014 and limited to a week. Both the DRS and workshop took place in a classroom/studio
environment and involved architecture students as participants. Almost all the students in the DRS
had no prior programming experience. Those in the workshop had limited programming experience,
but had never seen, let alone used, a robot before.
Students were formed into teams. In the DRS, each team was given an overall assignment to design
a high-rise. As part of the assignment, they were tasked to program robotic processes for fabricating
physical models of their designs. This involved implementing, testing and running the program. In
the workshop, students were not given an overall design assignment. Their task was to learn how to
control a prepared robotic fabrication process using a sample program, and then customize the
process by extending the program.
3.3 Research instrument
A custom robot programming solution named YOUR was developed by the author. It served as a
vehicle for carrying out research in the case studies. Students were provided with YOUR and would
use it to accomplish their assigned tasks. UR 189 was studied as a reference when YOUR was initially
developed in August 2011. While UR was a package written in Python, the earliest version of
187 Ko, “Understanding Software Engineering through Qualitative Methods,” 56.
188 One issue when choosing a collective case study approach is determining the number of cases needed. Creswell advises “no more than four or five cases,” as more would dilute the overall analysis. Creswell, Qualitative Inquiry and Research Design, 76.
189 UR is described in greater detail in Chapter 2.3.3.
YOUR was a set of Python scripting components for the Grasshopper visual programming system. At
that time, the only other Grasshopper-based robot programming solution available was KUKA|prc,
but it comprised compiled components, whose implementation details (source code) are hidden
from the end-user.
It was unclear what the software requirements 190 for YOUR were at the beginning, and thus the
strategy was to develop it in an incremental and iterative 191 fashion. Basili and Turner described this
approach as one where developers “start with a simple implementation … and iteratively enhance
the evolving sequence of versions until the full system is implemented. At each iteration, [software]
design modifications are made along with adding new functional capabilities.” 192
YOUR was mainly developed over the course of the first two DRS. Working prototypes were
introduced into the studio for use in programming robotic model fabrication processes. Students
gave direct feedback by highlighting bugs and occasionally requesting functionality that was missing.
The prototypes were patched and then immediately pushed out to the studio again. Hence, YOUR
was refined in a stepwise process over the semester.
However, YOUR was re-evaluated at the end of each semester based on the studio’s results, as well
as formal interviews conducted with students. More significant design changes were implemented
at these points. At the same time, YOUR functionalities which were not used by students were
removed. Conversely, exceptional solutions developed by them were integrated. In this way, YOUR
was developed in an evolutionary manner (Table 3-1).
The requirements for YOUR were largely identified and implemented by the end of the first DRS case
study. To validate its design, YOUR was then tested in the workshop case study and later refined
according to its results.
190 Requirement refers to “a capability needed by a user to solve a problem,” which in this case is to program a robotic fabrication process. International Organisation for Standardisation, ISO/IEC/IEEE 24765: Systems and software engineering—Vocabulary, 1st edition (Geneva: ISO/IEC, 2010), 301.
191 Craig Larman and Victor Basili, “Iterative and Incremental Developments: A Brief History,” Computer 36, no. 6 (2003): 47–56.
192 Victor Basili and Albert Turner, “Iterative Enhancement: A Practical Technique for Software Development,” IEEE Transactions on Software Engineering 1, no. 4 (1975): 390.
Table 3-1 Evolution of the graphical YOUR toolkit over the Design Research Studio 2012 (toolkit versions dated 120214 to 121031), the Design Research Studio 2013 (130121 to 130902) and the Workshop 2014 (140310 and 140501). Each dated version lists the components then available, drawn from a vocabulary including Orient, OrientCrv, OrientLocal, Tool, ToolCrv, ToolAngles, Pick, Place, PlaceL, Glue, Popup, MoveJ, MoveL, MoveC, MoveP, MoveAxis, MoveLocal, MoveSense, DigitalOut, Sleep, Speed, ServoC, Action, Sender, SendLocal, Listener, FK, IK, LoadPy, LoadFunc, StoreFunc, AddBase, ClearBase, ViewBase, Fold, Crumple, Cut, FollowPath and Utility. Components highlighted in green were added, yellow modified and red deleted; annotations in the table mark the introduction of datatree components, laser-cut components, the Python package and a revised Python package. The toolkit’s development is discussed in greater detail in the following chapter.
3.4 Data collection
Data was collected from three sources of information: robot programs, interviews and direct
observation. For the design studio cases, students’ robot programs were collected at the end of each
semester. Each team determined when and how they saved their programs. Similarly, robot
programs were collected at the end of the workshop case study.
Interviews were conducted at the end of the case studies. They were semi-structured, which
according to Ayres, involves asking “informants a series of predetermined but open-ended
questions.” 193 With this format, the interviewer, who is the author in this case, can obtain
unanticipated answers, and follow up on these responses to elicit more information. These
interviews were audio-recorded and their protocols can be found in the Appendix—Chapter 9.
In the DRS case study, the author observed students during allocated studio hours, held twice each week. For the workshop, students were observed throughout its duration. In both cases, the
author was a participant observer 194 and occasionally aided students in their robot programming
tasks. Observational data was recorded as field-notes.
3.5 Data interpretation and representation
Two metrics are used to further interpret some of the collected data. The first is the Cognitive
Dimensions Framework, which was originally developed by Thomas Green. 195 The framework
introduces qualitative criteria—dimensions—for evaluating an information artefact’s usability. It has
been used by Green and Petre to compare visual and text programming systems, 196 and by Microsoft
to evaluate their Application Programming Interfaces (API) 197. The cognitive dimensions framework
provides a vocabulary of non-specialist terms to describe the qualities of an information artefact,
193 Lioness Ayres, “Semi-Structured Interview,” in The Sage Encyclopedia of Qualitative Research Methods, ed. Lisa Given (Thousand Oaks: Sage Publications, 2008), 810.
194 Lynne McKechnie, “Participant Observation,” in The Sage Encyclopedia of Qualitative Research Methods, ed. Lisa Given (Thousand Oaks: Sage Publications, 2008), 598.
195 Thomas Green, “Cognitive Dimensions of Notations,” in People and Computers V, ed. Alistair Sutcliffe and Linda Macaulay (Cambridge: Cambridge University Press, 1989), 443–460.
196 Thomas Green and Marian Petre, “Usability Analysis of Visual Programming Environments: A ‘Cognitive Dimensions’ Framework,” Journal of Visual Languages and Computing 7, no. 2 (1996): 131–174.
197 Steven Clarke, “How Usable Are Your APIs,” in Making Software: What Really Works, and Why We Believe It, ed. Andy Oram and Greg Wilson (Sebastopol: O’Reilly Media, 2011), 545–565.
such as a robot program implemented by students. Part of the interview for the workshop case was
based on a cognitive dimensions questionnaire developed by Blackwell and Green. 198
The second metric used is token count, which is meant to assess the complexity of a program. It was
proposed by Levitin 199 and refers to the total number of tokens in a textual program as parsed by a
compiler. 200 Nickerson introduced a graphical version of token count that is given by the sum of all
nodes, edges and labels in a diagram. 201 A set of analysis tools 202 was developed to calculate the
graphic and textual token count of a Grasshopper program, as well as the number and types of
components—standard, YOUR and custom-developed—used in it.
Figure 3-1 shows the analysis results for a sample program. There are eleven discrete objects drawn
on the canvas; nine of them are labelled, 203 and there are eight wires. Thus the total graphic token
count is twenty-eight. There are three YOUR components (Figure 3-1: 5, 6 and 11), which have
encapsulated scripts, and the sum of their textual token count is 526. The overall token count
(graphic and text) of the Grasshopper program is 554.
Figure 3-1 Analysis scripts calculate the program’s token count (highlighted in yellow) and list the numbers and
types of components used.
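The two counts can be sketched in Python. The function names below are illustrative rather than the author's actual analyse module; the textual count uses Python's own tokenizer as a stand-in for a compiler's parse, and the graphic count follows Nickerson's sum of nodes, edges and labels.

```python
import io
import tokenize

def count_text_tokens(source):
    """Count lexical tokens in a Python script, as a compiler-style
    tokenizer would, ignoring purely structural tokens."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    skip = {tokenize.NEWLINE, tokenize.NL, tokenize.INDENT,
            tokenize.DEDENT, tokenize.ENDMARKER, tokenize.COMMENT}
    return sum(1 for tok in tokens if tok.type not in skip)

def count_graphic_tokens(n_objects, n_labels, n_wires):
    """Nickerson's graphical token count: nodes + labels + edges."""
    return n_objects + n_labels + n_wires

# The sample program of Figure 3-1: eleven objects, nine labels, eight wires.
graphic = count_graphic_tokens(11, 9, 8)  # 28
```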
198 Alan Blackwell and Thomas Green, “A Cognitive Dimensions Questionnaire for Users,” in Proceedings of the Twelfth Annual Meeting of the Psychology of Programming Interest Group, ed. Alan Blackwell and Eleonora Bilotta (Corigliano Calabro: Edizioni Memoria, 2000), 137–152.
199 Anany Levitin, “How to Measure Software Size, and How Not To,” in Proceedings of IEEE COMPSAC 1986 (Washington D.C.: IEEE Computer Society Press, 1986), 314–318.
200 Jeff Nickerson, “Visual Programming: Limits of Graphic Representation,” in Proceedings of the IEEE Symposium on Visual Languages (Los Alamitos: IEEE Computer Society Press, 1994), 178–179.
201 Jeff Nickerson, “Visual Programming” (PhD diss., New York University, 1994), 189.
202 The analysis tools are a Python package consisting of three modules: analyse, compare and style.
203 The group (8) and panel (10) objects are not labelled.
At the same time, the analysis tools can also be used to identify changes between successive
Grasshopper programs and to graphically represent them. They were used to trace the evolution of
a program. Figure 3-2 shows the results of comparing a sample program with its earlier version.
Components outlined in green were added, while those in yellow were edited. Moreover, if the
modified component was a Python scripting component, code changes were marked up, using the
same colour scheme.
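The comparison step can be illustrated with Python's standard difflib. The classify_changes helper is invented here and far simpler than the author's compare module, which diffs whole Grasshopper documents rather than just code.

```python
import difflib

def classify_changes(old, new):
    """Classify lines of two script versions as added, deleted or unchanged."""
    changes = {"added": [], "deleted": [], "unchanged": []}
    for line in difflib.ndiff(old.splitlines(), new.splitlines()):
        tag, text = line[:2], line[2:]
        if tag == "+ ":
            changes["added"].append(text)
        elif tag == "- ":
            changes["deleted"].append(text)
        elif tag == "  ":
            changes["unchanged"].append(text)
        # "? " hint lines from ndiff are ignored
    return changes

old = "tool_pose = ur.pose_by_plane(plane)\na = ur.set_tcp(tool_pose)"
new = "new_pose = ur.pose(0,0,0,0,0,0)\na = ur.set_tcp(new_pose)"
result = classify_changes(old, new)
```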
The code encapsulated in the compared component, shown here in both versions; the two differ only in the else branch. One version reads:

    import urscript as ur
    from Grasshopper.Kernel import GH_RuntimeMessageLevel as gh_msg

    if not plane:
        ghenv.Component.AddRuntimeMessage(gh_msg.Warning,
            'Failed to collect data for required input: plane')
    else:
        tool_pose = ur.pose_by_plane(plane)
        a = ur.set_tcp(tool_pose)

while the other replaces the else branch with:

    else:
        new_pose = ur.pose(0,0,0,0,0,0)
        a = ur.set_tcp(new_pose)

Figure 3-2 The differences between two Grasshopper programs, as well as the code in YOUR components, are marked up graphically (green—added; yellow—modified; red—deleted).
For each case study, a rich thick description 204 of how each team carried out their fabrication-based
robot programming task is developed from the collected data. The adjective rich refers to the
description being highly detailed, drawn from diverse sources and portraying multiple dimensions of
the activity. 205 Thick refers to the explicative quality of the description, for example by explaining the
reasons underlying students’ programming decisions.
204 Sherry Marx, “Rich Data,” in The Sage Encyclopedia of Qualitative Research Methods, ed. Lisa Given (Thousand Oaks: Sage Publications, 2008), 795.
205 Sherry Marx, “Rich Data,” 794.
For the design research studio case, a typical description covers the underlying design and
fabrication concepts of the project, the implementation of the robot program based on the latter,
and outcomes of running the program. The description includes images of fabricated artefacts, code
snippets and implemented programs that have been analysed with the aforementioned tools, and
integrates appropriate responses from the interview.
For the workshop case, the description is structured chronologically and covers students’ progress in
extending the given robot program. The description includes images of fabricated artefacts, code
snippets and the Grasshopper program. The latter two are marked up graphically using the analysis
tools to illustrate their evolution from earlier versions. Similar to the studio, descriptions also
incorporate feedback from the interview.
For both cases, descriptions of students’ robot programming activities are followed by their
responses to interview questions in a separate section.
4 Case study: Design Research Studio
The first case study was a Design Research Studio (DRS), which explored robotic fabrication and
design computation in the context of large-scale residential high-rise developments in Singapore. 206
A key objective of the DRS was to conceptualise and develop innovative high-rise typologies that
offer an alternative and indeed, counterargument to the repetitive and mono-functional designs
that currently dominate Singapore’s urban landscape (Figure 4-1). 207 The DRS adopted an
experimental approach whereby students would develop high-rise designs using computational
techniques and materialise them as 1:50 scale robotic fabricated models in repeated cycles. They
would have to author both digital and physical processes using a programming language as their
medium of expression.
Figure 4-1 The prevailing public housing towers in Singapore are mono-functional and repetitive (site—Punggol
new town).
4.1 Design Research Studio setup
The DRS was run in 2012 and repeated in 2013. Each studio was split into spring and fall semesters.
There were twelve students in the first DRS and nine in the second. They were Master’s students in either the penultimate or final year of their architectural education. Students were formed
206 Fabio Gramazio et al., The Robotic Touch: How Robots Change Architecture (Zurich: Park Books, 2014), 280.
207 Approximately 80% of Singapore’s population resides in public high-rises that reach up to, and in certain cases exceed, 40 storeys in height. Belinda Yuen, “Romancing the High-rise in Singapore,” Cities 22, no. 1 (2005): 3.
into three teams for each studio and had to work collaboratively. The assignment for each team was
to design a residential high-rise at a selected site in Singapore. Design proposals had to be
represented through 1:50 scale robotic fabricated models. A key task for each team was to program
the robotic model fabrication process.
These students had no prior, or at best very limited, robot programming experience. In fact, most of
them had never programmed in any capacity before. Only two students, one in each DRS, had some
experience with visual programming in Grasshopper for generative design. Several had taken
previous elective courses involving robotic fabrication, where they were exposed to Python scripting
for the first time. However, in these courses, students focused on scripting the design of the artefact
that was to be fabricated, and not on programming the robotic process itself.
Figure 4-2 The custom robotic setup with a 4m x 1.7m x 2.7m working envelope.
A unique robotic system was set up for the DRS. It consisted of a UR5 robotic arm 208 mounted on a
Güdel linear axis machine 209 (Figure 4-2). The latter could move the arm in two orthogonal
208 “UR5 Robot,” Universal Robots, accessed January 1st 2016, http://www.universalrobots.com/en/products/ur5-robot/
209 “2 Axis Linear Modules,” Güdel, accessed January 1st 2016, http://www.gudel.com/products/linearaxes/linear-axis-one-and-multi-axis-with-rack-drive/2-axis-type-zp/
directions—horizontally and vertically—with respect to its base. 210 Consequently, it expanded the
working range of the arm, allowing it to fabricate complete 1:50 scale models, which would exceed
two metres in height, 211 in a single pass. A set of vacuum grippers for the robot was developed as
well. 212
There were two default approaches for controlling this robotic system. The first was to use separate
teach pendants to control the arm and axis machine individually. The other option was to write a
text program in the proprietary URScript language 213 and control both concurrently. 214 However,
neither approach was ideal in the context of the studio. The objective was to realise highly
differentiated models and this would require giving the robot complex building instructions. In the
former case, a prohibitively large number of individualised robotic motions and actions would have
to be taught. In both cases, there was no direct way to reference digital design data that can be used
to coordinate these motions and actions.
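The second approach can be sketched as follows. All function names here (move_axis_to, layer_start_pose) are invented stand-ins for the integrators' URScript extensions, and the heights assume a 3 m storey at 1:50 scale (0.06 m per model floor).

```python
def build_program(layer_heights):
    """Raise the linear axis to each layer, then move the arm to its
    starting pose there, all within one text program."""
    lines = ["def prog():"]
    for z in layer_heights:
        lines.append("  move_axis_to({})".format(z))             # Güdel axis
        lines.append("  movej(layer_start_pose({}))".format(z))  # UR5 arm
    lines.append("end")
    return "\n".join(lines)

# Three model floors at 1:50 scale of a 3 m storey height.
program = build_program([0.0, 0.06, 0.12])
```

Even in this coordinated form, the layer heights must be supplied as literal numbers; there is no link back to the digital design model, which is precisely the gap the custom solution was meant to close.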
4.2 Robot programming setup: 2012 spring semester
A custom solution was needed to allow students in the DRS to program the above robotic system
and fabricate models of their high-rise designs. There were two initial requirements. First, it had to be accessible, as students were assumed to be novices in robot programming. One potential solution
was to choose a visual dataflow programming language. As discussed earlier in Chapter 2.1.3, such languages can potentially: offer a high level of abstraction so that novices require “fewer concepts” 215 to begin programming; de-emphasise issues of syntax, which Kelleher and Pausch have
210 The complete robotic system has eight degrees of freedom. The robotic arm has six revolute (rotational) joints while the axis has two prismatic (translational) joints.
211 A 1:50 model of a 40 storey high-rise with 3 m floor-to-floor height would be approximately 2.4 m in height.
212 These end-effectors were designed and built by Willi Lauer. Michael Budig et al., “Design of Robotic Fabricated High-rises,” in Robotic Fabrication in Architecture, Art and Design 2014, ed. Wes McGee and Monica Ponce de Leon (New York: Springer, 2014), 111–130.
213 URScript is the proprietary programming language for Universal Robots. See Chapter 2.2.2.
214 The robot integrators Bachmann Engineering and the consultancy ROB Technologies added a set of URScript based functions that could be used to control the axis machine. “Bachmann AG,” Bachmann Engineering AG, accessed January 1st 2016, http://www.bachmann-ag.com/; “ROB Technologies AG,” ROB Technologies AG, accessed January 1st 2016, http://www.rob-technologies.com/en/home
215 Margaret Burnett et al., “Scaling Up Visual Programming Languages,” Computer 28, no. 3 (1995): 45.
identified as a learning barrier 216; and provide immediate feedback 217 to support progressive
evaluation 218 which Green and Petre consider to be “downright essential for novices” 219.
Second, students had to be able to carry out robot programming from within their design
environment. One solution would be to use a common language for computational design as well as
robot programming. On a pragmatic level, this frees students from having to learn an extra
programming system. More significantly though, it allows them to express, in a single notation, the
respective geometric and constructive-based logics for generating a virtual as well as physical design
representation, and hence to relate them.
Both these requirements led to a decision to choose Grasshopper—a graphical algorithm editor
integrated with the Rhinoceros 3-D modeller—as a base programming system, which would be
extended with robotics specific functionalities in the form of add-on components. At this point in
time, only one other Grasshopper-based robot programming solution—KUKA|prc—was available. 220
However, it could only be used to program KUKA robots and was geared towards milling. A further
advantage of choosing Grasshopper was that it also supported textual programming within the
graphical environment, thereby offering more flexibility with regard to programming approach.
The extensive official support 221 for end-user development of Grasshopper plug-ins, the author’s
previous experience with such development, and a large user community were additional factors
influencing this decision.
216 Caitlin Kelleher and Randy Pausch, “Lowering the Barriers to Programming: A Taxonomy of Programming Environments and Languages for Novice Programmers,” ACM Computing Surveys 37, no. 2 (2005): 86.
217 António Leitão et al., “Programming Languages for Generative Design: A Comparative Study,” International Journal of Architectural Computing 10, no. 1 (2012): 143; Burnett et al., “Scaling Up Visual Programming Languages,” 45.
218 Progressive evaluation is a cognitive dimension that refers to the ability to evaluate or check incomplete program fragments to gain feedback. Thomas Green and Marian Petre, “Usability Analysis of Visual Programming Environments: A ‘Cognitive Dimensions’ Framework,” Journal of Visual Languages and Computing 7, no. 2 (1996): 136.
219 Green and Petre, “Usability Analysis of Visual Programming Environments,” 158.
220 KUKA|prc is reviewed in Chapter 2.3.2.
221 McNeel officially provides a Software Development Kit (SDK) as well as Visual Studio templates for creating such components. In addition, there is an active online community that shares development knowledge as well as open source code. For example, see: “McNeel Grasshopper Developer forum,” McNeel, accessed January 1st 2016, http://discourse.mcneel.com/c/grasshopper-developer
Figure 4-3 YOUR Grasshopper toolkit comprising ten Python scripting components: Orient, OrientByCurve, SetTool, SetToolByCurve, MoveJoints, MoveAxis, Pick, Place, Glue, and Sender.
A custom solution, named YOUR, was developed for programming the robots used in the DRS. Figure
4-3 shows the version of YOUR that was introduced at the start of the 2012 DRS. It was a minimal
toolkit consisting of just ten Python scripting components. The goal was to avoid having a profusion
of low-level primitives, which according to Lewis and Olson 222 and reiterated by Green and Petre, “is
one of the great cognitive barriers to programming.” 223
Figure 4-4 The robot picks a part from the laser-cut sheet (left), moves it through the gluing station (middle),
and places it (right).
This version of YOUR was designed to support an assembly-based model construction process. The
robotic arm picks laser-cut cardboard elements from a feeder, brings them to a gluing station and
then places them either horizontally or vertically to represent floors and walls in the model (Figure
4-4). These elements could be individually shaped and freely positioned via the laser-cutter and
robot respectively.
222 Clayton Lewis and Gary Olson, "Can principles of programming lower the barriers to programming?" in Empirical studies of programmers: second workshop, ed. Gary Olson et al. (New Jersey: Ablex Publishing Corp, 1987), 248–263.
223 Green and Petre, "Usability Analysis of Visual Programming Environments," 136.
Figure 4-5 Downstream production-related section of sample Grasshopper program.
Figure 4-5 shows a section of a sample Grasshopper program given out to students at the start of the
DRS. 224 It illustrated how these components were to be used. First, a reference base (A) is set up
using a standard Grasshopper plane component. It refers to a physical location in space, where the
model is to be built, for example the centre of a wooden pallet. This location now corresponds with
the origin of the CAD model. This base plane is specified in the robot’s coordinate system. Orient
components (B) transform target planes defined in the CAD model to the robot's coordinate
system. 225 Information about the end-effector is specified using the SetTool component (C). In the
case of the grippers prepared for students, the location and orientation of the vacuum nozzles have
to be described. The oriented target planes are fed into the Pick (D), Place (E) and Glue (F)
components, which generate a list of URScript formatted commands for those respective operations.
They offer a “process-centric level of abstraction” 226, whereby students can focus on providing the
correct input parameters to the process and not on specifying each movement and action step.
Finally, standard Grasshopper Weave components (G) are used to sequence the list of commands
generated by the Tool, Pick, Glue and Place components. They are then fed to the Sender
component (H), which, as its name suggests, is responsible for sending instructions to the robot. The
Sender is switched on by a toggle, which is highlighted in a red circular group.
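Under the hood, these components emit URScript as plain text strings. The following is a minimal sketch of that pattern, not the actual YOUR source: the pose format string mirrors the one used in YOUR's own scripts (see Figure 4-17), while the function names, IO port, and speed values are illustrative assumptions.

```python
# Sketch of how a YOUR-style component might format URScript commands
# from a target pose. All names here are illustrative, not YOUR source.

def format_pose(x, y, z, rx, ry, rz):
    """Format a pose as a URScript p[...] literal (metres, axis-angle)."""
    pose_fmt = "p[" + ("%.4f," * 6)[:-1] + "]"   # p[x,y,z,rx,ry,rz]
    return pose_fmt % (x, y, z, rx, ry, rz)

def pick_commands(target, accel=1.2, vel=0.25, safety_z=0.05):
    """Return URScript lines for: approach, descend, grip, retract."""
    x, y, z, rx, ry, rz = target
    above = format_pose(x, y, z + safety_z, rx, ry, rz)
    at = format_pose(x, y, z, rx, ry, rz)
    return [
        "movel(%s, a=%.2f, v=%.2f)" % (above, accel, vel),
        "movel(%s, a=%.2f, v=%.2f)" % (at, accel / 4, vel / 4),
        "set_digital_out(0, True)",   # e.g. switch on a vacuum gripper
        "sleep(0.2)",                 # let the vacuum build up
        "movel(%s, a=%.2f, v=%.2f)" % (above, accel, vel),
    ]
```

A Pick component in this style simply concatenates such lines and passes them downstream as its output.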
224 The upstream section of the program generated a digital representation of a simplified high-rise design. The program had a total graphic token count of 282.
225 It uses the reference base, which is one of its inputs, to calculate the appropriate matrix to perform this transformation.
226 Martin Hägele et al., "Industrial Robotics," in Springer Handbook of Robotics, ed. Bruno Siciliano and Oussama Khatib (Berlin: Springer), 979.
4.3 Results: 2012 spring semester
The following chapters describe how each team carried out the robot programming task using the
aforementioned YOUR toolkit. The underlying design and fabrication concepts are first described,
followed by the implementation of the program, and finally the material results of running the
program.
4.3.1
Tiong Bahru Tower
Figure 4-6 1:50 models representing the intermediate and final design proposals.
The concept of a continuous void, which brings light and ventilation to the interior of a building,
informed the design of the first Tiong Bahru 227 tower iteration (Figure 4-6—left). It had a regular
hexagonal plan and a central forty-storey-tall open-air atrium. In their next design iteration, the team
introduced a new structural concept characterised by a branching system of load-bearing walls
(Figure 4-6—right). In addition, the high-rise was shaped according to new parameters such as views,
setbacks and circulation. It was no longer regular in form and each floor slab was unique as a result.
The student team adopted the fabrication process prepared for the studio and developed it further.
The underlying concept was to assemble the tower out of individualised laser-cut cardboard elements, which would either represent walls or floor slabs.
227 The team comprised Pascal Genhart, Patrick Goldener, Florence Thonney, and Tobias Wullschleger.
Wall elements were quadrilateral and
constrained to having parallel top and bottom edges. Floor elements were larger and free-form in
shape. The process comprised three distinct robotic operations—picking, gluing and placing. Both
element types were picked from cut-sheets placed on a feeder station. Only wall elements were
glued. They were placed upright in the model, while floor elements were laid flat.
Figure 4-7 shows the Grasshopper program implemented by the team for fabricating their final high-rise. It referenced a Rhino model that contained a representation of the design, which was generated beforehand by a separate Grasshopper program. Therefore the program shown in Figure 4-7 was only used for planning and executing the robotic fabrication process. It had a total graphic
token count of 3440 and was organised in eleven parts. Part 1 referenced floor slabs in the digital
model and stored them as parameters in the graphical program. Part 2 received these parameters as
inputs and generated a cut sheet for the laser-cutter, as well as movement targets for the picking
and placing operations. Part 3 and 4 were equivalent to parts 1 and 2, but addressed walls instead of
floors. Part 5 generated the movement targets for the wall gluing operation.
Part 6 specified the virtual location of a feeder station where cut sheets were mapped to. It included
sliders and panels for adjusting the feeder’s and cut-sheet’s position and by extension, those of the
picking targets as well. On the other hand, part 7 was responsible for adjusting the position of
placing targets. These two sub-graphs allowed students to make corrections during the assembly
process. For example, one problem that emerged was that the discrepancy between digital and
physical models grew as more storeys were fabricated. 228 Since movement targets were derived
from the digital model, elements were oftentimes placed too high or low on the physical model as a
result. In the former case, vertical elements would simply topple over even with deviations as slight
as 1 mm. In the latter case, the robot might press the element into the built model and, in the worst case, cause it to collapse. Thus it was necessary to lower or raise the placement targets
accordingly.
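The scale of the problem and the fix can be sketched numerically. The helper names below are illustrative, not the team's actual graph; the adjustment mirrors what the sliders in parts 6 and 7 did, assuming placing targets are reduced to (x, y, z) coordinates in millimetres.

```python
# Illustrative sketch of accumulated tolerance error and the z-offset
# correction applied to placing targets. Function names are invented.

def accumulated_error(thickness_error_mm, storeys):
    """Deviation that builds up when every storey inherits the error of
    the one below, e.g. a 0.1 mm thickness error over 10 floors."""
    return thickness_error_mm * storeys

def adjust_targets(targets, dz):
    """Raise (dz > 0) or lower (dz < 0) all placing targets by dz mm."""
    return [(x, y, z + dz) for (x, y, z) in targets]
```

With a 0.1 mm mis-specification, accumulated_error(0.1, 10) gives the 1 mm gap cited in footnote 228, which the team would counter by lowering the affected targets.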
228 This was due to factors such as the incorrect specification of material thickness and gradual deformation of the model under its self-weight. For example, if the thickness of the cardboard used is specified wrongly by just 0.1 mm, this still results in an accumulated error of 1 mm when 10 floors are placed. This gap cannot be filled by the adhesive and consequently vertical elements would topple over when placed.
Figure 4-7 The Grasshopper program for assembling the final tower was organised in eleven parts. Part 1 stored
parameters referencing floor slabs drawn in the digital model. Part 2 generated a cut sheet for the
slabs, as well as movement targets for the picking and placing operations. Part 3 and 4 were equivalent
to parts 1 and 2, but addressed walls instead of floors. Part 5 generated the movement targets for the
wall gluing operation. Parts 6 and 7 were for adjusting picking and placing targets respectively. Part 8
generated instructions for picking and placing floors. Part 9 specified movements common to both
floor and wall assembly processes. Part 10 generated instructions for picking, gluing and placing walls.
Part 11 was the control interface.
Figure 4-8 The floor assembly process: pick (left); move to safety point (middle); and place (right).
Part 8 specified the sequence of operations for picking and placing floors (Figure 4-8). It contained
five types of YOUR components: Orient, Pick, Place, MoveLinear 229 and MoveAxis 230. Orient
components were used to transform target planes to the robot’s coordinate system. Linear
movements to safety waypoints were specified before pick and place operations. The axis machine
was moved forward or backwards depending on the horizontal location of the placing target, as well
as up and down whenever the cut-sheet had to be changed. Part 9 specified three safety movements
using MoveJoints components that were common to both the floor and wall assembly process. Part
10 specified the sequence of operations for picking, gluing and placing walls (Figure 4-9). It was
structured similarly to part 8, but contained additional Sleep 231 components that were used for the
gluing operation. The MoveLinear and Sleep components were added to the YOUR toolkit during the semester at the team's request.
Figure 4-9 The wall assembly process: pick (left); glue (middle); and place (right).
229 The MoveLinear component instructs the robot to move towards a target in such a way that its tip traces a straight path.
230 The MoveAxis component controls the vertical/horizontal movement of the axis machine.
231 A Sleep component instructs the robot to pause for a specified amount of time.
Figure 4-10 The control interface; panels A, B and C contained instructions for adjusting the robot’s position,
while D and E contained instructions for assembling floors and walls respectively.
Part 11 of the Grasshopper program was akin to a control interface; it was used to run the
fabrication process. Figure 4-10 shows a portion of this sub-graph in greater detail. The student team
created five colour-coded panels; each stored a list of URScript commands that were generated by
upstream sections of the graph. Panels A, B and C contained single instructions for adjusting the
robot’s position. Panels D and E contained instructions for assembling floors and walls respectively.
They were connected by hidden wires to parts 8 and 10. Depending on the stage of the fabrication
process, students would choose which panel to connect to the Sender component (Figure 4-10: F).
Hence this section of the graph was constantly re-wired and edited.
The student team represented a portion of their final design as a model (Figure 4-11). It comprised
225 floor slabs and 646 walls. The team managed to overcome inaccuracy issues in the assembly
process by developing corrective motion or adjustment strategies, and integrating them into their
program, and were thus able to fabricate their model almost entirely through robotic means.
Manual intervention was mainly restricted to applying glue to the top of walls before floor slabs
were placed and fixing occasional walls that toppled. The entire robotic assembly process took
around 2½ days. 232
232 This does not take into account the time invested in laser-cutting, which included preparing the files as well as the actual laser-cutting operation.
Figure 4-11 The final fabricated model.
4.3.2
Lakeside Tower
In this project, the student team 233 developed a custom fabrication process midway through the
semester. It was conceptually similar to 3D printing whereby a vertical structure is built up in
horizontal layers. However, in this case, discrete parts are deposited as opposed to extruding a
continuous material. The team decided to work with standardised cardboard parts, which would be
laid flat and assembled together to represent larger architectonic elements such as floors. The
decision to work with only horizontal elements was motivated in part by the team’s difficulties in
assembling walls in their early models, as well as the desire to realise more articulated designs by
working at a finer resolution. 234
233 The team comprised Sylvius Kramer, Alvaro Romero, Michael Stünzi, and Fabienne Waldburger.
234 The team's first model, which was fabricated using the prepared process, contained 142 parts. The next three models were fabricated using their custom process and contained 4108, 6002 and 15,754 parts respectively.
Figure 4-12 1:50 models representing three design iterations (from left to right).
The team focused on designing the primary structure of the high-rise. Their initial model (Figure
4-12—left) illustrates the underlying concept of cantilevering floor slabs from vertical cores. The
cores were represented by stacked square-shaped cardboard parts and floors by laser-cut sheets.
The distinction between cores and floors was erased in the next design iteration. Each core not only
grew vertically upwards, but also spread horizontally outwards at specified heights until it merged
with neighbouring cores. The entire model (Figure 4-12—middle) was assembled out of a standard
square-shaped part. For the final design (Figure 4-12—right), the team straightened out the cores
and increased the size of horizontal surfaces to form more expansive floors. Two new parts were
designed. One was shaped in the form of an arc, while the other was circular. They were assembled
during the vertical and horizontal growth phases respectively.
Figure 4-13 The end-effector contains an automated-gluing mechanism (left) and a part dispenser (right).
The team had to streamline the assembly process because a large number of parts would have to be
stacked in order to achieve the requisite verticality. They developed a custom robot end-effector
(Figure 4-13) to combine gluing and placing operations, while eliminating picking altogether. It
integrated a gluing system that sprayed adhesive onto horizontal surfaces and a dispenser system
that deposited cardboard elements stored in refillable cartridges. Both systems were digitally
actuated. Consequently, the robot no longer had to return to a distant gluing or picking station after placing every element, as was the case with the original fabrication process.
Figure 4-14 Arc-shaped elements are assembled in the vertical growth phase (left), while circular elements are assembled in the horizontal growth phase (right).
The team implemented two separate graphical programs named VerticalGrowth and
HorizontalGrowth to fabricate their final model. They alternated between the two, running the first
program to assemble arc-shaped elements for the vertical section of the core, and the second to
assemble circular parts for the horizontal section (Figure 4-14).
The VerticalGrowth program is shown in Figure 4-15. It had a total graphic token count of 1056 and
was organised in five parts. Part 1 contained a curve parameter that was the initial node of the
entire graph. It referenced a vertical line drawn in the digital model, which would represent the
central axis of a core. Based on this line, part 2 of the graph generated a list of planes. Each plane
described the position and orientation of an arc-shaped cardboard element. The radii and height of
this vertical section of the core were design parameters that could be adjusted. As the core grew, it
narrowed before widening again at the top. Part 3 of the graph re-sorted these elements according
to height, so that they were sequenced according to the logic of assembly, which was from bottom
to top.
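The geometry described above reduces to plain coordinate arithmetic. The sketch below is a simplified stand-in for the students' Grasshopper graph, assuming each arc element is represented only by its centre point; the function names and the radius schedule are hypothetical.

```python
# Simplified sketch of the VerticalGrowth logic: stack rings of element
# positions along a core axis and sequence them bottom-to-top.
import math

def ring_targets(cx, cy, z, radius, n):
    """Positions of n arc elements arranged in a ring at height z."""
    return [(cx + radius * math.cos(2 * math.pi * i / n),
             cy + radius * math.sin(2 * math.pi * i / n),
             z) for i in range(n)]

def core_targets(cx, cy, layers, radius_of, n_per_layer, layer_h):
    """Stack rings along the core axis; radius_of(layer) lets the core
    narrow and then widen again, as in the students' design."""
    targets = []
    for k in range(layers):
        targets += ring_targets(cx, cy, k * layer_h, radius_of(k), n_per_layer)
    # re-sort by height: the logic of assembly, from bottom to top
    return sorted(targets, key=lambda t: t[2])
```

The final sort corresponds to part 3 of the graph, which re-sequenced the planes by height.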
Figure 4-15 The VerticalGrowth Grasshopper program was organised in five parts: part 1 stored parameters;
part 2 generated the core and movement target planes; part 3 sequenced the target planes; part 4 generated
instructions for placing the parts; and part 5 was the control interface.
The robotic instructions were generated in part 4. Ten YOUR-related components were used, arranged in four distinct groups (Figure 4-16). The first group contained a modified version of
SetDigitalOut 235, which was introduced to the toolkit midway through the semester. The team added
new statements to its script to set digital input/output (IO) values for all the spray nozzle and
dispenser actuators at the start of the fabrication process and renamed the component UR_IO.
Figure 4-16 The subgraph in part 4 of the VerticalGrowth program generated the robot instructions.
235 This component generated a command used to switch the true/false values of input/output ports.
The second group contained three components and was related to gluing. The team used a SetTool
(2B) component to specify a location below the spray adhesive nozzle as the tool centre-point. The
Orient component (2A) transformed the target plane, which was generated in part 3, to the robot’s
coordinate system. The plane was the input for a modified Glue component (2C), whose script was
extended with statements for actuating the spray nozzle. As a result, the robot positions the nozzle
point of the end-effector above the target, which is a previous layer of cardboard elements, and
sprays adhesive onto it.
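The extension can be pictured as wrapping IO statements around the positioning move. This is a hedged sketch: the port number, spray timing, and function name are assumptions for illustration, not the team's actual script.

```python
# Sketch of a modified Glue component's output: position the nozzle,
# actuate the spray valve via digital IO, then close it again.
# io_port and spray_time are invented values.

def spray_glue_commands(target_pose, io_port=1, spray_time=0.5,
                        accel=1.2, vel=0.25):
    """URScript lines that move the nozzle over the target, open the
    spray valve, wait, and close the valve."""
    return [
        "movel(%s, a=%.2f, v=%.2f)" % (target_pose, accel, vel),
        "set_digital_out(%d, True)" % io_port,    # open the spray valve
        "sleep(%.1f)" % spray_time,               # spray duration
        "set_digital_out(%d, False)" % io_port,   # close the valve
    ]
```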
The third group contained SetTool (3B), Orient (3A) and a modified Place (3C). The team removed
statements from the original placing script which were used for turning off the vacuum gripper. They
renamed the modified component UR_SafetyPoint. These components instructed the robot to
position the centre of its end-effector at a safety waypoint aligned with the vertical axis of the core.
The team included this step to reduce the risk of collision between the large end-effector and the
model.
......
68  def ur_place_script():
69      '''Formats UR_script for a Place action'''
70      matrix = rg.Transform.ChangeBasis(ref_plane, rg.Plane.WorldXY)
71      axis_angle = matrix_to_axis_angle(matrix)
72      #Create UR Script
73      script = "# -------------- Placing Now----------------\n"
74      pose_fmt = "p[" + ("%.4f,"*6)[:-1] + "]"
75      #2) Descend
76      pose_place = [ref_plane.OriginX/1000, ref_plane.OriginY/1000, (ref_plane.OriginZ +
77          safetyZ)/1000, axis_angle[0], axis_angle[1], axis_angle[2]]
78      pose_place_fmt = pose_fmt%tuple(pose_place)
80      script += "movel(%s, a = %.2f, v = %.2f)\n"%(pose_place_fmt,accel/4,vel/4)
81      #3) SetDigitalOut
82      script += "set_digital_out(%s,False)\n"%(io_num_4)
83      script += "sleep(0.2)\n"
84      script += "set_digital_out(%s,True)\n"%(io_num_2)
85      script += "sleep(0.5)\n"
86      #4) Move to safety
87      pose_safe = [ref_plane.OriginX/1000, ref_plane.OriginY/1000, (ref_plane.OriginZ + safetyZ -
88          20)/1000, axis_angle[0], axis_angle[1], axis_angle[2]]
89      pose_safe_fmt = pose_fmt%tuple(pose_safe)
90      script += "movel(%s, a = %.2f, v = %.2f)\n"%(pose_safe_fmt,accel,vel)
91      #5) Reset IOs
92      script += "set_digital_out(%s,False)\n"%(io_num_2)
93      script += "sleep(0.2)\n"
94      script += "set_digital_out(%s,True)\n"%(io_num_4)
95      return script
......
Figure 4-17 Script in the modified Place component (green—new code; yellow—edited code).
The fourth group contained the same three components. Here, SetTool (4B) was used to specify a
location directly below the part dispenser as the new tool centre point. The team also modified the
Place component, but differently from before. They extended its script (Figure 4-17) with statements
that were copied directly from standard SetDigitalOut (lines 81-85), MoveLinear (lines 87-91) and
custom UR_IO components (lines 93-96); thus integrating their functionalities. These components
instruct the robot to move the dispenser above the target point and release a new cardboard
element. Finally, in part 5 of the graphical program, the team used standard Grasshopper Weave
components to concatenate and order the formatted instructions generated in part 4. These
instructions were then sent to the robot via the Sender component.
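The sequencing-and-sending step can be sketched as two small functions. The weave below is a simplified analogue of Grasshopper's Weave component (it stops at the first exhausted stream, whereas Grasshopper handles unequal lists differently); the def...end wrapper and port number are assumptions — UR controllers accept URScript text over their socket interfaces, but the exact framing YOUR used is not shown in the source.

```python
# Sketch of command sequencing (Weave analogue) and a Sender-style
# transmit function. Wrapper format and port are assumptions.
import socket

def weave(pattern, *streams):
    """Merge several lists by repeatedly taking the next item from the
    stream indexed by the pattern, cycling through the pattern."""
    iters = [iter(s) for s in streams]
    out, i = [], 0
    while True:
        try:
            out.append(next(iters[pattern[i % len(pattern)]]))
        except StopIteration:
            break
        i += 1
    return out

def build_program(commands, name="program"):
    """Wrap a flat list of URScript statements in a def ... end block."""
    body = "".join("  %s\n" % c for c in commands)
    return "def %s():\n%send\n" % (name, body)

def send_program(commands, host, port=30002):
    """Send the wrapped program to the robot controller (sketch only)."""
    payload = build_program(commands).encode("utf-8")
    with socket.create_connection((host, port)) as s:
        s.sendall(payload)
```

For example, weave([0, 1], pick_cmds, place_cmds) alternates picking and placing instructions before they are transmitted.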
Figure 4-18 The HorizontalGrowth Grasshopper program was organised in five parts: part 1 contained the
input parameter; part 2 generated the slab and movement target planes; part 3 sequenced the targets; part 4
generated instructions for placing the parts; and part 5 was the control interface.
Figure 4-18 shows the second program named HorizontalGrowth. It had a total graphic token count
of 1646. It was structured similarly to VerticalGrowth. Part 1 contained a parameter that referenced
a point drawn in the digital model. Based on this point, part 2 generated a list of planes that
described how circular cardboard elements were to be positioned and oriented during the horizontal
growth phase. In part 3, the planes were re-ordered in the assembly sequence. Parts 4 and 5 were
identical for both Grasshopper programs. Therefore the robot performs the same sequence of
operations regardless of the element type.
The team did not finalise their design before they started fabrication. Their digital model only
contained vertical lines and points that represented where cores were. Moreover, only the first
storey was modelled. They ran the VerticalGrowth and HorizontalGrowth programs to assemble
each of those cores. The axis machine was subsequently raised using the teach pendant when one
storey was completed. Since the cores rose vertically, the team simply ran both robot programs
again while referencing the same set of lines and points. However, they made design decisions while
building the model by adjusting storey heights and choosing which cores to extend. This process was
repeated for the rest of the model. In the end, it was assembled out of 15,754 elements and took six
days to complete (Figure 4-19).
Figure 4-19 The final fabricated model.
4.3.3
Rochor Tower
The Rochor Tower 236 team also developed a custom fabrication process midway through the
semester. Like the previous team, they decided to work with a standardised part, which in this case
was a rectangular sheet. However, the team introduced a new robotic folding operation that could
be used to individualise sheets and give them three-dimensional form. A flat sheet would represent
a floor slab, while a folded sheet placed upright on its side would represent a wall. The team first
experimented with cardboard as the modelling material before switching to aluminium, because it
allowed a wider range of fold angles to be realised.
Figure 4-20 1:50 models representing the intermediate and final design proposals.
Figure 4-20 shows the team’s intermediate and final design proposals represented as models. The
formal vocabulary of the high-rise was derived from the fabrication process. All walls had either one
or two folds. They were positioned according to rules that ensured the efficient transfer of vertical
loads through the structure. The final tower had a simple overall form. It was rectangular at its base
and tapered towards the top.
236 The team comprised Sebastian Ernst, Sven Rickhoff, Silvan Stohbach and Martin Tessarz.
Figure 4-21 The custom end-effector comprises a mechanical gripper with a set of vacuum nozzles.
The fabrication process was conceived of as having three distinct operations. The first was picking
sheets from a stack. A standard picking operation could be planned since wall and floor elements
were identical at the start. The second operation was placing. Here, a distinction had to be made
between a floor slab and a wall element. The former would be held by vacuum nozzles at the side of
a custom end-effector (Figure 4-21) and placed horizontally; the latter would be gripped by the end-effector and placed on its side. The last operation was folding. It only applied to walls, which could
have one or two folds with varying angles.
Figure 4-22 The final Grasshopper program was split into two general sections. Parts 1 to 6 generated the
design of the high-rise; while parts 9 to 14 were production related. Part 7 linked the two.
Figure 4-22 shows the final Grasshopper program implemented by the team. It had a total graphic
token count of 6334 and was organised in fourteen parts. Parts 1 to 6 of the program were design-related and comprised more than 40% of the entire graph. 237 The overall form of the tower was
generated in part 1 and could be adjusted through three design parameters that controlled its taper,
regularity and setbacks. Next, floor slabs and walls were created. In parts 2, 3 and 4, walls were
transformed through rotation, reflection and folding operations. Each part produced a distinct type
of folded wall and determined its position. Part 5 checked the stability of these walls; while part 6
ensured that they did not intersect each other or extend beyond the floor slabs. Part 7 connected
the design related section of the program to the downstream production related section. It parsed
the geometric representation of the high-rise design and produced input data, such as folding angles
and placing target planes, for subsequent robotic operations. The target planes were also visualised
and notated in sequence, allowing students to inspect their validity and quickly identify errors.
Figure 4-23 The production related section of Grasshopper program was organised in seven parts. Part 8 was
setup related; part 9 generated instructions for placing floor slabs; part 10 corrected parameters for folding
operations; parts 11 and 12 generated folding instructions; part 13 sequenced together instructions; and part
14 was the control interface.
237 The graphic token count for parts 1 to 6 was 2617, which is 41% of the total.
Figure 4-23 shows the production related section of the graph. Part 8 was for setting up and testing
movement waypoints. Here, the team used a prototype 238 of a Listener component, which would be
added to the next version of YOUR, to directly extract joint angle or pose information from the
physical robot. The team would quickly set up safety waypoints by physically manipulating the robot
into desired configurations, before extracting the joints data. The team also created a sub-graph
that generated a visualisation of the robot based on an input list of joint angles, for example those
that were previously saved.
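A joint configuration captured this way is just six angles, which map directly onto a movej statement when the robot must later return to that waypoint. A small sketch of that round trip; the formatting follows the movel pattern visible in Figure 4-17, but the function name and default speeds are illustrative.

```python
# Sketch: turn a saved joint configuration (six angles in radians)
# back into a URScript movej command. Name and defaults are invented.

def movej_command(joints, accel=1.4, vel=1.05):
    """Format a URScript movej to a saved joint configuration."""
    angles = ",".join("%.4f" % j for j in joints)
    return "movej([%s], a=%.2f, v=%.2f)" % (angles, accel, vel)
```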
Figure 4-24 For the floor assembly process, the robot picks a sheet (left), moves to a safety position (middle),
and places the sheet after the axis machine has descended (right).
Part 9 of the program generated the instructions for assembling floor slab elements (Figure 4-24). It
contained five different types of YOUR components: MoveJoints 239, MoveAxis, Orient, Glue and Place.
The team modified the Glue component 240 and turned it into a custom picking component. They
added digital IO and sleep command related statements to its script to turn on/off the vacuum
nozzle and introduce pauses respectively during the picking operation. After picking a sheet, the
robot arm moves to a predefined safety configuration, the axis machine descends, and the sheet is
placed.
238 The prototype listener was written in C# by the author and had to be re-factored later on.
239 The MoveJoints component generates instructions for a point-to-point motion, where the angles of the robot's six joints are changing linearly over time.
240 The Glue component originally generates instructions for the robot to move through a series of waypoints that correspond with the gluing station.
Figure 4-25 The panels on the right (in yellow) store a set of angles (left column) and the actual angles
achieved (right column). The discrepancy between specified and measured values is visualised in a graph
component (bottom right— blue line versus red line respectively). An over-folding correction factor is derived
from the graph.
The next four parts were related to the assembly of wall elements. Part 10 was responsible for
deriving and applying a correction factor to input folding angles (Figure 4-25). The team discovered
that aluminium sheets tended to spring back after folding. Thus the sheets had to be over-folded in
order to achieve accurate results. Parts 11 and 12 of the program were identical. Each sub-graph
generated robotic instructions for producing a folded wall (Figure 4-26). First, the sheet was inserted
into a clamp and tilted at an angle corresponding to the incline of the fold line. Next, the robot
switched its grip and performed a rotation to create the first fold. If necessary, the robot shifted its
grip and folded the sheet a second time before pulling it out of the clamp. The difference between
parts 11 and 12 was that the former instructed the robot to perform these operations on the left
side of the clamp, while the latter specified the right.
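The over-folding correction can be sketched as a one-parameter fit. This assumes the spring-back is roughly proportional to the fold angle, which the comparison graph in Figure 4-25 suggests; the function names are illustrative, not the team's actual Grasshopper logic.

```python
# Sketch of deriving an over-fold correction factor from paired
# specified/measured fold angles. Names are invented for illustration.

def springback_factor(specified, measured):
    """Estimate a linear spring-back ratio from paired angle samples
    (least-squares fit through the origin): measured ~= factor * specified."""
    num = sum(s * m for s, m in zip(specified, measured))
    den = sum(s * s for s in specified)
    return num / den

def overfold(target_angle, factor):
    """Command angle needed so the sheet relaxes back to the target."""
    return target_angle / factor
```

With a fitted factor of 0.9, a 45-degree wall fold would be commanded as a 50-degree fold.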
Figure 4-26 For the wall assembly process, the robotic arm folds the sheet on the clamp (left); moves to the
placing target (middle); the axis machine lowers itself (right).
Each sub-graph contained twenty-one YOUR related components; three types were repeatedly
used—Orient, Glue and SetTool. The Glue component was modified to instruct the robot how to fold and re-grip the sheet. Its script was extended with digital IO commands for actuating the gripper.
The SetTool component was always sequenced before the Glue component. It was used to set an
offset tool centre point, which would be the pivot around which the sheet is folded. Seventy per cent of this sub-graph was dedicated to deriving the movement target planes and offset tool centre points for the
Glue and SetTool components respectively.
In part 13, all instructions for picking, folding and placing wall elements were woven together. The
picking instructions generated in part 9 were reused, as wall elements were identical to floors prior
to folding. Depending on the configuration and eventual position of the wall, either the output of
part 11 or 12 was selected as the folding instructions. Placing instructions were generated by a
standard Place component.
Figure 4-27 The control interface of the robot program.
Finally, part 14 served as the control interface for running the program (Figure 4-27). It contained
the Sender component for sending instructions, which were generated in part 13, to the robot. All
widgets—sliders, panels and toggles—relevant to the production process were also placed here.
Using these widgets, the team could for example, select which walls and floors to assemble or adjust
the z-heights of placing targets to correct for inaccuracies.
The final model (Figure 4-28) was assembled out of 2385 aluminium sheets. Of these, 1743
represented floor slabs. While the team picked and placed the floor slabs robotically for the first
storey of the model, they did this by hand thereafter to increase assembly speed. The team tried to
optimise the robotic process by speeding up movements. However, it was still slower than manual
assembly as the arm had to travel a substantial distance between pick and place targets and could
only handle one sheet at a time. Nevertheless, all 643 wall elements were folded and placed robotically.
The team did not have to pre-mark crease lines and placing positions, or measure fold angles.
Moreover, they dispensed with gluing altogether since the folded walls could stand upright on their
own. The entire structure was simply stacked together and realised in a day.
Figure 4-28 The final fabricated model.
4.4 Robot programming setup: 2012 fall semester
The original concept for YOUR was to support a purely visual approach to robot programming. While
the scripts encapsulated in YOUR components could be accessed, it was unclear at the beginning
whether students would do so. However, all teams did in fact inspect and even modify the scripts.
Moreover, this took place independently. Students did not require any coding assistance from
instructors despite their limited programming experience. Their approach was to edit, add or delete
selected statements in a script, while referring to code in related components. This result suggested
that it was feasible for students to progress from graphical to text programming if appropriate
scaffolds, example code in this case, were provided.
YOUR was redesigned to support a hybrid programming approach for the fall semester. The aim was
to encourage students to mix visual programming with scripting, predominantly within components.
The latter offered two key advantages. First, students could modify the behaviour of YOUR
components and thus begin to explore custom robotic operations unsupported by the standard
toolkit. Second, they could begin to use data and control abstractions unavailable in Grasshopper,
such as dictionaries and loops respectively, which could help them scale up their programs to
implement more complex fabrication processes.
Figure 4-29 The YOUR package comprising five Python modules: utils, kinematics, ur_standard, ur_custom and comm.
The key development was the introduction of a Python package that was referenced by the YOUR
Grasshopper components. The package consisted of five modules (Figure 4-29). ur_standard
provided a collection of functions that return strings formatted as URScript statements. Each
corresponds directly to an equivalent function in the URScript library. Compared to the earlier toolkit
of components, ur_standard exposed a larger subset of functionalities offered in the base URScript
library. 241 ur_custom contained functions that return compound URScript statements describing
custom robotic operations, for example pick and place, for which there are no equivalent URScript
functions. kinematics introduced geometry-based solvers for the forward and inverse kinematics
problems that can be used to simulate the robot. comm handled socket-based communication with
the robot, while utils mainly contained functions to convert between different mathematical
representations of orientation. 242
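The mapping from the ur_standard module to URScript statements can be sketched as follows. The function names echo those used in the thesis, but the exact signatures, defaults and formatting shown here are illustrative assumptions rather than the actual YOUR code.

```python
# Hedged sketch of ur_standard-style functions that format URScript
# statements as Python strings (signatures and defaults are assumptions).

def movej(joint_angles, accel=1.0, vel=0.5):
    """Return a URScript movej statement for a list of joint angles in radians."""
    joints = ", ".join("{:.4f}".format(a) for a in joint_angles)
    return "movej([{}], a={:.2f}, v={:.2f})\n".format(joints, accel, vel)

def set_digital_out(pin, value):
    """Return a URScript statement that switches a digital output pin."""
    return "set_digital_out({}, {})\n".format(pin, "True" if value else "False")

def sleep(seconds):
    """Return a URScript statement that pauses the program."""
    return "sleep({:.1f})\n".format(seconds)

# Statements are concatenated into one program string before being sent.
script = movej([0.0, -1.57, 1.57, 0.0, 1.57, 0.0]) + set_digital_out(0, True)
print(script)
```

Because each call simply returns a string, components can concatenate statements in any order before a Sender-style component transmits the result over a socket.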
With the introduction of the package, robot programming was no longer limited to Grasshopper.
Alternatively, students could import the package and write scripts directly in Rhinoceros’s embedded
Python editor. The same script could also be written in a standard Python component in
241 It wrapped fourteen standard functions from the base URScript library. Only two of the components from the toolkit in the spring semester mapped directly to procedures from the URScript library.
242 The pose of a robot has both a position and orientation component. An orientation can be represented in various formats (axis-angle, rotation matrix, quaternions etc.).
Grasshopper. However, the Python editor offers more features, such as debugging tools, which make
it better suited for writing complex programs.
Previous version (left):

accel = float(accel)
vel = float(vel)
list_joint_angles = joint_angles.split("\r\n")
script = ""
for angles in list_joint_angles:
    script += "movej([%s], a = %.2f, v= %.2f)\n" %(angles,accel,vel)
a = script

New version (right):

accel = float(_accel)
vel = float(_vel)
joints = [float(j) for j in _joints]
import ur_standard
a = ur_standard.movej(joints, accel, vel)

Figure 4-30 Two iterations of the MoveJoints component. The newer version (right) has a similar signature, in
terms of inputs and outputs, to the previous one (left), but its script is shorter and more readable.
The introduction of the Python package also made the scripts encapsulated in components more concise
and readable. Non-task-specific functions and boilerplate code 243 were moved to the underlying
package. Scripts were limited to eighteen lines of code so that they could be viewed all at once
within the component’s text editor. 244 For example, the scripts in previous Pick, Glue and Place
components included a matrix_to_axis_angles function with seventy-three lines of mathematics
related code; this function was moved to the utils module. In addition, string formatting statements,
which introduced unfamiliar symbols (\r) and operators (%), were removed. 245 This helped to make
the code more readable (Figure 4-30).
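The kind of conversion that was moved into utils can be sketched as below. The actual matrix_to_axis_angles function in YOUR ran to seventy-three lines and presumably handled the singular cases more carefully; this minimal version covers only the regular case.

```python
# Minimal sketch of a rotation matrix to axis-angle (rotation vector)
# conversion, the representation URScript uses for pose orientations.
# Regular case only; angles at or near pi would need special handling.
import math

def matrix_to_axis_angle(m):
    """Convert a 3x3 rotation matrix (list of rows) to a rotation vector."""
    trace = m[0][0] + m[1][1] + m[2][2]
    angle = math.acos(max(-1.0, min(1.0, (trace - 1.0) / 2.0)))
    if abs(angle) < 1e-9:  # identity rotation
        return [0.0, 0.0, 0.0]
    s = 2.0 * math.sin(angle)
    axis = [(m[2][1] - m[1][2]) / s,
            (m[0][2] - m[2][0]) / s,
            (m[1][0] - m[0][1]) / s]
    return [a * angle for a in axis]  # unit axis scaled by the angle

# A 90 degree rotation about the z axis maps to the vector [0, 0, pi/2].
rz = [[0.0, -1.0, 0.0],
      [1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0]]
print(matrix_to_axis_angle(rz))
```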
243 Boilerplate code refers to sections of code that are replicated in many places.
244 The scripting component’s text editor can display eighteen lines of code without scrolling. By default, the editor cannot be too large as it will obscure the rest of the graph, thus exacerbating the ‘visual real estate’ problem that plagues visual programming environments in general.
245 Students gave feedback that such symbols made the code difficult to read. Therefore even in the YOUR package, string formatting statements were re-written using the .format rather than % style.
Figure 4-31 YOUR Grasshopper toolkit comprising eighteen Python scripting components; those introduced in
the fall semester are highlighted in green. Interface: SetTool, SetToolByAngles. Utility: Popup, SetDigitalOut.
Movements/Actions: MoveJoints, MoveAxis, Pick, Place, MoveLinear, MoveCircular, MoveLocal, OrientLocal,
FollowPath. Kinematics: Forward Kinematics, Inverse Kinematics. Communication: Sender, Listener.
At the same time, the graphical toolkit was also modified. The number of YOUR components rose
from ten to eighteen (Figure 4-31). First, a utility component was introduced that performed two
roles. It adds the package’s directory to the sys.path attribute, so that its modules can be imported
by components in the toolkit. It also allows users to store reference bases in a globally accessible
dictionary. The previous Orient component was subsequently removed. Instead, every motion
component that receives a target plane as an input was modified, and now performs the
transformation itself by looking up the appropriate reference base in the global dictionary (Figure
4-32). Hence, students would have to assemble fewer components, which potentially leads to
smaller Grasshopper programs.
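The two roles of the utility component can be sketched as follows. The package path is a hypothetical example; inside Rhino the global dictionary would be scriptcontext.sticky (the key model_base appears in the students' own scripts), for which a plain dict stands in here.

```python
# Sketch of the Utility component's two roles (the path is hypothetical;
# sticky stands in for Rhino's scriptcontext.sticky dictionary).
import sys

PACKAGE_DIR = "C:/your_package"  # hypothetical location of the YOUR package

# Role 1: make the YOUR modules importable by every scripting component.
if PACKAGE_DIR not in sys.path:
    sys.path.append(PACKAGE_DIR)

# Role 2: store reference bases in a globally accessible dictionary so that
# motion components can look them up and transform target planes themselves.
sticky = {}
sticky["model_base"] = ((0.0, 0.0, 0.0),   # origin
                        (1.0, 0.0, 0.0),   # x axis
                        (0.0, 1.0, 0.0))   # y axis

# A Pick-style component would later retrieve the base by name:
origin, x_axis, y_axis = sticky["model_base"]
```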
Figure 4-32 Compared to the previous version (left), the new Pick component (right) no longer needs to be
connected to an Orient component.
The previous Pick, Glue and Place components, which offered a process-centric level of abstraction,
were supplemented by SetDigitalOut, Popup, MoveLinear and MoveCircular. Together with
MoveJoints, these introduced components serve as lower level primitives that can be assembled
together to describe compound robotic operations. New custom motion components were also
introduced. MoveLocal and OrientLocal allowed students to specify target planes with respect to the
tool end-effector’s coordinate system, providing what Hägele et al. describe as a “tool centric” level
of abstraction. These components were suited for programming localised movements that
characterise formative processes such as folding. Finally, a FollowPath component was added that
allows students to use curves, a familiar geometric entity, to specify a motion path.
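The idea behind FollowPath, turning a curve into a sequence of motion targets, can be sketched for the simple case of a polyline, which avoids any Rhino geometry dependencies. The function name and sampling strategy are illustrative assumptions, not the component's actual code.

```python
# Sketch of a FollowPath-style helper: divide a path (a polyline here, to
# stay free of Rhino dependencies) into points spaced at equal arc length,
# each of which could serve as a linear move target.
import math

def divide_polyline(points, count):
    """Return `count` points spaced at equal arc length along a polyline."""
    lengths = [0.0]  # cumulative length up to each vertex
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        lengths.append(lengths[-1] + math.hypot(x1 - x0, y1 - y0))
    total = lengths[-1]
    targets = []
    for i in range(count):
        d = total * i / (count - 1)  # distance along the path
        for k in range(len(points) - 1):  # locate the containing segment
            if lengths[k + 1] >= d:
                seg = lengths[k + 1] - lengths[k]
                t = 0.0 if seg == 0 else (d - lengths[k]) / seg
                x = points[k][0] + t * (points[k + 1][0] - points[k][0])
                y = points[k][1] + t * (points[k + 1][1] - points[k][1])
                targets.append((x, y))
                break
    return targets

# An L-shaped path divided into five evenly spaced targets.
print(divide_polyline([(0, 0), (10, 0), (10, 10)], 5))
```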
Besides the Sender component, a Listener was introduced. It could be used to extract information
from the robot, such as its current joint angles or pose. Forward Kinematics and Inverse Kinematics
components were also introduced. They allowed students to generate a visualisation of the robot’s
configuration when target joint angles or planes are given respectively. Such functionality would be
useful in motion planning. For example, students could decide on new safety waypoints by
manipulating the virtual robot instead of the actual one. And they could pre-empt errors relating to
unreachability or collision by visualising the robot’s state in advance and making appropriate
corrections if necessary.
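The role of the Forward Kinematics component can be illustrated with a deliberately reduced example: a planar two-joint arm instead of the six-axis UR robot. The real solver is geometric and six-dimensional; only the principle, joint angles in and end-effector position out, is shown here, and the link lengths are merely example values.

```python
# Forward kinematics reduced to a planar two-joint arm for illustration
# (the actual kinematics module solves the six-axis UR robot).
import math

def forward_kinematics(l1, l2, q1, q2):
    """Return the (x, y) end position for link lengths l1, l2
    and joint angles q1, q2 given in radians."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# With both joints at zero the arm lies stretched along the x axis,
# so the end position is simply (l1 + l2, 0).
print(forward_kinematics(0.425, 0.392, 0.0, 0.0))
```

Inverse kinematics runs the other way: given a target plane, the solver finds joint angles, which is what makes reachability checks possible before any instruction is sent to the machine.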
4.5 Results: 2012 fall semester
For the fall semester, teams could continue working on their projects from the first semester or start
anew. In either case, they had to develop their high-rise proposals beyond the schematic level and
address a more comprehensive set of design issues including circulation and façade systems. The
following chapters describe how each team used the updated version of YOUR and implemented a
robot program according to their revised or new design and fabrication concepts.
4.5.1 Nested Voids
Figure 4-33 1:50 model representing the final design proposal (left). Voids (blue highlight), primary walls and
secondary wall systems in the high-rise (right).
The first project, Nested Voids 246, was a continuation of the Tiong Bahru Tower. The team developed
the concepts of internal voids and a branching wall system further in the fall semester. Void spaces
were now designed to puncture the envelope of the high-rise, creating terraces that served as sky
gardens. Walls were differentiated into primary ones that were organised around voids and
secondary ones that defined internal spaces. A new enclosure system was introduced that
comprised louvered screens. Figure 4-33 illustrates how these systems were integrated into the
high-rise.
246 The team comprised Pascal Genhart and Tobias Wullschleger.
Figure 4-34 Twist variations (left); and louvered screens formed internal partitions or were part of the tower’s
exterior façade (right).
The fabrication concept was to combine the assembly process developed in the first semester with
an acrylic deformation process developed in the second. The latter would be used to realise a new
enclosure system for the tower. It emerged out of a series of initial material investigations. The team
used a hot air gun to thermally deform acrylic sheets in order to create various optical and formal
effects. The eventual process involved twisting rectangular acrylic strips around their long axis.
These strips represented louvres in screens that either enclosed interior spaces or were part of the
exterior façade (Figure 4-34). By designing the twist and position of strips, the team could adjust the
permeability of these screens to modulate light and views.
Instead of developing the program from scratch, students extended the one used for fabricating the
previous Tiong Bahru Tower model. They implemented new functionality for assembling the
louvered strips and refined the rest of the program. Figure 4-35 shows the result. It had a graphic
token count of 4096 and was structured in eleven parts. It referenced a digital model of the high-rise
design that was generated by a separate Grasshopper program.
Figure 4-35 The Grasshopper program used to fabricate the final model was organised in eleven parts. Part 1 was
used to set up a reference base and load YOUR; part 2 was used to specify the location of the feeder;
parts 3 and 4 generated the picking and placing targets for floors and walls respectively; part 5
generated picking and placing targets, as well as twisting angles for the acrylic strips; part 6 was for
making speed and movement adjustments; part 7 was for moving the axis; parts 8 and 9 generated
instructions related to the assembly of floors and walls respectively; part 10 generated instructions for
twisting and assembling louvres; and part 11 was the control interface.
Part 1 contained the utility component that was used for referencing the model base and loading
YOUR into Grasshopper. Part 2 was used to specify the location of the feeder, which could be
adjusted as a way to correct for picking inaccuracies. Parts 3 and 4 generated the picking and
placing targets for floors and walls respectively. Part 5 was responsible for generating picking and
placing targets, as well as twisting angles for the acrylic strips. The main role of the sub-graph in part
6 was to adjust the movement targets and speeds for placing operations.
Parts 7 to 10 of the graphical program generated formatted instructions for the robot. Part 7
contained a MoveAxis component from the toolkit for controlling the axis machine. Parts 8 and 9
were related to the assembly of floors and walls respectively. The team used the new MoveLocal
component from the updated toolkit for the gluing operation in part 9. Otherwise both sub-graphs
were virtually unchanged from their corresponding parts in the Tiong Bahru tower program.
Part 10 was the main addition to the graph. It specified three sequential operations for assembling
louvered strips in the model. The first operation was to pick the acrylic strip from a stack on the
feeder. However, instead of using a Pick component, the team assembled MoveLinear and
MoveJoints components in order to plan the motion path explicitly and include additional safety
waypoints. This prevented the robot from colliding with the feeder as it was picking strips from the
stack.
Figure 4-36 An acrylic strip is placed in a clamp (left), heated for 7 seconds (middle) before being twisted
(right).
The second operation was to deform the strip (Figure 4-36). There were four steps involved (Figure
4-37). The strip was: (1) inserted into the automated clamp; (2) heated up till it was pliable; (3)
twisted by the robot then cooled; and (4) released from the clamp and pulled out. MoveLinear and
MoveJoints components were used to generate instructions for the insertion and twisting motions
respectively. The students also wrote URScript commands directly in panels to switch on/off the
heating gun, open/close the gripper and clamps, as well as specify waiting times.
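The four-step twisting sequence can be sketched as a single instruction string, in the spirit of the components and panel commands the team combined. The pin numbers, cooling time and helper name are assumptions (the 7-second heating time is from Figure 4-36); in the actual setup, students wrote some of these URScript commands directly in panels.

```python
# Hedged sketch of the strip-twisting sequence as one URScript fragment.
# Pin numbers and the cooling time are assumptions, not the team's values.

HEAT_PIN, CLAMP_PIN = 1, 2  # hypothetical digital output pins

def twist_strip(insert_cmd, twist_cmd, heat_s=7.0, cool_s=5.0):
    """Return a URScript fragment: clamp, insert, heat, twist, cool, release."""
    script = ""
    script += "set_digital_out({}, True)\n".format(CLAMP_PIN)   # close clamp
    script += insert_cmd                                        # insertion move
    script += "set_digital_out({}, True)\n".format(HEAT_PIN)    # heat gun on
    script += "sleep({:.1f})\n".format(heat_s)                  # wait until pliable
    script += "set_digital_out({}, False)\n".format(HEAT_PIN)   # heat gun off
    script += twist_cmd                                         # twisting move
    script += "sleep({:.1f})\n".format(cool_s)                  # let strip cool
    script += "set_digital_out({}, False)\n".format(CLAMP_PIN)  # release clamp
    return script

print(twist_strip("movel(p[0,0,0.1,0,0,0], a=0.5, v=0.1)\n",
                  "movej([0,0,0,0,0,1.57], a=0.5, v=0.1)\n"))
```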
Figure 4-37 The strip twisting process has four steps—insert, heat, twist and retract (left); the instructions are
fed into a weave component (right).
The third operation was gluing. Here the robot moved to a pre-defined configuration and paused,
while glue was applied to the strip manually. The last operation was placing. The students used
MoveLinear components to generate movement instructions and wrote IO commands in panels to
open/close the gripper. A weave component was finally used to sequence all the instructions for the
four operations together. It had a total of thirty-five input streams (Figure 4-37-right).
Part 11 of the program was the control interface. It contained Sender from the YOUR toolkit, which
was the terminal node of the entire graph. It included panels whose values could be changed to
specify a storey to be built, as well as to make corrections to the assembly process. The students
decided whether to assemble floors, walls or louvres by connecting the outputs of their respective
sub-graphs (8, 9 and 10) to the sender.
The team represented their final design proposal through a sectional model in order to depict the
high-rise’s interior voids. The final model was built in three parts before being stacked on top of one
another (Figure 4-38). It was completed in six days. 943 unique walls—704 primary and 239
secondary—as well as 4064 acrylic strips were robotically assembled. Furthermore, 2451 strips were
twisted before placing. While the implemented program could be used to assemble floor slabs as
well, the team realised that it was simpler and faster to place them by hand. This was because the
louvres on one storey traced the boundary of the floor slab above, thus providing a convenient
reference system.
Figure 4-38 The model construction process (left); final model (right).
The team had essentially been extending and refining the same robot program over the course of
the entire studio as their fundamental fabrication concept remained unchanged. They had optimised
the robotic movements accordingly to their design and also knew how to make appropriate
adjustments during the assembly process. Hence, they were able to fabricate their model almost
entirely by robotic means and with the requisite accuracy. As a consequence of this success, their
program was adopted as the basis for a robot programming setup to be used in the subsequent DRS.
4.5.2 Bent Stratifications
The Bent Stratifications team 247 developed a fabrication process first before commencing with the
design of their high-rise. They selected generic 2 cm wide acrylic strips as the basic modelling
element and implemented a custom thermal deformation process to bend them (Figure 4-39). Strips
could be individualised by varying the angle, location and number of introduced bends. They were
vertically stacked in an upright position to compose larger space-defining architectonic elements,
namely walls. A range of such elements with different planar configurations and heights could be
realised. In this way, the student team first derived a vocabulary of constructible forms and then
applied them directly in their design. Moreover, the idea to introduce sectional variation originated
from the layer-based modelling process (Figure 4-39-right).
Figure 4-39 A section of the strip is heated up (left); the robot then bends the strip before it is cooled (right).
The final high-rise comprised three separate leaning sub-towers that merged together at different
heights. Each sub-tower was uniformly extruded from a rectangular footprint at an angle. While the
overall tower had a regular exterior form, its interior was differentiated. Split levels were introduced
for almost every storey and the floor-to-ceiling heights were varied. At the same time, a mixture of
programs—apartments, hotel rooms and offices—was distributed throughout the tower. Walls were
configured to shape internal spaces according to these programmatic requirements.
247 The team comprised Michael Stünzi and Sylvius Kramer.
Figure 4-40 1:50 model representing the final design proposal (left); example floorplan of apartments and
hotel lobby (middle); section of high-rise (right)
For the model, strips that formed part of a wall had two, three or four bends, while those that
represented beams could have up to ten. All strips were variations of approximately twenty basic
configurations. 248 The student team’s objective was to quickly implement a working program to
prove their fabrication concept and verify that strips could indeed be produced at the required level
of precision. 249 Their strategy was to first fabricate the simplest two bend strip, before incorporating
additional bends. At the beginning, the student team worked primarily with standard YOUR
components. By assembling them in different sequences, they were able to prototype a working
program and commence with empirical fabrication tests.
248 Strips were generated by morphing between corresponding walls on two key floorplans. Since those walls had different configurations, the resulting strips varied in terms of their bend angles or segment lengths.
249 Strips needed to be precisely fabricated in order to line up when they were stacked. This would ensure that loads could be transferred downwards in a straight path. Otherwise the structural integrity of the model could be affected.
Figure 4-41 An initial implementation of the robot program. It is used to produce and place a strip with five
bends.
Figure 4-41 shows an early Grasshopper program that produced a strip with five bends (beam). Part
1 contained the Utility component for referencing the physical model base and the underlying YOUR
package. Part 2 contained parameters that stored input polyline curves drawn in the digital model
which described the intended form. Part 3 contained standard Grasshopper components that
generated input data needed by YOUR components, such as target planes, from the reference
polyline. Part 4 specified the sequence of robotic operations needed for the fabrication process. It
had twenty-two YOUR components—four components were repeated five times, once for each bend,
and the last two (4b) were related to placing. Finally, part 5 contained the Sender component and
controls for sending formatted instructions to the robot.
Figure 4-42 Three YOUR components (SetTool, Pick and a modified MoveLinear) were used repeatedly to carry
out the steps in the bending process.
Figure 4-42 shows in greater detail part 4a of the implemented program. It contained eight YOUR
components that generated robotic instructions for bending a strip twice. In step 1, the robot
gripped the strip and pulled it into position for bending and then readjusted its grip. This operation
could be directly programmed using the standard SetTool and Pick components. In step 2, the strip
was heated up (2a) and then bent by the robot before being cooled (2b). To program this operation,
students used SetTool to first create a virtual pivot point and then MoveLinear to rotate/bend the
strip around it. They modified the latter component by adding statements calling set_digital_out and
sleep functions from the underlying ur_standard module to trigger the hot air and cool air actuators.
To create additional bends, the student team simply copied components for step 1 and 2 repeatedly.
As a result, they were able to progress from the simplest two-bend strips to the most complex within
a short amount of time.
Figure 4-43 The final implemented robot program. Parts 3 and 4 contain the two custom components.
Once the student team had fine-tuned the program and verified the correct parameter values, such
as heating and cooling times, for producing accurate bending results, they began to abstract it by
collapsing the sub-graphs in parts 3 and 4 into single Python scripting components. Figure 4-43
shows the resulting program which was used for fabricating the final model. The two new user
defined components were named CurvesToPlanes (3) and PlanesToURScript (4). By switching to
scripting, the student team could utilise iteration abstractions, which are unavailable in Grasshopper,
to implement repetitive logics in their process.
......
1   ###BENDING###
2   ###CHOOSE CORRECT PLANES###
3   RefPlRobotAdj = RefPlRoboter[((i*6)+3):((i+1)*6)]
4   trans_planes = [utils.rhino_to_robotbase(rp, sc.sticky["model_base"]) for rp in RefPlRobotAdj]
5   num_planes = len(trans_planes)
6
7   ###SET IOs###
8   script += ur_standard.set_digital_out(iOClamp, True)
9   script += ur_standard.set_digital_out(iOHotAir, True)
10  script += ur_standard.sleep(SecHotAir)
11  script += ur_standard.set_digital_out(iOHotAir, False)
12
13  ###BENDING ITSELF###
14  for j in range(num_planes):
15      script += ur_standard.move_l(trans_planes[j], Accel/2, Vel/2)
16
17      if j == 0:
18          script += ur_standard.set_digital_out(iOColdAir, True)
19          script += ur_standard.sleep(SecColdAir)
20          script += ur_standard.set_digital_out(iOColdAir, False)
21          script += ur_standard.set_digital_out(iOClamp, False)
22
23      if j == (num_planes-2):
24          script += ur_standard.set_digital_out(iOClamp, True)
25          script += ur_standard.sleep(0.5)
26          script += ur_standard.set_digital_out(iOGripper, True)
......
Figure 4-44 A portion of the script in the PlanesToURScript custom component.
In the case of CurvesToPlanes, statements calling RhinoScript functions, which mapped directly to
the Grasshopper components used, were placed in the body of a for loop. Figure 4-44 shows a code
snippet from the script in PlanesToURScript that is analogous to the subgraph in Figure 4-42 (left).
Statements calling YOUR functions, which were related to the bending operation, were placed in the
body of the loop (Figure 4-44: line 14 onwards). However, rather than script from a blank slate, the
student team first copied code encapsulated in the YOUR components they wanted to abstract and
edited it accordingly.
The size of the Grasshopper program was reduced by almost 85% as a result of creating these two
custom components. Because the final notation was compact, all objects—parameters and
buttons—that had to be changed or interacted with during the fabrication process were visible at
once. Besides being more readable, the program was also simpler to control. The team member who
did not develop the program was able to, within a short amount of time, understand and run it
independently. This led to improved collaboration during the fabrication process. The final model
was assembled out of 2064 plastic strips that were stacked in 140 layers (Figure 4-45). Every strip
was bent and placed by the robot. 220 laser-cut floor slabs were manually placed and glued using
the strips which represented beams as a guide. In total the team required eleven days of production
time to complete their model.
Figure 4-45 The model construction process (left); final model (right).
4.5.3 Undulating Terraces
The Undulating Terraces team 250 took a similar approach to the Rochor Tower group and developed
a model construction process first. Their concept was to deform a continuous material into curved
forms that would represent architectonic elements. Figure 4-46 illustrates this process. First a paper
strip is formed into a loop. A second layer is created by affixing the remainder of the strip to the
inner loop at multiple points. In order to accommodate the outer layer’s excess length, the paper
strip naturally assumes an undulated shape. By controlling the position of fixation points and the
outer layer’s length, different curved forms can be produced.
250 The team comprised Sebastian Ernst, Sven Rickhoff and Silvan Stohbach.
Figure 4-46 The paper strip deformation process.
The team developed their high-rise designs based on the formal vocabulary derived from the
material process. Their final proposal was for a 35 storey high-rise with a central open-air atrium
(Figure 4-47). Each storey had a sinuously shaped floor-plan that was unique. Floor slabs
cantilevered outwards and were bound by balustrades which also functioned as shading devices.
Wide corridors were created that looped around an entire level and served as communal spaces. An
intermediate layer of screens was introduced between the balustrades and exterior apartment walls,
defining a threshold between the public and private zones.
Figure 4-47 Section of the high-rise (left). A typical storey consists of 1) a floor slab and continuous wall, 2)
balustrades, and 3) structural beams and screens (right).
Initially, the student team built quick study models by hand to understand how paper deformed.
They then began to develop a robot end-effector that could enact this manual process. Figure 4-48
shows its evolution. First, the end-effector was designed to perform one function—staple paper
strips together. It twisted the strips during the stapling operation to produce a curved result.
However, the fixation points had to be marked beforehand and the paper strips manually fed. In the
next stage, the objective was to automate the entire process. The end-effector would pull paper out
of attached rolls and then staple them together. The robot could produce the curved strips and
assemble them as a result.
Figure 4-48 Evolution of the end-effector: the first (leftmost) version only stapled strips; the second added a
shifting mechanism; the third incorporated a twisting mechanism instead; the fourth included a mechanism to
pull paper from attached rolls; and the final version used a shifting mechanism with a different nozzle system.
However, in spite of developing increasingly complex end-effectors, the team failed to fully
automate the process. They consequently revised the fabrication concept and focused once again on
a collaborative human-robot building approach. They decided to simplify the end-effector and make
the paper feeding operation a manual one. In addition, the end-effector would shift one of the layers
rather than twist it prior to stapling (Figure 4-49). As a result, more exaggerated curved forms could
be produced and strips did not need to be marked in advance.
Figure 4-49 The end-effector grips the two strips, shifts one of them, and then staples them.
The development of the physical end-effector and the robot program was tightly linked. Each new
version of the end-effector differed from the preceding one in terms of physical capabilities. Therefore, a
new control program had to be implemented each time. For the first end-effector, the team scripted
within a custom Python component and called functions from the ur_standard and ur_custom
modules. Once they had verified that it worked, they abstracted the script as a function and named
it staple. It was defined in a new Python module—ur_g_four—that was added to the standard YOUR
package.
This approach was repeated when subsequent end-effectors were developed. The team expanded
ur_g_four with new functions after they had been tested as scripts in graphical components. For
example, they included functions such as human_interaction, which paused the robotic process and
allowed students to perform actions such as feeding paper strips or refilling staples. By creating
these new abstractions, the team was in fact defining an extended vocabulary of robotic operations,
which was specific to their human-machine collaborative fabrication process.
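A function like human_interaction can be sketched with URScript's built-in popup statement, which can block the program until the operator confirms on the teach pendant. The team's actual code in ur_g_four is not reproduced in full in the thesis, so this formulation is an assumption.

```python
# Sketch of an ur_g_four.human_interaction-style helper built on URScript's
# popup statement; blocking=True halts the robot program until the operator
# presses continue on the teach pendant.

def human_interaction(message):
    """Return URScript that pauses the program with a blocking popup."""
    return 'popup("{}", title="Action required", blocking=True)\n'.format(message)

# e.g. pausing so a student can feed in fresh paper strips
script = human_interaction("fresh?")
print(script)
```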
Figure 4-50 shows the team’s final program, which described the procedural logic for generating the
high-rise design, and for producing the physical model. It had a total graphic token count of 1083
and was organised in nine parts. Parts 2 to 7 were design related. The main input to this section of
the graph was a curve, drawn in the digital model, which described the outline of the central atrium.
It was stored as a parameter in part 2, and was fed, together with other design parameters stored in
part 3, to parts 4, 5, 6 and 7. In turn, these parts generated representations of the balustrades,
beams and screens, exterior apartment walls, and floor slabs respectively.
Meanwhile parts 1, 8 and 9 were production related. Part 1 loaded the extended version of YOUR
into Grasshopper and specified information regarding the model base and end-effector. Part 8 was
connected to part 4. It generated the robotic instructions for producing curved paper strip
representations of the balustrades. However, due to time constraints, the team did not implement
equivalent subgraphs for producing the beams, screens or wall model elements. Their end-effector
needed to be further modified in order to fabricate the other elements. 251 Finally, part 9
was the control interface. It contained the Sender component as well as a MoveJoints component
that was used to reset the robot’s position.
Part 8 contained a single custom component (Figure 4-50: 8A) that generated the robot instructions
for producing balustrade elements. Students wrote its script (Figure 4-51) from scratch. In the first
section of the script, standard as well as custom (ur_g_four) YOUR related modules (Figure 4-51:
lines 5–8) were imported. The instructions for deforming the paper strip were generated in the body
of a for loop (Figure 4-51: lines 33–49). Figure 4-52 illustrates the steps in this deformation process.
In step 1a, the robot was instructed to move to a start position (Figure 4-51: line 32) and await a
prompt from the student to begin (Figure 4-51: line 33). In step 1b, the vacuum nozzles were
switched off (Figure 4-51: lines 34–35) and the end-effector moved along the length of the strip to
the fixation point (Figure 4-51: line 37). A conditional if statement was used to select a pair of
nozzles (Figure 4-51: lines 38–41) to switch on, thus gripping one of the strips. The student checked
that the strip was gripped and then prompted the robot to continue (Figure 4-51: line 44). In step 1c,
the robot shifted the strip by a distance corresponding to the desired excess length (Figure 4-51: line 45).
The two paper strips were then stapled together (Figure 4-51: lines 47–49) and began to deform as a
result. The loop repeated these steps (step 1, step 2, etc.) until a single balustrade was completed.
251 For example, a different nozzle system was needed to grip wider strips in the case of walls.
Figure 4-50 The final Grasshopper program was structured in nine parts. Part 1 loaded YOUR and handled
setup; parts 2 and 3 stored design parameters; parts 4, 5, 6 and 7 generated representations of the
balustrades, beams and screens, exterior apartment walls, and floor slabs respectively; part 8
generated instructions for producing the curved balustrade elements; and part 9 was the control
interface.
......
5   import ur_standard
6   import utils
7   import ur_g_four
8   import ur_custom
......
31  for _p in range(len(_move)):
32      script += ur_standard.move_l(_planes,_accel,_vel)
33      script += ur_g_four.human_interaction("ziehen")
34      script += ur_standard.set_digital_out(3,False)
35      script += ur_standard.set_digital_out(4,False)
36      script += ur_standard.sleep(_t)
37      script += ur_custom.move_local(_move[_p],_accel,_vel)
38      if x[_p] < 0:
39          _startIO = 3
40      else:
41          _startIO = 4
42      script += ur_standard.set_digital_out(_startIO,True)
43      script += ur_standard.sleep(_t)
44      script += ur_g_four.human_interaction("fest?")
45      script += ur_custom.move_local(_shift[_p],_accel,_vel)
46      script += ur_g_four.human_interaction("fresh?")
47      script += ur_standard.set_digital_out(3,True)
48      script += ur_standard.set_digital_out(4,True)
49      script += ur_standard.set_digital_out(0,True)
50  ......
Figure 4-51 The script in the custom paper deformation component contains a for loop. Instructions for
gripping, shifting and stapling the strips are defined in the body of this loop.
Figure 4-52 The robot: 1a) moves to the start position and waits for the strips to be positioned, 1b) moves to
the fixation point and grips one of the strips, and 1c) then shifts it back before stapling the two strips together.
These steps are then repeated (2a, 2b, 2c etc.).
The final tower model had thirty-five storeys (Figure 4-53: right). In total, 980 70 mm wide paper
strips were used to fabricate walls, while 1120 25 mm strips were needed to make the balustrades
and beams. Compared to, for example, the Nested Voids or Bent Stratifications models, the degree
of manual building involved here was considerably higher. In the end, only the balustrades were
produced using the human-robot collaborative building process (Figure 4-53: left). As the connection
points on the strips did not have to be measured and marked in advance, the balustrades were
completed in a single day. Meanwhile, floor slabs were cut out of cardboard and covered with a
print-out of the floor plan. The team then manually assembled the balustrades, walls and beams
onto the slab using the plan as a guide. The storeys were then stacked on top of one another to
complete the tower.
Figure 4-53 The model fabrication process (left); final model (right).
4.6 Interview: 2012 Design Research Studio
A formal interview was conducted with each team at the end of the DRS (see Chapter 9—Appendix).
First, students were asked to evaluate whether YOUR was successful in making robot programming
accessible. Students responded that the initial approach of providing a graphical toolkit of YOUR
components, a sample program and a prepared robotic process was effective in allowing them to
engage immediately in robot programming. 252
They stated that the toolkit was easy to use as there were only a small number of components to
learn. Furthermore, they could infer how high level components such as Pick, Glue and Place should
be used, as they directly described steps in the assembly process. 253 Students also stated that the
underlying concept of YOUR was straightforward, as it simply involved connecting components, each
corresponding to a command, in the proper sequence to the Sender. 254 Some students compared the
process of assembling components with that of writing equivalent code. They stated that the latter
252 All teams managed to successfully fabricate 1:100 scale models by the first week.
253 Green and Petre argue that programming is easier if the language offers entities that map directly to the problem domain. Green and Petre, “Usability Analysis of Visual Programming Environments”, 136.
254 One student reported that “it was very intuitive. You just plug in geometry into a component and get a command string, and then you send that to the robot.”
required more effort because there were numerous syntactic rules to be aware of when writing a
statement. The former felt more intuitive, because it required fewer actions and there was
immediate feedback 255—they could view the generated instructions in a panel as well as the robot’s
physical response 256.
However, students also noted that several visual programming concepts were difficult to learn, yet
vital if they were to implement more complex algorithms. In particular, data-trees, which are
Grasshopper specific data structures, proved to be a major abstraction barrier. For example, the
Undulating Terraces team stated that one of their main difficulties with using the toolkit was in
combining the outputs of multiple YOUR components, which were oftentimes structured
differently. 257 They had to learn unfamiliar concepts and operations—such as data-matching,
flattening and grafting—in order to manipulate these data-trees and ensure that the final set of
instructions was sequenced correctly. 258
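The data-tree operations mentioned here can be illustrated with ordinary nested Python lists; this is only a conceptual sketch, not Grasshopper's actual data-tree implementation.

```python
# Conceptual sketch of two Grasshopper data-tree operations using
# nested Python lists (not Grasshopper's actual data structures).

def flatten(tree):
    """Merge all branches of a (possibly nested) tree into one list."""
    flat = []
    for item in tree:
        if isinstance(item, list):
            flat.extend(flatten(item))
        else:
            flat.append(item)
    return flat

def graft(branch):
    """Wrap each item of a branch in its own single-item branch."""
    return [[item] for item in branch]

# Outputs of two components may be structured differently:
pick_cmds = [['pick_0'], ['pick_1']]   # one branch per element
place_cmds = ['place_0', 'place_1']    # one flat branch

# Flattening the first tree allows the two outputs to be matched
# item by item into one correctly sequenced instruction list:
sequence = [cmd for pair in zip(flatten(pick_cmds), place_cmds)
            for cmd in pair]
```

Until the tree structures are matched in this way, instructions from different components cannot be interleaved in the correct order, which was precisely the difficulty the team described.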
In addition, teams were asked to compare the spring and fall semester versions of YOUR and discuss
how their programming approach was impacted by the different tools. Students responded that
components from the fall semester toolkit, while outwardly and functionally similar to those from
before, were easier to modify because the encapsulated scripts were more concise and readable. 259
Another key difference was that they had access to a wider range of abstractions for robot
programming in the fall semester. Besides the toolkit components, they could directly invoke
functions defined in the Python package within a scripting context. Consequently, students were
able to mix visual and text programming in the fall semester, which they did to varying degrees.
The Nested Voids team worked primarily with standard YOUR components with slight modifications
to their code. They persisted with a graphical approach because they wanted to re-use their
program from the spring semester, which had already been rigorously tested. The Bent Striations
team prototyped their initial robot program using standard YOUR components, then translated
portions of the graph into scripts that were encapsulated in two custom components. Consequently,
they were able to reduce the size of their Grasshopper program and make it simpler to use, which
benefitted them during the production phase. The Undulating Terraces team adopted a scriptingA student described the process as akin to “plugging in and getting an instant result”.
In the latter case, students can have a live programming experience by switching on the toggle connected
to the Sender component. Each change made to the graph results in an immediate robotic response.
257
The output of a component is stored in data-trees that are automatically generated by Grasshopper
according to the structure of the component’s input data.
258
The student stated that “you have to weave the data all together and find the correct sequence. And this
makes programming with YOUR components difficult.”
259
One student answered that “it was much easier to modify components in the second semester, because
[for example] matrix related functions were not included in the script, so it was much easier to read.”
255
256
100
centric approach from the outset and focused on creating a single custom component for their
robotic process. 260 They wanted to utilise Python’s control abstractions to implement repetitive and
conditional logics underlying their process. The team also converted the scripts in their custom
component or parts of it into functions, and saved them in a module that was added to the YOUR
package.
Teams were also asked to identify missing functionality in YOUR. In general, they did not feel that
the toolkit was lacking in any critical components. A member of the Bent Striations team stated that
he was confident of implementing missing functionality using the lower-level abstractions offered by
the Python package. 261 Instead of expanding the graphical YOUR toolkit for the following DRS, he
suggested that students should be taught to create their own components. The issue of extending
YOUR’s simulation functionalities, which was limited to visualising individual states of the robot’s
configuration, was raised by the author. However, teams were ambivalent about it. Several students
considered simulation to be extraneous since they could work directly with the physical robot. 262
Moreover, they felt that concepts such as singularities could be best understood by observing the
robot’s dynamic movements.
Finally, students were asked to discuss what factors prevented them from achieving more design-build iterations. All teams identified the slow speed of the fabrication process as a critical
problem. 263 For example, the Nested Voids and Bent Striations teams had to assemble in excess of
2000 acrylic strips, which furthermore, had to be thermally deformed. In the latter case, it took more
than five minutes to produce a strip with ten bends. The Rochor Tower/Undulating Terraces team
tried to address this problem by developing fabrication concepts which incorporated manual
building. Students also stated that it took a long time to set up an accurate and robust robotic model
fabrication process. On one hand, this was because a significant amount of time had to be invested
in developing the physical end-effector for a custom process. 264 On the other hand, a lot of empirical
testing was required to fine-tune parameter values in the robot program to account for
unpredictable material behaviour.
260 The students stated that “we changed strategy [in the fall semester] and only created one custom component.”
261 The student answered that “I did not get the feeling that something was missing because with the Python library, I could program it myself.”
262 A student stated that “I never had the feeling that I needed to simulate the robot before moving it.” Another responded that “a virtual simulation would be fancy, but the actual robot is right there.”
263 Students described the slow building process as being a “limiting factor” and a “bottleneck”.
264 This was especially the case for the Undulating Terraces team. A member stated that “the semester was all about gripper design” and they arguably invested too much time in developing increasingly complex setups.
4.7 Robot programming setup: 2013 spring semester
At the end of the 2012 DRS, the core functionality for YOUR had been developed and empirically
tested. At this point, the option of compiling YOUR was explored. It was rewritten in C# and a
Dynamic Link Library (Your.dll) and Grasshopper Assembly (YourGrasshopper.gha) were created.
Consequently, the toolkit would consist purely of compiled Grasshopper components, instead of
Python components that encapsulate pre-written scripts. However, it would be possible to add a
reference to Your.dll in any scripting component and hence access its functions and classes. This is
illustrated in Figure 4-54. One advantage of this approach is that compiled components can be
developed with custom graphical interfaces to improve their usability, with Godzilla’s timeline
component 265 being a good example.
Python script
1  import DFAB.Your as your
2  target_pose = your.URScript.Pose(target)
3  a = your.URScript.StandardFunctions.MoveL(target_pose)

C# script
1  var targetPose = new DFAB.Your.URScript.Pose(target);
2  A = DFAB.Your.URScript.StandardFunctions.MoveL(targetPose, accel, vel);

Visual Basic.NET script
1  Dim targetPose As New DFAB.Your.URScript.Pose(target)
2  A = DFAB.Your.URScript.StandardFunctions.MoveL(targetPose, accel, vel)

265 See Chapter 2.3.2.
Figure 4-54 The compiled version of the MoveLinear component is shown at the top, above the three scripting
components (Python, C# and Visual Basic respectively). The corresponding code in each scripting component is
shown below; each component references the Your.dll.
However, the disadvantage is that code in compiled components and the underlying library are no
longer accessible to end-users. The results from the 2012 DRS showed that it was important to
expose the script in YOUR components for pedagogic reasons. Students learned what YOUR
functions were available and how they should be used by inspecting the script. They transitioned to
scripting by first making selective modifications to the code. It may be argued that the inner
workings of functions and classes offered by the Python library should be hidden. However, by
disclosing implementation details, students are exposed to robotics related mathematics and
kinematics concepts, as well as more advanced general programming concepts. Such knowledge is
pertinent when they want to eventually define custom abstractions, either in the form of
components or module functions, for describing new robotic operations. And indeed, one of the
previous groups—Undulating Terraces—studied the implementation of the YOUR package and
added custom modules to it.
The decision was to not compile 266 YOUR, thus departing from the approach taken by other
Grasshopper solutions such as KUKA|prc, HAL and Godzilla. The concept of an open, self-disclosing
library was retained. Instead of compiled components, those in the YOUR toolkit remained as scripting
components, but were saved and distributed in the form of user objects. 267 This allows them to
appear as standard Grasshopper components, whereby they are accessible from the standard drop
down menus and are revealed in the auto-completion widget. Consequently, YOUR user objects are
functionally equivalent to their compiled versions, but offer the bonus of having accessible code.
266 When a program is compiled, it is translated from the high-level language it was written in to a machine language equivalent. The latter is no longer human-readable.
267 A user object is a Grasshopper abstraction allowing users to save components or clusters.
Interface: LoadPython, SetTool, SetToolByAngles, Popup, SetBase, SetDigitalOut
Movements/Actions: Pick, Place, MoveJoints, MoveLinear, MoveCircular, MoveLocal, OrientLocal
Kinematics: ForwardKinematics, InverseKinematics
Communication: Sender, Listener, SendLocalMotions
Figure 4-55 YOUR Grasshopper toolkit comprising eighteen user objects; those introduced in the spring
semester are highlighted in green.
Figure 4-55 shows the version of the YOUR toolkit at the start of the 2013 DRS. It differed slightly
from the toolkit handed out to students in the fall semester of the 2012 DRS. The components
highlighted in green were new. A Utility component from the previous toolkit was split into two
more role-expressive, single-purpose components. The LoadPython component, as its name suggests,
allowed students to load any Python module into Grasshopper, including YOUR, while SetBase
allowed them to define a reference base and store it in a global dictionary. SendLocalMotions
mimicked the jogging functionality of the teaching pendant. Students could move the robot’s tip
up/down/left/right in the tool coordinate system by pressing buttons and thus control the robot
directly through the Grasshopper program. The Glue component was removed from the toolkit. Its
name was misleading because the output was simply a series of movement instructions. Its
functionality could be replicated by feeding a list of target planes, corresponding to the waypoints in
the gluing motion, to the MoveLinear component.
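How a gluing motion reduces to a chain of linear moves can be sketched as follows; the move_l helper is a simplified stand-in for the MoveLinear component's command generation, and all pose values are illustrative.

```python
# Sketch of replicating the removed Glue component: a gluing motion
# is simply one linear move per waypoint plane. move_l is a
# simplified stand-in for MoveLinear's command generation and the
# pose values below are illustrative.

def move_l(pose, accel, vel):
    return 'movel(p[%s], a=%.2f, v=%.2f)\n' % (
        ', '.join('%.3f' % c for c in pose), accel, vel)

def glue_motion(waypoint_poses, accel=0.3, vel=0.05):
    """Chain one linear move per waypoint of the gluing path."""
    return ''.join(move_l(pose, accel, vel) for pose in waypoint_poses)

waypoints = [
    (0.40, 0.10, 0.20, 0.0, 3.14, 0.0),  # approach
    (0.40, 0.30, 0.20, 0.0, 3.14, 0.0),  # glue pass
    (0.40, 0.30, 0.30, 0.0, 3.14, 0.0),  # retract
]
script = glue_motion(waypoints)
```

Since the output is just a sequence of linear moves, no dedicated Glue abstraction is needed, which is why the component was dropped.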
It was decided that for the spring semester, students would fabricate their models using the robotic
assembly process that was first introduced at the start of the 2012 DRS, and then subsequently
improved by the Nested Voids team over the rest of the year. A physical setup, consisting of an
integrated feeder/gluing station and 45 degree vacuum grippers, was prepared in advance for them.
Students were also given, besides the toolkit, a sample Grasshopper program to be used in
conjunction with the setup. This program (Figure 4-56) was based on the one developed by the
Nested Voids team. 268 It had a total graphic token count of 3399, and was laid out in five broad
columns.
Figure 4-56 Sample Grasshopper program given out at start of 2013 studio for programming the prepared
assembly process.
The first column was for setting up the process and visualising the robot. It contained the
LoadPython, SetBase, SetTool and ForwardKinematics YOUR components. The second column was
responsible for generating all the necessary parameters for the robotic operations from design
geometry stored in parameters (coloured in red). There were two separate subgraphs in this column.
The top one was related to picking and placing standardised elements from a stack. The bottom one
related to picking and placing individualised elements from a laser-cut sheet located on the feeder
station. These sub-graphs only contained standard Grasshopper components.
268 The program was implemented by Tobias Wullschleger (member of the Nested Voids team).
The third column was responsible for generating the sequence of robot instructions for those
operations. It comprised four separate sub-graphs. They were related, in order from top to bottom,
to the assembly of: vertical elements from a stack; horizontal elements from a stack; vertical
elements from a cut-sheet; and horizontal elements from a cut-sheet. These sub-graphs used five
types of YOUR components: Pick, Place, MoveLinear, MoveJoints and MoveLocal. 269
The fourth column was for choosing which instructions to send to the robot. It contained standard
Grasshopper components for selecting items out of data-trees that stored the instructions
generated in column 3. The last column contained the Sender component, which was the terminal
node of the graph, as well as controls for making pick and place adjustments in the assembly process.
The graphical notation had to be carefully designed in order to ensure that it was readable despite
its large size. It was laid out so that data generally flows from left to right. Sub-graphs were spaced
apart and positioned to minimise wire crossings between them. Related components were chunked
together in groups that were descriptively named. All groups, as well as panels, were colour-coded
according to a pre-defined style guide. For example, panels storing fabrication related parameters,
like movement speeds, were always cyan in colour.
The concept was for a ‘plug and play’ robot program. Students would simply connect their design
geometry to the starting nodes of the graph and run the program. At most, they were expected to
adjust controls or change parameter values in the program, but not make structural changes to the
graph.
269 MoveJoints components were used to plan safety motions, while MoveLocal was used to specify fine movements, such as a slight retraction after placing. Sequences of MoveLinear components were sometimes used in place of the Pick component.
4.8 Results: 2013 spring semester
For the 2013 spring semester, students were restricted to working with the prepared robotic
assembly process. They were to focus on developing a computational system or “engine” for
generating their high-rise designs, rather than on implementing custom fabrication processes, as
teams did in the 2012 spring semester. The objective was to have more fully developed high-rise
design proposals by the end of the first semester than at a similar stage in the previous DRS.
The semester results are summarised as a whole, as teams mainly used the given Grasshopper
program and did not implement their own. Figure 4-57, Figure 4-58 and Figure 4-59 show models
representing high-rise designs developed by the Sequential Frames 270, Mesh Towers 271 and Vertical
Avenue 272 teams respectively. In general, the teams had less success with model fabrication as
compared to those from the previous DRS (in the spring semester). Most of them built their models
partially by robotic means before switching to manual production to complete them.
Figure 4-57 1:50 models representing iterations of the Sequential Frames high-rise design. The team proposed a
structural system where walls were densely arrayed along a curved band and supported horizontal floor slabs.
The spacing between walls was varied to produce interstitial spaces, and openings were cut out from walls to
create larger uninterrupted spaces.
270 The team comprised David Jenny, Jean-Marc Stadelmann, and He Yuhang.
271 The team comprised Petrus Aejmelaeus Lindström, Chiang Punhon, and Lee Pingfuan.
272 Vertical Avenue was a project by Kan Lijing, Foong Kaiqi and Andre Wong.
Figure 4-58 1:50 models representing iterations of the Mesh Towers design. The student team proposed a
high-rise typology characterised by multiple slender towers that connect to one another to form a porous,
mesh-like structure. Standard inhabitable modules were stacked to form each tower.
Figure 4-59 1:50 models representing iterations of the Vertical Avenue high-rise design. The team’s concept
was to extend the street into the high-rise and intertwine public spaces with private neighbourhoods. The final
design consisted of 4 sub-towers, with hexagonal floorplans, that were linked by a series of ramps.
Both the Vertical Avenue and Sequential Frames teams only made slight modifications to the given
robot program. Figure 4-60 shows the final state of the Vertical Avenue team’s program.
Components highlighted in green were added, while those in yellow were modified. The team added
parameters (Figure 4-60: A) to store their design geometry, components to adjust the picking targets
(Figure 4-60: B) and created two new safety movements (Figure 4-60: C). In addition, they also
adjusted the values of existing controls and parameters (Figure 4-60: D). Since they chose to
assemble elements from a cut-sheet, the team ignored the upper half of the graph, which was for
assembling elements from a stack. The remainder of the program was left unchanged. It was a
similar case for the Sequential Frames team. The main difference was that they used the upper
portion of the graph since they assembled elements from a stack, and deleted the lower cut-sheet
related half to make the program more compact.
Figure 4-60 Modified robot program by the Vertical Avenue team.
Figure 4-61 The robot folds the cardboard (left); glue is applied manually to “fix” the fold (middle); the robot
pulls the cardboard out of the clamp (right).
Likewise, the Mesh Towers team modified the original program for building the first two iterations of
their model. However, they decided to develop a new fabrication process that incorporated a folding
operation (Figure 4-61) for their final model. Hence they created a new program that is shown in
Figure 4-62; it had a graphic token count of 3282. The team tried to reuse chunks of the sample
robot program, including the sub-graphs relating to setup (Figure 4-62: A), picking and placing floors
(Figure 4-62: B), and the control interface (Figure 4-62: C). They created a new sub-graph to
implement the folding process by using MoveLinear, MoveJoints, MoveLocal and SetTool
components. However, the team did not manage to develop their program for assembling folded
walls to a degree where it could be used to realise the entire model. In fact, they only demonstrated
that it could be used to produce a folded wall, and then due to time constraints, built the final model
by hand.
Figure 4-62 Robot program implemented by the Mesh Tower team for folding and assembling cardboard walls.
Teams were interviewed at the end of the semester to understand the reasons why they were less
successful at fabricating their models robotically than expected. Students responded that the
concept of a “plug and play” robot program simply did not work in practice. They experienced
regular singularity errors 273 or self-collisions while executing the sample program. To address these
errors, they had to edit the motion paths, for example by changing safety waypoints. However,
planning a generalized error-free path that could be used for every modelling element proved to be
challenging. 274 All teams also reported difficulties with making adjustments to correct for assembly
inaccuracies. This was not because there were no controls in the program for doing so, but rather
that it had too many, and students were not always sure which ones to use. 275
The underlying problem was that students did not know how to resolve singularity errors and how to
make appropriate adjustments to improve accuracy. The kinematics components were inadequate
for solving the former problem. They did not predict when singularity errors would occur and could
not show the interpolated states of a robot during a joint-type movement. 276 Solving these problems
required tacit knowledge that could only be gained, as one student stated, through more contact or
practical experience with the robot. In other words, it could not be captured in a program.
It may also have been counter-productive to give students a completed program, which they,
ostensibly, only had to run. There was less incentive for them to try and understand how YOUR
worked under the hood as a result. For example, no team modified the scripts in any of the
components. In fact, some students were unaware that they could change the code. Despite efforts
to structure the sample Grasshopper program, one student stated that it was still too complex and
he therefore did not know where to begin making modifications. The Mesh Towers team was the
most proficient team at robot programming by the end of the semester. To a certain extent, this
could be attributed to the fact that they were the only team who implemented a robot program, or
large parts of it, from scratch. They were obliged to learn more about the available abstractions
offered by YOUR and how they could be used, because their needs could no longer be served by
modifying the sample program.
273 Singularity errors occur when “robot axes are redundant … or when the robot is in certain configurations that require extremely high joint rates to move at some nominal speed in Cartesian space.” The UR5 robot will shut down in such an event. Edward Red, “Robotics Overview,” EAAL—Electronics Assembly and Automation Laboratory, accessed January 1st 2016, http://eaal.groups.et.byu.net/html/RoboticsReview/body_robotics_review.html
274 One student reported that it was “near impossible to set up a ‘good’ path for every piece.”
275 One student reported that she was confused because there were “too many different components which were designed to accommodate inaccuracy.”
276 Hence collisions that occur during the motion are not visualised in advance.
4.9 Robot programming setup: 2013 fall semester
urscript
comm
Figure 4-63 YOUR package comprising two Python modules.
YOUR was revised for the fall semester. The primary change was to the underlying YOUR package. It
was reduced to two modules and restructured (Figure 4-63). One objective was to improve its
comprehensibility and to encourage students, like the Undulating Terraces team before, to inspect
and even extend the code. The package only defined functions and not classes, thus avoiding the
need for students to have object-oriented programming knowledge. All functionality for generating
a URScript formatted program was grouped into the urscript module. Previously, this was
distributed across the ur_standard, ur_custom and utils modules, which were also tightly coupled. In
addition, urscript exposed more functionality from the URScript library as compared to ur_standard
before. 277
1   def pose_by_plane(plane):
2       try:
3           origin = getattr(plane, "Origin")
4           xaxis = getattr(plane, "Xaxis")
5           yaxis = getattr(plane, "Yaxis")
6           zaxis = getattr(plane, "Normal")
7       except AttributeError, e:
8           print "Handling attribute error:", e
9       else:
10          position = (origin.X, origin.Y, origin.Z)
11          axis_angle = axisangle_from_vectors((xaxis, yaxis, zaxis))
12          orientation = tuple([axis_angle.angle * item for item in axis_angle.axis])
13          return "p[{0:f}, {1:f}, {2:f}, {3:f}, {4:f}, {5:f}]".format(*(position + orientation))
Figure 4-64 Implementation details of the pose_by_plane function.
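The axisangle_from_vectors call on line 11 converts the plane's three axes, read as the columns of a rotation matrix, into URScript's axis-angle form. A Python 3 sketch of that conversion might look as follows; it uses the standard rotation-matrix formula rather than YOUR's actual implementation, and omits the angle = π edge case for brevity.

```python
import math

def axisangle_from_vectors(axes):
    # axes = (xaxis, yaxis, zaxis) as orthonormal 3-tuples; they
    # form the columns of a rotation matrix R.
    x, y, z = axes
    R = [[x[0], y[0], z[0]],
         [x[1], y[1], z[1]],
         [x[2], y[2], z[2]]]
    trace = R[0][0] + R[1][1] + R[2][2]
    # Standard formula: angle = acos((trace(R) - 1) / 2).
    angle = math.acos(max(-1.0, min(1.0, (trace - 1.0) / 2.0)))
    if abs(angle) < 1e-9:
        return 0.0, (0.0, 0.0, 1.0)  # no rotation: axis is arbitrary
    s = 2.0 * math.sin(angle)
    axis = ((R[2][1] - R[1][2]) / s,
            (R[0][2] - R[2][0]) / s,
            (R[1][0] - R[0][1]) / s)
    return angle, axis

def pose_by_axes(origin, axes):
    # URScript pose string: position plus rotation vector
    # (angle * axis), mirroring pose_by_plane's return value.
    angle, axis = axisangle_from_vectors(axes)
    rotvec = tuple(angle * a for a in axis)
    return 'p[{0:f}, {1:f}, {2:f}, {3:f}, {4:f}, {5:f}]'.format(*(origin + rotvec))

# A plane rotated 90 degrees about the world Z axis:
pose = pose_by_axes((0.4, 0.1, 0.2), ((0, 1, 0), (-1, 0, 0), (0, 0, 1)))
```

The rotation vector in the last three slots of the pose is exactly the representation URScript expects for a tool orientation.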
277 The package wrapped 27 standard functions from the base URScript library.
The Python package was also modified to make it CAD software independent. All previous
references to Rhino specific libraries were removed. 278 Functions were implemented based on the
principle of duck-typing 279 and in an EAFP 280 programming style. This is illustrated in Figure 4-64,
whereby the pose_by_plane function 281 accepts an argument (plane) that is assumed to have the
required attributes; 282 otherwise an exception is raised. To demonstrate that the modified package
can potentially be used in other CAD applications which support Python scripting, a simple program
(Figure 4-65) was implemented in Dynamo that instructs the robot to move through a series of
target planes. The Python blocks highlighted in blue are the equivalent of YOUR Grasshopper
components and encapsulate an identical script. In this case, the second input for the Move Python
block is a list of plane objects from DesignScript’s ProtoGeometry library.
Figure 4-65 A simple Dynamo program with equivalent YOUR components.
278 The kinematics module was discarded as solvers were implemented using Transform, Plane and Vector classes from the Rhino.Geometry library. This code was moved to the corresponding kinematics components.
279 Duck-typing is a programming style “which does not look at an object’s type to determine if it has the right interface; instead the method or attribute is just called or used.” This is based on the concept that “[if] it looks like a duck and quacks like a duck; it must be a duck”. “Python Glossary,” Python, accessed January 1st 2016, https://docs.python.org/2/glossary.html
280 EAFP is an acronym for “Easier to Ask Forgiveness than Permission”. It is a programming style that is “characterised by many try and catch statements” and that is contrasted with LBYL (Look Before You Leap). “Python Glossary,” Python, accessed January 1st 2016, https://docs.python.org/2/glossary.html
281 It returns a string formatted in the syntax of URScript’s pose data type.
282 The parameter (plane) is assumed to have the following attributes: an origin, an X-axis, a Y-axis and a Normal. An exception is caught if this assumption is false.
Interface: LoadPython, SetTool, SetDigitalOut
Movements/Actions: MoveJoints, MoveLinear, MoveCircular, MoveLocal, MoveServo, MoveProcess, Action, Fold
Kinematics: ForwardKinematics, InverseKinematics
Communication: Sender, Listener, SpeedAdjust
Figure 4-66 YOUR Grasshopper toolkit comprising sixteen user objects; those introduced in the fall semester
are highlighted in green.
The toolkit of components was also revised (Figure 4-66). One objective was to pare down the
number of components. For example, the SetBase component, which allowed users to store bases in
a global dictionary, was removed. 283 Previously, motion components such as MoveLinear looked up
these bases in order to calculate the proper target pose for the robot. The problem was that when
the base was updated in the dictionary, this change was not propagated to motion components that
referenced it. Henceforth, motion components require bases to be explicitly given as an input. In
addition, components with duplicated functionality were addressed. Pick and Place were replaced by
a single Action component that could replicate their outputs. MoveLocal, which was used to specify
local translation movements, and OrientLocal, which specified local rotational movements, were
merged. Similarly, the SetTool and SetToolByAngles components were combined.
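The difference between the two base-handling strategies can be sketched as follows; this is a conceptual illustration of the dataflow problem, with hypothetical helper names, not YOUR's actual code.

```python
# Conceptual sketch of the SetBase problem, with hypothetical
# helper names. In the global-dictionary version, a motion
# component resolved its base once at computation time; a later
# update to the dictionary did not re-trigger that component in
# the Grasshopper dataflow graph, so it kept using a stale base.

BASES = {}

def set_base(name, base):
    BASES[name] = base

def move_target_via_lookup(name, local_target):
    # Hidden dependency: the dataflow graph cannot see that this
    # result depends on BASES[name].
    base = BASES[name]
    return tuple(b + t for b, t in zip(base, local_target))

def move_target_explicit(base, local_target):
    # Explicit input: the base arrives through a wire, so changing
    # it automatically recomputes every downstream component.
    return tuple(b + t for b, t in zip(base, local_target))

set_base('feeder', (100.0, 0.0, 0.0))
looked_up = move_target_via_lookup('feeder', (10.0, 0.0, 0.0))
explicit = move_target_explicit((100.0, 0.0, 0.0), (10.0, 0.0, 0.0))
```

Both calls compute the same target here; the point is that only the explicit form exposes the base as a visible input that Grasshopper can track for change propagation.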
New components were also added to the toolkit. MoveServo and MoveProcess 284 mapped additional URScript functions. Together with MoveLinear, MoveJoints and MoveCircular, they provided students with an expanded set of primitives for setting up and controlling robotic motions with different characteristics. Beyond their utility for trajectory planning, these components would also help students explore subtractive fabrication processes, which had not been pursued in the studio thus far. As requested by the Mesh Towers team, a Fold component was introduced. A SpeedAdjust component was also added to enable real-time control of the robot's speed. For example, students could test a working process at a lower speed to mitigate collision risks.
283 The SendLocalMotions and Popup components were also removed based on feedback from students, who did not find them useful.
284 MoveServo instructs the robot to move linearly in joint-space without speeding up or slowing down. MoveProcess instructs the robot to combine a circular blend and a linear motion.
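One plausible way to picture how such components map onto URScript's motion functions is as thin string formatters, one per primitive. This is a sketch: the default parameter values and the exact pairing of MoveServo and MoveProcess with servoc and movep are assumptions, not YOUR's verified source:

```python
def movej(joints, a=1.4, v=1.05):
    """Joint-space motion (cf. MoveJoints)."""
    return "movej([%s], a=%g, v=%g)" % (", ".join("%g" % j for j in joints), a, v)

def movel(pose, a=1.2, v=0.25):
    """Linear tool-space motion (cf. MoveLinear)."""
    return "movel(%s, a=%g, v=%g)" % (pose, a, v)

def movep(pose, a=1.2, v=0.25, r=0.0):
    """Blended process motion at constant tool speed (cf. MoveProcess)."""
    return "movep(%s, a=%g, v=%g, r=%g)" % (pose, a, v, r)

def servoc(pose, a=1.2, v=0.25, r=0.0):
    """Servoed motion without acceleration ramps (cf. MoveServo)."""
    return "servoc(%s, a=%g, v=%g, r=%g)" % (pose, a, v, r)
```

Each component then only has to compute its target pose and emit one such statement, which keeps the primitives easy to combine into longer instruction sequences.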
Finally, the concept of a pose was exposed in the script of movement-related YOUR components. A
pose is an abstraction that describes the position and orientation of the robot’s tip. In previous
versions of YOUR, students described such information using planes; the conversion of a plane to a
pose was hidden at the package level. For several of the revised movement components, a pose is
created from a plane and then passed as an argument to the movement function. The script is
similar to before and therefore remains familiar to students. In components like Fold though, a pose
is created from a position and a rotation vector. The latter directly describes the axis around which
the robot would rotate during the folding operation. Students gain an additional means of
specifying the desired state of the robot after a motion.
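Creating a pose directly from a position and a rotation vector, as in Fold, might look like the following sketch (the function name mirrors the `pose_by_vectors` call quoted later in this chapter, but the body here is an illustrative assumption):

```python
def pose_by_vectors(position, rotation):
    """Build a URScript-style pose string from a position (x, y, z) and a
    rotation vector (axis-angle, in radians) -- no plane needed."""
    x, y, z = position
    rx, ry, rz = rotation
    return "p[%.5f, %.5f, %.5f, %.5f, %.5f, %.5f]" % (x, y, z, rx, ry, rz)
```

The rotation vector's direction is the axis the robot rotates around and its magnitude is the rotation angle, which is why it describes a folding motion so directly.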
4.10 Results: 2013 fall semester
In the fall semester, teams were no longer restricted to working with the prepared robotic assembly
process. They could extend it or develop a new model fabrication process from scratch. At the same
time, each team had to develop its high-rise design further in accordance with their new fabrication
concept. The following chapters describe how each team used the revised version of YOUR to
implement their robot programs and the material results that they achieved.
4.10.1 Sequential Frames
For the fall semester, the Sequential Frames team 285 wanted to improve the speed and reliability of
the robotic production process from the first semester. They decided to minimise laser-cutting,
which was time-consuming, and work with vertical elements that could stand upright on their own.
The team arrived at a concept to fold sheet material to represent wall elements. Only the floor slabs
285
The fall semester team comprised David Jenny and Jean-Marc Stadelmann.
in the model, which are considerably fewer in number than walls, would be laser-cut. In addition,
they chose to work with paper, which was far thinner than the cardboard used previously. 286 This
would allow them to prove their structural concept of substituting a large number of slender walls
for a few thick ones, and to express it in a more evocative way. The team also added a cutting
operation to produce walls with one inclined edge. In the previous design, a wall was rectangular
and had a curved opening. It was now replaced by two folded walls, with inclined edges, separated
by a gap (Figure 4-67).
Figure 4-67 Evolution of a wall element; the first three iterations were from the spring semester, while the
fourth was for the fall semester.
Figure 4-68 An early version of the robot program used to test folding-cutting operations.
286 Their first model in the spring semester was constructed out of 4 mm cardboard, while the next two were built out of 2 mm thick cardboard. The students used 200 g/m2 watercolour paper that was less than a millimetre thick.
Figure 4-68 shows an initial program implemented by the team to fold, cut and place a wall element. It had a graphic token count of 383 and was organised in four parts. In part 1, a geometric representation of a wall was referenced in a parameter and then decomposed into constituent surfaces and vertices. In part 2, parameter values for the folding, cutting and placing operations were derived from the outputs of part 1. Part 3 generated the sequence of robot instructions. Five YOUR related components were used here. First, information about the custom gripper was specified (SetTool). The robot was then instructed to: move to a starting position (MoveJoints); fold and cut a paper sheet (fold and cut); and then remove it from the clamp and finally place it (MoveLinear and Place). The outputs from the five components were sequenced together and then, in part 4, sent to the robot.
Figure 4-69 The fold and cut component (left); steps in the process (right).
The logic for the main folding-cutting process was encapsulated in the custom component named
fold and cut (Figure 4-69: left). It was derived from Fold. The student team extended the component
with a cutting operation. 287 They modified its script and sequenced the folding-cutting process in four
main steps (Figure 4-69: right). First, the robot grips the paper sheet and folds it multiple times. The
resulting crease helps to retain the fold. Second, the clamp is released and the sheet is pulled. A virtual tool centre point is then set up for the third step, which is the rotation of the sheet. In the fourth and final step, the paper is clamped in position for ten seconds and manually cut. 288
287 The original Fold had 11 statements calling YOUR functions. The student team expanded it to 20.
After conducting multiple empirical tests, the team discovered that only folds of up to forty degrees could be realised. They therefore decided that folds beyond this angle would be fixed at ninety degrees. Paper sheets would be pre-folded at a right angle before they were inserted into the clamp.
The team extended their robot program with a new graph (Figure 4-70) to cut and assemble these
new walls. It was structured in four parts. Part 1 referenced the geometric representation of the wall
and decomposed it into surfaces and vertices. Its outputs were fed to part 2, which was responsible
for generating parameter values—pulling distances, rotation angles, rotation points and movement
targets—needed for subsequent robotic operations.
Figure 4-70 The extended sub-graph for the cutting process.
Part 3 generated the instructions for the rotate-cut operations. It contained four groups (3A, 3B, 3C and 3D), each containing an identical set of YOUR related components: MoveJoints, Pull, Rotate_Cut, MoveLinear, and Place. Rotate_Cut was a custom component that was essentially the same as fold and cut, except that it omitted code related to folding. Each group corresponded with a variant of the folded wall (Figure 4-71) and received different rotation point and rotation angle inputs. The outputs from each group, a list of formatted instructions, were merged into a single data-tree. In part 4, a student had to visually identify the type of wall variant and select the appropriate set of instructions from the tree to send to the robot.
288 The team abandoned the idea of automating the cutting process. This was because a significant amount of time and effort would be needed to develop a new clamp that could cut as well.
Figure 4-71 Four variants of a 90-degree folded wall.
While one team member implemented the robot program, the other revised their design according
to the new fabrication concept. The final high-rise design (Figure 4-72) was characterised in plan by
several long curving bands that merged and separated from one another. In section, continuous
voids were carved through the building interior to increase daylighting and cross-ventilation for each
apartment. The underlying concept of an inhabitable “forest of walls” was retained, while the
structural and spatial rules governing the distribution of walls were re-applied.
Figure 4-72 Section (left) and ground plan (right) for the final high-rise design.
The model of the final high-rise design would require five different types of wall elements (Figure
4-73). The first was planar and had one incline cut. The second type, which was the most numerous,
had a right angle fold and an incline cut. The folded segment formed part of the exterior façade,
while the cut segment partitioned interior space. The third type had a non-right angle fold and an
incline cut. They were located where bands merged. The fourth type had two right angle folds; and
the final type wall had three folds.
Figure 4-73 The 5 different wall types (left) and their distribution in a prototypical floor (right).
Figure 4-74 The program used for fabricating the final tower was organised in seven parts. Part 1 was setup related; part 2 was for visualisation purposes; part 3 parsed geometry and generated production related parameters; part 4 generated instructions to produce walls with a right angle fold and an incline cut; part 5 generated instructions to produce walls with a variable angle fold and an incline cut; part 6 was for adjusting the robot position; and part 7 was the control interface.
However, only wall types two and three could be fabricated with the existing physical setup. The
team felt that it was too late to change the design at this stage of the project; and thus decided to
assemble what they could with the robot and complete the rest by hand. At this point, the team
requested support from the author to refactor their program so that it could be used for the final
production phase. Figure 4-74 shows the result. The two previous graphs were merged and
structured in seven parts; it had a total graphic token count of 747. The program was used in
conjunction with a model of the high-rise, which was generated in advance by a separate Python
script.
Part 1 was setup related; it loaded YOUR into Grasshopper and specified pre-defined movement
waypoints. 289 Part 2 contained kinematic components from YOUR and was used to visualise the
robot’s configuration, for example when it is given a target. The main improvement to the program
was in part 3. The first node in this sub-graph was a parameter referencing a list of polylines that
described the boundaries of all walls on a storey. As a next step, six custom Python scripting
components were created. They parsed the collection of walls and selected those that could be
fabricated robotically (types 2 and 3); divided the selected walls into two data streams—those that
needed to be cut, and those that needed to be folded and cut; and then automatically generated the
parameter values for those respective operations, regardless of which wall variant it was. This sub-graph replaced and simplified part 2 of both the earlier graphs shown in Figure 4-68 and Figure 4-70.
It also allowed the students to build multiple walls in a continuous process. Previously, they could
only assemble one at a time, as they had to visually inspect each wall to determine its type and variant.
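The parsing logic that routed walls into the two production streams can be sketched roughly as follows; the `Wall` record and its attribute are hypothetical stand-ins for the students' actual polyline-based representation:

```python
from collections import namedtuple

# Hypothetical wall record; the real components inferred the type from
# the geometry of boundary polylines rather than from a stored field.
Wall = namedtuple("Wall", "wall_type")

def partition(walls):
    """Select the robotically fabricable walls (types 2 and 3) and split
    them into a cut-only stream (type 2, pre-folded at a right angle by
    hand) and a fold-and-cut stream (type 3, variable fold angle)."""
    cut_only = [w for w in walls if w.wall_type == 2]
    fold_and_cut = [w for w in walls if w.wall_type == 3]
    return cut_only, fold_and_cut
```

Automating this classification is what removed the need to visually inspect each wall, allowing multiple walls to be produced in one continuous run.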
Figure 4-75 The robot positions a pre-folded paper for cutting (left); moves it to the gluing station (middle);
and places it on the model (right).
289 These waypoints specified the robot's configuration before it began folding, cutting or gluing operations.
Part 4 generated the instructions for producing and assembling walls with a fixed right angle fold and
an incline cut (Figure 4-75). It was based on part 3 of the earlier graph shown in Figure 4-70. It
contained eleven YOUR related components organised in three groups: cut, glue and place. The first
and third groups were identical to the corresponding parts in the previous graph. The second group
was a new addition. In their earlier tests, the team simply placed the walls without gluing since they
could stand upright on their own. The gluing operation was similar to placing except that the gripper
does not release the wall element. Hence the group comprised a modified Place component and
several MoveJoint components for specifying pre- and post-gluing waypoints.
Part 5 generated the instructions for producing and assembling walls with a variable angle fold and
an inclined edge. It was based on part 3 of the previous graph, shown in Figure 4-68. Therefore, it
contained the same YOUR components from before for specifying the fold-cut and placing
operations. These wall elements also had to be glued prior to placing. The output generated by the
gluing group from part 4 was re-used and inserted into the Weave component, which sequenced the list of instructions. Finally, part 7 was used to control the robot. Here, students could decide which sets of
walls to assemble by connecting the outputs of either part 4 or 5 to the Sender.
Figure 4-76 The model production process (left) and the final model (right).
A section of the overall design was materialised in model form (Figure 4-76). It was built out of 2870
paper sheets that represented walls and another eighty that represented floor slabs. However, the
team only fabricated two out of the thirty-six storeys robotically and even then, only partially. One reason was that the implemented robotic fabrication process was still too slow. While the team
demonstrated that they could build sections of the model robotically, they relied on manual
production due to time constraints. It was faster because students could produce and assemble
walls in parallel, while the robot could only do so in a serial fashion. The second reason was that only
two wall types out of five could be realised with the implemented program and physical setup.
4.10.2 Mesh Towers
In the previous semester, the team implemented a custom robotic process to fold cardboard
elements that would represent walls. Folding increased the stiffness of wall elements and
introduced a new visual aesthetic. The team wanted to develop this idea of shaping a wall according
to structural logics further in the fall semester. A wall would be thickened where loads needed to be
transferred through it and dematerialised everywhere else. After experimenting with various
modelling materials and processes, 290 the students arrived at a concept to cut Expanded Polystyrene
(EPS) foam.
Figure 4-77 The foam cutting setup (left); one half of a wall (right).
The team developed a custom hotwire setup for cutting foam blocks. Horizontal floor and ceiling
slabs in the model would be produced by trimming two edges of a rectangular foam sheet. Walls
290 The team experimented with folding cardboard and aluminium sheets.
would be cut from a foam block that was moved continuously through the wire (Figure 4-77). A wall comprised two half-walls glued together on the flat sides that were gripped by the robot. These subtracted
floor/ceiling and wall elements would be placed on the model by the robot. The team focused on
programming the wall cutting process first. This was because floor and ceiling slabs, which only had
two straight cuts, were comparatively easy to produce, and the students were confident of
programming the placing process.
Figure 4-78 Initial program for carrying out foam-cutting tests.
Figure 4-78 shows the program that was implemented for carrying out initial wall-cutting tests. It
was organised in three parts. Part 1 loaded YOUR into Grasshopper. Part 2 contained parameters (2a) that referenced four pairs of curves drawn in the digital model. Each pair described the bottom and
top edges of a surface to cut. Part 2 also contained a scripting component (2b) that generated a
series of target planes along a cut-path for the robot to follow.
Part 3 contained two custom scripting components that called functions from the YOUR library. The former was essentially a simpler version of Listener in the toolkit. The latter addressed the cutting operation and incorporated the functionality of a Sender. Its script was organised in three parts. First, information was specified about the end-effector. Second, instructions were generated for the robot to move to a safe starting position, through the cut path, and to an ending position. Finally, these instructions were formatted and sent to the robot.
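The formatting-and-sending step of a Sender can be sketched as wrapping the accumulated statements in a URScript program and streaming it to the controller over a plain TCP socket; the host address and program name below are placeholders, and the wrapping details are an assumption rather than YOUR's verified code:

```python
import socket

def build_program(commands):
    """Wrap formatted URScript statements in a program definition."""
    body = "".join("  %s\n" % c for c in commands)
    return "def program():\n%send\n" % body

def send(commands, host, port=30002):
    """Send the program to the robot controller. Port 30002 is the UR
    controller's secondary client interface, which accepts URScript."""
    with socket.create_connection((host, port)) as s:
        s.sendall(build_program(commands).encode("utf8"))
```

A Listener would complement this by reading the state packets the controller broadcasts back on its client interfaces.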
At this stage, the team’s objective was to identify what type of surfaces could be cut. The formal
outcome of this subtractive process was highly dependent on the way a foam piece was moved
through the hotwire. The team conducted extensive empirical tests (Figure 4-79) to determine the
correct type of motion. They invoked different movement-related functions and adjusted the values
of parameters such as speed in the script. Their eventual solution for achieving accurate cuts was to
increase the resolution of waypoints on the cut-path and specify servo type motions with constant
speed. 291
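Raising the resolution of waypoints on the cut-path amounts to interpolating extra targets between the original ones; a minimal sketch of such a resampling step (an illustration of the idea, not the students' actual script):

```python
def resample(p0, p1, n):
    """Interpolate n waypoints (inclusive of both endpoints) along a
    straight segment between two 3D points; applying this per segment
    raises the resolution of a polyline cut-path."""
    return [tuple(a + (b - a) * i / (n - 1) for a, b in zip(p0, p1))
            for i in range(n)]
```

Each interpolated waypoint would then be issued as a servo-type motion, so the tool moves through the dense target sequence at constant speed.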
Figure 4-79 The student team conducted over a hundred test cuts using different motions: 1) linear, 2) joint
type, 3) blended (process) and 4) servo; and adjusting their parameters.
The team revised their high-rise design once they had proven that the robotic foam-cutting process
was feasible. The high-rise comprised eighteen slender, leaning towers that were interconnected;
each tower was made up of duplex residential units stacked on top of one another. The team
introduced a new formal vocabulary for the walls. Each wall had a curved form that was determined
by its orientation relative to the walls directly above and below it. Its thickness was correlated with
its vertical position in the tower. As residential units were designed to vary in size, all floor/ceiling
slabs and walls, while conforming to a general shape, were unique. The resulting high-rise was a
porous structure with a small footprint (Figure 4-80). Views and ventilation were maximised for each
apartment and the ground plane was freed for use as a park.
291 A servo movement has constant speed; for other movement types, the robot accelerates at the start of the motion and slows down at the end.
Figure 4-80 Cross-section of final tower (left) and figure-ground plan (right; mesh towers shaded in black,
while neighbouring buildings are shaded in grey).
Figure 4-81 shows the program implemented for fabricating the final physical model. It was used in
conjunction with a digital model of the high-rise. This design representation was generated
beforehand by a separate Python script. The program had a total graphic token count of 1051 and
was organised in nine parts.
Part 1 loaded YOUR into Grasshopper and specified information about the physical setup—the
location of the picking station and the hot wire. Part 2 generated visualisations of the process. Part 3
stored lists of joint angles in panels that corresponded to predefined safety waypoints. It also
contained a custom Move component, which merged MoveLinear and Sender from the toolkit, for
testing these waypoints. Parts 4 and 5 were responsible for generating the movement targets for
cutting and placing walls; while parts 7 and 8 did the same for floor/ceiling slabs. Part 6 was used to
adjust placement targets in general.
Figure 4-81 The program used for fabricating the final tower was organised in nine parts. Part 1 was setup related; part 2 generated a visualisation; part 3 was used to adjust the robot's position; parts 4 and 5 related to walls, while parts 7 and 8 related to floor/ceiling slabs; part 6 was used to adjust placement targets; and part 9 generated instructions for the robotic operations and also served as the control interface.
Part 4 contained a set of parameters that referenced curves in the digital model. These curves were
organised in groups of four—one pair described the lower and upper edges of a wall on one of its
sides, while the other represented force lines running through it. These two pairs of curves were
then fed into a custom component named PyCutPrep in part 5. It generated the target planes for
cutting and placing walls, as well as a visualisation (Figure 4-82). The rest of the components in part
5 generated the target planes for trimming the side edges of walls, and adjusted the target planes for placing based on the output of part 6.
Figure 4-82 The custom PyCutPrep component (left) generates target planes for the wall-cutting and placing operations (right).
Part 7 contained parameters that referenced geometric representations of floor and ceiling slabs in
the digital model. They were the inputs to part 8, which generated the planes for picking and placing
these slabs. The team originally intended to cut floors and ceiling slabs robotically and had
implemented this functionality in an earlier version of this Grasshopper program. However, they
decided to perform this relatively simple operation manually later on as it only involved making
straight cuts.
Figure 4-83 The robot moves an EPS foam block through a hot-wire (left and centre); the edges of the block
are subsequently trimmed (right).
The specification and sequencing of robotic operations took place in part 9, which also functioned as
the control interface. It contained eleven YOUR related components. There were two instances of
AimBlockPickup and PickupPiece—one for picking up foam blocks (walls) and another for sheets
(slabs). AimCutLine, CutWall, TrimFrontEdge and TrimBackEdge were related to the wall-cutting
process (Figure 4-83). PlaceWall and PlaceFloor were, as their names suggest, used to place wall and
floor/ceiling elements on the model. The final component was SpeedControl, which could be used to adjust the robot's speed in real-time. Besides SpeedControl, which was from the toolkit, the others
were custom developed components.
Figure 4-84 The custom CutWall component generated and sent instructions for cutting a wall surface.
The team developed the custom CutWall component (Figure 4-84) first. It was based on the earlier
Python scripting component used to carry out the initial foam-cutting tests. Figure 4-85 shows a
portion of the encapsulated script, which generated instructions for the approach (lines 9-10),
cutting (lines 12-23) and exit (lines 25-26) phases of the movement. The script also included
transformation related code (lines 15–17) copied from a standard MoveLinear component. A button
was attached to the component; the instructions were sent to the robot once it was pressed.
......
 9  commands.append(ur.set_tcp(ur.pose(-0.02,0,0.105,0.767945,0,0)))
10  commands.append(ur.movej(start_pt,3.0,3.0))
11
12  targets = []
13  count = 0
14  for plane in target:
15      _target = rg.Plane(plane)
16      _matrix = rg.Transform.PlaneToPlane(rg.Plane.WorldXY,ref_base)
17      _target.Transform(_matrix)
18      targets.append(_target)
19      if count == 0:
20          commands.append(ur.movel(ur.pose_by_plane(_target),0.1, 0.1))
21      else:
22          commands.append(ur.servoc(ur.pose_by_plane(_target),0.01, 0.0085, 0.01))
23      count += 1
24
25  commands.append(ur.movej(end_pt,0.1,0.15))
26  commands.append(ur.movej(intersafety,1.0,1.0))
......
Figure 4-85 The script encapsulated in CutWall.
The rest of the custom components were then derived from CutWall. For each of them, the team copied CutWall, edited selected parts of its script, changed the input and output parameters
accordingly, and then renamed the component. All the custom components were colour-coded to
describe their roles. Those in a red group were picking related, purple was for cutting operations,
yellow for placement and cyan for adjustments. In addition, the components were laid out, from top
to bottom, in the order in which they were meant to be triggered: first the wall (foam block) is
picked; second its outer surface is cut; third its side edges are trimmed; and fourth it is placed on the
model.
The team materialised a portion of their final design in model form (Figure 4-86). Seven towers were
built out of a total of 627 walls, 302 ceiling and 258 floor slabs—all were unique in shape. As each
wall was glued from two halves, the number of wall pieces that had to be robotically cut was
doubled. The team chose to robotically place walls and slabs for the first storey to demonstrate the
feasibility of the program. Subsequently, all placing operations were carried out manually, freeing up
the robot to be used solely for cutting walls and thus accelerating the building process. To place
elements correctly, students printed out the floor-plans of units on every level and aligned them to the corresponding ones below. Using this mixed mode of production, where cutting (walls) and assembly were carried out concurrently, the team managed to complete the model in eight days.
Figure 4-86 The model production process (left) and the final model (right).
4.10.3 Vertical Avenue
The Vertical Avenue team 292 decided to streamline the model production process in the fall semester
and only utilise the robot for the assembly of vertical elements. Furthermore, their concept was to
combine manual and robotic building during the assembly process. Elements would be positioned on
the gripper and glued by hand, and thereafter placed by the robot (Figure 4-87). The team also
improved the placing operation by using a new sensor-equipped gripper. The robot automatically
stops its descent during the placing motion when the sensor detects an obstruction. The element is
able to slide on the gripper to achieve maximal contact with the horizontal surface for glue to set.
This freed students from having to constantly adjust placing targets in the z-direction.
292 The team comprised Kan Lijing and Foong Kaiqi in the fall semester.
Figure 4-87 The robot waits in a pre-defined position and the wall is manually centred on the gripper (left);
glue is then applied (centre); the robot then resumes the placement operation (right).
The team also implemented a new process to fabricate the façade system of their high-rise model, which would be represented with bent acrylic panels. These panels were produced using a thermal
deformation process similar to the one originally developed by the Bent Striations team. The team
could shape the boundary between interior and outdoor spaces by controlling the location and angle
of bends. In addition, they modulated the transparency of each panel by scoring a pattern of lines on
its surface.
Figure 4-88 Section and ground plan for the final high-rise design.
At this point, the team developed the final iteration of their high-rise design. The underlying concept
from the previous semester was unchanged—sub-towers were linked together by a continuous,
spiralling street. The cluster-like organisation and hexagonal-based formal language of the high-rise
were retained (Figure 4-88). However, the number of sub-towers was reduced by one, which
increased the visual prominence of the interior street system and its accessibility to the public. In
addition, apartments were now enclosed by bent glass curtain walls.
The production process was divided into two phases. Façade elements would be pre-fabricated first
and then the tower would be assembled. Separate programs were implemented for each phase. As
the team was the least proficient in programming amongst those in the studio, they required
extensive support from the author to implement these programs. Figure 4-89 shows the final
program used for fabricating the façade elements. It had to be able to produce acrylic strips with
one or two bends, varying from zero to ninety degrees. The program had a total graphic token count
of 1465 and was structured in eleven parts.
Part 1 loaded all relevant python packages, including YOUR, into Grasshopper. Part 2 contained a
scripting component that encapsulated the main design logic for the tower. It received a number of
design parameters as inputs and generated a geometric representation of the tower as its output.
Parts 3, 4 and 5 were essentially identical. Each part corresponded to one sub-tower and contained five
scripting components. They allowed students to design the striation pattern and bent form of every
façade element in that sub-tower (Figure 4-90).
Parts 6, 7 and 8 were related to the production of one-bend elements, while parts 9, 10 and 11 were
related to two-bend elements. Parts 6 and 9 contained standard Grasshopper components that
generated parameter values, such as pulling distance and angles, for the bending operations. Parts 7
and 10 contained YOUR related components for generating robot instructions. The former included
SetDigitalOut, MoveJ, Pull and Fold; the latter had an extra SecondFold component. Pull and
SecondFold were modified versions of MoveLocal and Fold respectively. Finally, parts 8 and 11 were
identical; each contained Listener and Sender YOUR components for communicating with the robot.
Figure 4-89 The final program used for fabricating the façade elements. Part 1 loaded YOUR; part 2 generated the high-rise design; parts 3, 4 and 5 generated the design of façade elements in each respective sub-tower; parts 6, 7 and 8 were related to the production of strips with one bend; and parts 9, 10 and 11 were related to the production of strips with two bends.
Figure 4-90 The student team could design where façade elements were folded (centre) and their striation
patterns (right), and simultaneously visualise the result in the Rhinoceros viewport (left).
Figure 4-91 The robot performs one bend (left), pulls the strip (centre) and performs a second bend directly
(right).
Figure 4-92 (left) shows a code snippet from the SecondFold component illustrating the main
modification students made to the original Fold component’s script. The team wanted the robot to
perform the second bend directly without resetting its grip (Figure 4-91). 293 This would allow them
to design bends that were closer together. They leveraged their familiarity with vectors (lines 2 to 5)
to describe the pivot point’s position with respect to the tool coordinate system. Subsequently, they
created a pose directly from vectors (line 6), which was possible with the revised version of YOUR, to
describe the rotational movement for the folding operation.
293 For the Bent Stratifications project, the robot had to release its grip, move back to a pre-set position and then re-grip the unbent portion of the strip. Hence the team had to ensure that bends were spaced sufficiently far apart.
1  # calculate pose for virtual pivot
2  v1 = rg.Vector3d(l1,0,0)
3  v2 = rg.Vector3d(l2,0,0)
4  v2.Rotate(-angle1, rg.Vector3d.ZAxis)
5  _v3 = v1 + v2
6  pose_tcp_offset = ur.pose_by_vectors(_v3, (0,0,0))
......
Figure 4-92 Code snippet from the SecondFold component (left); the position of the virtual pivot (VP) was described through vector addition.
After spending significant effort fine-tuning parameter values empirically, 294 the team managed to bend elements accurately and at a faster rate than anticipated. In fact, this success prompted them
to revise the façade’s design after the elements for the first sub-tower had been completed. The
team increased the number of two bend elements and the density of lines scored on them. This
modification was easily made because design and production information were captured in a single
associative model that remained flexible to change.
Figure 4-93 shows the program used for the second assembly phase. At this point, the design had to
be finalised since all façade elements had already been fabricated. The program was therefore used
in conjunction with a baked 295 digital model of the high-rise. It had a graphic token count of 1424
and was structured in twelve parts. Part 1 loaded YOUR into Grasshopper and defined the reference
model base and hard-coded waypoints. Part 2 contained three custom scripting components that parsed
the digital model and allowed students to select which elements from sub-tower 1 to assemble. Part
3 contained standard Grasshopper components that generated target planes from the geometric
representations of walls and facades for the assembly operation.
294 For example, the amount of time an acrylic piece was heated had to be precisely specified, because insufficient heating caused the element to crack, while over-heating made it sag under its own weight.
295 Geometry created in a Grasshopper program can be added to the digital model through a “baking” operation. However, the baked model does not capture the associative information described in the program.
Figure 4-93 Final program used for assembling the tower. Part 1 loaded YOUR and was set-up related; parts 2–5 were related to the assembly of sub-tower 1; parts 6–9 were related to the assembly of sub-tower 3; and parts 10–12 were related to the assembly of sub-tower 2.
Robotic instructions are generated and sent in part 4, where ten YOUR-related components are used.
First, information about the tool is specified (SetTool) and the robot is moved (MoveJoints) to a predetermined safety position. A wrist angle check is performed 296 (Sleep and CheckJoints). The robot
then moves to a target above the position where an element should be placed (MoveLinear) and the
vacuum gripper is activated (SetDigitalOut). The wall element is placed on the gripper and glue is
applied while the paused robot awaits a signal to continue (Request). The robot end-effector
descends until the sensor detects an obstruction, which in this case is the floor slab in the model,
and then places the wall. In the case of a façade element, the robot only indicates to the student
how it should be manually positioned and aligned (Figure 4-94). It ascends after the wall has been
placed, or the student has marked the location of the façade element.
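The instruction sequence described above can be sketched in Python. The helper functions below are illustrative stand-ins rather than YOUR's actual API; they format URScript-style commands as strings, which reflects the general approach taken by the YOUR components:

```python
# Illustrative stand-ins for URScript command generation; the actual
# signatures of YOUR's functions may differ.
def movej(joints, a, v):
    # Joint-space move to a configuration given as six joint angles (radians)
    return "movej({}, a={}, v={})".format(list(joints), a, v)

def movel(pose, a, v):
    # Linear move in Cartesian space to a pose [x, y, z, rx, ry, rz]
    return "movel(p{}, a={}, v={})".format(list(pose), a, v)

def set_digital_out(port, state):
    # Switch a digital output, e.g. to activate the vacuum gripper
    return "set_digital_out({}, {})".format(port, state)

def sleep(seconds):
    # Pause the robot program for a given duration
    return "sleep({})".format(seconds)

# One placement cycle, analogous to part 4: move to a safety position,
# pause for the wrist check, approach above the target, then activate
# the gripper. All numeric values here are illustrative.
program = "\n".join([
    movej([0.0, -1.57, 1.57, 0.0, 1.57, 0.0], 1.0, 0.5),
    sleep(1.0),
    movel([0.3, 0.2, 0.25, 0.0, 3.14, 0.0], 0.1, 0.1),
    set_digital_out(2, True),
])
```

Each helper returns a single URScript statement, so a full cycle is simply a newline-joined sequence of such strings.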
Figure 4-94 The robot descends and indicates the alignment of the façade, which is marked with two points
(left); it retracts and the element is placed accordingly (centre and right).
The logic for the descending/ascending motion was written directly in the URScript language. A
function named move_sense was defined which contained a thread. Statements relating to the
descent movement are specified in the body of the thread and loop forever. This thread is killed
when the digital input signal of the sensor port switches to True. Thereafter control flow returns to
subsequent statements relating to the ascending motion. This function cannot be generated using
YOUR components alone. Hence a custom component named MoveSense was developed (Figure
4-95). It searches a local directory where the above function is saved, and returns both a statement
calling the move_sense function and the function definition itself. The outputs generated by the previous 8
components (including MoveSense) in part 4 were then woven together in a list and connected to
the Sender component.
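A minimal sketch of what the MoveSense component's script might do. The file name move_sense.script and the directory handling are assumptions; only the behaviour — returning a call statement together with the function definition read from disk — follows the description above:

```python
import os

def load_move_sense(directory):
    # Read the URScript definition of move_sense from a local directory
    # (file name assumed here) and return a calling statement together
    # with the definition itself.
    path = os.path.join(directory, "move_sense.script")
    with open(path) as f:
        definition = f.read()
    call = "move_sense()"
    return call, definition
```

The call statement can then be woven into the command list passed to the Sender component, while the definition is included so that the generated URScript program is self-contained.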
296 This check ensures that the wrist joint does not over-rotate past its limits, which, in turn, would cause an error.
Figure 4-95 MoveSense component.
Parts 2 to 4 of the graph were replicated twice for assembling the remaining sub-towers (parts 6-8
and parts 10-12). Part 1a was incrementally expanded during the fabrication process. It contained 24
groups of components; each coincided with a stage of the assembly in which the axis machine was
moved, either vertically or horizontally. 297 A component group essentially informed the robot that
the model base had shifted relative to its own position. These bases were then stored separately as
parameters in parts 5, 9 and 13. They were connected to downstream MoveLinear components,
which transformed their targets (for pre-placing) accordingly.
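The effect of such a base shift on downstream targets can be sketched as follows. shift_targets is a hypothetical helper (not part of YOUR), and targets are simplified here to plain (x, y, z) points rather than planes:

```python
def shift_targets(targets, shift):
    # Translate each target point (x, y, z) by the vector the model base
    # has moved, expressed in the robot's coordinate system.
    sx, sy, sz = shift
    return [(x + sx, y + sy, z + sz) for (x, y, z) in targets]

# Example: the axis machine raises the model base by 0.25 m, so all
# pre-placing targets are raised by the same amount.
raised = shift_targets([(0.3, 0.2, 0.1), (0.3, 0.4, 0.1)], (0.0, 0.0, 0.25))
```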
The final physical model (Figure 4-96) was constructed out of a total of 1573 cardboard and 441
acrylic elements, with the former representing walls, floors, cores and ramps, and the latter
representing the façade. Of the cardboard elements, 390 were walls that were robotically placed,
while the rest were manually assembled. A third of the acrylic strips had one bend, while another
third had two. All were placed using the collaborative human-robot process that was developed.
The team pre-fabricated the façade elements and assembled the tower in stages across several days.
In total, they completed the first task in two days, while the assembly process took three.
297 The axis machine was moved up 8 times to accommodate the tower as it increased in height. Three further positions at each level were specified—one for each sub-tower. By utilising the axis machine, which had a higher precision than the robot arm, as much as possible, accuracy in the assembly process could be increased.
Figure 4-96 Model production (left) and final model (right).
4.11 Interview: 2013 Design Research Studio
Each team was interviewed formally at the conclusion of the studio (see Chapter 9—Appendix for
the interview protocol). First, students were asked to discuss how their robot programming
approach differed between semesters. For the Sequential Frames and Vertical Avenue teams, the
key difference was that they mainly worked with the given program in the first semester, and had to
implement new ones from scratch in the second. In addition, students were previously unaware
that they could modify the script of YOUR components or found no reason to do so, but this became
a key part of their programming strategy in the fall semester. They adapted standard toolkit
components to their fabrication process by editing their code. Though the primary developers of the
robot program in both teams were inexperienced at scripting, they stated that the code was easy to
read and modify because they could identify where selective changes should be made. 298
The Mesh Towers team, unlike the previous two, implemented a robot program in the spring
semester. For them, the key difference was that they adopted a more scripting-based approach in
298 The student from the Sequential Frames team stated that “it was quite easy to read and edit the code … I don’t know Python syntax well enough to write it by myself, but I just take what is existing then I understand where I need to change what … if I have to write entire script from scratch, it would be impossible”.
the fall semester. Previously, the team assembled standard YOUR components to program the
cardboard folding process; thereafter, they developed a few custom ones for the foam-cutting
process. 299 One reason why they created these new abstractions (custom components) was to
reduce the size of their Grasshopper program and improve its readability, which was previously an
issue. 300 This helped to facilitate collaboration amongst team members, in terms of testing the
program and running it during the production phase. All of them could understand the program
despite differences in their proficiency with visual programming. 301
In addition, students were asked to identify missing functionality in YOUR. All teams raised the issue
of simulation, though there was no consensus on whether it was a critical functionality. Several
students stated that simulation would allow them to detect collision problems in advance, and
therefore plan alternative motion paths. 302 As one of them explained, having to constantly fix such
errors after they have occurred is de-motivating, as it slows down the implementation process
considerably. 303 A member of the Vertical Avenue team also stated that the ability to simulate the
process virtually could benefit users like her, who were intimidated by the physical robot. 304 On the
other hand, one student argued that it was more important to understand conceptually why these
errors occur in the first place; and such knowledge had to be acquired through experiencing them
first-hand with the actual robot. 305 In addition, a member of the Vertical Avenue team pointed out
that in their case, simulating how the plastic strip deformed was equally, if not more important than
the robot’s movements. 306
Next, students were asked to discuss which aspects of the robot programming process were the most
challenging. All teams replied that implementing the sequential logic of the robotic process, either
299 The student reported that “we used YOUR components right out of the box in the first semester”, while in the second, “we worked inside [the script of] individual components rather than use them in combination with one another.”
300 The student who was mainly responsible for implementing the robot program in the spring semester stated that “there were a lot of components and it tends to be overwhelming for my team-mates.”
301 For example, the member that was least proficient in Grasshopper was, conversely, the most proficient in scripting.
302 A member of the Mesh Tower team responded that it would be “valuable to have YOUR components that can predict or warn of any collisions or outright prevent them from happening it at all.” The Sequential Frames team stated that they would “like a simple simulation to choose the best motion path.”
303 The student stated that such errors “slows us down considerably … it breaks the tempo of working and that is quite damaging … it gets me so agitated that I stop everything else to try and fix it.”
304 The student stated that “at the start, I was a bit scared of the machine. For someone like me, [simulation] would have helped.”
305 The student argued that “in my opinion, simulation is not so important. Because the robot is right in front of me … experiencing it crash is an important part of the learning process.”
306 The student responded that “a simulation of the robot is helpful, but not really critical. What is missing and important [is a way] to simulate the material process.”
graphically or in code, was the most straightforward. 307 However, testing and debugging programs
proved to be difficult or, at the very least, time-consuming. 308 Students had to correct errors where
the robot moved in an unexpected way by trying out new motion paths or types. They had to
systematically test out different parameter values and evaluate the material results, in order to fine-tune the accuracy of the process. 309 The Sequential Frames team also highlighted the problem of
having to re-implement their robot program after discovering new requirements for it. They
originally developed it to fabricate a particular type of wall and later on, had to extend it to
accommodate four new types. The team only managed to address one additional case, and thus
could not build significant portions of their model robotically in the end.
Finally, students were asked to discuss how the development of the fabrication process impacted
their design; and how they would extend their robot program if given more time. The Sequential
Frames team replied that they re-designed the walls of their high-rise using a vocabulary of forms
derived from the folding-cutting process. This altered the visual appearance of their tower
considerably. However, it was difficult to make fundamental changes to the design, as it had been
refined over two semesters. 310 Hence they argued that the fabrication process should be developed
at a far earlier stage while the design was still flexible enough to change. 311 In future, the team
wanted to generalize their robot program to handle a wider variety of wall types, and explore the
use of sensors to automate their process further.
The Mesh Towers team also re-designed the walls for their final high-rise proposal. They developed an
extensive catalogue of forms that could be produced with their robot program and foam-cutting
setup, before making a choice for the walls. The team also experimented with the possibility of
creating striated or pleated surface patterns as well as openings in the wall, 312 but did not introduce
them into the design. Given more time, they would like to pursue these explorations further and
extend their robot program in this direction. The Vertical Avenue team re-designed the façade
307 For example, one student replied that it “was very simple to come up with the logic and sequence [for the robotic process].”
308 According to a member of the Vertical Avenue team, “it takes a lot of time to perfect the process; even then it is not 100% accurate.”
309 A member of the Mesh Tower team felt that “trouble-shooting … and fine-tuning the code in terms of getting more precision” was the most difficult aspect of the robot programming process.
310 A student explained that “the problem here is we already have a very developed design and a clear concept … so there was not much space for experiments.”
311 A student suggested that it would be “interesting to start with the fabrication process and see what can be done … [because] once you develop the process, it opens up ideas about what you do with it in terms of design.”
312 They achieved these results by controlling the acceleration characteristics of the cutting motion, and making layered cuts respectively. One member said that they originally “wanted to create a pleating effect to articulate ‘the force lines’, but then considered it to be extraneous.”
system in their tower after developing the plastic bending process. They were confident of extending
their robot program to produce multiple folds in future; and realised, in hindsight, that they were
overly conservative with their façade design. 313
4.12 Pedagogic issues
By the end of each studio, teams were expected to be able to: identify the requirements of a robot
program based on their fabrication concept and high-rise design proposal; implement the program
according to these requirements; test and debug the program to ensure it works correctly; and run
the program to fabricate the physical model. In general, teams in the 2012 DRS managed to meet
these expectations. However, those from the 2013 DRS required more support to develop their
robot programs, with only the Mesh Towers team being able to do so largely on their own. 314
One reason for this result was because the studios were set up differently. In the 2012 DRS, teams
were free to modify the prepared robotic assembly process or develop alternative ones from the
outset. They did so, and extended the sample program given to them or generated new ones by the
end of the first semester. In the 2013 DRS, teams were restricted to using the prepared assembly
process for the initial half of the studio. They had to understand how the robot program that was
handed over to them worked and use it to fabricate their models. Thus the task was mostly confined
to program comprehension. The Mesh Towers team was the exception as they implemented a new
folding process at the end of the first semester.
According to Winslow, there “is very little correspondence between the ability to write a program
and the ability to read one.” 315 The 2013 teams only began creating programs in the second
semester. Compared to teams in the previous studio, they exercised such skills, which are distinct
from those used in program comprehension, for a far shorter period of time. Hence, they depended
on instructor support to develop robot programs of a similar sophistication. From a
pedagogic perspective, the implication here is to structure a studio such that students gain early
experiences in creating programs.
313 The student said “I think the program will be easy to extend … it was a pity that we only designed [elements with up to] two folds.”
314 The Vertical Avenue and Sequential Frames teams required support from the author to implement and refactor their robot programs respectively.
315 Leon Winslow, “Programming Pedagogy—A Psychological Overview,” ACM SIGCSE Bulletin 28, no. 3 (1996): 21.
However, this does not mean that students should be assigned program generation tasks right from
the start. Sample programs have value because they “are rich sources of information about the
language which can be presented, analysed and discussed.” 316 The lesson here is to provide students
with skeletal programs, which they then have to extend, rather than complete ones. Such programs
should expose the most important abstractions and provide an underlying structure that students
can build upon when adding new parts to the program. Their size should also be restricted to
improve comprehensibility. In hindsight, the sample Grasshopper program that was handed over to
students in the spring semester of the 2013 DRS was too large. In attempting to anticipate students’
needs, it ended up providing functionality that they either did not require or did not know how to use.
Students reported being overwhelmed by its complexity and were discouraged from trying to
improve it.
Disclosing the implementation details of YOUR abstractions was also important from a pedagogic
perspective. Similar to the sample graphical program, each script in a component serves as example
code. Students inspected the scripts to understand what functions were offered by the YOUR
package and how they should be utilised. They used existing code as “raw material” to write new
scripts, often by cutting and pasting relevant chunks. Finally, some students studied the
implementation of functions in the Python package to learn more advanced robot programming
related concepts, for example socket communication. Besides making code accessible, a further
implication is to write it as if addressed to students. Like the sample graphical program, a script or
function should be limited in size—preferably so that it is visible all at once—and be extensively
commented, even at the risk of being overly verbose.
Another lesson drawn from the DRS is to place more emphasis on teaching students how to manage
different programming activities in a systematic, explicit and disciplined way. 317 Students should be
instructed to specify the requirements 318 for their robot program if they are, as to be expected,
working collaboratively. Team members have to negotiate and agree upon the requirements of their
robot program in order to coordinate related computational design and physical tooling tasks,
316 Anthony Robins et al., “Learning and Teaching Programming: A Review and Discussion,” Computer Science Education 13, no. 2 (2003): 157.
317 According to Ko et al., end-user programmers engage in software engineering activities in an “unplanned, implicit and opportunistic” manner. Ko et al., “The State of the Art in End-User Software Engineering,” ACM Computing Surveys 43, no. 17 (2011): 8.
318 Requirements describe “how a program should behave in the world.” Ko et al., “The State of the Art in End-User Software Engineering,” 9.
especially if these tasks are carried out by different individuals. This can help to minimise problems
arising out of poor communication, which was fairly common amongst teams in the studio. 319
Students should be taught how to design their robot programs. Instructor support was mainly
directed towards implementation—helping students with coding or creating specific parts of their
graph. It was observed that teams engaged in exploratory prototyping rather than up-front design.
While this approach proved to be effective initially, the resulting programs were often difficult to
scale up. They could not be easily adapted to changing requirements and usually became unreadable
to all but the primary student developer. These issues, which will be further discussed in chapter 6,
in turn had an adverse impact on collaboration within the team. Consequently, more instruction in
this area should have been given to students, for example by teaching them how to structure their
graphical programs using secondary notation or developing abstractions for re-use.
In addition, students should be taught strategies to debug their programs. This includes knowing
how to: make appropriate adjustments while running a program to correct for imprecision; identify
the reasons why singularity errors occur; and plan the robot’s motion accordingly to avoid them. A
direct and effective way of imparting such strategic knowledge, which is essentially tacit in nature,
is through physical demonstrations. These strategies would help students to reduce the considerable
amount of time spent on debugging.
Finally, one unresolved issue is that students had wide-ranging competency levels in robot
programming by the end of each studio. On one hand, teams usually had a member who was
responsible for developing the robot program, and became highly skilled at it. On the other hand,
some students lacked confidence in controlling the robot even after a year. This disparity arose as a
result of teams dividing computational design, physical tooling and robot programming tasks
amongst themselves. Those who were less accomplished at robot programming usually focused on
other tasks and simply lacked sufficient interaction with the machine. This issue may be addressed
by re-structuring the studio and for example, incorporating more individual assignments or
workshops that focus on robot programming. The latter is the subject of the following chapter,
which explores, in greater detail, the workshop as an alternative pedagogic setting.
319 For example, one student from the Mesh Towers team stated that “we had a problem because we tried to separate again where he was working on [implementing robotic] foam-cutting and I was working on designing, which didn’t work at all, because one [party] would say this is impossible to cut and the other [party] would say this is not interesting to build … like running in circles.” A student from the Tiong Bahru Tower team reported that “every time one of us changes something in the design, another has to change something in the [programming and physical] robotic setup; this affected the design again, so we always ended up having a huge fight.”
5 Case Study – Workshop
The second case study was a workshop entitled Programming Bespoke Robotic Processes. In this
workshop, students had to learn how to control a pre-defined robotic process using a given
Grasshopper program, and subsequently extend the process by modifying the program. Unlike
in the Design Research Studios, they did not have to solve an explicit design problem or develop the
physical setup. The latter was prepared for them in advance so that they could focus on the robot
programming task. The workshop addressed the following questions. First, was it feasible for
students to progress, within a short amount of time, from learning to control a robot process to
extending it? And second, how do YOUR and the design of the sample program support or inhibit
their progress?
5.1 Workshop setup
The workshop ran for five days and was divided into two sessions. It involved ten second-year
architecture students, who were grouped in pairs or trios. Two groups attended each session, which
was divided into five 3 ½ hour blocks. Students were introduced to two robotic processes in the first
half of a session. Their task was to learn how to control each process using a prepared Grasshopper
program, and then produce a series of artefacts with it. In the second half of the session, they had to
select one process and develop it further by extending the robot program. Their goal was to explore
new formal possibilities or material effects that could be produced with the extended process.
The two processes involved cutting foam blocks (Figure 5-1) and crumpling plastic strips (Figure 5-2).
The former was identical to the process developed in Mesh Towers, while the latter was based on
the plastic bending process from Vertical Avenue. These processes were chosen as they could not be
replicated manually; the foam block had to be moved through a hot wire in a three dimensional path
at a steady speed, while the plastic strip had to be deformed at high temperatures. Students had to
learn how to program the robot in order to achieve successful fabrication results. Each process also
stressed different domain specific concepts—motion control in the case of foam-cutting, and poses
and local transformations in the case of plastic-crumpling.
Figure 5-1 For the foam-cutting process, the robot: 1) approaches the block; 2) picks it up; 3) proceeds to the
start of the cutting path; 4 and 5) moves the block through the hot-wire; and 6) retracts at the end of the
cutting path.
Figure 5-2 For the plastic-crumpling process, the robot: 1) approaches the strip; 2) grips the strip, which is then
heated for twenty seconds by the hot air gun; 3) folds the strip; 4) pushes the strip towards the clamp; and 5)
releases the strip after it has been cooled by a stream of air for eighteen seconds.
The students had some prior programming experience. They were introduced to visual programming
in Grasshopper and scripting using its C# components in a previous design computation course.
However, when students were asked, at the start of the workshop, to rate their knowledge of visual
and text programming respectively, only three of them considered themselves to be proficient in
one or the other. The students had no prior robotic fabrication experience and would be using
robots for the first time in the workshop.
Figure 5-3 Fabrication process components: 20 cm x 8 cm x 5 cm foam block (left); cutting station (middle);
end-effector (right).
Figure 5-4 1.5 mm thick acrylic strip (left); heating station (middle); end-effector (right).
The workshop utilised the same robotic setup as the DRS. In this case though, students only worked
with the robotic arm, which remained in an upside down mounting configuration, while the axis
machine was fixed in a stationary position. The physical setup—end-effectors, clamping stations and
gluing stations—was prepared in advance for students (Figure 5-3 and Figure 5-4). This freed them
to concentrate on the robot programming task.
5.2 Robot programming setup
[Figure 5-5 toolkit contents — Interface: LoadPython, LoadFunction, SetTool, SetDigitalOut; Movements/Actions: MoveJoints, MoveLinear, MoveCircular, MoveLocal, MoveServo, MoveProcess, Fold, Action, Crumple, SequentialCut; Kinematics: ForwardKinematics, InverseKinematics; Communication: Sender, Listener, SpeedAdjust]
Figure 5-5 Toolkit of YOUR Grasshopper user objects; components highlighted in grey were identical to those used in the 2013 fall semester of the DRS, while the components highlighted in green were introduced for the workshop.
Figure 5-5 shows an updated version of the YOUR toolkit that was used in the workshop. There were
three notable additions. The first was a user object named LoadFunction. It allowed students to
select any Python scripting component and convert it to a function. 320 Thus it extended
Grasshopper’s node-in-code 321 functionality, which only applied to standard non-scripting
components. Figure 5-6 demonstrates how LoadFunction is used in an example. A custom script is
written in a Python component named Hello, which returns a concatenated string as its output.
When LoadFunction is toggled with Hello selected, a new function is created and added to a module
320 LoadFunction references a module named meta_grasshopper, which was added to the YOUR package after the DRS. This module contains the underlying functions and classes necessary for converting custom Python scripting components into functions.
321 Grasshopper’s node-in-code functionality is further described in Chapter 2.1.3.
named custom_components. This module is imported (Figure 5-6–right; line 1) in the script of a
separate Python component and the Hello function is called with a different argument (line 4).
1  import custom_components as cc
2
3  my_name = "world"
4  a = cc.Hello(my_name)
Figure 5-6 The custom component “Hello” is dynamically loaded as a function in the custom_components
module.
Therefore LoadFunction provides students with the means to create reusable abstractions out of
modified YOUR components. Alternatively, they may save the component as a user object, which is
one of Grasshopper’s standard abstraction mechanisms, or encapsulate its script as a function in one
of YOUR’s modules. These approaches can be described as going from “node to node” and “code to
code” respectively, while LoadFunction bridges the gap between graphic and text programming
directly.
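The underlying mechanism can be sketched in plain Python. This is a simplified illustration of the idea rather than the actual meta_grasshopper implementation; the single input x and the output variable a are assumptions modelled on the Hello example:

```python
import types

# A module standing in for custom_components.
custom_components = types.ModuleType("custom_components")

def load_function(name, body, module=custom_components):
    # Wrap a scripting component's body in a function definition whose
    # single parameter x stands in for the component's input and whose
    # output variable a becomes the return value.
    src = "def {}(x):\n".format(name)
    for line in body.splitlines():
        src += "    " + line + "\n"
    src += "    return a\n"
    exec(src, module.__dict__)
    return getattr(module, name)

# Recreate the Hello example from Figure 5-6 (the component body is an
# assumption; the text only says it returns a concatenated string).
Hello = load_function("Hello", 'a = "hello " + x')
```

Once registered in the module, the function can be imported and called from any other scripting component, just as in Figure 5-6.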
The second addition was a user object named SequentialCut (Figure 5-7). It was based on the
CutWall component developed by students in the Mesh Towers project. SequentialCut had four
required input parameters: a list of target planes that describe a motion path; a reference base 322;
and two lists of joint angles which specify the robot’s configurations prior to and after the cutting
motion. Lines 24–33 of the script encapsulated in SequentialCut transform all target planes in the
digital model to the robot’s coordinate system. Several YOUR movement functions—joint (lines 36
and 42), linear (line 38) and servo (line 40)—are subsequently called. Each function returns a string
formatted as a URScript command. A function named statements (line 35) accepts these strings as
arguments and concatenates them with newline delimiters separating each statement. This string—
named commands—is the output of the component.
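The behaviour of statements can be sketched as follows. This is a simplified stand-in rather than YOUR's actual implementation; it also flattens lists, since a list of servo command strings may be passed as a single argument:

```python
def statements(*commands):
    # Flatten nested lists of URScript command strings and join them
    # with newline delimiters, one statement per line.
    flat = []
    for command in commands:
        if isinstance(command, (list, tuple)):
            flat.extend(command)
        else:
            flat.append(command)
    return "\n".join(flat)
```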
322
The base describes a plane in physical space with respect to the robot’s coordinate system.
......
24  # Orient the cut planes with reference to robot base and create poses
25  matrix = rg.Transform.PlaneToPlane(rg.Plane.WorldXY, base)
26
27  initial_cut_plane = rg.Plane(targets[0])
28  initial_cut_plane.Transform(matrix)
29  initial_cut_pose = ur.pose_by_plane(initial_cut_plane)
30
31  cut_planes = targets[1:]
32  [cp.Transform(matrix) for cp in cut_planes]
33  cut_poses = [ur.pose_by_plane(cp) for cp in cut_planes]
34
35  commands = ur.statements(    #1) Approach start position
36      ur.movej(start_joints, 3.0, 3.0),
37      #2) Approach first cut pose
38      ur.movel(initial_cut_pose, 0.1, 0.1),
39      #3) Move through rest of cut poses
40      [ur.servoc(cp, cut_accel, cut_vel, cut_blend) for cp in cut_poses],
41      #4) Move to end position
42      ur.movej(end_joints, 0.1, 0.15))
43  a = commands
......
Figure 5-7 SequentialCut component and its encapsulated script.
The third addition was a user object named Crumple (Figure 5-8). It was derived from the Fold
component introduced in the 2013 DRS. Crumple had six required inputs: the distance (approach) the
robot has to move forward in order to grip the plastic strip; a vector (rotation) describing the
position of a pivot point for the folding operation; the angle of the fold; the length (c_dist) of a
vector describing the pushing/crumpling movement; and the durations of heating and cooling. The
script encapsulated in Crumple was structured in four chunks corresponding to the following steps in
the robotic process—approach, fold, crumple and retract. Other than the third chunk, the scripts in
the Crumple and Fold components were identical. The introduced lines of code—47 to 53—instruct
the robot to push the strip in the direction of the pivot point after it has been folded, thus crumpling
it.
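The pose construction used throughout the script can be illustrated with a stand-in for ur.pose_by_vectors. The URScript pose literal format p[x, y, z, rx, ry, rz] is standard, but the helper itself is a simplified assumption about YOUR's implementation:

```python
def pose_by_vectors(position, rotation):
    # Build a URScript pose literal from a position vector (metres) and
    # an axis-angle rotation vector (radians).
    x, y, z = position
    rx, ry, rz = rotation
    return "p[{}, {}, {}, {}, {}, {}]".format(x, y, z, rx, ry, rz)

# The fold pose from line 43 of Figure 5-8: a pure rotation about the
# (offset) tool centre point, with an illustrative angle of 1.57 rad.
pose_fold = pose_by_vectors((0, 0, 0), (0, 0, 1.57))
```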
......
35  # Local Motion 1 - Approach and grip
36  v_approach = (0, 0, approach)
37  pose_approach = ur.pose_by_vectors(v_approach, (0,0,0))
38  commands1 = ur.statements(ur.sleep(0.5),                              #1) Slight pause
39      ur.move_local(pose_approach, 0.1, 0.1),                           #2) Approach
40      ur.set_digital_out(grip_io, True))                                #3) Close gripper
41  # Local Motion 2 - Fold
42  pose_tcp_offset = ur.pose_by_vectors(rotation, (0,0,0))
43  pose_fold = ur.pose_by_vectors((0,0,0), (0,0, angle))
44  commands2 = ur.statements(ur.sleep(heat),                             #1) Heat
45      ur.set_tcp(pose_tcp_offset),                                      #2) Set rotation point
46      ur.move_local(pose_fold, fold_accel, fold_vel))                   #3) Rotate
47  # Local Motion 3 - Crumple
48  v_crumple = (-crumple, 0, 0)
49  pose_crumple = ur.pose_by_vectors(v_crumple, (0,0,0))
50  commands3 = ur.statements(ur.move_local(pose_crumple, crumple_accel, crumple_vel),  #1) Crumple
51      ur.set_digital_out(cool_io, True),                                #2) Cool on
52      ur.sleep(cool),                                                   #3) Sleep/wait
53      ur.set_digital_out(cool_io, False))                               #4) Cool off
54  # Local Motion 4 - Retract
55  v_retract = (0, 0, -approach)
56  pose_retract = ur.pose_by_vectors(v_retract, (0,0,0))
57  commands4 = ur.statements(ur.set_digital_out(grip_io, False),         #1) Open gripper
58      ur.sleep(0.5),                                                    #2) Slight pause
59      ur.move_local(pose_retract),                                      #3) Retract
60      ur.set_digital_out(clamp_io, True))                               #4) Open clamp
61  commands_all = ur.statements(commands1, commands2, commands3, commands4)
62  a = commands_all
Figure 5-8 Crumple component and its encapsulated script.
Students were also provided with a sample Grasshopper program and Rhino model of the robotic
setup for each process. Figure 5-9 shows the program for foam-cutting. It had a total graphic token
count of 431 and was structured in eight parts. Part 1 loaded YOUR related modules into
Grasshopper. Part 2 contained standard Grasshopper components that generated two lists of
planes: one describing the surface to be cut; and another describing a path the robot has to move
through, while gripping the foam block, to produce the cut. The surface was generated from two
referenced edge curves (2a) drawn in the digital model. Students could design a new surface by
manipulating these curves; and determine its location by adjusting a slider (2b) that shifts the foam
block closer or further away from the hot wire.
Figure 5-9 The foam-cutting Grasshopper program was organised in eight parts. Part 1 loaded YOUR; parts 2
and 3 generated the cut form and movement targets, as well as a visualisation of the process; parts 4, 5, 6, 7
and 8 were related to robot control and contained YOUR components.
Part 3 produced an approximate simulation of the process. YOUR kinematics components were used
to visualise the robot’s configuration at every specified movement target (Figure 5-10). Part 4
contained a Listener component, which can be used to query information about the robot’s real-time state. The reference base was specified in part 5. Its origin was set at the centre of the physical
hotwire. 323 Part 6 was used to move the robot to its start position. It contained MoveJoints and
SetDigitalOut components connected to a Sender. Part 7 was related to picking; it was nearly
identical to part 6 with the exception of an additional Action component, which was responsible for
generating picking-related instructions. Finally, part 8 was related to cutting and contained the
SequentialCut component connected to a Sender.
Figure 5-10 Screenshot showing a visualisation of the foam-cutting process.
Figure 5-11 shows the program for crumpling plastic. It had a graphic token count of 391 and was
structured in seven parts. Part 1 loaded YOUR. Part 2 contained sliders for specifying the parameter
values for the crumpling process—a rotation point, a folding angle and the pushing/crumpling
distance. The outputs of these sliders were connected to part 3, which generates an approximate
visualisation of the crumpled form and robot configuration (Figure 5-12). It provided students with
immediate visual feedback so that they could understand the effects of changing these parameters.
Parts 4 (listening) and 6 (move to start) were identical to their corresponding parts in the foam-cutting program. Part 5 was for setting the IO values of the clamping station to either open or close
it. Finally, part 7 contained the Crumple component. The parameters from part 2 were connected
directly to it as inputs and in turn, it was wired to a Sender component.
323 The coordinates of the origin point were specified with respect to the robot’s base coordinate system. To get these values, the robot is moved, either manually or through a program, until its tip is at the centre of the wire. The position (x-y-z) component of its pose is then read from the teaching pendant or directly extracted using the Listener component. Subsequently, when a move component is given a target located at the origin of the digital model, as well as the specified base as a reference, the robot should move to the centre of the wire.
Figure 5-11 The plastic-crumpling Grasshopper program was organised in seven parts. Part 1 loaded YOUR;
parts 2 and 3 set the parameters for the crumpling process and generated a visualisation of it; parts 4, 5, 6
and 7 were related to robot control and contained YOUR components.
Figure 5-12 Screenshot of the crumpling visualisation.
Both Grasshopper programs were comparably smaller than the final programs implemented by
teams in the DRS, which had, with the exception of Bent Striations, graphic token counts exceeding
1000. To further improve the readability of these programs, components were organised in groups,
which were named and colour coded, and scribbles were added to comment the graph. In fact,
groups and scribbles, which are forms of secondary notation, constituted about a third 324 of all
objects in both Grasshopper programs.
There were four ways to interact with the foam-cutting Grasshopper program. Students could
directly manipulate the profile curves drawn in the model; adjust the values of parameters defined
in the Grasshopper program; edit the graph; and modify the script within YOUR components. The
Crumple program was similar, except that students could not directly manipulate geometry. Both
programs were run once to demonstrate the robotic processes at the start of the workshop.
However, no explanations were given. Students were instructed to tinker 325 with the given
Grasshopper program and construct their understanding of how the program worked through that
process. Support was only given to teams when it was requested or when their progress had stalled.
5.3 Data collection
Data was collected in three ways. First, students were observed while carrying out their robot
programming task. The author and an assistant were attached to each group and took notes about
their programming activities. In particular, the moments when students encountered a problem and
support was given were recorded. Second, artefacts were collected in the form of students’
programs, sketches and their fabrication results. Students were instructed to work with only one
program and to save multiple versions throughout the session. These programs were subsequently
analysed, using the tools described in Chapter 3.5, to specifically understand how they evolved.
Third, interviews were conducted with students after the workshop. There were three parts to the
interview (see Chapter 9.3—Appendix). First, students were asked to describe their programming
process chronologically using print-outs of their Grasshopper programs and scripts of modified YOUR
components as reference material. Second, they were asked a series of questions based on those
from Blackwell and Green’s Cognitive Dimensions Questionnaire 326. Each question addressed a
cognitive dimension, which describes, in non-specialist terms, a particular feature of a notation. 327
The purpose of asking these questions was to prompt students to evaluate the usability of the given
Grasshopper program and the YOUR toolkit. Third, they were asked to compare the difficulty of
learning different robotics domain specific concepts, and discuss factors that impeded or facilitated
their progress in completing the robot programming task.

324 31% in the case of the plastic-crumpling program and 37% in the foam-cutting program.
325 Resnick and Rosenbaum describe tinkering as a playful, exploratory and iterative style of working. Mitch Resnick and Eric Rosenbaum, “Designing for Tinkerability,” in Design, Make, Play: Growing the Next Generation of STEM Innovators, ed. Margaret Honey and David Kanter (New York: Routledge, 2013), 164.
326 Alan Blackwell and Thomas Green, “A Cognitive Dimensions Questionnaire for Users,” in Proceedings of the Twelfth Annual Meeting of the Psychology of Programming Interest Group, ed. Alan Blackwell and Eleonora Bilotta (Corigliano Calabro: Edizioni Memoria, 2000), 137–152.
The following chapters describe how each team learned to use the sample programs, then modified
and extended them. These descriptions integrate students’ responses from the first part of the
interview. The answers to the second and third part of the interview are discussed collectively
thereafter.
5.4 Results: Group 1
Figure 5-13 The students focused initially on editing the parameters of the plastic-crumpling process (A, B and
C), then on customising the Crumple component (D, E and F). Components highlighted in green were added to
the program, while those in yellow were modified.
The first group began with the plastic crumpling process. Figure 5-13 shows a trace of their
programming activities—components highlighted in green were added, and those in yellow were
modified. At first, students systematically adjusted the sliders that controlled the folding angle
327 Thomas Green and Marian Petre, “Usability Analysis of Visual Programming Environments: A ‘Cognitive Dimensions Framework’,” Journal of Visual Languages and Computing 7, no. 2 (1996): 138.
(Figure 5-13: A), pivot point location (Figure 5-13: B), and magnitude of the crumpling motion (Figure
5-13: C) in the sample program. They ran it after each adjustment and immediately produced a
series of plastic strips. Thereafter, the students began to examine the code in Crumple (Figure 5-13:
E). They first made a copy of the original (Figure 5-13: F) before proceeding to modify the script. As
the students had already executed the robotic process numerous times, they understood the role of
each code chunk—approach, fold, crumple and retract. However, the students were unclear about
specific lines of code.
First they did not understand what a pose was. Hence it was explained to them that a pose describes
the position and orientation of the robot’s tip and is represented through a custom data
abstraction 328. It can be created from two vectors 329, which describe the positional and rotational
components of a pose respectively. The concept of a pose was clearer to students once this relation
to vectors—a familiar abstraction—was drawn. The students also did not understand the set_tcp
function; first what it meant and second, how it was used. It was explained that the tcp—Tool Centre
Point—described the tip of an end effector and was represented as a pose. 330 The function was used
to create a virtual tool centre point corresponding with the pivot point for folding.
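The pose abstraction explained to the students can be sketched in a few lines of plain Python. This is an illustrative stand-in, not YOUR’s actual implementation; only the pose_by_vectors name is taken from the scripts shown above, and the example values are arbitrary.

```python
# Illustrative stand-in for YOUR's pose abstraction (not the library's code).
# A pose packs a position vector and a rotation vector into six values:
# the first three describe position, the latter three orientation.

def pose_by_vectors(position, rotation):
    """Combine a position vector (x, y, z) and a rotation vector
    (rx, ry, rz) into a single 6-value pose."""
    return tuple(position) + tuple(rotation)

# A pure translation: approach by moving 0.1 m along the tool's z-axis.
pose_approach = pose_by_vectors((0, 0, 0.1), (0, 0, 0))

# A pure rotation: fold by rotating about the z-axis (angle in radians).
pose_fold = pose_by_vectors((0, 0, 0), (0, 0, 1.57))

print(pose_approach)  # (0, 0, 0.1, 0, 0, 0)
```

Relating the two vectors to the combined 6-value structure in this way mirrors how the concept was explained to the students.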
Figure 5-14 The students produced a set of strips with different crumpled forms by changing the positional
component of the target pose specified for the folding operation.
The team decided to modify the folding operation, and develop a new compound movement
combining translation and rotation. They specified a point, rather than zero-vector, as the positional
component of the target pose. Subsequently, the robot will move towards this point as it folds the
strip. The point was created in Grasshopper and fed as an input to the modified Crumple component
(Figure 5-13: D). The students added three sliders to set its x-, y- and z-coordinates and explored the
material results of varying pointA’s position (Figure 5-14).

328 A pose is similar to a list with 6 values, the first three representing positional information and the latter three representing orientation.
329 They are a position vector and a rotation vector.
330 It is described with respect to the frame of the robot’s final wrist. The poses describing the tcp for end-effectors used in the workshop had zero vectors as their rotation component. If the end-effector is inclined though, as was the case with the 45 degree grippers in the DRS, then the pose will have a rotation vector.
The team switched to the foam-cutting process in the second block of the session, and began by
adjusting parameters in the sample Grasshopper program shown in Figure 5-15. First, they changed
the inputs to the curve parameters (Figure 5-15: A). By simply mirroring one of the edge curves, the
team was immediately able to cut a torqued surface. Next, they redrew the edge curves and made
them slightly wavy. The resulting surface had a subtle undulating texture. Finally, the students
adjusted a slider (Figure 5-15: B) to offset the foam block. They re-sent the same cutting instructions,
and produced a thickened surface that could stand upright (Figure 5-16).
Figure 5-15 The students edited the parameters of the foam-cutting process (A and B), then experimented
with assembling YOUR components from scratch; and finally modified the SequentialCut component (D).
Figure 5-16 The team produced a thickened surface with an undulating texture by making two identical offset
cuts.
Next, the team focused on the robot control section of the sample program. First they created a
small sub-graph containing a SetDigitalOut component connected to a Sender (Figure 5-15: C) to try
out the process of assembling YOUR components from scratch. Thereafter, they began to examine
the script in SequentialCut. At this point, the different movement types were explained. The students
edited the script by specifying linear rather than servo movements. They discovered that linear
motions have acceleration and deceleration phases, which results in an uneven rate of cutting. This
causes the surface to have a striated texture. To control how pronounced these striations were, the
team created a slider to control the acceleration rate and wired it to the modified SequentialCut (Figure
5-15: D).
The student pair chose to work with the plastic-crumpling process for the second half of the session.
They wanted to find out what type of forms would emerge after a strip was crumpled multiple times,
and therefore extended the process in this direction. Their programming strategy was to decompose
the Crumple component into several simpler ones, edit the code of these new components, and then
re-assemble them to prototype a new process. It therefore involved switching from text back to
graphical programming.
Figure 5-17 The students decomposed the Crumple component into several simpler ones (C and D), and edited
their input parameters (A and B).
In the first stage, the team made four copies (Figure 5-17: D) of the original Crumple component (Figure
5-17: C) and renamed them—Approach, Fold, Crumple and Retract. For each component, the
students kept the code chunk related to the operation but deleted the rest, as well as removed input
parameters that were extraneous. 331 They subsequently adjusted the fold angle and folding point
(Figure 5-17: A), as well as crumpling distance (Figure 5-17: B) parameters that fed into these four
custom components; and wired their outputs to a new Merge (Figure 5-17: E) component to
concatenate the list of instructions.
Figure 5-18 The students created additional customised YOUR components (B, C and D) and focused on
sequencing the output instructions (E).
In the second stage, the team added a copy each of Fold (Figure 5-18: B) and Crumple (Figure 5-18:
C), as well as a new component called MoveX (Figure 5-18: D) to the graph. MoveX generated
331 Several components (Figure 5-17) were red, which reflected an exception in the code, because the script referenced a parameter that had been deleted.
instructions for the robot to pull the strip from the clamping station so a flat section could be
crumpled again. The outputs from the seven custom components were wired to the Merge
component (Figure 5-18: E) in the following order: approach, fold, crumple, movex, fold, crumple,
and retract. The team changed the input angle (Figure 5-18: A) for the original Fold component such
that the robot performs the first and second folding operations in opposite directions. The students
used this program to produce the first crumpled strip; it had a sinusoidal form with a trough and a
crest.
Figure 5-19 The students created additional Fold components (A and B) and focused on sequencing the output
instructions (C).
In the third stage, the students added two more copies of Fold (Figure 5-19: A and B) with attached
sliders for setting new folding angles. The team wired the outputs from their nine custom
components 332 to a list of thirty-seven parameters, which were arranged in a single column and
connected in sequence to a Merge component (Figure 5-20: left). The first and last parameters were
named Approach and Retract respectively. There were seven groups of parameters added in
between. Each comprised five parameters with instructions for folding, crumpling, folding, crumpling
and pulling (MoveX) operations. The team copied and pasted a group each time they wanted to
produce a crumpled strip with an additional trough or crest (Figure 5-20: right).
332 There are actually 9 in the group, but Crumple2 was identical to Crumple1 and thus students did not use it.
Figure 5-20 The instructions generated by the custom components are stored in parameters that are wired to
a Merge component (left); which in turn outputs a list of instructions for repetitive crumpling operations
(right).
The student pair took around 3 ½ hours to produce the first strip in the series. At the start, they
spent most of their time debugging the script within their custom components. To test their program,
they simply ran it without inserting a strip into the clamp. Once the students had observed that the
robot’s motions were generally correct, they began to crumple strips. Initially, the team did not
produce the forms that they envisioned. However, they continually tuned parameter values, such as
folding angles, based on the feedback gained from the material results. Eventually, they gained more
control over the process and produced a series of increasingly crumpled and accurate strips (Figure
5-21).
Figure 5-21 The students produced a series of strips that were increasingly crumpled and accurate, in terms of
matching their expectations.
5.5 Results: Group 2
The second group focused on the foam-cutting process in the first block of the session. Unlike the
previous team, who immediately began running the sample program, they wanted to understand
how it worked first before making a cut. They systematically traced how data flowed through the
graph and ran into difficulties trying to understand a section (part 2) of it. They added panels (Figure
5-22: A) to inspect the output of particular components and scrolled (Figure 5-22: B) through the
different steps of the robot simulation to clear up their misunderstanding.
Figure 5-22 The state of the group’s robot program at the end of the first block in the session. The students
only added panels to inspect data (highlighted in green) and modified slider values for offsetting cuts and
scrolling through the visualisation (highlighted in yellow).
The student team only made their first cut after an hour. They then adjusted a slider (C) to shift the
block closer to the hotwire and recut the same surface to produce a thickened curved sheet. The
students repeated this process and realised a series of such sheets (Figure 5-23). Subsequently, they
began to examine the code in the SequentialCut component, but did not manage to edit it before
running out of time. In fact, other than add panels and adjust sliders, they left the program in its
original state.
Figure 5-23 The students produced a series of thickened surfaces by making repeated offset cuts.
The student trio worked with the plastic-crumpling process next. After gaining an overview of the
program, they shifted part 2, which contained control parameters, and part 7, which contained the
Crumple component, to the bottom of the canvas. Parts 2, 3 (visualisation) and 7 were originally
adjacent to one another, which suggested that the outputs of one part fed the inputs of another to
its immediate right. However, as the students correctly observed, there were no connections
between parts 3 and 7. The original layout of the notation was, in this respect, misleading.
Figure 5-24 The students adjusted the program’s layout, parameter values (A) and modified Crumple (B).
Once they had re-arranged the layout of the original program, the student team realised that they
could either manipulate sliders (Figure 5-24: A) or modify the Crumple component (Figure 5-24: B).
They started with the former and adjusted the slider values for the fold angle; the x-, y- and z-coordinates of the pivot point; and the magnitude of the pushing/crumpling motion. Figure 5-25
shows a series of crumpled forms that they produced with different specified fold angles.
Figure 5-25 A series of crumpled strips produced by varying the folding angle.
The team proceeded to edit the script in Crumple. At this point, the pose and tool centre point (tcp)
concepts were explained. First, the students added another folding operation after crumpling. Next,
they reverted back to the original sequence but edited the crumpling operation. They changed the
vector that specifies the crumpling direction, thus causing the robot to push the strip towards the
pivot point at an angle. At this stage, the team was simply exploring, in a trial and error fashion, the
effects of modifying selected statements in the script.
Figure 5-26 A stepped surface is produced by repeatedly slicing the foam block in smaller sections.
The team chose to work with the foam-cutting process in the second half of the session. Their initial
concept was to create a stepped surface by iteratively subtracting layers from a foam block (Figure
5-26). For each successive layer, the robot shifts the block closer to the hot wire, and traverses a
smaller sub-section of the cut-path before pulling the block out. The students felt that this sequence
could be easily described using a for loop. Hence they decided to focus on scripting.
Figure 5-27 shows the state of their program when they succeeded in cutting a surface with three
steps for the first time. The students had adjusted the sliders for specifying the number of cut-planes
(Figure 5-27: A) and position of the block (Figure 5-27: B). However, the main modification was at
the bottom right corner of the graph (Figure 5-27: D). Instead of editing the script of the original
SequentialCut component (Figure 5-27: C), they decided, similar to the previous group, to decompose it
into simpler components.
Figure 5-27 The team adjusted sliders to vary the number (A) and position (B) of slices; and created a sub-graph (D) to generate the new cutting operations.
Figure 5-28 The students created three custom components to generate instructions for the approach (B),
iterative slicing (C) and exit (D) phases of the process. They replaced the original SequentialCut component (A).
Figure 5-28 shows this corner of the graph in greater detail. The students realised that the robotic
process consisted of approach, cutting and exit phases. They created three custom components that
were each responsible for generating instructions for one of these phases. These instructions were
merged (Figure 5-28: E) together in a list and passed to the Sender (Figure 5-28: F). The first (Figure
5-28: B) and third (Figure 5-28: D) components were copies of SequentialCut. Their scripts were
edited to only include movement statements relating to the approach and exit phases respectively.
The second component (Figure 5-28: C) was a standard Python component. Its script was written
from scratch. The students also made another copy of SequentialCut (Figure 5-28: A) in case they
had to revert back to the original version.
Figure 5-29 shows a snippet of the code from the second Python component. In lines 13-16, the
robot is instructed to move through all the target planes on the specified cut-path. In lines 18-20, it
is instructed to pull the foam block out of the hotwire and then in line 21, return to a pre-cutting
start position. This chunk of code is responsible for subtracting one layer from the block. The
students copied and pasted this chunk twice to cut two more layers. Each time, they sliced the list of
target planes (lines 29 and 40) to shorten the cut-path. They also shifted the target planes 1 cm in
the y-direction (lines 30 and 41), resulting in a deeper cut.
......
13  for target_plane in targets[:24]:
14      target_pose = ur.pose_by_plane(target_plane)
15      command = ur.servoc(target_pose, cut_accel, cut_vel, cut_blend)
16      results.append(command)
17  # finally move out
18  ur.sleep(0.01)
19  move_out_pose = ur.pose(0, 0, -0.05, 0, 0, 0)
20  results.extend(ur.move_local(move_out_pose, 0.1, 0.01))
21  results.append(ur.movej(start_joints, 3.0, 3.0))
......
29  for target_plane in targets[:23]:
30      target_plane.OriginY += 0.01
......
40  for target_plane in targets[:22]:
41      target_plane.OriginY += 0.01
......
Figure 5-29 Code snippet from custom Python component.
Instead of continuing to copy and paste code, it was recommended to students that they first create
a nested list whose length corresponds with the number of layers to cut. Each successive item in the
list is a smaller slice of the input target list that corresponds with a shorter section of the original cut-path. Support was given to implement this solution. Thereafter, students iterated over the nested
loop and generated the movement instructions needed to subtract all layers in a continuous process.
They were able to produce a surface cut with six steps (Figure 5-30) before running into a singularity
error 333.
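The recommended nested-list structure can be sketched in plain Python. The helper name and the stand-in targets below are hypothetical; in the students’ program each slice would additionally be shifted 1 cm deeper and converted into movement instructions.

```python
# Sketch of the recommended nested-list structure (hypothetical names).
# Each successive item is a shorter slice of the target list, replacing the
# students' copied-and-pasted targets[:24], targets[:23], targets[:22] chunks.

def layer_slices(targets, n_layers, first_cut=24):
    """Return one progressively shorter slice of targets per layer."""
    return [targets[:first_cut - i] for i in range(n_layers)]

targets = list(range(30))  # stand-ins for the cut-path target planes
for depth, layer in enumerate(layer_slices(targets, 3)):
    # each pass would shift its planes deeper and emit move instructions
    print(depth, len(layer))
```

Iterating over such a list generates all layers in one continuous pass, without duplicating any code.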
Figure 5-30 The foam block was sliced six times to produce the final stepped surface.
Towards the end of the session, the team came up with a new concept, which was to produce
serrated surfaces (Figure 5-31). Their strategy was to first create two lists of target planes, describing
cut-paths that are offset from one another, and have the robot alternate between them. The robot
would move from target 0 in the first list to target 0 in the second, then to target 1 in the first list
and so on.
Figure 5-31 A serrated surface is produced by making two offset zig-zag cuts.
333 Singularity errors occur when “robot axes are redundant … or when the robot is in certain configurations that require extremely high joint rates to move at some nominal speed in Cartesian space.” Edward Red, “Robotics Overview,” EAAL—Electronics Assembly and Automation Laboratory, accessed January 1st 2016, http://eaal.groups.et.byu.net/html/RoboticsReview/body_robotics_review.html
The students decided to instantiate a new Python scripting component to generate instructions for
making a serrated cut, rather than modify the previous custom component responsible for the
slicing operation. After creating two lists of target planes in the script, they struggled to implement
the alternating movement idea in code. At this point, they were instructed to use the Python zip
function 334, which they were unaware of, to merge the two lists of planes in pairs. A loop was then
created to step through each pair in the sequence and generate linear movement instructions
accordingly. This solved the problem elegantly and students managed to cut the serrated surface
shortly after (Figure 5-32).
Figure 5-32 The second cut is offset from the first one (left) to produce a thickened serrated surface.
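The alternating movement produced with zip can be sketched in plain Python; the string targets below are stand-ins for the two lists of target planes.

```python
# Sketch of the alternating-movement idea: zip pairs the i-th targets of
# the two offset cut-paths so one loop can step back and forth between them.

path_a = ["a0", "a1", "a2"]  # targets on the first cut-path
path_b = ["b0", "b1", "b2"]  # targets on the offset cut-path

serrated = []
for target_a, target_b in zip(path_a, path_b):
    serrated.append(target_a)  # move to target i on the first path...
    serrated.append(target_b)  # ...then to target i on the offset path

print(serrated)  # ['a0', 'b0', 'a1', 'b1', 'a2', 'b2']
```

Generating linear movement instructions for the targets in this interleaved order produces the zig-zag cut.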
5.6 Results: Group 3
The third group began with the foam cutting process. They ran the program in its original state to
make their first cut. Subsequently, the team came up with the idea of cutting a more complex finger
joint. Figure 5-33 shows their modified version of the sample foam-cutting program. They added a
sub-graph (Figure 5-33: A) containing seven components and a slider. It copied the original list of cut
planes and offset each plane in the direction of its normal, and then wove the two lists together. 335
The resultant cut-path was visualised by drawing a polyline through the origins of each plane in the
woven list (Figure 5-34). The students connected the new target planes to the Inverse Kinematics
component and scrolled (Figure 5-33: B) through a simulation of the cutting process. They also
added panels to inspect both the input and output (Figure 5-33: C) of the SequentialCut component
(Figure 5-33: D).

334 zip is a built-in Python function that “returns a list of tuples, where the i-th tuple contains the i-th element from each of the argument sequences or iterables.” “Built-in Functions,” Python, accessed January 1st 2016, https://docs.python.org/2/library/functions.html#zip
335 The Weave component requires a ‘weaving’ pattern to be specified. This pattern is “a list of index values that define the order in which input data is collected”.
Figure 5-33 The team added a sub-graph (A) to part 2 in order to generate the new cutting path; and modified
the SequentialCut component (D) in the foam-cutting program.
Figure 5-34 Screenshot showing the visualised cut-path.
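The collection logic of Grasshopper’s Weave component, as used here with two streams, can be approximated in plain Python. The function below is a simplified stand-in written for illustration, not the component’s actual implementation.

```python
# Simplified sketch of Weave's collection logic: a repeating pattern of
# index values determines which input stream the next item is drawn from.
# Pattern [0, 1] alternates between the original cut planes (stream 0)
# and their copies offset along the plane normals (stream 1).

def weave(pattern, *streams):
    iters = [iter(s) for s in streams]
    woven = []
    for index in pattern * max(len(s) for s in streams):
        try:
            woven.append(next(iters[index]))
        except StopIteration:
            pass  # this stream is exhausted; skip its turn
    return woven

planes = ["p0", "p1", "p2"]   # original cut planes
offsets = ["q0", "q1", "q2"]  # planes offset along their normals

print(weave([0, 1], planes, offsets))  # ['p0', 'q0', 'p1', 'q1', 'p2', 'q2']
```

Drawing a polyline through the origins of the planes in this alternating order yields the finger-joint cut-path.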
At first, the team did not manage to cut their surface successfully. This was because the script in
SequentialCut specified circular servo movements with a pre-set blend radius. These movements
were ideal for cutting smooth, but not angular surfaces. At this point, the different robot motion
types were explained. The students subsequently specified linear movements instead and managed
to accurately cut their finger-joint surface as a result (Figure 5-35).
Figure 5-35 The team modified the sample program to cut a finger joint.
The student pair switched to the crumpling process next. They began adjusting sliders in the original
Grasshopper program that controlled the folding angle (Figure 5-36: A), location of the pivot point
(Figure 5-36: B), and magnitude of the crumpling motion (Figure 5-36: C). Meanwhile, they correctly
guessed how the robot control section of the graph (parts 4, 5, 6 and 7) worked, as it was structured
in a similar way to the previous foam-cutting program and only contained one new component—
Crumple.
Figure 5-36 The students adjusted sliders in the plastic-crumpling program (A, B and C); and then modified the
script in the Crumple component.
Figure 5-37 The plastic strip is produced as a result of modifying the original folding and crumpling motions.
After the students had systematically tested the effect of changing each parameter in the program,
they decided to explore the idea of twisting the strip. Since a “twist” parameter was not provided,
they began to examine the script in Crumple (Figure 5-36: D). The students could understand what
each code chunk in the script was for, but were unclear about specific lines that involved poses,
which were subsequently explained to them. The students made two modifications to the script.
First, they changed the rotation vector 336 for the folding movement, and second, edited the vector
that describes the crumpling direction. A twisted strip was produced as a result of these two modified
movements (Figure 5-37).
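The kind of vector edit involved can be illustrated in plain Python. The rotate_z helper below is hypothetical, written only to show the geometric effect; the students achieved the analogous result by editing the vectors directly in the Crumple script.

```python
# Hypothetical helper illustrating the vector edit: rotating the crumpling
# direction about the z-axis makes the robot push the strip at an angle
# rather than straight towards the pivot point.
import math

def rotate_z(v, angle):
    """Rotate vector v = (x, y, z) by angle radians about the z-axis."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c, z)

v_crumple = (-0.05, 0.0, 0.0)                       # original push direction
v_angled = rotate_z(v_crumple, math.radians(30.0))  # push at a 30 degree angle
```

The same axis-angle idea underlies the rotation component of a pose: its direction is the rotation axis and its magnitude the rotation angle.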
The team chose to work with the crumpling process in the second half of the session. Instead of
continuing to simply twist the strip though, they decided to crumple it repeatedly. The team felt that
it would be easier to implement such procedural logic in textual code, 337 and thus focused on
scripting within the Crumple component. First, they undid earlier changes made to the script for
producing twisted strips. Second, they added a code chunk describing a new pulling operation. It
instructed the robot to open the clamp (Figure 5-38: line 65), and pull the strip out (Figure 5-38: line
67) before clamping it again (Figure 5-38: line 69). Next, they copied and pasted code chunks relating
to folding, crumpling and pulling operations; and extended the final list of instructions with those
generated by these chunks. Consequently, the robot approaches and grips the strip; folds, pulls and
crumples it repeatedly; and finally releases it; thus producing the type of crumpled forms shown in
Figure 5-39.
......
61  # Local Motion 3 - Pull Acrylic
62  v_crumple = rg.Vector3d(crumple, 0, 0)
63  v_crumple.Rotate(-angle, rg.Vector3d.ZAxis)
64  pose_crumple = ur.pose_by_vectors(v_crumple, (0,0,0))
65  cmd_pull = ur.statements(ur.set_digital_out(clamp_io, True),              #1) Open Clamp
66                  ur.sleep(0.5),
67                  ur.move_local(pose_crumple, crumple_accel, crumple_vel),  #2) Pull from clamp
68                  ur.sleep(0.5),
69                  ur.set_digital_out(clamp_io, False))                      #3) Close clamp
......
Figure 5-38 The team added a code chunk to the script in Crumple describing a new pulling operation.
336 The rotation vector represents orientation. Its direction describes the axis of rotation, while its magnitude denotes the angle of rotation.
337 When asked why they chose a scripting-based approach, one student replied that “Grasshopper is good for small things, but complex logics are easier [to express] in code.”
Figure 5-39 The team modified the component to repeatedly crumple a section of the strip.
At this stage, the students decided to alter the fabrication process; they wanted to distribute the
crumpled folds along the length of a strip rather than concentrate them at a localised section. The
students introduced a new operation, named retract, where the robot releases the strip and returns
to a predefined start position. It was inserted after an approach-fold-crumple-pull sequence. A new
code chunk related to this operation was added to the script (Figure 5-40: lines 61–68). At the same
time, the students refactored their code to improve its readability. They deleted previous copied-and-pasted code chunks, which were essentially redundant, thus making the script more concise.
Using their modified Crumple component, the team managed to produce folded strips with straight
sections and crumpled corners (Figure 5-41).
......
61  # Local Motion 4 - Retract
62  v_retract = (0,0,-approach)
63  pose_retract = ur.pose_by_vectors(v_retract,(0,0,0))
64  cmd_retract = ur.statements(ur.set_digital_out(grip_io, False),           #1) Open grip
65                              ur.sleep(0.5),                                #2) Slight pause
66                              ur.move_local(pose_retract),                  #3) Retract
67                              ur.movej([4.06569, -1.39741, 1.97041, -0.57166, 2.49420, -3.13756],
68                                       3.00, 0.75, 0.00, 0.00))
......
92  commands_all = ur.statements(cmd_approach,cmd_fold,cmd_crumple1,cmd_pull,cmd_retract,
93                               cmd_approach,cmd_fold,cmd_crumple1,cmd_pull,cmd_retract,
94                               cmd_approach,cmd_fold,cmd_crumple1,cmd_pull,cmd_retractend)
......
Figure 5-40 The team added a code chunk (lines 61–68) describing a new retract operation to the script in
Crumple; and extended the list of instructions (lines 92–94) to reflect the new sequence of operations.
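The repeated sequence in lines 92–94 was produced by copying chunks by hand; in plain Python the same instruction list could be generated with a loop. The sketch below uses simple string stand-ins for the commands (the names are illustrative, not the actual YOUR API):

```python
# Build the repeated approach-fold-crumple-pull-retract instruction list
# with a loop instead of copy-and-paste. Commands are stood in by plain
# strings; in the real script they would be urscript statement chunks.

cmd_approach = "approach"
cmd_fold = "fold"
cmd_crumple = "crumple"
cmd_pull = "pull"
cmd_retract = "retract"
cmd_retractend = "retract_end"

cycle = [cmd_approach, cmd_fold, cmd_crumple, cmd_pull, cmd_retract]

commands_all = []
for _ in range(3):                    # three crumpled sections along the strip
    commands_all.extend(cycle)
commands_all[-1] = cmd_retractend     # the final cycle ends with the end-retract
```

Changing the repetition count then becomes a one-line edit instead of another round of copying and pasting.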
Figure 5-41 Strips with straight segments and u-shaped crumpled folds at their corners.
In the final phase of their exploration, the students wanted to produce a new s-shaped crumpled
fold. They introduced a new unfold operation, and sequenced it after the crumpling operation in the
script (Figure 5-42). It was responsible for straightening out the strip (Figure 5-43: left) and forming
the second bend in the ‘s’. Implementing the sequential logic of this process proved to be
straightforward. However, the students had to conduct extensive empirical tests in order to fine-tune the fabrication parameters, such as the heating and cooling times. 338 Only then did they
achieve accurate s-shaped crumpled folds (Figure 5-43: right).
......
132  commands_all = ur.statements(cmd_approach,cmd_fold1,cmd_crumple,cmd_unfold1,cmd_pull1,cmd_retract,
133                               cmd_approach,cmd_fold2,cmd_crumple,cmd_unfold2,cmd_pull2,cmd_retract,
134                               cmd_approach,cmd_fold1,cmd_crumple,cmd_unfold1,cmd_pull1,cmd_retract,
135                               cmd_approach,cmd_fold2,cmd_crumple,cmd_unfold2,cmd_retractend)
......
Figure 5-42 The team specified repeated approach-fold-crumple-unfold-pull-retract operations in the final
version of the script in their modified Crumple component.
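The empirical fine-tuning of heating and cooling times described above amounts to a small parameter sweep. A sketch of how such trials could be enumerated, with value ranges that are assumptions rather than the team's actual settings:

```python
import itertools

# Enumerate candidate (heating, cooling) time pairs for physical test runs.
# The ranges below are illustrative placeholders, not the team's values.
heat_times = [2.0, 3.0, 4.0]   # seconds the strip is heated before folding
cool_times = [0.5, 1.0]        # seconds of rest before crumpling

trials = list(itertools.product(heat_times, cool_times))
# Each pair would parameterise one test run on the robot; the material
# result is inspected and the best-performing pair kept for the final program.
```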
338 One student described the serendipitous manner in which they arrived at a solution: “We thought why not give it (the strip) time after heating, give it time to rest before crumpling, then accidentally we found that this slight pause makes the perfect fold.”
Figure 5-43 Final set of strips with s-shaped folds (left); fold detail (right).
5.7 Results: Group 4
The final group worked with the crumpling process first and began by adjusting sliders in the original
program. They increased the folding angle (Figure 5-44: A) and crumpling distance (Figure 5-44: C),
as well as shifted the pivot point (Figure 5-44: B) closer to the clamping station. After exploring the
effects of changing these parameters incrementally, students began to examine the script
encapsulated in Crumple (Figure 5-44: D). As with other groups, the pose and tool coordinate system
concepts were explained at this point.
Figure 5-44 Students adjusted sliders in the plastic-crumpling program first (A, B and C), then modified the
Crumple component (D).
The team changed the original folding movement by specifying a new rotation vector (Figure 5-45:
line 43) in the script. It was no longer aligned with the z-axis of the tool coordinate system, thus
causing the robot to twist the strip (Figure 5-46: left). In addition, they introduced a second folding
operation with a different rotation vector (Figure 5-45: lines 55–58). Figure 5-46 (right) shows the
material results of carrying out this extended sequence of fold-crumple-fold operations.
......
41  # Local Motion 2 - Fold
42  pose_tcp_offset = ur.pose_by_vectors(rotation, (0,0,0))
43  pose_fold = ur.pose_by_vectors((0,0,0), (angle,-angle,0))
44  commands2 = ur.statements(ur.sleep(heat),                                 #1) Heat
45                            ur.set_tcp(pose_tcp_offset),                    #2) Set Rotation point
46                            ur.move_local(pose_fold,fold_accel,fold_vel))   #3) Rotate
......
54  # Local Motion 4 - Fold
55  pose_tcp_offset = ur.pose_by_vectors(rotation, (0,0,0))
56  pose_fold = ur.pose_by_vectors((0,0,0), (-angle,0,0))
57  commands4 = ur.statements(ur.set_tcp(pose_tcp_offset),                    #1) Set Rotation point
58                            ur.move_local(pose_fold,fold_accel,fold_vel))   #2) Rotate
......
Figure 5-45 The team altered the rotation axis for the original folding movement (line 43) and added a code
chunk related to a second folding operation in the script of Crumple.
Figure 5-46 The robot twists the strip because the rotation vector is no longer aligned with the z-axis of its tool
coordinate system (left); strips produced as a result of fold-crumple-fold operations (right).
The team switched to the foam-cutting process for the rest of the workshop. Figure 5-47 shows the
state of their Grasshopper program during the initial phase of their exploration. They started off by
drawing new curves in the digital model that described the profile of the surface they wanted to cut.
These curves were then set as input parameters to the graph (Figure 5-47: A), and the students then
ran the program to cut their first surface. Next, they adjusted another slider (Figure 5-47: B) to shift
the foam block closer to the hotwire, thus resulting in a deeper cut. By continually shifting the block,
they realised a series of thin curved slices.
Figure 5-47 Students changed the input curve parameters (A) and adjusted the slider for shifting the foam
block (B) in the original foam-cutting program.
Figure 5-48 The team generated different surfaces to cut by changing their profile curves from splines (left) to
polylines with six segments (middle) and finally, to polylines with two segments (right).
Thereafter, the team wanted to explore what other types of surfaces could be produced. They
switched from smooth profile curves (Figure 5-48: left) to segmented polylines (Figure 5-48: middle),
and generated an angular, faceted surface. Figure 5-49 shows the state of their foam-cutting
program during this phase of their exploration. The team edited the initial curve parameters (Figure
5-49: A) to reference these newly drawn polylines. At this stage, they were also advised to switch to
a linear type motion. However, when the team attempted to cut the faceted surface, they
repeatedly ran into an error that forced the robot to shut down.
Figure 5-49 The state of the team’s foam-cutting program at the end of the session. The students referenced
different input profile curves (A); created a new subgraph that generated target planes (B); and modified the
inputs (C) and script of the SequentialCut component (D and E).
It was explained that the error was caused by a joint rotating at excessive speeds. This could occur if
the robot has to abruptly change the orientation of its pose, as is the case when it transitions from
cutting one facet of the surface to another. Two solutions were attempted. First, the speed and
acceleration parameters for the movel function were lowered. This was changed in the script of the
SequentialCut component (Figure 5-49: D). Second, the configuration of the robot, in terms of joint
angles, at the start of the cutting motion was adjusted (Figure 5-49: C). However, neither solution
worked. This prompted a change in strategy; the team would revert to a simpler surface and then
raise its formal complexity incrementally. Hence each input curve was changed to a two-segment
polyline (Figure 5-48: right).
Furthermore, the team was advised to reduce the number of cut planes and to position them at the
edge between facets. They consequently added a sub-graph to the program to generate these new
cut planes (Figure 5-49: B). The students managed to cut the simplified surface as a result (Figure
5-50: left). They were also encouraged to experiment with a different type of motion, whereby the
robot moves linearly in joint space rather than tool space, as was previously the case. This motion
reduces the risk of a speed limit violation error occurring. However, the cut-path also becomes less
predictable since the tool is no longer constrained to a line. To specify this motion type, the movej
function was called instead of movel in the script of the SequentialCut component. Figure 5-50 (right)
shows a surface that was cut after this modification was implemented.
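The choice between the two motion calls can be sketched as follows; the stub functions below only mimic the call shape described in the text and are not the actual YOUR API:

```python
# Sketch: choosing between tool-space (movel) and joint-space (movej)
# motions. The functions are illustrative stand-ins for the YOUR wrappers.

def movel(pose, accel, vel):
    """Linear motion in tool space: the tool tip follows a straight line."""
    return ("movel", pose, accel, vel)

def movej(target, accel, vel):
    """Linear motion in joint space: each joint interpolates directly,
    so the tool path is curved and less predictable in tool space."""
    return ("movej", target, accel, vel)

def cut_motion(target, use_joint_space):
    # Joint-space motion reduces the risk of joint-speed-limit (singularity)
    # errors, at the cost of a less predictable cut path.
    if use_joint_space:
        return movej(target, 1.0, 0.5)
    return movel(target, 1.0, 0.5)
```

Swapping the flag is the programmatic equivalent of the team's edit: calling movej in place of movel inside the SequentialCut script.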
Figure 5-50 Surfaces cut using linear (left) and joint-based motions (right).
Switching to joint-based motions proved to be a breakthrough. It allowed the team to avoid
singularity errors and they consequently managed to make fifteen cuts within the next two hours.
The joint-based motion also produced curvilinear forms. The team decided to explore this
unanticipated aesthetic result further. They cut a more complex surface by referencing the earlier
segmented polylines (Figure 5-48: middle) as input curve parameters in their foam-cutting program.
The students shifted the block and re-executed the program, producing a thickened wavy sheet as a
result (Figure 5-51: left). Finally, they explored the concept of superimposing two different cuts to
create a curved form with contrasting front and back surfaces (Figure 5-51: right).
Figure 5-51 A thickened wavy surface is produced by making two offset joint motion cuts (left); A thickened
surface with contrasting front and back faces is produced by superimposing two different joint motion cuts
(right).
5.8 Interview results
In the post workshop interview, students were asked a series of questions regarding the usability of
the Grasshopper programs prepared for them. Each question corresponded to a cognitive
dimension. 339 Several questions elicited longer responses from students, suggesting that those
dimensions were more relevant to them. The responses to these questions will be discussed here.
The first related to visibility 340. Students were asked how easy or difficult it was to see all parts of the
program at once. They stated that it was possible to get an overview of the program by zooming
out far enough; at this level, however, groups could be distinguished but not individual components.
They noted that certain aspects and items in the graphical notation stood out: colours, scribbles, and
objects with iconic representations such as number sliders 341. By comparison, all components looked
alike and could only be identified through their names or symbols. One student suggested that YOUR
components should be further differentiated from standard Grasshopper ones by, for example, using
colour or at the very least, adding a prefix to their names.
339 The dimensions addressed in the questionnaire are: visibility, viscosity, diffuseness, hard mental operations, error proneness, closeness of mapping, role-expressiveness, hidden dependencies, progressive evaluation, premature commitment, consistency, secondary notation and abstraction.
340 Visibility refers to whether “every part of the code is simultaneously visible.” Green and Petre, “Usability Analysis of Visual Programming Environments,” 139.
341 One student stated that “number sliders are really visible … you don’t have to dig into the code to find the line that sets the parameter value.”
Students also responded to the question relating to role-expressiveness 342. They were asked how
easy or difficult it was to infer the purpose of: different sections of the graph, individual YOUR
components, and the code they encapsulated. Most of them interpreted this question as whether
they could understand the given program. A majority of the students considered the graphical
notation to be less comprehensible than the scripts. They reported difficulty tracing data flow
through the graph, especially in the upstream section. 343 Students tried to infer the role of YOUR
components by their names. 344 However, they reported that some names, such as Action, were too
generic 345 or, in the case of SetDigitalOut, used technical terms that they were unfamiliar with.
Students described the internal scripts of YOUR components as being well-structured. A script was
organised in small blocks that were extensively commented, thus making it easier to understand.
Furthermore, unlike the graphical notation, there was no ambiguity about how a script should be
read, which was from top to bottom.
When asked a question concerning secondary notation 346, students responded that cues such as
groupings, colours and (scribble) comments did help to make the Grasshopper program more
readable. Some of them pointed out the parts of the graph related to robot control (e.g. Listen, Set
IO, Go To Start) as examples where the notation was well-designed. 347 Each part only contained a
few components, was well labelled, and could be run separately. Students had more difficulties
reading the parts that generated the form (foam) or visualisation (crumple). Compared to the robot
control parts, they had larger graphic token counts and a lower percentage of objects dedicated to
secondary notation. 348
Another question related to viscosity 349. Students were asked how easy or difficult it was to
change the graphical program and the code encapsulated by YOUR components. Students replied
that it was very easy to make individual changes, for example adjusting slider and panel values, or
342 Role-expressiveness refers to whether the reader can “see how each component of a program relates to the whole.” Green and Petre, “Usability Analysis of Visual Programming Environments,” 139.
343 One student replied that “I can go through one by one but I don’t feel like doing it because it is tedious.” Another stated that “with Grasshopper, I always have a difficulty tracing the wires.”
344 One student stated that “I just assume a component does what its name suggests.”
345 The student criticised the name as being “too generic” and stated that in comparison, she could at least guess that the SequentialCut component was related to cutting foam.
346 Secondary notation refers to “layout, colour, [or] other cues [that] convey extra meaning, above and beyond the ‘official’ semantics of the language.” Green and Petre, “Usability Analysis of Visual Programming Environments,” 139.
347 One student stated that “this part (robot control) is easier to read because there are fewer components, but more descriptive headings.”
348 For example, in the plastic-crumpling program, the visualisation group had a graphic token count of 103, while the largest robot control group (Go to Start) had a token count of 36. Moreover, the former had a lower percentage of secondary notation related objects (21%) compared to the latter (36%).
349 Viscosity refers to “how much effort is required to perform a single change.” Green and Petre, “Usability Analysis of Visual Programming Environments,” 139.
re-ordering the wires that enter the Weave component. However, it was harder to make structural
changes to the Grasshopper program, especially to parts that they did not understand. Students
found the script easy to change. As the code was well-commented, they knew which chunks to focus
on. In most cases, teams only made selective edits to the script, for example by commenting out a
few statements or invoking a different type of movement function.
The last cognitive-dimensions question that students responded to concerned abstraction.
Students were asked whether they created new components or functions, and if they used the
LoadFunction component. Only group 2 created custom components from scratch, while the others
copied and modified existing ones. Teams generally expressed confidence in being able to create
custom components so long as they could reference an existing one as they implemented it. 350 No
team used the LoadFunction component. However, a student from the first group reported that she
was planning to use it later on to re-organise their Grasshopper program, but their priority was to
implement the program correctly first. 351 A student from group 3 stated that he would decompose
the script in their modified component into several functions and add them to YOUR if he were to
work with their process further. 352
In the final part of the interview, students were asked to evaluate the difficulty of various robotics
domain specific concepts, and to discuss factors that impeded or facilitated their progress in
completing the robot programming task. According to them, the different motion concepts—linear,
joints, local, servo—were easy to understand because each of them could be demonstrated
physically with the robot. Meanwhile, students were less clear about poses, axis-angles and
reference bases. However, these concepts were rendered less abstract when explained in relation to
planes, vectors and transformations, which students were familiar with from a previous design
computation course. Three of the four teams considered singularities to be the most difficult
concept. 353 According to them, the reasons why these singularity errors occurred were opaque and
therefore have to be clearly explained; they suggested that prevention strategies be explicitly taught,
preferably through demonstrations.
350 A student stated that “with something existing (the script in a YOUR component), it is easy to edit. But if you ask me to write the code for a new compound action from scratch, I don’t think I can do it.”
351 The team created four custom components. The plan was to instantiate a new Python component and invoke functions equivalent to these components in the script. Hence this new Python component would replace the previous four, allowing them to, in their words, “clean up the Grasshopper program.”
352 The student stated he would like to create “a different function for every robotic operation [in their process] because right now, [their] component was hard-coded and [therefore] not very flexible.”
353 Group 1 was the exception. They did not experience any singularity errors during the workshop and thus did not comment on it.
Students considered the need to learn the aforementioned domain concepts as the main
impediment to their progress in developing the robot program. Understanding these concepts, even
partially, helped to unlock new possibilities in terms of achieving more sophisticated fabrication
results. Each team was, on the whole, proficient enough in Grasshopper and Python scripting to
implement their fabrication concept. However, group 2, who focused on scripting, spent a
considerable amount of time fixing syntax errors in their code and required additional assistance in
this regard.
According to students, timely intervention by the author or assistants was a key factor that helped
them to progress. On one hand, students reported that it was difficult to understand concepts, such
as movement types or singularities, if they were introduced too early by instructors. This
was because they could not relate the concept to a concrete example. On the other hand, they
reported being frustrated when intervention was delayed, as the only way they could solve a
problem was through trial-and-error. 354 In addition, students identified the possibility of producing
unanticipated material outcomes as a factor that motivated them to develop their programs
further. 355
5.9 Pedagogic issues
For the workshop, students had to learn how to control a prepared robotic fabrication process using
a sample program, and thereafter, customise the process by extending the program. It was unclear if
these tasks were realistic considering the short duration of the workshop, and the fact that students
had no previous robotics knowledge and limited programming experience. Yet groups 1 and 3
managed to accomplish the tasks almost independently. They only requested help from the author
or assistants to explain specific domain concepts, such as poses or movement types, which they
could not understand from looking at the code or observing the robot alone. Groups 2 and 4
required similar explanations, as well as additional support in terms of coding and solving singularity
errors respectively.
354 A member of group 4 noted that at one point of the workshop, his group was “stuck” and they did not know how to “bridge the difference between what we envisioned [in terms of the robot’s movement] and how the machine actually reacted [an error].” Another student stated that “we really needed you (the instructor) to step in to explain the different motion types, or else we simply would not know [how to proceed].”
355 One student stated that the unpredictability of the result was “very exciting”, while another was driven to find out how the “motions [we] designed translated to a material outcome.”
Furthermore, students exceeded expectations by producing material results that were surprising in
their extent and inventiveness. For example, group 4 realised a wider range of cut forms than the
Mesh Towers team did, and in a far shorter amount of time. Similarly, group 3 produced acrylic
sheets with more complex deformations than the Vertical Avenue team did, and with an equivalent,
if not greater, degree of precision. To be fair, students in the workshop did not have to develop a
physical setup nor design a high-rise at the same time. And they were re-using knowledge produced
in the DRS that was captured in the form of YOUR abstractions.
Nonetheless, these results prove that students with no robotics experience can learn how to
control a robotic process and achieve material results within a short period of time—in this case, less
than a single day. 356 The caveat is that students should already be familiar with programming,
though they need not be proficient at it, and with computational geometry concepts such as vectors
and planes. Further empirical studies have to be conducted to determine whether students, who do
not have such knowledge, can achieve similar outcomes; and if so, how long it would take.
The results also suggest that students are able to acquire knowledge of underlying robotics domain
concepts that, while difficult to grasp, can be transformative in nature. 357 For example, groups 1 and
3 began producing more complex crumpled forms only after learning how to create and manipulate
a pose abstraction. Moreover, students can re-apply this fundamental knowledge if they are to work
with different robotic processes in future. Therefore, a workshop should explore processes that
embody such concepts so that students can be exposed to them. In this regard, subtractive and
formative processes are suitable choices, but not pick-and-place.
In a workshop setting with tight time constraints, students should be tasked to modify a working
sample program, rather than create a new one from scratch. This allows them to immediately
engage with the robotic process, thus avoiding the problem of delayed gratification 358, which can
demotivate learning. There were several observable patterns in the way teams modified a program.
Initially, they tried to understand how it worked by making small, incremental changes and
observing the results. It was a common strategy to read the graphical program in chunks rather than
at a more granular component level. Next, teams transitioned to a lower level of abstraction to
investigate what additional changes could be made. They inspected the script in YOUR components
356 Students were producing a series of artefacts with both sample programs in the first day of the session. They focused on extending the program in the second day.
357 Rountree et al. state that pivotal threshold concepts are challenging for the learner to understand, but can transform their view of a subject matter. Janet Rountree et al., “Elaborating on Threshold Concepts,” in Computer Science Education 23, no. 3 (2013): 265.
358 Thomas Green and Marian Petre, “Usability Analysis of Visual Programming Environments: A ‘Cognitive Dimensions’ Framework,” in Journal of Visual Languages and Computing 7, no. 2 (1996): 165.
and explored what new functions were available for use. Thereafter, teams either chose to continue
with a scripting-centric approach (groups 2 and 3) or revert to graphical programming (groups 1
and 4).
The design implications for the sample program are as follows. It should offer students the flexibility
to work textually or graphically, as both are equally valid. In this regard, a member of group 1
criticised the lack of graphical equivalents to code chunks in the scripts of the main YOUR
components (Crumple and SequentialCut). They had to create such components on their own first before switching
back to graphical programming. In addition, the program should offer both a higher level of
abstraction, where students can focus on changing parameters to quickly get varied results; and a
lower one, where they can specify the individual steps of the robotic process in more detail. Finally,
to ease comprehension, programs should be structured in chunks—by grouping related components
and organising code in blocks. The role of each chunk should be clearly described in
scribbles/comments. Further ways of improving the comprehensibility of a program are discussed in
the following chapter.
The results also suggest that the strategy of encouraging students to learn through tinkering can
be effective in a workshop setting. Students were emancipated from having to solve a defined
problem and directed their explorations according to material discoveries. This helped to explain the
wide variety of results achieved by teams at the end of the workshop. They reported that it was
motivating to be able to change a part of the program and directly see the physical effect. Some
students noted that this tight feedback loop helped them to understand abstract domain concepts
better. However, it is also important to note the limitations of this tinkering-based strategy. First,
intervention from instructors is still necessary because certain concepts have to be explicitly taught.
Furthermore, it has to be well-timed; this implies close observation of students throughout the
programming process, which is only realistic in certain pedagogic settings.
Second, not all students are inclined to tinker. For example, group 2 was hesitant about running a
program without fully understanding it first. One member expressed his preference for a more
structured approach, whereby a goal is first defined and then steps are taken to achieve it. He
conformed more to the “planner” rather than “bricoleur” type of novice programmer identified by
Turkle and Papert. 359 Instructors should be aware of these different tendencies in students and tailor
their assistance accordingly. For example, planners should be encouraged early on to modify the
359 Sherry Turkle and Seymour Papert, “Epistemological Pluralism: Styles and Voices within the Computer Culture,” in Signs 16, no. 1 (1990): 128–157.
program by reassuring them that there is no correct solution; while tinkerers should be discouraged
from “making changes more or less at random” 360 by asking them to periodically identify a goal.
A final lesson learnt from the workshop is that there is a creative aspect to the act of programming
robotic fabrication processes. In the previous case study, students in the DRS tended to design a
geometric form first and then program a robotic process to reproduce it materially. In the workshop,
students designed the robotic process first and then explored what forms could emerge. This latter
approach led to unanticipated material results, thus opening new avenues for creative exploration.
This theme will be further discussed, amongst others, in the following chapter.
360 Anthony Robins et al., “Learning and Teaching Programming: A Review and Discussion,” in Computer Science Education 13, no. 2 (2003): 155.
6 Discussion
The results of the case studies are interpreted and organised thematically for discussion. The first
topic addresses the problem of scalability with regards to implementing robot programs. Thereafter,
the importance of end-user extensibility as a requirement for a novice robot programming system is
discussed. Next, the dichotomy between visual and text programming is questioned and an
argument is made in favour of a hybrid approach instead. The final two topics discuss the creative
aspect of programming robotic fabrication processes, and the potentials of a collaborative human-robot approach to building.
6.1 A problem of scale
The Design Research Studio (DRS) represented one of the first, if not the first, attempts to build high-rise
models by robotic means. Consequently, it was unclear, at the beginning, how complex the robot
programs would have to be. And it turned out that students implemented, at least initially, programs
that were larger than anticipated. For example, the Rochor Tower, Tiong Bahru Tower and Lakeside
Tower teams created Grasshopper programs that were approximately 22, 12 and 4 ½ times the size
of the sample program given out at the start of the 2012 DRS (see Table 6-1: 2012 DRS spring
semester).
One reason was simply that students were developing more complex high-rise designs. The
Rochor Tower team’s program was significantly larger than the rest because it was the only one that
also described the algorithmic logic for generating the high-rise design. The increased complexity of
designs led to another problem, common to all teams, which was the management of data.
Significant portions of students’ Grasshopper programs were dedicated solely to parsing geometric
data, originating from a baked model or generated upstream from the design section of the program,
to extract information necessary for robotic operations; or in the case of Lakeside Tower, sorting the
data and re-sequencing it according to the assembly logic. Finally, students had to implement robust
adjustment mechanisms in their programs to correct for inaccuracies discovered in the physical
model fabrication process.
Project                      Graphic      Textual      Total number of YOUR   Number of different
                             Token Count  Token Count  components (standard   custom YOUR
                                                       and custom)            components
2012 Spring semester
Sample                       282          3,013        8                      -
Tiong Bahru Tower            3,440        11,515       31                     0
Lakeside Tower (vert.)       1,056        3,779        11                     3
Lakeside Tower (hor.)        1,646        5,118        11                     3
Rochor Tower                 6,334        24,803       46                     1
2012 Fall semester
Nested Voids                 4,096        18,926 361   57                     1
Bent Striations              196          3,164        5                      2
Undulating Terraces          1,083        2,444        11                     2
2013 Spring semester
Sample                       3,399        6,604        46                     -
Mesh Towers                  3,270        8,134        61                     0
2013 Fall semester
Sequential Frames            747          11,376       34                     2
Mesh Towers                  1,215        5,398        14                     8
Vertical Avenue (pre-fab)    1,465        12,777       15                     2
Vertical Avenue (assembly)   1,424        8,862        34                     1
Table 6-1 The graphic and textual token count, as well as number of YOUR components used in the final
programs developed by student teams in the DRS.
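The size multiples quoted earlier can be recovered from the graphic token counts in Table 6-1; a quick check in Python, using the 2012 spring-semester figures:

```python
# Compare each 2012 spring-semester program's graphic token count with the
# sample program handed out at the start of the semester (Table 6-1).
sample_count = 282

programs = {
    "Rochor Tower": 6334,
    "Tiong Bahru Tower": 3440,
    "Lakeside Tower (vert.)": 1056,
    "Lakeside Tower (hor.)": 1646,
}

ratios = {name: count / float(sample_count) for name, count in programs.items()}
# Rochor Tower comes out at roughly 22x the sample and
# Tiong Bahru Tower at roughly 12x, matching the figures cited in the text.
```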
In general, large graphical programs suffer from readability issues. Petre and Green state that
“unlike text, which is always amenable to a straight, serial reading, graphics requires the reader to
identify an inspection strategy.” 362 In the case of a Grasshopper program, this involves finding the
starting node of the graph, then systematically tracing how data flows downstream. Complications
arise when the graph is large. The starting node may be difficult to find. The graph could have
361 The textual token count was still very high because the team continued to use the old components (the scripts did not reference the external Python libraries).
362 Marian Petre and Thomas Green, “Learning to Read Graphics: Some Evidence that ‘Seeing’ an Information Display is an Acquired Skill,” in Journal of Visual Languages and Computing 4 (1995): 63.
numerous branches, each with a large number of nodes that the reader has to walk through. Wires
cross more frequently, making it harder to determine exactly which components are connected. 363
Beyond a certain density, they begin to dominate the representation, causing it to look like a “rat’s
nest” 364 of tangled lines.
Poor readability leads to a related issue—maintainability. Students had to frequently modify their
robot programs over the course of the semester, because the high-rise design or end-effector had
been changed. According to Green et al.’s “parsing-gnirsrap” model, 365 programmers have to “read
and understand what has been written so far, in order to knit … new material in.” 366 Students had to
parse the entire program and identify which components needed to be deleted, substituted or rewired. Viscosity is a measure of how difficult it is to make a change in a program. A large
Grasshopper program is viscous because the components that have to be edited are hard to
locate; 367 there may be several of them; and they could be dispersed throughout the graph.
In most cases, programs were only comprehensible to the primary student developer. This inhibited
collaboration within the team. Other members could not help to develop the program, even if they
were proficient in programming. 368 Moreover, they hesitated to take over running the program in
the final production phase if they did not understand how it worked. To be sure, there are other
valid reasons why students may choose not to develop a program collaboratively, but ensuring
common readability could help to foster greater teamwork in this digital age where programs can be
easily shared.
The readability of a Grasshopper program may be significantly improved through secondary
notation—perceptual cues that “convey extra meaning, above and beyond the ‘official’ semantics of
the language.” 369 Grouping components was an effective way of making Grasshopper programs
363 Furthermore, users do not have direct control over how wires are laid out on the Grasshopper canvas. They can only hide the wires or make them fainter. Other visual programming systems, such as Max/MSP, allow the user to manipulate the wires to minimise crossings.
364 The term ‘rat’s nest’ has been used to describe messy and unstructured code. John Bentley, Programming Pearls, 2nd ed. (Reading: Addison-Wesley, 1999), 22.
365 “Gnirsrap” is “Parsing” backwards. Thomas Green et al., “Parsing and Gnisrap: A Model of Device Use,” in Empirical Studies of Programmers: Second Workshop, ed. Gary Olson et al. (New Jersey: Ablex Publishing, 1987), 132–146.
366 Thomas Green and Marian Petre, “Usability Analysis of Visual Programming Environments: A ‘Cognitive Dimensions’ Framework,” in Journal of Visual Languages and Computing 7, no. 2 (1996): 135.
367 While the entire program can be viewed at once by zooming out sufficiently, this is accompanied by a loss in detail—the icons and names that distinguish components are no longer visible. An automatic searching functionality would be a very useful addition to Grasshopper.
368 For example, the student who wrote the script for generating the Sequential Frames tower’s design stated that “I simply cannot work with such a messy thing” in reference to the Grasshopper-based robot program implemented by his team-mate.
369 Green and Petre, “Usability Analysis of Visual Programming Environments,” 139.
more readable. This is equivalent to encoding bits of information (components) into meaningful
chunks (groups). 370 The layout of the diagram can also provide further cues; for example, groups or
components that are in close proximity are usually related. 371 Students used colour to distinguish
between groups that have different functions 372 or to highlight importance. 373 Finally, descriptive
names and comments added via scribbles helped to express the roles of components and groups
more clearly.
Yet it is important to note that secondary notation is not neutral and “can [also] mislead and
confuse.” 374 One solution, which was implemented in the DRS, is to develop a graphical style
guide 375 which codifies the use of secondary notation. For example, Figure 6-1 shows a part of the
guide that specifies conventions for how colour and grouping should be applied. This guide helped to
ensure that programs developed by different teams were consistent. 376 Another solution would be
to use automatic tools to check that programs adhere to standards. Such tools exist for various text
programming languages, 377 but not for Grasshopper. However, the analysis tools developed by the
author already contain functionalities for changing a Grasshopper program’s appearance, and can
potentially be extended in this direction. 378
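In outline, such a checker would compare each group's secondary notation against the codified style sheet. The sketch below is purely illustrative: it operates on a simplified, hypothetical in-memory representation of a Grasshopper program (plain dictionaries), not the actual Grasshopper SDK, and the role and colour names are invented.

```python
# Illustrative sketch: checking group colours against a codified 'style sheet'.
# The data model is hypothetical; a real tool would read groups and their
# colours from the Grasshopper document itself.

STYLE_SHEET = {
    "design": "grey",       # design-generation groups
    "parsing": "blue",      # data-parsing groups
    "fabrication": "red",   # robot-control groups
}

def check_group_colours(groups):
    """Return a list of human-readable violations of the style sheet."""
    violations = []
    for group in groups:
        expected = STYLE_SHEET.get(group["role"])
        if expected is None:
            violations.append(f"{group['name']}: unknown role '{group['role']}'")
        elif group["colour"] != expected:
            violations.append(
                f"{group['name']}: expected '{expected}', found '{group['colour']}'"
            )
    return violations

groups = [
    {"name": "Tower generator", "role": "design", "colour": "grey"},
    {"name": "Control panel", "role": "fabrication", "colour": "green"},
]
print(check_group_colours(groups))
```

A tool of this kind could either report violations, as here, or rewrite the offending notation directly.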
370 Miller recommends this strategy (bits to chunks) for improving information processing in general. George Miller, “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information,” Psychological Review: The Centennial Issue 101, no. 2 (1994): 343-352.
371 For example, all components for controlling the robotic fabrication process were usually placed in one section of the graph and became the ‘control panel’.
372 For example, in the Mesh Towers project, the groups were colour-coded according to the type of robotic operation. See Chapter 4.9.2.
373 This is the concept of ‘signaling’. John Pane and Brad Myers, Usability Issues in the Design of Novice Programming Systems (Pittsburgh: Carnegie Mellon University, 1996): 6.
374 Petre and Green, “Learning to Read Graphics,” 57.
375 Style guides have been created for various text languages to aid programmers in improving the clarity of their code. For example, Google has its own guide for scripting in Python. Ami Patel et al., “Google Python Style Guide,” Google, accessed January 1st 2016, https://googlestyleguide.googlecode.com/svn/trunk/pyguide.html
376 There were some instances where one team shared their program with another. For example, the Bent Striations team referred to the Rochor Tower team’s program to understand how they implemented a folding solution.
377 For example, pylint is a code analyser for the Python programming language, which can be used to check that code is written to the standards specified in the PEP8 style guide. “Pylint,” accessed January 1st 2016, http://www.pylint.org/
378 For example, it could check the notation against a ‘style sheet’ and make appropriate modifications.
Figure 6-1 Part of an exemplary style guide illustrating which colours students should use when developing their robot programs.
Readability and maintenance issues may be addressed through creating new abstractions. There are
two ways to do this: collapse a section of the graph into a cluster component or replace it with a
scripting component that replicates its functionality. In either case, the size of the program is
reduced in terms of graphic token count (Bent Striations was exemplary in this regard; see Table 6-1).
Readability is improved because tracing dataflow through a smaller graph is easier. Moreover, the
program appears less visually complex because components, and more importantly, wires from the
original sub-graph are no longer drawn to the canvas. The program is easier to maintain because
“many components can [now] be treated as a group.” 379 Instead of editing the graph in several
places, programmers can focus on changing a single abstraction.
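The effect of such an abstraction can be sketched in miniature. Below, a single Python function stands in for a small sub-graph of components (a series, a multiplication and a plane construction) that generates placement frames for one course of bricks. The function name and the tuple-based representation of frames are invented for illustration; they are not drawn from any team's program.

```python
# A single function replacing a small sub-graph of wired components.
# Frames are represented here as plain (x, y, z) origin tuples for brevity.

def brick_frames(count, spacing, z=0.0):
    """Generate placement frames for one course of bricks along the x-axis."""
    return [(i * spacing, 0.0, z) for i in range(count)]

# One call now stands in for several components and their wires:
frames = brick_frames(count=4, spacing=0.25)
print(frames)  # [(0.0, 0.0, 0.0), (0.25, 0.0, 0.0), (0.5, 0.0, 0.0), (0.75, 0.0, 0.0)]
```

The graphic token count falls because the sub-graph's components and wires disappear from the canvas, while edits are confined to one definition.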
Only two teams in the DRS created clusters, but in a limited fashion, while three teams created
custom scripting components with the intention to improve the readability of their programs or to
ease maintenance. Some of them hesitated to create new abstractions, either because they wanted
the entire program to be visible 380 or because it involved additional work. 381 Nonetheless, nearly every team created custom components (see Table 6-1: 2012 and 2013 fall semesters), but the main reason, as will be discussed in the next chapter, was to extend YOUR.
379 Green and Petre, “Usability Analysis of Visual Programming Environments: A ‘Cognitive Dimensions’ Framework,” 145.
380 This was a comment from a Nested Voids team member in reference to clusters. The sub-graph within a cluster cannot be viewed simultaneously with the graph of the parent Grasshopper program.
381 This was especially the case for teams who were not so confident with scripting.
6.2 Extending YOUR
It was difficult to anticipate how students would use the robot since its production capabilities are
essentially open-ended. 382 Hence, the question as to what abstractions should be designed and
made available to them, was a recurring one as YOUR was being developed. At first, the answer was
clear because the focus was solely on using the robot for assembly. The initial version of YOUR was
therefore designed to offer components that mapped directly to picking, gluing and placing
operations in the assembly process. This toolkit proved to be accessible and functional, so long as
students continued to use the robot in this prescribed manner. However, it turned out to be limiting
once teams started to explore alternative fabrication processes, because the components were
overly specific.
The immediate solution was to add lower level abstractions, such as motion primitives, to the toolkit.
Students managed to assemble these “fine-grained” components together to program their bespoke
robotic processes as a result. 383 Unexpectedly, they also began to modify components; this was
possible because the encapsulated scripts could be accessed. This latter result suggested that it
could be a viable strategy to encourage students to extend YOUR on their own, rather than rely on the
author to develop the abstractions for them. YOUR was consequently re-designed to facilitate them
in doing so. 384
This strategy proved to be successful. Students managed to extend YOUR in several ways. First, they
modified the behaviour of existing components by changing selected statements in their script, or by
adding chunks of code from other components. These modified components were then renamed to
describe their new roles. Second, students, who became more proficient in scripting, created custom
YOUR components from scratch. They instantiated new Python components, imported YOUR
libraries, and began to call functions from the libraries within the script. Finally, advanced students
added new modules, containing custom functions, to the underlying YOUR package. Subsequently,
these functions could be called from other YOUR components. By making YOUR extensible, students
could refine existing abstractions or develop new ones according to their individual needs. 385
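The extension pattern described above can be sketched as follows. The function names and the string-based instruction format below are hypothetical stand-ins, not the actual YOUR API; the point is the structure, in which low-level primitives from a library are composed into a new, higher-level abstraction of the kind students wrote.

```python
# Sketch of a student-defined extension, in the spirit of the YOUR package.
# The primitives and the instruction format are invented for illustration.

def move_linear(frame, speed=100):
    """Format a linear-motion instruction for a target frame."""
    x, y, z = frame
    return f"MOVE_L {x:.1f} {y:.1f} {z:.1f} V{speed}"

def set_digital_out(channel, value):
    """Format an IO instruction, e.g. to open or close a gripper."""
    return f"SET_DO {channel} {1 if value else 0}"

def pick(frame):
    """A custom, higher-level abstraction composed from the primitives."""
    x, y, z = frame
    return [
        move_linear((x, y, z + 50)),   # approach from above
        move_linear(frame, speed=20),  # slow descent to the element
        set_digital_out(1, True),      # close the gripper
        move_linear((x, y, z + 50)),   # retract
    ]

print(pick((400.0, 0.0, 120.0)))
```

Once such a function is added to a module of the package, any other component can import and call it, which is precisely the third extension route described above.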
382 In theory, the robot can carry out any physical fabrication process, so long as it is given the proper instructions and is equipped with an appropriate end-effector.
383 The Tiong Bahru Tower team relied heavily on movement and IO components. See Chapter 4.3.1.
384 See Chapter 4.4.
385 When asked what components were missing from the YOUR toolkit, a student from the Bent Striations team replied, “I did not get the feeling that something was missing, because with the library, I could program it myself.”
At one point, the option of compiling YOUR was explored. This means that components would
become “black-boxes” that only expose inputs and outputs, and functions defined in the Python
package would have their implementation details hidden. However, a decision was made to leave
the code in YOUR components and the underlying package accessible. In fact, the code was later refactored to improve its readability for students. This decision was made on pedagogic grounds. A key
reason why students were able to transition from visual programming to scripting was because they
had access to code examples. 386 Students learnt how a script should be implemented by inspecting
the structure of code inside a component. They learned to modify existing components by making
incremental changes to the script. In this way, even students who were intimidated by the prospect
of coding were able to successfully create custom components. 387 Finally, they learned how to
develop new abstractions by referring to the implementation details of similar components or
functions.
By involving students in the development process, some of the responsibility for designing YOUR
abstractions was delegated to them. However, it is important to note that the functions and
components developed by students were often overly specific or suffered from other quality issues.
This was because students were usually unconcerned about reuse 388 and lacked sufficient
programming knowledge. For example, the Bent Striations team’s plastic bending component could
only be used with their particular physical setup; its script was also verbose and poorly documented,
making it, in fact, harder to understand than the original Grasshopper program it was based on. Thus
the role of the developer—the author in this case—is to vet these abstractions and refactor them if
necessary, for example by making them more general. 389 Thereafter, these abstractions—each
capturing a reusable “kernel of knowledge” 390—can be added to the programming system to make it
more complete.
386 Ko et al. state that code examples are “among the greatest sources of help for discovering, understanding, and coordinating reusable abstractions.” Andrew Ko et al., “The State of the Art in End-User Software Engineering,” in ACM Computing Surveys 43, no. 17 (2011): 17.
387 One student from the Sequential Frames team, who was not comfortable with scripting, stated that “I could just take what is existing, and understand where I need to [make] changes … if I have to write it by myself it would be impossible.”
388 The Undulating Terraces team was the exception. They tried to create functions that were re-usable.
389 The Fold component used in the second semester of the 2012 DRS was originally derived from the bending component developed by the Bent Striations team.
390 Robert Aish, “The Ghost in the Machine,” in Architectural Review no. 1389 (2012): 20.
6.3 Challenging the dichotomy between text and visual programming
In his blog post “Programming, conflicting perspectives” 391, Rutten, the developer of Grasshopper,
addressed the criticism that it was not a true programming language. He stated:
The main reason for this [criticism] seems to be the fact that one does not write
textual code when using Grasshopper, rather instructions are combined in a visual
fashion using primarily the mouse rather than the keyboard.
Whether text or visual programming languages are more suitable for architectural design remains a matter of debate, with the supporters of one language arguing its merits over the other. However,
Green and Petre challenged the underlying notion that one programming language can be
universally superior to all others. 392 They showed empirically that particular aspects of a language
will “support or hinder different programming tasks.” 393 In other words, it cannot be best for all
purposes. The case study results appear to support Green and Petre’s assertion. Students were
observed to switch between programming in Grasshopper and Python scripting—within components
or in a separate text editor—depending on the task they were engaged in. In fact, some of them
were observed to keep the graphical notation juxtaposed side-by-side with a scripting window
whenever possible.
Some teams deliberately chose visual programming for exploratory prototyping. One reason was to
work at a high level of abstraction. Assembling existing components was more straightforward than
writing an equivalent script, because the latter involved planning how individual statements have to
be written and sequenced into larger blocks. Meanwhile, Grasshopper supported a more provisional
way of creating a program, as students could add/wire/delete components in any order and assess
the immediate results.
Some teams, for example Bent Striations, shifted to scripting within components once they had
“sketched” out a working program, and verified the basic fabrication concept. As discussed earlier,
one reason for doing so was to reduce the size of the graphical program and thus improve its
391 David Rutten, “Programming, conflicting perspectives,” I Eat Bugs for Breakfast, April 1st, 2012, https://ieatbugsforbreakfast.wordpress.com/2012/04/01/programming-conflicting-perspectives/
392 The “expectation that a particular kind of programming system is superior to another for all programming tasks” is known as superlativism. John Pane and Brad Myers, Usability Issues in the Design of Novice Programming Systems (Pittsburgh: Carnegie Mellon University, 1996), 3.
393 Thomas Green et al., “Comprehensibility of Visual and Textual Programs: A Test of Superlativism Against the ‘Match-Mismatch’ Conjecture,” in Empirical Studies of Programmers: Fourth Workshop, ed. Jurgen Koenemann-Belliveau et al. (New Jersey: Ablex Publishing, 1991), 125.
readability. In addition, by working at this lower level of abstraction, they gained access to a wider
range of YOUR abstractions 394 and could specify robotic operations individually.
Students also switched to a programming language if it offered a specific abstraction that was well-suited for solving the problem at hand. For example, Group 3 in the workshop case implemented
their finger-joint foam-cutting concept in Grasshopper because they could use its weaving
component to generate the cut-path in an elegant way. 395 Conversely, teams switched to Python
scripting in order to make use of control (for loops) or functional-style abstractions (list
comprehensions) that are absent in Grasshopper. For example, the Undulating Terraces team chose
to script from the outset because their process involved conditional and repetitive logics that would
be harder to express in Grasshopper.
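As a sketch of the kind of logic involved, the fragment below alternates two fabrication operations over a sequence of strips, first as an explicit loop with a conditional and then as an equivalent list comprehension. The operation names are invented; this is not the Undulating Terraces team's actual code.

```python
# Conditional and repetitive logic that is concise in Python but awkward
# to express as a Grasshopper graph. Operation names are illustrative only.

def plan_operations(num_strips):
    """Alternate between two fabrication operations, strip by strip."""
    ops = []
    for i in range(num_strips):
        if i % 2 == 0:
            ops.append(("fold", i))
        else:
            ops.append(("staple", i))
    return ops

def plan_operations_lc(num_strips):
    """The same plan expressed as a single list comprehension."""
    return [("fold" if i % 2 == 0 else "staple", i) for i in range(num_strips)]

print(plan_operations(4))  # [('fold', 0), ('staple', 1), ('fold', 2), ('staple', 3)]
```

In Grasshopper, the equivalent branching would require dispatch components and duplicated wiring for each case.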
When running their programs, nearly all teams focused exclusively on a particular section of their
visual Grasshopper program. This section, which functioned like a control interface, usually
contained panels storing key fabrication-related parameter values. Such information could be hard
to access if it was buried in lines of code within a component, yet became highly visible when placed
in a coloured, labelled panel. 396 The control interface also included widgets, such as sliders and
buttons, which the user had to interact with while running the program, as well as the Sender and
Listener components for communicating with the robot.
The analogy of a visual notation as control interface finds some parallels in the domain of computer
music programming. In the Max/MSP environment, programmers design the algorithm that
produces the music (patch) as well as the interface that they will use to perform it (presentation
mode). In fact, a set of widgets inspired by their Max/MSP equivalents was designed for Grasshopper by the author. 397 For example, Figure 6-2 shows the Max/MSP-style button and toggle
components. Unlike the standard Grasshopper versions, they can be resized and colour-coded; these
added dimensions can be used to convey extra meaning. 398
394 The graphical components only map a subset of functions defined in the underlying package.
395 Group 2 also tried to implement a similar concept via scripting. However, they spent a significant amount of time correcting syntax errors in their code, and struggled to implement similar ‘weaving’ functionality. As a consequence, they progressed slower than Group 3. See Chapters 5.5 and 5.6.
396 One student from the case study workshop remarked “we just bring the parameter [from the script] out to the canvas—easier to see it that way.”
397 The widget kit was named MaxInspired. It was tested in another workshop that is not discussed in this research.
398 For example, students designed ‘panic’ buttons (a concept borrowed from Max/MSP) that stop the entire robotic process when pressed. These buttons were much larger than the rest of the components and coloured red for added emphasis.
Figure 6-2 MaxInspired buttons (top row) and toggles (bottom row) can be resized and colour-coded.
Finally, the choice of visual or textual programming language may also be influenced by personal
preferences. Several students in the workshop case study were inclined towards one or the other,
despite having previously received the same programming instruction. 399 Turkle and Papert argued
for epistemological pluralism—the recognition of multiple valid ways of knowing and thinking with
regards to programming. 400 It could be argued that Grasshopper encourages programmers to think
relationally and in terms of data flow, while Python scripting encourages them to think procedurally
and in terms of abstractions. Students may simply choose the programming style that they are most
comfortable and proficient in, and this in turn, enables them to be more productive.
Modern visual programming systems such as Grasshopper allow users to program in a textual
language as well. In both case studies, students benefitted from being able to mix the visual
dataflow and textual imperative programming paradigms. These results suggest that the dichotomy
between the two is a false one. As Nickerson states, “programming activity might need more than
one language in more than one modality.” 401 Instead of asking whether students should learn one or
the other, 402 the more pertinent question is: how can they be supported in transitioning between
the two? Mechanisms that automatically convert between node and code 403 should be helpful in this
399 All the students took the same design computation course, yet one student said that “for me coding is much simpler” and another stated that “I definitely prefer to work visually.”
400 Sherry Turkle and Seymour Papert, “Epistemological Pluralism: Styles and Voices within the Computer Culture,” in Signs 16, no. 1 (1990): 129.
401 Jeff Nickerson, “Visual Programming,” (PhD diss., New York University, 1994), 229.
402 For example, Leitão concludes that it would be more productive for novices to begin learning a modern text programming language—VisualScheme—from the outset. António Leitão et al., “Programming Languages for Generative Design: A Comparative Study,” in International Journal of Architectural Computing 10, no. 1 (2012): 139-162.
403 See Chapter 2.1.4.
regard, and they were briefly explored in the workshop case study, though the results were
inconclusive. Further empirical research in this direction is clearly needed.
6.4 Flipping the digital design to physical production chain
The fundamental basis for design computation is the recognition that algorithms
are an expressive medium of design. 404
Architects who leverage computation while designing do not create a digital model directly using the commands offered by the software; instead, they develop an algorithm, using the abstractions offered by a programming library, to generate it. The focus is therefore on describing the process
rather than the end state. 405 This algorithmic approach to design has mostly been limited to the
digital medium thus far. However, with the introduction of programmable robots, it can be extended
to the physical realm as well. In the DRS, students were essentially developing algorithms, using the
abstractions offered by YOUR, to construct physical representations of their computational designs.
Students may have to invest significant time in developing their programs, before being able to
achieve any meaningful results. However, the payoff is an enhanced ability to explore variation,
which can be precisely controlled. Once the program is implemented, they can produce alternative
physical forms by simply changing the parameters that drive the algorithm. 406 By comparison, the
same amount of effort is required to build each instance by hand. In effect, students are able to
design the physical fabrication process from the ground up, as they have explicit control over how
each step in the process is defined.
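The payoff described above can be illustrated minimally: one algorithm, two parameter values, two design variants. The twisting-tower profile below is an invented example, not a program from the DRS.

```python
# Varying a single driving parameter to generate alternative instances
# of the same design. The twisting-tower profile is illustrative only.

def storey_angles(num_storeys, twist_per_storey):
    """Rotation angle (in degrees) of each storey of a twisting tower."""
    return [i * twist_per_storey for i in range(num_storeys)]

# Two variants from the same algorithm, differing only in one parameter:
variant_a = storey_angles(5, 4.0)
variant_b = storey_angles(5, 9.0)
print(variant_a)  # [0.0, 4.0, 8.0, 12.0, 16.0]
print(variant_b)  # [0.0, 9.0, 18.0, 27.0, 36.0]
```

Producing the second physical variant by hand would cost as much effort as the first; re-running the program costs almost nothing.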
Like design computation, robot programming promotes algorithmic thinking, but it also fosters two
additional modes of thought. 407 The first is in terms of motion, which is the fundamental abstraction
offered by any robot programming library. Algorithms have to be implemented using movement-based functions as their building blocks. While this may appear restrictive, designers gain an
404 Robert Aish, “Tools of Expression: Notation and Interaction for Design Computation,” in reform()—Building a Better Tomorrow: Proceedings of the 29th Annual Conference of the Association for Computer Aided Design in Architecture (Chicago: The School of the Art Institute of Chicago, 2009), 30.
405 Simon describes the difference between state and process descriptions. Herbert Simon, The Sciences of the Artificial, 3rd edition (Cambridge/MA: MIT Press, 1996), 210.
406 Teams in the DRS did not build variations of complete towers primarily because it took too long. However, most produced variations of single elements or parts of the model. In the workshop case study, teams were more systematic in terms of producing a series of physical artefacts.
407 Picon raises the question as to what the epistemological role of robots is. Antoine Picon, “Robots and Architecture: Experiments, Fiction, Epistemology,” Architectural Design 84, no. 3 (2014): 59.
opportunity to generate novel forms derived from the characteristic movements of a robot. For example, Group 4 in the workshop case utilised joint-based rotational movements to cut curved forms that were difficult to reproduce digitally. Second, robot programming forces designers to think
in material terms, because the robot acts upon physical matter. While computational geometry is
abstract and thus inert, matter has inherent form-generating capabilities which a designer may
exploit. 408 For example, in the workshop case, Group 1 controlled how plastic strips deformed by
adjusting the heating conditions and direction of applied forces to produce irregular sinusoidal
shapes. The deformation behaviour could not be accurately modelled in a digital simulation and thus
the final forms were truly unanticipated.
If one recognises a creative value in being able to design physical constructive processes via robot
programming, then a question arises: should it precede computational design? The DRS case study
illustrates two opposite scenarios. Most teams adopted one approach where they first designed
their high-rise computationally in the digital medium, then subsequently designed the robotic model
fabrication process to materialise it. However, one drawback with this approach is that fabrication
constraints are identified after the generative algorithm has already been developed. Consequently,
parts of the model could end up being un-constructible by robotic means. At the same time, teams
would uncover new design opportunities while developing their fabrication process, yet were often
reluctant to alter their original high-rise designs at such a belated stage. 409
A few teams 410 adopted a second approach where they began by designing a physical constructive
process. The discovery of novel formal possibilities, which can only be achieved through robotic
means, opened up new avenues for creative exploration. The range of producible forms may be
systematically explored in order to derive a subsequent vocabulary of parts to design with. Teams
would eventually have to switch back to the digital medium, and develop a generative algorithm to
explore how these parts can be combined into a larger assembly and visualise the result; however,
they do so with an implicit understanding of the fabrication process’s constraints. This approach
reverses the normative digital design to physical production sequence, and offers the potential to
maximise the design impact of using robots.
408 Manuel De Landa, “Philosophies of Design: the Case of Modelling Software,” in Verb Processing (Architecture Boogazine), ed. Jaime Salazar (Barcelona: ACTAR, 2002), 135.
409 A member of the Sequential Frames team stated that the process of developing the model construction process “naturally opens up new ideas for what can be done [in design terms].” He noted though that in their case “it was not so easy to integrate [the opportunities offered by] the robotic process into the design” since the latter was almost finalised at that point. See Chapter 4.9.1.
410 They include Bent Striations and Undulating Terraces. See Chapters 4.5.2 and 4.5.3.
6.5 The limits of automation and the promise of collaborative building
There was an expectation, prior to the start of the DRS, that robots could be used to fully automate
the model production process. They would receive a set of instructions, for fabricating the entire
model, and execute them autonomously. Students would continue to develop their design as the
model was being built, and have a new instruction set ready when it was finished. In this way, they
could complete more iterations of the design-build cycle. However, this did not work in practice.
First, there was a degree of imprecision in the physical production process. 411 Some errors would
accumulate if the entire model was built in one go. Second, end-effectors were more challenging to
engineer once the fabrication process involved multiple operations. For example, the Undulating
Terraces team chose to develop a complex end-effector that could grip, shift, staple, and unroll
paper in order to automate the strip-production process. However, they did not succeed despite
spending significant time on the problem. Finally, the robot could only perform one action at a time
in a sequential fashion. This limited the speed of the model production process.
To a certain extent, some of these problems may be addressed through technical solutions. For
example, the addition of sensor feedback, which was briefly explored in the 2013 DRS fall semester,
can help to mitigate accuracy issues. Using multiple robots in parallel would speed up the production
process. However, the approach adopted by students in the DRS was methodological in nature; they
shifted towards a collaborative mode of production, involving both robotic and manual building.
At one level, this simply involves alternating the two during the fabrication process. In many projects,
the robot assembles all the walls on one storey; once it has finished, students place the floor slabs
manually; and the process repeats. Instructions were generated and sent in small batches, usually on
a storey-by-storey basis. At another level, robotic and manual building takes place concurrently
while the program is running. In some projects, the robot pauses, allowing students to perform an action such as gluing (Figure 6-3), then resumes after a fixed amount of time or upon receipt of a cue.
In this case, the robot program includes time-based and event-related code. 412
411 This imprecision was caused by structural and material behavior of the model, as well as the robotic arm’s inaccuracy. For a more detailed description, see Chapter 4.3.1.
412 Sleep commands were used to pause the robot for a certain amount of time. Events were handled by writing threading statements.
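The time-based and event-based pauses described above can be sketched with Python's standard library. The robot-side calls are simulated with print statements, and the function and event names are invented; only the pausing structure is the point.

```python
# Sketch of time-based and event-based pauses for collaborative building.
# The robot interaction is simulated; only the control structure matters.
import threading
import time

glue_applied = threading.Event()  # the student's cue to the robot

def place_strip(strip_id, wait_seconds=None):
    """Hold a strip at the gluing position, then place it after a pause."""
    print(f"robot: holding strip {strip_id} at gluing position")
    if wait_seconds is not None:
        time.sleep(wait_seconds)   # time-based pause: a fixed duration
    else:
        glue_applied.wait()        # event-based pause: wait for the cue
        glue_applied.clear()       # reset the cue for the next strip
    print(f"robot: placing strip {strip_id}")

# Simulate the student's cue arriving shortly after, from another thread:
timer = threading.Timer(0.1, glue_applied.set)
timer.start()
place_strip(1)                     # resumes once the cue is given
```

The same two mechanisms scale from a single gluing pause to the storey-by-storey alternation between robotic and manual work described above.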
Figure 6-3 The robot moves to a pre-defined position and waits for the student to apply glue to the strip before placing it.
Using a collaborative building approach, students could correct for precision issues in between
sending chunks of instructions by adjusting parameters in their program; simplify the design of their
end-effectors; and carry out manual tasks in parallel to speed up the model production process.
While this approach originated as a pragmatic response to compensate for deficiencies in the
fabrication process, it also reflected a general shift in attitude towards the use of automation at the
design stage.
Students became more critical about what tasks should be automated. They recognised that the
time and effort invested in implementing the robot program and end-effector could detract from
their focus on the architectural design task. 413 Hence their approach was to identify strategic points
in the fabrication process where the value of using a robot was maximised. Individual tasks in the
process were arranged in a hierarchical order, ranging from the simplest to perform manually to the
most difficult, or even impossible. The key was to identify a tipping point or threshold along this
continuum, beyond which tasks would be performed robotically. For example, the Mesh Towers
team leveraged the robot’s ability to trace complex three-dimensional paths in order to cut curved
surfaces, but otherwise performed simple straight cuts by hand. If such a tipping point does not exist,
then the fabrication process should not be automated at all; or its underlying concept has to be
revised in order to more fully exploit the advantages of using robots.
413 This was indeed the case for the Undulating Terraces team, who arguably invested too much time in developing their end-effectors. One member remarked that the “[fall] semester was all about gripper design”.
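The tipping-point heuristic described above can be illustrated with a small sketch. The task names and difficulty scores are invented for illustration; in practice, students ranked tasks qualitatively rather than numerically.

```python
# Hypothetical tasks with invented difficulty-to-perform-manually scores.
tasks = [
    ("straight cut", 1),        # simple to perform by hand
    ("drilling", 3),
    ("curved surface cut", 8),  # requires tracing a complex 3D path
    ("spatial assembly", 9),    # near-impossible to do precisely by hand
]

def split_by_tipping_point(tasks, threshold):
    """Tasks at or above the threshold are assigned to the robot;
    the rest are performed manually."""
    manual = [name for name, difficulty in tasks if difficulty < threshold]
    robotic = [name for name, difficulty in tasks if difficulty >= threshold]
    return manual, robotic

manual, robotic = split_by_tipping_point(tasks, threshold=5)
# If no task clears the threshold, automation adds no value and the
# fabrication concept should be revised instead.
```

This mirrors the Mesh Towers strategy: curved cuts fall on the robotic side of the threshold, straight cuts on the manual side.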
Picon sketched out a future where designers and robots conversed with each other as partners. 414
The collaborative building approaches adopted in the studio are a small step in this direction.
Though the robot is not emancipated from the student’s instructions, the relationship between the
two is not one-sided either. Students had to take their cues from the robot’s actions and react
accordingly. The conception of the robot begins to shift away from being an automaton that simply
substitutes for manual labour, and towards what Picon terms the “significant other” 415 in the
production process.
While the topic of collaborative building is promising, it remains relatively under-explored, primarily
because of safety risks posed by industrial robots. Nonetheless, research in this area should gain
impetus as manufacturers continue to introduce progressively safer robots, which are designed to
enable closer forms of man-machine interaction. 416 Furthermore, advances in areas such as machine
learning may pave the way towards truer forms of collaboration in the future, where the robot acquires
some degree of intelligence and can respond to human actions in less prescribed ways.
414 Picon, “Robots and Architecture: Experiments, Fiction, Epistemology,” 59.
415 Ibid.
416 Tanya Anandan, “Robotics in 2014: Market Diversity, Cobots and Global Investment,” Robotics Industries Association, accessed January 1st 2016, http://www.robotics.org/content-detail.cfm/Industrial-Robotics-Industry-Insights/Robotics-in-2014-Market-Diversity-Cobots-and-Global-Investment/content_id/4614
7 Conclusion
This research began with the question: How should novice robot programming systems be designed
for use in the architecture domain? This resulted in the development of YOUR. In its eventual form,
it consists of a toolkit of add-on components for the Grasshopper visual programming environment
and an underlying Python package that they reference. It extends the Rhinoceros/Grasshopper CAD
platform so that it can be used as both a design and robot programming environment.
As YOUR was being developed, other parties implemented similar Grasshopper-based solutions in
parallel, proving that the concept of extending an established visual programming system with robot
programming functionalities was a valid one. However, YOUR has two distinguishing characteristics.
First, it is designed with the intention to support end-users in adopting a hybrid approach to robot
programming; it gives them the freedom to switch between, and combine, visual dataflow and text-based imperative programming. Second, YOUR is designed to be extended by end-users. The initial toolkit of components is not meant to be exhaustive; rather, users are expected to eventually create their own custom components, using those provided as a reference. Simply put, the ethos of
YOUR is “to make it your own.”
Two cases were set up to study how students carried out fabrication-related robot programming
tasks using YOUR. The resulting data—drawn from interviews, direct observation and collected
artefacts from the programming process—helped to inform the development of YOUR, and to fill a
gap in the current understanding of how novices learn and carry out robot programming in the
domain of architectural design. In addition, several findings emerged from the case studies.
First, scalability was identified as a significant issue if the robot programming system is visual in
nature. Secondary notation may help to ameliorate this problem, but a more effective solution lies in
providing support for text programming as well. Second, robot programming systems should be
open and extensible by end-users, whose needs cannot be fully anticipated by the developer. At the
same time, it is important, from a pedagogic perspective, to reveal the implementation details of
these systems. Third, there were merits to combining visual and textual approaches in robot
programming. Research should be directed towards finding ways to combine both paradigms, rather
than continue to frame them in opposing terms. Fourth, the creative impact of using robots may be
maximised if the conventional digital design to physical production sequence is reversed, such that
the act of robot programming precedes digital or computational design. Finally, a collaborative
human-robot building approach was discovered to be a potent strategy for model production, as
compared to fully automating the fabrication process.
7.1 Outlook
There are two general directions in which this research may be further developed. The first is
technical in nature. YOUR is currently not well-suited for programming robotic processes that involve
sensor feedback or events. However, such functionality will be necessary if collaborative human-robot
building approaches are to be explored further. In this regard, music programming systems, with
their emphasis on real-time interaction, could be a pertinent model to study. YOUR could also be
ported to software environments such as Dynamo, to test whether its core concepts are generalisable to
other platforms. Attention should also be directed towards the issue of transitioning between visual
and text programming languages; for example, by implementing mechanisms to do so, as well as
empirically testing them.
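To illustrate the kind of sensor-feedback functionality meant here, a minimal closed-loop correction sketch follows. The `Sensor` and `Robot` interfaces are hypothetical stubs, not part of YOUR; a real setup would read an actual measurement device and issue corrective motions.

```python
# Hypothetical sensor and robot stubs; the interfaces are illustrative only.
class Sensor:
    def __init__(self, initial_error):
        self.error = initial_error
    def measure(self, target):
        return self.error            # measured deviation from the target

class Robot:
    def __init__(self, sensor):
        self.sensor = sensor
    def nudge(self, offset):
        self.sensor.error += offset  # a corrective move reduces the error

def place_with_feedback(robot, sensor, target, tolerance=0.5, max_tries=10):
    """Measure, correct, and repeat until the deviation falls within
    tolerance -- the closed loop that an event-capable YOUR would need."""
    for _ in range(max_tries):
        deviation = sensor.measure(target)
        if abs(deviation) <= tolerance:
            return True
        robot.nudge(-deviation)      # apply a corrective offset
    return False

sensor = Sensor(initial_error=3.0)
robot = Robot(sensor)
corrected = place_with_feedback(robot, sensor, target=None)
```

Unlike YOUR's current generate-then-execute model, such a loop interleaves sensing and acting at runtime, which is what makes real-time-oriented systems a pertinent model to study.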
Another research direction relates to setting up further experiments. As this research was focused
on an in-depth study with a small sample size of users, more studio and workshop cases should be
conducted in order to validate the results. Future studies could adopt a comparative approach and
measure the performance of students on a task when equipped with YOUR versus other visual or
text-based robot programming solutions. Educational settings could be expanded to include
students from other domains, such as industrial design, to test if YOUR is effective for a wider
audience. Finally, pedagogic recommendations, such as applying tinkering-based instructional
methods in a workshop setting, have to be evaluated through further experiments.
In his book Scripting Cultures, 417 published in 2011, Burry described how the practice of
computer programming for architectural design had become firmly established in both the
academic and professional domains. Compared to computers, though, robots have been used in a
similar context for a far shorter period of time, and they consequently remain, to some extent, a
novelty. But the signs point towards more architects utilising robots for fabrication in the future,
especially as these machines become progressively cheaper and safer. In this regard, educational
institutions have played, and continue to play, an instrumental role: first, by investigating the topic of
robotic fabrication, and second, by preparing future generations of architects to use such technology. It is
417 Mark Burry, Scripting Cultures: Architectural Design and Programming (West Sussex: Wiley and Sons, 2011), 8–11.
hoped that by sharing the tools that were developed and the discourse surrounding their use, this
research will contribute to fostering a similar culture around the practice of robot programming in
architecture.
8 Bibliography
8.1 Publications
Aish, Robert and Robert Woodbury. “Multi-level Interaction in Parametric Design.” In Smart Graphics:
Proceedings of the 5th International Symposium SG 2005, edited by Andreas Butz, Brian Fisher, Antonio
Krüger, and Patrick Olivier, 151–162. Berlin: Springer, 2005.
Aish, Robert, Sam Joyce, Al Fisher, and Andrew Marsh. “Progress towards Multi-Criteria Design Optimisation
using DesignScript with SMART Form, Robot Structural Analysis and Ecotect Building Performance Analysis.” In
Synthetic Digital Ecologies: Proceedings of the 32nd Annual Conference of the Association for Computer Aided
Design in Architecture, edited by Jason Kelly Johnson, Mark Cabrinha, and Kyle Steinfeld, 47–56. San Francisco:
California College of the Arts, 2012.
Aish, Robert. “DesignScript: Origins, Explanation, Illustration.” In Computational Design Modelling: Proceedings
of the Design Modelling Symposium Berlin 2011, edited by Christoph Gengnanel, Axel Kilian, Norbert Palz, and
Fabian Scheurer, 1–8. Berlin: Springer, 2012.
Aish, Robert. “DesignScript: Scalable Tools for Design Computation.” In Computation and Performance:
Proceedings of the 31st eCAADe Conference, Volume 2, edited by Rudi Stouffs and Sevil Sariyildiz, 87–95. Delft:
Delft University of Technology, 2013.
Aish, Robert. “From Intuition to Precision.” In Digital Design: The Quest for New Paradigms 23rd eCAADe
Conference Proceedings, edited by José Duarte, Gonçalo Ducla-Soares, and Zita Sampaio, 10–14. Lisbon:
Technical University of Lisbon, 2005.
Aish, Robert. “The Ghost in the Machine.” Architectural Review 1389, no. 11 (2012): 20–22.
Aish, Robert. “Tools of Expression: Notation and Interaction for Design Computation.” In ReForm( )—Building a
Better Tomorrow: Proceedings of the 29th Annual Conference of the Association for Computer Aided Design in
Architecture, edited by Tristan de Estree Sterk, Russell Loveridge, and Douglas Pancoast, 30–31. Chicago: The
School of the Art Institute of Chicago, 2009.
Ayres, Lioness. “Semi-Structured Interview.” In The Sage Encyclopedia of Qualitative Research Methods, edited
by Lisa Given, 810–811. Thousand Oaks: Sage Publications, 2008.
Bärtschi, Ralph, Michael Knauss, Tobias Bonwetsch, Fabio Gramazio, and Matthias Kohler. “Wiggled Brick
Bond.” In Advances in Architectural Geometry, edited by Cristiano Ceccato, Lars Hesselgren, Mark
Pauly, Helmut Pottmann, and Johannes Wallner, 137–147. Vienna: Springer, 2010.
Basili, Victor and Albert Turner. “Iterative Enhancement: A Practical Technique for Software Development.”
IEEE Transactions on Software Engineering 1, no. 4 (1975): 390–396.
Bentley, Jon. Programming Pearls, 2nd edition. Reading: Addison-Wesley, 1999.
Blackwell, Alan and Thomas Green. “A Cognitive Dimensions Questionnaire for Users.” In Proceedings of the
Twelfth Annual Meeting of the Psychology of Programming Interest Group, edited by Alan Blackwell and
Eleonora Bilotta, 137–152. Corigliano Calabro: Edizioni Memoria, 2000.
Blackwell, Alan and Thomas Green. “Notational Systems: The Cognitive Dimensions of Notations Framework.”
In HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science, edited by John Carroll, 103–134.
San Francisco: Morgan Kaufmann, 2003.
Bökesoy, Sinan and Patrick Adler. “1city1001vibrations: Development of a interactive sound installation with
robotic instrument performance.” In Proceedings of the International Conference on New Interfaces for
Musical Expression, edited by Alexander Refsum Jensenius, Anders Tveit, Rolf Inge Godøy, and Dan Overholt,
52–55. Oslo: University of Oslo and Norwegian Academy of Music, 2011.
Bonwetsch, Tobias, Fabio Gramazio, and Matthias Kohler. “Towards a Bespoke Building Process.” In
Manufacturing the Bespoke, edited by Bob Sheil, 78–87. Chichester: John Wiley & Sons, 2012.
Bonwetsch, Tobias. “Robotic Assembly Processes as a Driver in Architectural Design.” Nexus Network Journal
14, no. 3 (2012): 483–494.
Boshernitsan, Marat and Michael Downes. Visual Programming Languages: A Survey (Technical report No.
UCB/CSD-04-1368). Berkeley: University of California, Berkeley, 2004.
Brell-Cokcan, Sigrid and Johannes Braumann. “A New Parametric Design Tool for Robot Milling.” In LIFE
in:formation, On Responsive Information and Variations in Architecture: Proceedings of the 30th Annual
Conference of the Association for Computer Aided Design in Architecture, edited by Aaron Sprecher, Shai
Yeshayahu, and Pablo Lorenzo-Eiroa, 357–363. New York: Cooper Union and Pratt Institute, 2010.
Brell-Cokcan, Sigrid and Johannes Braumann. “Industrial Robots for Design Education: Robots as Open
Interfaces beyond Fabrication.” In Global Design and Local Materialization: 15th International Conference,
CAAD Futures 2013, edited by Jianlong Zhang and Chengyu Sun, 109–117. Berlin: Springer, 2013.
Budig, Michael, Jason Lim, and Raffael Petrovic. “Integrating Robotic Fabrication in the Design Process.” In
Architectural Design: Made by Robots, edited by Fabio Gramazio and Matthias Kohler, 22–43. London: John
Wiley & Sons, 2014.
Budig, Michael, Willi Lauer, Raffael Petrovic, and Jason Lim. “Design of Robotic Fabricated High Rises.” In
Robotic Fabrication in Architecture, Art and Design 2014, edited by Wes McGee and Monica Ponce de Leon,
111–130. New York: Springer, 2014.
Burnett, Margaret, Marla Baker, Carisa Bohus, Paul Carlson, Sherry Yang, and Pieter van Zee. “Scaling Up Visual
Programming Languages.” Computer 28, no. 3 (1995): 45–54.
Burry, Mark. Scripting Cultures: Architectural Design and Programming. West Sussex: John Wiley & Sons, 2011.
Carpo, Mario. “Revolutions: Some New Technologies in Search of an Author.” Log 15 (2009): 49–54.
Celani, Gabriela and Carlos Vaz. “CAD Scripting and Visual Programming Languages for Implementing
Computational Design Concepts: A Comparison from a Pedagogical Point of View.” International Journal of
Architectural Computing 10, no. 1 (2012): 121–137.
Ceroni, José, and Simon Nof. “Robotics Terminology.” In Handbook of Industrial Robotics, 2nd edition, edited by
Shimon Nof, 1261–1317. New York: John Wiley & Sons, 1999.
Clarke, Steven. “How Usable Are Your APIs?” In Making Software: What really Works, and Why We Believe It,
edited by Andy Oram and Greg Wilson, 545–565. Sebastopol: O’Reilly Media, 2011.
Creswell, John. Qualitative Inquiry and Research Design: Choosing among Five Approaches, 2nd edition. London:
Sage Publications, 2007.
Creswell, John. Research Design: Qualitative, Quantitative and Mixed Method Approaches, 2nd edition. London:
Sage Publications, 2003.
Davis, Daniel. “Modelled on Software Engineering: Flexible Parametric Models in the Practice of Architecture.”
PhD diss., RMIT University, 2013.
Davis, Daniel, Jane Burry, and Mark Burry. “Understanding Visual Scripts: Improving Collaboration through
Modular Programming.” International Journal of Architectural Computing 9, no. 4 (2011): 361–376.
Davis, Daniel, Mark Burry, and Jane Burry. “Untangling Parametric Schemata: Enhancing Collaboration through
Modular Programming.” In Designing together—CAADFutures 2011: Proceedings of the 14th International
Conference on Computer Aided Architectural Design, edited by Pierre Leclercq, Ann Heylighen, and Geneviève
Martin, 55–68. Liège: Les Éditions de l'Université de Liège, 2011.
De Landa, Manuel. “Philosophies of Design: the Case of Modelling Software.” In Verb Processing (Architecture
Boogazine), edited by Jaime Salazar, 130–143. Barcelona: ACTAR, 2002.
Diaz, Frederico. “Outside Itself: Interactive Installation Assembled by Robotic Machines Untouched by Human
Hands.” In Robotic Fabrication in Architecture, Art and Design, edited by Sigrid Brell-Cokcan and Johannes
Braumann, 180–183. Vienna: Springer, 2012.
Donmoyer, Robert. “Quantitative Research.” In The Sage Encyclopedia of Qualitative Research Methods, edited
by Lisa Given, 713–718. Thousand Oaks: Sage Publications, 2008.
Dörfler, Kathrin, Florian Rist, and Romana Rust. “Interlacing: An Experimental Approach to Integrating Digital
and Physical Design Methods.” In Robotic Fabrication in Architecture, Art and Design, edited by Sigrid Brell-Cokcan and Johannes Braumann, 82–91. Vienna: Springer, 2012.
Elashry, Khaled and Ruairi Glynn. “An Approach to Automated Construction Using Adaptive Programming.” In
Robotic Fabrication in Architecture, Art and Design 2014, edited by Wes McGee and Monica Ponce de Leon,
51–66. New York: Springer, 2014.
Felleisen, Matthias, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi. How to Design Programs:
An Introduction to Programming and Computing. Cambridge/MA: MIT Press, 2001.
Gramazio, Fabio and Matthias Kohler. Digital Materiality in Architecture. Baden: Lars Müller Publishers, 2008.
Gramazio, Fabio, Matthias Kohler, and Jan Willmann. The Robotic Touch: How Robots Change Architecture.
Zurich: Park Books, 2014.
Green, Thomas and Marian Petre. “Usability Analysis of Visual Programming Environments: A ‘Cognitive
Dimensions’ Framework.” Journal of Visual Languages and Computing 7, no. 2 (1996): 131–174.
Green, Thomas and Marian Petre. “When Visual Programs are Harder to Read than Textual Programs.” In
Human-Computer Interaction: Tasks and Organization, Proceedings 6th European Conference on Cognitive
Ergonomics, edited by Gerrit van der Veer, Michael Tauber, Sebastiano Bagnara, and Miklòs Antalovits, 167–
180. Rome: CUI, 1992.
Green, Thomas, Marian Petre, and Rachel Bellamy. “Comprehensibility of Visual and Textual programs: A Test
of Superlativism against the ’Match-Mismatch’ Conjecture.” In Empirical Studies of Programmers: Fourth
Workshop, edited by Jurgen Koenemann-Belliveau, Thomas Moher, and Scott Robertson, 121–146. New
Jersey: Ablex Publishing, 1991.
Green, Thomas, Rachel Bellamy, and J.M. Parker. “Parsing and Gnisrap: A Model of Device Use.” In Empirical
studies of programmers: second workshop, edited by Gary Olson, Sylvia Sheppard, and Elliot Soloway, 132–146.
New Jersey: Ablex Publishing, 1987.
Green, Thomas. “Cognitive Dimensions of Notations.” In People and Computers V, edited by Alistair Sutcliffe
and Linda Macaulay, 443–460. Cambridge: Cambridge University Press, 1989.
Hägele, Martin, Klas Nilsson, and Noberto Pires. “Industrial Robotics.” In Springer Handbook of Robotics, edited
by Bruno Siciliano and Oussama Khatib, 963–986. Berlin: Springer, 2008.
International Organisation for Standardisation. ISO/IEC/IEEE 24765: Systems and software engineering—
Vocabulary, 1st edition. Geneva: ISO/IEC, 2010.
Johnston, Wesley, J.R. Hanna, and Richard Millar. “Advances in Dataflow Programming Languages.” ACM
Computing Surveys 36, no. 1 (2004): 1–34.
Kelleher, Caitlin and Randy Pausch. “Lowering the Barriers to Programming: A Taxonomy of Programming
Environments and Languages for Novice Programmers.” ACM Computing Surveys 37, no. 2 (2005): 83–137.
Ko, Andrew, Robin Abraham, Laura Beckwith, Alan Blackwell, and Margaret Burnett. “The State of the Art in
End-User Software Engineering.” ACM Computing Surveys 43, no.17 (2011): 1–44.
Ko, Andrew. “Understanding Software Engineering through Qualitative Methods.” In Making Software: What
really Works, and Why We Believe It, edited by Andy Oram and Greg Wilson, 55–63. Sebastopol: O’Reilly Media,
2011.
Larman, Craig and Victor Basili. “Iterative and Incremental Developments: A Brief History.” Computer 36, no. 6
(2003): 47–56.
Leitão, António and Jose Santos. “Programming Languages for Generative Design: A Comparative Study.”
International Journal of Architectural Computing 10, no. 1 (2012): 139–162.
Leitão, António and Luis Santos. “Programming Languages for Generative Design: Visual or Textual.” In
Respecting Fragile Places: 29th eCAADe Conference Proceedings, edited by Tadeja Strojan Zupancic, Matevz
Juvancic, Spela Verovsek, and Anja Jutraz, 549–557. Ljubljana: University of Ljubljana, 2011.
Leitão, Antonio. “Teaching Computer Science for Architecture: A Proposal.” In Future Traditions: Rethinking
Traditions and Envisioning the Future in Architecture through the Use of Digital Technologies, edited by José
Sousa and João Xavier, 95–104. Porto: University of Porto, 2013.
Levitin, Anany. “How to Measure Software Size, and How Not To.” In Proceedings of IEEE COMPSAC 1986, 314–
318. Washington D.C.: IEEE Computer Society Press, 1986.
Lewis, Clayton and Gary Olson. “Can Principles of Cognition Lower the Barriers to Programming?” In Empirical
studies of programmers: second workshop, edited by Gary Olson, Sylvia Sheppard, and Elliot Soloway, 282–263.
New Jersey: Ablex Publishing, 1987.
Lopes, José, and António Leitão, “Portable Generative Design for CAD Applications.” In Integration through
Computation: Proceedings of the 31st Annual Conference of the Association for Computer Aided Design in
Architecture (ACADIA), edited by Joshua Taron, Vera Parlac, Branko Kolarevic and Jason Johnson, 196–203.
Banff: The University of Calgary, 2011.
Marx, Sherry. “Rich Data.” In The Sage Encyclopedia of Qualitative Research Methods, edited by Lisa Given,
794–795. Thousand Oaks: Sage Publications, 2008.
McKechnie, Lynne. “Participant Observation.” In The Sage Encyclopedia of Qualitative Research Methods,
edited by Lisa Given, 598–599. Thousand Oaks: Sage Publications, 2008.
Menges, Achim. “Instrumental Geometry.” Architectural Design 76, no. 2 (2006): 42–53.
Miller, George. “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing
Information.” Psychological Review: The Centennial Issue 101, no. 2 (1994): 343–352.
Mitchell, William, Robin Liggett, and Thomas Kvan. The Art of Computer Graphics Programming: A Structured
Introduction for Architects and Designers. New York: Van Nostrand Reinhold, 1987.
Mitchell, William. “Afterword: The Design Studio of the Future.” In The Electronic Design Studio, edited by
Malcolm McCullough, William Mitchell and Patrick Purcell, 479–494. Cambridge/MA: MIT Press, 1990.
Mühe, Henrik, Andreas Angerer, Alwin Hoffmann, and Wolfgang Reif. “On Reverse-engineering the KUKA
Robot Language.” Paper presented at the 1st International Workshop on Domain-Specific Languages and
models for Robotic systems, Taipei, October 22nd, 2010.
Myers, Brad. “Taxonomies of Visual Programming and Program Visualization.” Journal of Visual Languages
and Computing 1, no.1 (1990): 97–123.
Nardi, Bonnie. A Small Matter of Programming. Cambridge/MA: MIT Press, 1993.
Neugebauer, Clemens and Martin Kölldorfer. “Fabricating the Steel Bull of Spielberg.” In Robotic Fabrication in
Architecture, Art and Design, edited by Sigrid Brell-Cokcan and Johannes Braumann, 130–136. Vienna: Springer,
2012.
Nickerson, Jeff. “Visual Programming.” PhD diss., New York University, 1994.
Nickerson, Jeff. “Visual Programming: Limits of Graphic Representation.” In Proceedings of IEEE Symposium on
Visual Languages, edited by Allen Ambler and Takayuki Kimura, 178–179. Los Alamitos: IEEE Computer Society
Press, 1994.
Pane, John and Brad Myers. Usability Issues in the Design of Novice Programming Systems. Pittsburgh: Carnegie
Mellon University, 1996.
Petre, Marian and Thomas Green. “Learning to Read Graphics: Some Evidence that ‘Seeing’ an Information
Display is an Acquired Skill.” Journal of Visual Languages and Computing 4, no. 1 (1993): 55–70.
Picon, Antoine. “Robots and Architecture: Experiments, Fiction, Epistemology.” Architectural Design 84, no. 3
(2014): 54–59.
Resnick, Mitch and Eric Rosenbaum. “Designing for Tinkerability.” In Design, Make, Play: Growing the Next
Generation of STEM Innovators, edited by Magaret Honey and David Kanter, 163–181. New York: Routledge,
2013.
Robins, Anthony, Janet Rountree and Nathan Rountree. “Learning and Teaching Programming: A Review and
Discussion.” Computer Science Education 13, no. 2 (2003): 137–172.
Rountree, Janet, Anthony Robins, and Nathan Rountree. “Elaborating on Threshold Concepts.” Computer
Science Education 23, no. 3 (2013): 265–289.
Rutten, David. RhinoScript 101 for Rhinoceros 4.0. Seattle: Robert McNeel & Associates, 2007.
Schindler, Christoph “Ein architektonisches Periodisierungsmodell anhand fertigungstechnischer Kriterien,
dargestellt am Beispiel des Holzbaus.” PhD diss., ETH Zurich, 2009.
Schmitt, Gerhard. Microcomputer Aided Design for Architects and Designers. New York: John Wiley & Sons,
1988.
Schwartz, Thibault. “HAL: Extension of a Visual Programming Language to Support Teaching and Research on
Robotics Applied to Construction.” In Robotic Fabrication in Architecture, Art and Design, edited by Sigrid Brell-Cokcan and Johannes Braumann, 98–101. Vienna: Springer, 2012.
Simon, Herbert. The Sciences of the Artificial, 3rd edition. Cambridge/MA: MIT Press, 1996.
Turkle, Sherry and Seymour Papert. “Epistemological Pluralism: Styles and Voices within the Computer Culture.”
Signs 16, no. 1 (1990): 161–191.
Van Verth, James and Lars Bishop. Essential Mathematics for Games and Interactive Applications: A Programmer’s
Guide, 2nd edition. Burlington: Morgan Kaufmann, 2008.
Waldron, Kenneth and James Schmiedeler. “Kinematics.” In Springer Handbook of Robotics, edited by Bruno
Siciliano and Oussama Khatib, 9–33. Berlin: Springer, 2008.
Williams, Laurie. “Pair Programming.” In Making Software: What Really Works, and Why We Believe It, edited
by Andy Oram and Greg Wilson, 311–328. Sebastopol: O’Reilly Media, 2011.
Willmann, Jan, Fabio Gramazio, Matthias Kohler, and Silke Langenberg. "Digital by Material: Envisioning an
extended performative materiality in the digital age of architecture." In Robotic Fabrication in Architecture, Art,
and Design, edited by Sigrid Brell-Cokcan and Johannes Braumann, 12–27. Vienna: Springer, 2012.
Winslow, Leon. “Programming Pedagogy—A Psychological Overview.” ACM SIGCSE Bulletin 28, no. 3 (1996): 17–
22.
Yong, Y. F. and Maurice Bonney. “Off-line Programming.” In Handbook of Industrial Robotics, 2nd edition,
edited by Shimon Nof, 353–371. New York: John Wiley & Sons, 1999.
Yuen, Belinda. “Romancing the High-rise in Singapore.” Cities 22, no. 1 (2005): 3–13.
8.2 Online resources
ABB Robotics. “ABB Robotics.” Accessed 1st January 2016. http://www.abb.com/robotics
ABB Robotics. “ABB Robotics Historical Milestones.” Accessed January 1st 2016.
http://new.abb.com/products/robotics/home/about-us/historical-milestones
ABB Robotics. “RobotStudio.” Accessed January 1st 2016. http://new.abb.com/products/robotics/robotstudio
Anandan, Tanya. “Robotics in 2014: Market Diversity, Cobots and Global Investment.” Accessed January 1st
2016. http://www.robotics.org/content-detail.cfm/Industrial-Robotics-Industry-Insights/Robotics-in-2014-Market-Diversity-Cobots-and-Global-Investment/content_id/4614
Association for Robots in Architecture. “Association for Robots in Architecture.” Accessed 1st January 2016.
http://www.robotsinarchitecture.org/kuka-prc
Australian Institute of Architects. “Becoming an architect.” Accessed 1st January 2016.
http://www.architecture.com.au/architecture/national/becoming-an-architect
Autodesk Inc. “Autocad Architecture.” Accessed 1st January 2016.
http://www.autodesk.com/products/autocad-architecture/overview
Bachmann Engineering AG. “Bachmann Engineering AG.” Accessed 1st January 2016. http://www.bachmann-ag.com/
Baer, Steve. “ghPython—New Component and parallel modules.” Accessed 1st January 2016.
http://stevebaer.wordpress.com/2013/12/11/ghpython-node-in-code/
Bentley Systems. “About Generative Components.” Accessed 1st January 2016.
https://www.bentley.com/en/products/product-line/modeling-and-visualization-software/generativecomponents
Bentley Systems. “MicroStation.” Accessed 1st January 2016. http://www.bentley.com/en-US/Products/MicroStation/
Crane Robotics. “Crane Robotics.” Accessed 1st January 2016. http://cranerobotics.com/
Cycling 74. “MAX.” Accessed 1st January 2016. http://cycling74.com/products/max/
DesignScript. “DesignScript.” Accessed 1st January 2016. http://designscript.ning.com/
Future Cities Laboratory. “Future Cities Laboratory.” Accessed 1st January 2016.
http://www.futurecities.ethz.ch/
Google. “Google Python Style Guide.” Accessed 1st January 2016. https://google-styleguide.googlecode.com/svn/trunk/pyguide.html
Gramazio Kohler Research. “Curved Folding.” Accessed 1st January 2016.
http://gramaziokohler.arch.ethz.ch/web/e/lehre/207.html
Gramazio Kohler Research. “Gramazio Kohler research.” Accessed 1st January 2016.
http://gramaziokohler.arch.ethz.ch/
Gramazio Kohler Research. “Procedural Landscapes 1.” Accessed 1st January 2016.
http://gramaziokohler.arch.ethz.ch/web/e/lehre/208.html
Gramazio Kohler Research. “Shifted Frames.” Accessed 1st January 2016.
http://gramaziokohler.arch.ethz.ch/web/e/lehre/228.html
Gramazio Kohler Research. “Spatial Aggregations 1.” Accessed 1st January 2016.
http://gramaziokohler.arch.ethz.ch/web/e/lehre/242.html
Güdel. “2 axis linear modules.” Accessed 1st January 2016. http://www.gudel.com/products/linear-axes/linear-axis-one-and-multi-axis-with-rack-drive/2-axis-type-zp/
HAL Robotics Ltd. “HAL Robotics.” Accessed 1st January 2016. http://www.hal-robotics.com/
IronPython. “IronPython: the Python programming language for the .NET Framework.” Accessed 1st January
2016. http://ironpython.net/
KUKA Industrial Robots. “KUKA.” Accessed 1st January 2016. http://www.kuka-robotics.com/en/
KUKA Industrial Robots. “KUKA—History.” Accessed January 1st 2016. http://www.kuka-robotics.com/en/company/group/milestones/1996.htm
KUKA Industrial Robots. “Simulation-Planning-Optimization software.” Accessed January 1st 2016.
http://www.kuka-robotics.com/en/products/software/simulation/
Microsoft. “Microsoft Developer Network —C# Reference.” Accessed January 1st 2016.
https://msdn.microsoft.com/en-us/library/618ayhy6.aspx
National Instruments. “LabVIEW System Design Software.” Accessed 1st January 2016.
http://www.ni.com/labview/
PyLint. “Pylint.” Accessed 1st January 2016. http://www.pylint.org/
Python. “Python Glossary.” Accessed 1st January 2016. https://docs.python.org/2/glossary.html
Python. “Built-in Functions.” Python. Accessed January 1st 2016.
https://docs.python.org/2/library/functions.html#zip
Python-urx. “GitHub python-urx.” Accessed 1st January 2016. https://github.com/oroulet/python-urx
Racket. “Racket—A programmable programming language.” Accessed 1st January 2016. http://racket-lang.org/
Red, Edward. “Robotics Overview; EAAL—Electronics Assembly and Automation Laboratory.” Accessed January
1st 2016. http://eaal.groups.et.byu.net/html/RoboticsReview/body_robotics_review.html
Rob Technologies AG. “Rob Technologies AG.” Accessed 1st January 2016. http://www.rob-technologies.com/en/home
Robert McNeel and Associates. “Grasshopper: Algorithmic modelling for Rhino.” Accessed 1st January 2016.
http://www.grasshopper3d.com
Robert McNeel and Associates. “McNeel Grasshopper Developer forum.” Accessed January 1st 2016.
http://discourse.mcneel.com/c/grasshopper-developer
Robert McNeel and Associates. “Rhino’s market share?” Accessed 1st January 2016.
http://www.grasshopper3d.com/forum/topics/rhino-s-market-share
Robert McNeel and Associates. “Rhinoceros.” Accessed 1st January 2016. http://www.rhino3d.com
Robert McNeel and Associates. “RhinoScript Wiki.” Accessed 1st January 2016.
http://wiki.mcneel.com/developer/rhinoscript
Robots.IO. “ROBOTS.IO.” Accessed 1st January 2016. http://robots.io/wp/
Rosetta. “rosetta-lang” Accessed 1st January 2016. https://code.google.com/p/rosetta-lang/
Rutten, David. “Programming, conflicting perspectives.” Accessed 1st January 2016.
https://ieatbugsforbreakfast.wordpress.com/2012/04/01/programming-conflicting-perspectives/
S.A.N.S Wiki. “S.A.N.S Wiki.” Accessed 1st January 2016. https://sites.google.com/site/sanswikipage/
Scorpion robotics. “Scorpion robotics.” Accessed 1st January 2016. http://scorpion-robotics.com/
Scratch, “Scratch.” Accessed 1st January 2016. http://scratch.mit.edu/
222
Singapore Institute of Architects. “What is an architect.” Accessed 1st January 2016.
http://www.sia.org.sg/who-is-an-architect
Squeak. “Welcome to Squeak.” Accessed 1st January 2016. http://www.squeak.org/
Stäubli Robotics. “Stäubli Robotics.” Accessed 1st January 2016. http://www.staubli.com/en/robotics/
Tierney, Patrick. “DesignScript is now Dynamo.” Accessed 1st January 2016.
http://dynamobim.com/designscript-is-now-dynamo/
TIOBE. “TIOBE index for January 2016.” Accessed January 31st 2016. http://www.tiobe.com/tiobe_index
Universal Robots. “About Universal Robots—Our History.” Accessed January 1st 2016. http://www.universalrobots.com/about-universal-robots/our-history/
Universal Robots. “Universal Robots.” Accessed 1st January 2016. http://www.universal-robots.com/
Universal Robots. “UR5 Robot.” Accessed 1st January 2016. http://www.universalrobots.com/en/products/ur5-robot/
Van Rossum, Guido. “What is Python? Executive Summary.” Accessed 1st January 2016.
https://www.python.org/doc/essays/blurb/
Zaha Hadid Architects. “Venice Architecture Biennale.” Accessed 1st January 2016. http://www.zahahadid.com/design/contribution-to-2012-venice-biennale-theme-‘common-ground’/
223
9 Appendix
9.1 Interview: 2012 Design Research Studio fall semester
Part 1
Please refer to the set of Grasshopper programs that I have collected from your group.
1) Can you describe how the programs evolved?
2) Can you point out the most important developments?
3) Can you identify all instances when you modified YOUR components, created new ones or made
changes to the package?
Part 2
1) Was YOUR successful in making robot programming accessible? Why?
2) Can you compare the experience of using the different versions of YOUR?
3) In your opinion, should the focus in the next studio be on teaching students how to develop customized
components rather than on providing an expanded YOUR toolkit?
4) What support/ features do you consider to be important and currently missing in YOUR?
5) What prevented your team from completing more design-build cycles? Which was the most demanding
task: computational design, physical tooling or robot programming?
9.2 Interview: 2013 Design Research Studio fall semester
Part 1
Please refer to the set of Grasshopper programs that I have collected from your group.
1) Can you describe how the programs evolved?
2) Can you point out the most important developments?
3) Can you point out all instances when you modified YOUR components, developed custom ones, or made changes to the package?
Part 2
1) How did your experience of programming the robot differ between semesters? Did you learn any new
concepts? Did you adopt a different programming approach?
2) What functionalities do you feel are critical for robot programming but are currently missing from YOUR?
3) What was the most challenging aspect of implementing the robot program? Was it: coming up with the
underlying sequence and logic; specifying the movements (for example for folding); dealing with
exceptional cases; accounting for building inaccuracy; and/or fine-tuning parameters? What other
difficulties were there?
4) How did the development of the model fabrication process impact your design process?
5) Given another 2-3 months, what would you like to have developed further and how would you improve
your robot program?
9.3 Interview: Workshop
Part 1
With reference to your Grasshopper programs, please describe your overall programming process. Please take
note of the following questions:
1. Was there an aim that guided your process? Or was it vaguely defined at the start and only became clear later? Point out what was achieved at the end of each session (with reference to the artefacts) and state whether this was expected.
2. Why did you choose to work graphically (components) or textually (code)? Can you point out when you switched from one mode to the other? What motivated you to do so? Was it easy to transition between text and graphical programming?
Part 2
Please answer the following questions. After you have finished, can you suggest other ways in which the design
of the sample Grasshopper program and/or YOUR could be improved?
Visibility
How easy is it to see or find the various parts/components of the given Grasshopper program while it is being changed or created?
What about when you work with the internal code of the components?

Viscosity
How easy is it to change the initial definition? How easy is it to change the code within YOUR components?
Were any particular changes you made to the initial programs especially difficult?

Diffuseness
Using YOUR components, can you create your robot program reasonably briefly, or is it ‘long-winded’?
Can this be done more efficiently in code?

Hard mental operations
What operations require the most mental effort when using YOUR components?
Do you find that you have to work things out in your head or on a piece of paper instead?

Error-proneness
Do some mistakes seem particularly common or easy to make when working with YOUR components? For example, giving the wrong robot ID?

Closeness of mapping
How well do you think YOUR components describe the robotic process? What about the functions offered by the library?
For example, does it allow you to write code in a natural way that is similar to how you would describe the process conversationally?

Role-expressiveness
When reading the sample programs, is it easy to tell what each part is for in the overall scheme?
Are there any parts that are particularly difficult to interpret? What about individual components?
What about when reading the internal code of YOUR components?

Hidden dependencies
Did you find dependencies between components in the sample program easy to see? For example, that the move components need to be connected to a base? Or are they hidden?

Progressive evaluation
How easy is it to stop programming and check your work? Can this be done any time you like?
Can you try out partially completed versions of your program?

Provisionality
Is it easy to sketch things out when you are playing around with ideas, or when you aren’t sure how to proceed? Did the components help you in this way? Or was it easier to test ideas out in code?

Premature commitment
When working with the sample program, could you go about your task in any order you liked? Or were you forced to think ahead and make certain decisions first?

Consistency
Did you find any inconsistencies between YOUR components? Between YOUR components and standard Grasshopper components?
Were there other inconsistencies in the sample program? For example, between the visualization and the robot programming parts?

Secondary notation
Did you create notes in your program, for example by adding scribbles, panels, colours, or groups in Grasshopper, or by adding comments in code?
Did such notation help you to understand the sample program?

Abstraction
Are you able to define new components or create new functions? Did you do so?
Part 3
1. Did you find the foam-cutting process or the folding process more difficult? Can you explain why you felt this way?
2. Which of the following robotics-related concepts did you find difficult to understand: base, motion types, singularity, pose, and axis-angle? Can you suggest how these concepts could be better explained or taught?
3. What were your sources of motivation as you were learning to program the robot?
4. Conversely, what was the greatest source of frustration or barrier that inhibited your progress?
10 Project credits
Design Research Studio 2012: Design of Robotic Fabricated High Rises 1
Collaborators: Michael Budig (project lead), Dr. Silke Langenberg (Senior Researcher), Norman Hack,
Willi Lauer, Jason Lim, Raffael Petrovic, Dr. Tobias Bonwetsch, Ena Lloret, and Dr. Jan Willmann
Students: Sebastian Ernst, Pascal Genhart, Patrick Goldener, Sylvius Kramer, Sven Rickhoff, Silvan
Strohbach, Michael Stünzi, Martin Tessarz, Florence Thonney, Alvaro Valcarce Romero, Fabienne
Waldburger, and Tobias Wullschleger
Design Research Studio 2013: Design of Robotic Fabricated High Rises 2
Collaborators: Michael Budig + Raffael Petrovic (project lead), Willi Lauer, and Jason Lim
Students: Johan Julius Petrus Aejmelaeus-Lindström, Pun Hon Chiang, Kai Qi Foong, Yuhang He,
David Jenny, Lijing Kan, Ping Fuan Lee, Jean-Marc Stadelmann, and Andre Wong
Workshop: Programming Bespoke Robotic Processes (2014)
Collaborator: Jason Lim
Assistants: Lennard Ong and Ping Fuan Lee
Students: Clover Chen, Xia Tian, Amanda Yeo, Eileen Lim, Goh Yiqian, Amanda Mak, Clifford Kosasih,
Lau Jiehao, Leon Cher, and William Saputra
11 List of figures
Figure 1-1 An example of a movement function in KUKA Robot Language (KRL). 3
Figure 2-1 Spiral sub-routine written in RhinoScript. 11
Figure 2-2 Spiral sub-routine written in Python. 12
Figure 2-3 Spiral function written in DesignScript. 13
Figure 2-4 Spiral function written in Racket. 14
Figure 2-5 A Generative Components model of the conic spiral. 16
Figure 2-6 A Grasshopper implementation of the conic spiral. 17
Figure 2-7 An example drawing program. 22
Figure 2-8 Drawing program written in URScript. 23
Figure 2-9 Drawing program written in KUKA Robot Language (KRL). 24
Figure 2-10 Drawing program written in RAPID. 25
Figure 2-11 Example KUKA|prc Grasshopper program. 30
Figure 2-12 Example HAL Grasshopper program. 31
Figure 2-13 Example Crane Grasshopper program. 32
Figure 2-14 Example Godzilla Grasshopper program. 33
Figure 2-15 Example Scorpion Grasshopper program. 34
Figure 3-1 Analysis scripts example results. 44
Figure 3-2 Differences between two marked-up Grasshopper programs. 45
Figure 4-1 Prevailing public housing towers in Singapore. Image credit: Michael Budig. 47
Figure 4-2 The custom robotic setup with a 4 m tall x 1.7 m wide x 2.7 m working envelope. 48
Figure 4-3 YOUR Grasshopper toolkit comprising ten Python scripting components. 51
Figure 4-4 Pick-glue-place process. 51
Figure 4-5 Downstream production-related section of sample Grasshopper program. 52
Figure 4-6 1:50 models representing the intermediate and final design proposals. Image credit: Pascal Genhart, Patrick Goldener, Florence Thonney, and Tobias Wullschleger. 53
Figure 4-7 Grasshopper program for assembling the final Tiong Bahru tower. 53
Figure 4-8 The floor assembly process: pick (left); move to safety point (middle); and place (right). Image credit: Pascal Genhart, Patrick Goldener, Florence Thonney, and Tobias Wullschleger. 57
Figure 4-9 The wall assembly process: pick (left); glue (middle); and place (right). Image credit: Pascal Genhart, Patrick Goldener, Florence Thonney, and Tobias Wullschleger. 57
Figure 4-10 The control interface. 58
Figure 4-11 The final fabricated model. Image credit: Callaghan Walsh. 59
Figure 4-12 1:50 models representing three design iterations (from left to right). Image credit: Sylvius Kramer, Alvaro Romero, Michael Stünzi, and Fabienne Waldburger. 60
Figure 4-13 The end-effector. Image credit: Sylvius Kramer, Alvaro Romero, Michael Stünzi, and Fabienne Waldburger. 60
Figure 4-14 Arc-shaped and circular modelling elements. 61
Figure 4-15 The VerticalGrowth Grasshopper program. 62
Figure 4-16 The subgraph in part 4 of the VerticalGrowth program. 62
Figure 4-17 Script in the modified Place component. 64
Figure 4-18 The HorizontalGrowth Grasshopper program. 64
Figure 4-19 The final fabricated model. Image credit: Callaghan Walsh. 65
Figure 4-20 1:50 models representing the intermediate and final design proposals. Image credit: Sebastian Ernst, Sven Rickhoff, Silvan Stohbach, and Martin Tessarz. 66
Figure 4-21 The custom end-effector. Image credit: Sebastian Ernst, Sven Rickhoff, Silvan Stohbach, and Martin Tessarz. 67
Figure 4-22 The Grasshopper program for designing and fabricating the final model. 67
Figure 4-23 The production-related section of the Grasshopper program. 68
Figure 4-24 The floor assembly process. Image credit: Sebastian Ernst, Sven Rickhoff, Silvan Stohbach, and Martin Tessarz. 69
Figure 4-25 Folding correction logic. 70
Figure 4-26 The wall assembly process. Image credit: Sebastian Ernst, Sven Rickhoff, Silvan Stohbach, and Martin Tessarz. 70
Figure 4-27 The control interface of the robot program. 71
Figure 4-28 The final fabricated model. Image credit: Callaghan Walsh. 72
Figure 4-29 YOUR package comprising five Python modules. 73
Figure 4-30 Two iterations of the MoveJoints component. 74
Figure 4-31 YOUR Grasshopper toolkit comprising eighteen Python scripting components. 75
Figure 4-32 Previous (left) and new Pick component (right). 76
Figure 4-33 1:50 model representing the final design proposal (left). Image credit: Callaghan Walsh. 77
Figure 4-34 Twist variations (left); and louvered screens (right). Image credit: Pascal Genhart and Tobias Wullschleger. 78
Figure 4-35 The Grasshopper program used to fabricate the final Nested Voids model. 78
Figure 4-36 Strip twisting process. Image credit: Pascal Genhart and Tobias Wullschleger. 81
Figure 4-37 Diagram of strip twisting process. 82
Figure 4-38 The model construction process (left); final model (right). Image credit: Pascal Genhart, Tobias Wullschleger, and Callaghan Walsh. 83
Figure 4-39 The strip bending process. Image credit: Michael Stünzi and Sylvius Kramer. 84
Figure 4-40 1:50 model representing the final design proposal. Image credit: Callaghan Walsh, Michael Stünzi, and Sylvius Kramer. 85
Figure 4-41 An initial implementation of the robot program. 86
Figure 4-42 Three YOUR components used in the bending process. 87
Figure 4-43 The final implemented robot program. 88
Figure 4-44 A portion of the script in the PlanesToURScript custom component. 89
Figure 4-45 The model construction process (left); final model (right). Image credit: Callaghan Walsh, Michael Stünzi, and Sylvius Kramer. 90
Figure 4-46 Paper strip deformation process. Image credit: Sebastian Ernst, Sven Rickhoff, and Silvan Stohbach. 91
Figure 4-47 Section of the high-rise. Image credit: Callaghan Walsh and Raffael Petrovic. 91
Figure 4-48 Evolution of the end-effector. Image credit: Sebastian Ernst, Sven Rickhoff, and Silvan Stohbach. 92
Figure 4-49 The end-effector grips the two strips, shifts one of them, and then staples them. Image credit: Sebastian Ernst, Sven Rickhoff, and Silvan Stohbach. 93
Figure 4-50 The final Grasshopper program was structured in nine parts. 94
Figure 4-51 The script in the custom paper deformation component. 97
Figure 4-52 Strip deformation process. 98
Figure 4-53 The model fabrication process (left); final model (right). Image credit: Sebastian Ernst, Sven Rickhoff, Silvan Stohbach, and Callaghan Walsh. 99
Figure 4-54 The compiled version of MoveLinear. 103
Figure 4-55 YOUR Grasshopper toolkit comprising eighteen user objects. 104
Figure 4-56 Sample Grasshopper program given out at the start of the 2013 studio. 105
Figure 4-57 1:50 models representing iterations of the Sequential Frames high-rise design. Image credit: David Jenny, Jean-Marc Stadelmann, He Yuhang, and Raffael Petrovic. 107
Figure 4-58 1:50 models representing iterations of the Mesh Towers design. Image credit: Petrus Aejmelaeus-Lindström, Chiang Punhon, Lee Pingfuan, and Raffael Petrovic. 108
Figure 4-59 1:50 models representing iterations of the Vertical Avenue high-rise design. Image credit: Kan Lijing, Foong Kaiqi, Andre Wong, and Raffael Petrovic. 108
Figure 4-60 Modified robot program by the Vertical Avenue team. 109
Figure 4-61 Cardboard-folding process. Image credit: Petrus Aejmelaeus-Lindström, Chiang Punhon, and Lee Pingfuan. 109
Figure 4-62 Robot program implemented by the Mesh Tower team. 110
Figure 4-63 YOUR package comprising two Python modules. 112
Figure 4-64 Implementation details of the pose_by_plane function. 112
Figure 4-65 A simple Dynamo program with equivalent YOUR components. 113
Figure 4-66 YOUR Grasshopper toolkit comprising sixteen user objects. 114
Figure 4-67 Evolution of a wall element. 116
Figure 4-68 An early version of the robot program used to test folding-cutting operations. 116
Figure 4-69 The fold and cut component (left); steps in the process (right). 117
Figure 4-70 The extended sub-graph for the cutting process. 118
Figure 4-71 Four variants of a 90-degree folded wall. 119
Figure 4-72 Section (left) and ground plan (right) for the final high-rise design. Image credit: David Jenny and Jean-Marc Stadelmann. 120
Figure 4-73 The five different wall types (left) and their distribution in a prototypical floor (right). 120
Figure 4-74 The program used for fabricating the final tower was organised in seven parts. 120
Figure 4-75 Fold-cut-place process. Image credit: David Jenny and Jean-Marc Stadelmann. 123
Figure 4-76 The model production process (left) and the final model (right). Image credit: David Jenny, Jean-Marc Stadelmann, and Callaghan Walsh. 124
Figure 4-77 The foam cutting setup (left); one half of a wall (right). Image credit: Petrus Aejmelaeus-Lindström, Chiang Punhon, and Lee Pingfuan. 125
Figure 4-78 Initial program for carrying out foam-cutting tests. 126
Figure 4-79 The student team conducted over a hundred test cuts. Image credit: Petrus Aejmelaeus-Lindström, Chiang Punhon, and Lee Pingfuan. 127
Figure 4-80 Cross-section of final tower and figure-ground plan. Image credit: Petrus Aejmelaeus-Lindström, Chiang Punhon, and Lee Pingfuan. 128
Figure 4-81 The program used for fabricating the final tower was organised in nine parts. 128
Figure 4-82 The custom PyCutPrep component. 131
Figure 4-83 The hot-wire foam-cutting process. Image credit: Petrus Aejmelaeus-Lindström, Chiang Punhon, and Lee Pingfuan. 132
Figure 4-84 The custom CutWall component generated and sent instructions for cutting a wall surface. 132
Figure 4-85 The script encapsulated in CutWall. 133
Figure 4-86 The model production process (left) and the final model (right). Image credit: Petrus Aejmelaeus-Lindström, Chiang Punhon, Lee Pingfuan, and Callaghan Walsh. 134
Figure 4-87 The robotic sensor-based assembly process. Image credit: Kan Lijing and Foong Kaiqi. 135
Figure 4-88 Section and ground plan for the final high-rise design. Image credit: Kan Lijing and Foong Kaiqi. 135
Figure 4-89 The final program used for fabricating the façade elements. 136
Figure 4-90 Design visualisation in Rhinoceros. 139
Figure 4-91 Plastic bending process. Image credit: Kan Lijing and Foong Kaiqi. 139
Figure 4-92 Code snippet from the SecondFold component. 140
Figure 4-93 Final program used for assembling the tower. 140
Figure 4-94 The robot descends and indicates the alignment of the façade. Image credit: Kan Lijing and Foong Kaiqi. 143
Figure 4-95 MoveSense component. 144
Figure 4-96 Model production (left) and final model (right). Image credit: Kan Lijing, Foong Kaiqi, and Callaghan Walsh. 145
Figure 5-1 The foam-cutting process. 152
Figure 5-2 The plastic-crumpling process. 152
Figure 5-3 Fabrication process components. 153
Figure 5-4 1.5 mm thick acrylic strip (left); heating station (middle); end-effector (right). 153
Figure 5-5 Toolkit of YOUR Grasshopper user objects. 154
Figure 5-6 The custom component “Hello” is dynamically loaded as a function. 155
Figure 5-7 SequentialCut component and its encapsulated script. 156
Figure 5-8 Crumple component and its encapsulated script. 157
Figure 5-9 The foam-cutting Grasshopper program was organised in eight parts. 158
Figure 5-10 Screenshot showing a visualisation of the foam-cutting process. 159
Figure 5-11 The plastic-crumpling Grasshopper program was organised in seven parts. 160
Figure 5-12 Screenshot of the crumpling visualisation. 160
Figure 5-13 Initial state of students’ plastic-crumpling program. 162
Figure 5-14 The students produced a set of strips with different crumpled forms. 163
Figure 5-15 Initial state of students’ foam-cutting program. 164
Figure 5-16 The team produced a thickened surface with an undulating texture. 165
Figure 5-17 The students decomposed the Crumple component into several simpler ones. 166
Figure 5-18 The students created additional customised YOUR components. 166
Figure 5-19 The students created additional Fold components. 167
Figure 5-20 The instructions generated by the custom components are stored in parameters. 168
Figure 5-21 The students produced a series of strips that were increasingly crumpled and accurate. 169
Figure 5-22 The state of the group’s robot program at the end of the first block in the session. 169
Figure 5-23 The students produced a series of thickened surfaces by making repeated offset cuts. 170
Figure 5-24 The students adjusted the program’s layout, parameter values (A) and modified Crumple (B). 171
Figure 5-25 A series of crumpled strips produced by varying the folding angle. 171
Figure 5-26 A stepped surface is produced by repeatedly slicing the foam block in smaller sections. 172
Figure 5-27 Initial state of students’ foam-cutting program. 173
Figure 5-28 The students created three custom components. 173
Figure 5-29 Code snippet from custom Python component. 174
Figure 5-30 The foam block was sliced six times to produce the final stepped surface. 175
Figure 5-31 A serrated surface is produced by making two offset zig-zag cuts. 175
Figure 5-32 The second cut is offset from the first one (left) to produce a thickened serrated surface. 176
Figure 5-33 The team added a sub-graph and modified the SequentialCut component (D). 177
Figure 5-34 Screenshot showing the visualised cut-path. 177
Figure 5-35 The team modified the sample program to cut a finger joint. 178
Figure 5-36 Initial state of students’ plastic-crumpling program. 179
Figure 5-37 The plastic strip is produced as a result of modifying the original folding and crumpling motions. 179
Figure 5-38 The team added a code chunk to the script in Crumple describing a new pulling operation. 180
Figure 5-39 The team modified the component to repeatedly crumple a section of the strip. 181
Figure 5-40 The team added a code chunk and extended the list of instructions. 181
Figure 5-41 Strips with straight segments and u-shaped crumpled folds at their corners. 182
Figure 5-42 The team specified repeated approach-fold-crumple-unfold-pull-retract operations. 182
Figure 5-43 Final set of strips with s-shaped folds (left); fold detail (right). 183
Figure 5-44 Students adjusted sliders then modified the Crumple component (D). 183
Figure 5-45 The team altered the rotation axis and a second folding operation in the script of Crumple. 184
Figure 5-46 The robot twists the strip. 184
Figure 5-47 Initial state of students’ foam-cutting program. 185
Figure 5-48 The team generated different surfaces to cut by changing their profile curves. 185
Figure 5-49 The state of the team’s foam-cutting program at the end of the session. 186
Figure 5-50 Surfaces cut using linear (left) and joint-based motions (right). 187
Figure 5-51 A thickened wavy surface is produced. 188
Figure 6-1 Part of an exemplary style guide. 199
Figure 6-2 MaxInspired buttons (top row) and toggles (bottom row) can be resized and colour-coded. 204
Figure 6-3 The robot moves to a pre-defined position and waits for the student to apply glue. 208

Table 3-1 Evolution of the graphical YOUR toolkit. 41
Table 6-1 The graphic and textual token count, number of YOUR components used in the final programs. 196