SOFTIMAGE®|XSI®
Basics
Copyright and Disclaimer
© 1999–2007 Avid Technology, Inc. All rights reserved.
Avid, Avid Mojo, dotXSI, Behavior, Elastic Reality, FACE ROBOT, meta-clay, Painterly
Effects, SOFTIMAGE, XSI, and the XSI logo are either registered trademarks or trademarks
of Avid Technology, Inc. in the United States and/or other countries. mental ray and mental
images are registered trademarks of mental images GmbH in the U.S.A. and some other
countries. mental ray Phenomenon, mental ray Phenomena, Phenomenon, Phenomena,
Software Protection Manager, and SPM are trademarks or, in some countries, registered
trademarks of mental images GmbH. Alienbrain, the Alienbrain logo and NXN are
trademarks of NXN Software AG. Syflex is a trademark of Syflex LLC. AGEIA and physX
are trademarks of AGEIA Technologies, Inc. Activision is a registered trademark of
Activision, Inc. © 1998 Activision, Inc. Battlezone is a trademark of and © 1998 Atari
Interactive, Inc., a Hasbro company. All rights reserved. Licensed by Activision.
Digimation and Digimation Model Bank are either registered trademarks or trademarks
of Digimation, Inc. in the United States and/or other countries. Copyright © 2005 by
Paramount Pictures Corporation and Viacom International Inc. All Rights Reserved.
Nickelodeon, Barnyard and all related titles, logos and characters are trademarks of
Viacom International Inc. All other trademarks contained herein are the property of their
respective owners.
SOFTIMAGE|XSI uses JScript and Visual Basic Scripting Edition from Microsoft
Corporation.
SOFTIMAGE|XSI includes Zbump shader © 2006–2007 Ben Rogall. All rights reserved.
SOFTIMAGE|XSI includes Open Dynamics Engine ("ODE") software copyright (c) 2001-2003, Russell L. Smith. All rights reserved. Redistribution and use in source and binary
forms of ODE, with or without modification, are permitted provided that the following
conditions are met: [1.] Redistributions of ODE source code must retain the above
copyright notice, this list of conditions and the following disclaimer. [2.] Redistributions
of ODE binary code must reproduce the above copyright notice, this list of conditions and
the following disclaimer in the documentation and/or other materials provided with the
distribution. [3.] Neither the names of ODE's copyright owner nor the names of its
contributors may be used to endorse or promote products derived from this software
without specific prior written permission.
THE ODE SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA,
OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
SOFTIMAGE|XSI includes software developed by the University of California, Berkeley
and its contributors. Copyright (c) 1989, 1993, 1994 The Regents of the University of
California. All rights reserved. This University of California, Berkeley ("UCB") code is
derived from software contributed to Berkeley by Guido van Rossum. Redistribution and
use of the UCB code in source and binary forms, with or without modification, are
permitted provided that the following conditions are met: [1.] Redistributions of the UCB
source code must retain the above copyright notice, this list of conditions and the
following disclaimer. [2.] Redistributions of UCB binary code must reproduce the above
copyright notice, this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution. [3.] All advertising materials
mentioning features or use of this UCB code must display the following
acknowledgement: This product includes software developed by the University of
California, Berkeley and its contributors.
[4.] Neither the name of the University of California, Berkeley, nor the names of its
contributors may be used to endorse or promote products derived from this software
without specific prior written permission.
THE UCB CODE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS''
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS
OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
SOFTIMAGE|XSI contains modified portions of grid control software © 1998–1999 Chris
Maunder.
SOFTIMAGE|XSI includes the Python Release 2.3.2 software. The Python software is:
Copyright © 2001, 2002, 2003 Python Software Foundation. All rights reserved.
Copyright © 2000 BeOpen.com. All rights reserved.
Copyright © 1995-2000 Corporation for National Research Initiatives. All rights reserved.
Copyright © 1991-1995 Stichting Mathematisch Centrum. All rights reserved.
SOFTIMAGE|XSI uses the OpenEXR software © 2002, Industrial Light & Magic, a
division of Lucas Digital Ltd. LLC All rights reserved. Redistribution and use in source and
binary forms, with or without modification, are permitted provided that the following
conditions are met: [1] Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer. [2] Redistributions in binary
form must reproduce the above copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided with the distribution.
[3] Neither the name of Industrial Light & Magic nor the names of its contributors may be
used to endorse or promote products derived from this software without specific prior
written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA,
OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
SOFTIMAGE|XSI includes the libtiff library. The libtiff software is:
Copyright (c) 1988-1997 Sam Leffler
Copyright (c) 1991-1997 Silicon Graphics, Inc.
Permission to use, copy, modify, distribute, and sell this software and its documentation
for any purpose is hereby granted without fee, provided that (i) the above copyright
notices and this permission notice appear in all copies of the software and related
documentation, and (ii) the names of Sam Leffler and Silicon Graphics may not be used in
any advertising or publicity relating to the software without the specific, prior written
permission of Sam Leffler and Silicon Graphics.
THE SOFTWARE IS PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND,
EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY
WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
IN NO EVENT SHALL SAM LEFFLER OR SILICON GRAPHICS BE LIABLE FOR ANY
SPECIAL, INCIDENTAL, INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY
KIND, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA
OR PROFITS, WHETHER OR NOT ADVISED OF THE POSSIBILITY OF DAMAGE,
AND ON ANY THEORY OF LIABILITY, ARISING OUT OF OR IN CONNECTION
WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
SOFTIMAGE|XSI includes the libpng-1.2.5 library. The libpng software is: Copyright (c)
2000-2002 Glenn Randers-Pehrson, and is distributed according to the same disclaimer
and license as libpng-1.0.6 with the following individuals added to the list of Contributing
Authors: Simon-Pierre Cadieux, Eric S. Raymond, and Gilles Vollant.
There is no warranty against interference with your enjoyment of the library or against
infringement. There is no warranty that our efforts or the library will fulfill any of your
particular purposes or needs. This library is provided with all faults, and the entire risk of
satisfactory quality, performance, accuracy, and effort is with the user.
THE PNG REFERENCE LIBRARY IS SUPPLIED "AS IS". THE CONTRIBUTING AUTHORS AND
GROUP 42, INC. DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT
LIMITATION, THE WARRANTIES OF MERCHANTABILITY AND OF FITNESS FOR ANY PURPOSE.
THE CONTRIBUTING AUTHORS AND GROUP 42, INC. ASSUME NO LIABILITY FOR DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES, WHICH MAY
RESULT FROM THE USE OF THE PNG REFERENCE LIBRARY, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
SOFTIMAGE|XSI includes the libjpeg library. This software is based in part on the work
of the Independent JPEG Group. The libjpeg library is: Copyright 1991-1998, Thomas G.
Lane. All Rights reserved.
Permission is hereby granted to use, copy, modify, and distribute this software (or
portions thereof) for any purpose, without fee, subject to these conditions: [1] If any part
of the source code for this software is distributed, then this README file must be
included, with this copyright and no-warranty notice unaltered; and any additions,
deletions, or changes to the original files must be clearly indicated in accompanying
documentation. [2] If only executable code is distributed, then the accompanying
documentation must state that "this software is based in part on the work of the
Independent JPEG Group". [3] Permission for use of this software is granted only if the
user accepts full responsibility for any undesirable consequences; the authors accept NO
LIABILITY for damages of any kind.
SOFTIMAGE|XSI Mod Tool includes software developed by the OpenSSL Project for use
in the OpenSSL Toolkit (http://www.openssl.org/). The OpenSSL toolkit is: Copyright (c)
1998-2003 The OpenSSL Project. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are
permitted provided that the following conditions are met: [1.] Redistributions of source
code must retain the above copyright notice, this list of conditions and the following
disclaimer. [2.] Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution. [3.] All advertising materials mentioning
features or use of this software must display the following acknowledgment: "This product
includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit.
(http://www.openssl.org/)" [4.] The names "OpenSSL Toolkit" and "OpenSSL Project"
must not be used to endorse or promote products derived from this software without
prior written permission. For written permission, please contact [email protected]. [5.] Products derived from this software may not be called "OpenSSL"
nor may "OpenSSL" appear in their names without prior written permission of the
OpenSSL Project. [6.] Redistributions of any form whatsoever must retain the following
acknowledgment: "This product includes software developed by the OpenSSL Project for
use in the OpenSSL Toolkit (http://www.openssl.org/)".
THIS SOFTWARE IS PROVIDED BY THE OPENSSL PROJECT ``AS IS'' AND ANY
EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OPENSSL
PROJECT OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
This product includes cryptographic software written by Eric Young ([email protected]).
This product includes software written by Tim Hudson ([email protected]).
The SSLeay library is: Copyright (C) 1995-1998 Eric Young ([email protected]) All
rights reserved.
This package is an SSL implementation written by Eric Young ([email protected]). The
implementation was written so as to conform with Netscapes SSL.
This library is free for commercial and non-commercial use as long as the following
conditions are aheared to. The following conditions apply to all code found in this
distribution, be it the RC4, RSA, lhash, DES, etc., code; not just the SSL code. The SSL
documentation included with this distribution is covered by the same copyright terms
except that the holder is Tim Hudson ([email protected]).
Copyright remains Eric Young's, and as such any Copyright notices in the code are not to
be removed. If this package is used in a product, Eric Young should be given attribution as
the author of the parts of the library used. This can be in the form of a textual message at
program startup or in documentation (online or textual) provided with the package.
Redistribution and use in source and binary forms, with or without modification, are
permitted provided that the following conditions are met: [1.] Redistributions of source
code must retain the copyright notice, this list of conditions and the following disclaimer.
[2.] Redistributions in binary form must reproduce the above copyright notice, this list of
conditions and the following disclaimer in the documentation and/or other materials
provided with the distribution. [3.] All advertising materials mentioning features or use of
this software must display the following acknowledgement: "This product includes
cryptographic software written by Eric Young ([email protected])". The word
'cryptographic' can be left out if the routines from the library being used are not
cryptographic related. [4.] If you include any Windows specific code (or a derivative
thereof) from the apps directory (application code) you must include an
acknowledgement: "This product includes software written by Tim Hudson
([email protected])"
THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA,
OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
The licence and distribution terms for any publically available version or derivative of this
code cannot be changed. i.e. this code cannot simply be copied and put under another
distribution licence [including the GNU Public Licence.]
SOFTIMAGE|XSI includes software developed by the Apache Software Foundation
(http://www.apache.org/). The Xerces software is: Copyright (c) 1999-2003 The Apache Software
Foundation. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are
permitted provided that the following conditions are met: [1.] Redistributions of source
code must retain the above copyright notice, this list of conditions and the following
disclaimer. [2.] Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution. [3.] The end-user documentation included
with the redistribution, if any, must include the following acknowledgment: "This product
includes software developed by the Apache Software Foundation (http://www.apache.org/
)." Alternately, this acknowledgment may appear in the software itself, if and wherever
such third-party acknowledgments normally appear. [4.] The names "Xerces" and
"Apache Software Foundation" must not be used to endorse or promote products derived
from this software without prior written permission. For written permission, please
contact [email protected] [5.] Products derived from this software may not be called
"Apache", nor may "Apache" appear in their name, without prior written permission of
the Apache Software Foundation.
THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESSED OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE APACHE SOFTWARE FOUNDATION OR
ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
This software consists of voluntary contributions made by many individuals on behalf of
the Apache Software Foundation and was originally based on software copyright (c) 1999,
International Business Machines, Inc., http://www.ibm.com. For more information on
the Apache Software Foundation, please see http://www.apache.org/.
This document is protected under copyright law. An authorized licensee of
SOFTIMAGE|XSI may reproduce this publication for the licensee’s own use in learning
how to use the software. This document may not be reproduced or distributed, in whole
or in part, for commercial purposes, such as selling copies of this document or providing
support or educational services to others. This document is supplied as a guide for
SOFTIMAGE|XSI. Reasonable care has been taken in preparing the information it
contains. However, this document may contain omissions, technical inaccuracies, or
typographical errors. Avid Technology, Inc. does not accept responsibility of any kind for
customers’ losses due to the use of this document. Product specifications are subject to
change without notice.
Documentation Team
Judy Bayne, Grahame Fuller, Amy Green, Edna Kruger, and Naomi Yamamoto.
Print-On-Demand By LuLu.com
Part No. 0130-07196-02 . 03 2007
Contents
Welcome to SOFTIMAGE®|XSI® . . . . . . . . . . . . . . . . . . . . . 11
Section 1
Introducing XSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
The XSI Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Getting Commands and Tools . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Setting Values for Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Working with Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Working in 3D Views. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Exploring Your Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Sample Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Section 2
Elements of a Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
What’s Inside a Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Selecting Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Components and Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Parameter Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Section 3
Moving in 3D Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Coordinate Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Center Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Freezing Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Resetting Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Setting Neutral Poses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Transform Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Transformations and Hierarchies . . . . . . . . . . . . . . . . . . . . . . . . 69
Snapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Section 4
Organizing Your Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Where Files Get Stored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Scenes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Importing and Exporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Section 5
General Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Overview of Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Geometric Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Accessing Modeling Commands . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Starting from Scratch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Operator Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Modeling Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Attribute Transfer (GATOR) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Manipulating Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Deformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Section 6
Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
About Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Drawing Curves. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Manipulating Curve Components . . . . . . . . . . . . . . . . . . . . . . 105
Modifying Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Creating Curves from Other Objects . . . . . . . . . . . . . . . . . . . . 108
Importing EPS Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Section 7
Polygon Mesh Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Overview of Polygon Mesh Modeling . . . . . . . . . . . . . . . . . . . . . . 112
About Polygon Meshes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Converting Curves to Polygon Meshes . . . . . . . . . . . . . . . . . . . . . 116
Drawing Polygons. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Subdividing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Drawing Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Extruding Components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Removing Polygon Mesh Components . . . . . . . . . . . . . . . . . . . . . 121
Combining Polygon Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Symmetrizing Polygons. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Cleaning Up Meshes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Subdivision Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Section 8
NURBS Surface Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
About Surfaces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Building Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Modifying Surfaces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Projecting and Trimming with Curves . . . . . . . . . . . . . . . . . . . . . . 131
Surface Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Section 9
Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Bringing It to Life . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Playing the Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Previewing Animation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Animating with Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Animating Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Editing Keys and Function Curves . . . . . . . . . . . . . . . . . . . . . . . . . 146
Layering Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Constraints. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Path Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Linking Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Expressions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Copying Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Scaling and Offsetting Animation . . . . . . . . . . . . . . . . . . . . . . . . . 156
Plotting (Baking) Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Removing Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Section 10
Character Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Character Animation in a Nutshell . . . . . . . . . . . . . . . . . . . . . . . . 160
Setting Up Your Character . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Building Skeletons for Characters . . . . . . . . . . . . . . . . . . . . . . . . . 164
Enveloping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Rigging a Character . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Animating Characters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Walkin’ the Walk Cycle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Motion Capture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Section 11
Shape Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Things are Shaping Up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Using Construction Modes for Shape Animation. . . . . . . . . . . . . . 188
Creating and Animating Shapes in the Shape Manager . . . . . . . . 189
Selecting Shape Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Storing and Applying Shape Keys . . . . . . . . . . . . . . . . . . . . . . . . . 191
Using the Animation Mixer for Shape Animation . . . . . . . . . . . . . 192
Mixing the Weights of Shape Keys . . . . . . . . . . . . . . . . . . . . . . . . 193
Section 12
Actions and the Animation Mixer . . . . . . . . . . . . . . . . . . . 195
What Is Nonlinear Animation? . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Overview of the Animation Mixer . . . . . . . . . . . . . . . . . . . . . . . . . 197
Storing Animation in Action Sources. . . . . . . . . . . . . . . . . . . . . . . 198
Working with Clips in the Animation Mixer . . . . . . . . . . . . . . . . . 200
Mixing the Weights of Action Clips. . . . . . . . . . . . . . . . . . . . . . . . 201
Modifying and Offsetting Action Clips . . . . . . . . . . . . . . . . . . . . . 202
Sharing Animation between Models . . . . . . . . . . . . . . . . . . . . . . . 204
Adding Audio to the Mix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Section 13
Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Dynamics and Particle Effects . . . . . . . . . . . . . . . . . . . . . . . . . 208
Making Things Move with Forces . . . . . . . . . . . . . . . . . . . . . . 208
Particles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Hair and Fur . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Rigid Body Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Cloth Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Soft Body Dynamics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Section 14
Shaders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
The Shader Library. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
Connecting Shaders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
The Render Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Building Shader Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Editing Shader Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Section 15
Materials and Surface Shaders . . . . . . . . . . . . . . . . . . . . . 241
About Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Material Libraries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Creating and Assigning Materials . . . . . . . . . . . . . . . . . . . . . . 244
The Material Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Surface Shaders. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Basic Surface Color Attributes . . . . . . . . . . . . . . . . . . . . . . . . . 249
Reflectivity, Transparency, and Refraction . . . . . . . . . . . . . . . . 250
Section 16
Texturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
How Surface and Texture Shaders Work Together . . . . . . . . . . 254
Types of Textures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Applying Textures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Texture Projections and Supports. . . . . . . . . . . . . . . . . . . . . . . 257
Editing Texture Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
UV Coordinates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Editing UV Coordinates in the Texture Editor . . . . . . . . . . . . . . 265
Texture Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Bump Maps and Displacement Maps. . . . . . . . . . . . . . . . . . . . 268
Reflection Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Baking Textures with RenderMap . . . . . . . . . . . . . . . . . . . . . . 271
Painting Color at Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Section 17
Lighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Types of Lights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Placing Lights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Setting Light Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
Selective Lights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Creating Shadows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Global Illumination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Caustics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Final Gathering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Light Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
Image-Based Lighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Section 18
Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
Types of Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
The Camera Rig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Working with Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Setting Camera Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Lens Shaders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Motion Blur . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
Section 19
Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Rendering Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Render Passes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Setting Render Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Selecting a Rendering Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Different Ways to Render . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Basics • 9
Contents
Section 20
Compositing and 2D Paint . . . . . . . . . . . . . . . . . . . . . . . . . . 307
XSI Illusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Adding Images and Render Passes . . . . . . . . . . . . . . . . . . . . . . . . 309
Adding and Connecting Operators . . . . . . . . . . . . . . . . . . . . . . . . 310
Editing and Previewing Operators . . . . . . . . . . . . . . . . . . . . . . . . . 312
Rendering Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
2D Paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Vector Paint vs. Raster Paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Painting Strokes and Shapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
Merging and Cloning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Section 21
Customizing XSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Plug-ins and Add-ons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Toolbars and Shelves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Custom and Proxy Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Key Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Other Customizations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
10 • SOFTIMAGE|XSI
Welcome to SOFTIMAGE®|XSI®
SOFTIMAGE|XSI is the next-generation 3D system that integrates
modeling, animation, simulation, compositing, and rendering into a
single, seamless environment. XSI incorporates many standard 3D
tools and functions, but goes far beyond that in terms of tool
sophistication and artistic control.
Modeling
The modeling tools are designed for creating and editing seamless
animated models of any sort. XSI offers many tools for creating,
editing, and deforming polygons and subdivision surfaces, as well as
NURBS curves and surfaces.
Animation
XSI provides you with a complete set of both low-level and high-level
animation tools. All the fundamental low-level tools are there with
keyframing, fcurve editor, dopesheet, constraints, linked parameters,
and expressions. You can also layer keyframe animation on top of any
other type of animation.
Shape animation is achieved using a number of techniques and tools,
including the popular and easy-to-use shape manager.
For high-level animation, you have the animation mixer which lets you
mix, transition, and combine all forms of animation, shapes, and audio
in a nonlinear and non-destructive manner.
Character Animation
Copyright © 2005 by Paramount Pictures Corporation and Viacom
International Inc. All Rights Reserved. Nickelodeon, Barnyard and all related
titles, logos and characters are trademarks of Viacom International Inc.
Building and animating characters is fully supported with all the
regular animation tools, as well as special character tools such as
skeletons that use inverse kinematics, envelopes and weight maps, and
easy-to-create character rigs and rigging tools. As well, you can retarget
any type of animation, including mocap data, to any type of rig.
The Interface
XSI’s interface is laid out in a way that gives you both a large viewing
area as well as easy access to all the tools you need, all the time. You can
easily resize any panel or viewport in the XSI interface, as well as
customize its layout to exactly what you want.
Shaders and Texturing
Using a graphical node-based connection tool called the render tree,
you can create an unlimited range of materials by connecting any type
of shader to any object. You can also project 2D and 3D textures into
texture spaces, which can then be manipulated like a 3D object.
Rendering
Drawing upon the integration of mental ray® rendering technology,
XSI offers full-resolution, interactive rendering, caustics, global
illumination, and motion blur, not only for the final render, but also
within a render region that can be drawn in any XSI viewport. It
renders everything in XSI, letting you adjust your render parameters at
any stage of modeling, animating, or even during playback.
As well, you can embed unlimited render passes into a single scene and
for each pass, generate multiple rendered channels such as specular or
reflections. XSI’s render passes are extremely easy to create, customize,
and edit.
Painting and Compositing
XSI’s built-in compositor, called XSI Illusion, is based on Avid’s
Matador, Media Illusion, and Elastic Reality products. XSI Illusion is
designed to edit textures and image-based lighting in real time. You can
use it to rough out final shots, touch up your textures, morph, warp
and rig images, create custom mattes, and tweak the results of a
multi-pass render, all within XSI.
About this Guide
This guide provides an overview of the main features, tools, and
workflows of XSI, helping you get a head start in understanding and
using XSI:
• If you’re new to XSI, it gives you a foot in the proverbial XSI door.
You may be new to 3D, or just new to XSI but familiar with other
3D software packages. Either way, you can skim through this guide
and quickly see what’s possible in XSI, as well as discover what the
different tools and elements are called.
• If you’re an old hand at XSI, this guide may provide you with a
quick start for areas of XSI that you’ve never needed to use before.
For example, if modeling is your thing and now you have to do
some animation, this guide can help you get a sense of what’s
possible in animation and what tools you can use.
This guide has been updated for XSI version 6.01, but because it’s a
guide to the fundamental concepts and workflows of XSI, the
information it contains will apply to XSI well beyond this version.
If you’re eager to take XSI for a spin, there’s enough information in this
guide to get you started without needing to do more homework. Many
workflow overviews are included, as well as command names that tell
you where to find things.
Remember that all the detailed information and procedures are
covered in the XSI Guides available from the Help menu on the main
menu bar in XSI (or press the F1 key): we’ve just filtered out the main
goodies for you in this guide.
Now, go fire up XSI and have some fun!
The XSI Documentation Team
Section 1
Introducing XSI
New to XSI? Take a quick guided tour through the
interface and basic operations.
What you’ll find in this section ...
• The XSI Interface
• Getting Commands and Tools
• Setting Values for Properties
• Working with Views
• Working in 3D Views
• Exploring Your Scene
• Sample Content
The XSI Interface
Welcome to your new home—the XSI interface. The interface is
composed of several toolbars and panels surrounding the viewports
that display the elements in your scene. Each part of the interface is
designed to help you accomplish different aspects of your work. The
default XSI Standard layout includes the parts described below; take a
minute to become familiar with their names and locations. You can
toggle parts of the standard layout using View > Optional Panels.
Other layouts are available from the View > Layout menu.
• Title bar: Displays the version of XSI, your license type, and the
name of the open project and scene.
• Main menu bar: You can choose other ready-made layouts from the
View > Layouts menu, or create your own layout for a customized
workflow.
• Main command panel: Contains all the commands and tools you use
most frequently. Right-click a panel label to collapse and hide it.
Click the KP/L tab at the bottom to switch to the keying panel and
layer control, or click the MAT tab to switch to the material panel.
• Toolbar: Displays one of five toolbars used for modeling, animation,
rendering, simulation, and hair. Commands and tools that have a
similar purpose are grouped into panels, such as for selection,
transformations, snapping, constraints, and general object editing.
Press the 1, 2, 3, 4, and Ctrl+2 keys to switch between these toolbars.
You can also access the toolbars from the main menu bar.
• Icons: Switch between the toolbar, other panels, and layouts.
• Bottom controls: Include a command box, script editor icon, the
mouse/status line, the timeline, the playback panel, and the
animation panel.
XSI has many preferences for many tools, editors, and working
methods (choose File > Preferences). If you want to change something,
chances are there’s a preference for it!
Viewports let you view the contents of your scene in different ways.
There are also other tools and editors available from the View menu
that are displayed in floating windows that you can move around.
Getting Commands and Tools
The image below shows how to access different types of menus in XSI.
Each menu typically contains a mixture of commands and tools:
• Commands have an immediate effect on the scene, for example,
duplicating the selected object.
• Tools activate a mode that requires mouse interaction, for example,
selecting elements, translating an object, orbiting the camera, or
drawing polygons and curves. A tool stays active until you
deactivate it by pressing Esc or by activating a different tool.
There are several kinds of menus:
• A drop-down menu from the main menu bar. Choose a command
from the menu with the mouse, or press the key that corresponds to
an underlined letter in the menu to instantly activate the command.
• The menu from clicking the menu button in the Select panel.
• A menu from clicking a menu button on the toolbar. Clicking the
triangle in the corner of a button opens a menu. Middle-click the
menu button to repeat the last command selected.
• A tear-off menu in a floating window.
• A context menu for an element. In the explorer or schematic view,
right-click on an element to open its context menu. In a 3D view,
Alt+right-click (Ctrl+Alt+right-click on Linux) on an element to
open its context menu.
Switching between the Main Toolbar and Other Tools
The three buttons at the lower left switch between the main toolbar,
the weight paint panel, and the palette:
• The main toolbar is where you’ll do most of your work.
• The weight paint panel contains a specialized layout for editing
envelope weights. See The Weight Paint Panel on page 169.
• The palette contains some wire color and display type presets, as
well as a custom toolbar where you can store custom commands.
Switching Toolbars
The main toolbar on the left side of the interface
can display categories for modeling, animation,
rendering, simulation, and hair. You can switch
between these categories by clicking on the
toolbar’s title, or by pressing 1,
2, 3, 4, or Ctrl+2 (use the number keys at the top
of the keyboard, not on the numeric keypad).
If you prefer, you can also access the same
commands from the main menu bar:
Switching between the MCP, KP/L, and MAT Panels
The three tabs at the bottom of the panel on the
right side of the interface switch between the
MCP, KP/L, and MAT panels:
• MCP is the main command panel. It is
divided into sub-panels with controls for
selection, transformation, constraints,
snapping, and editing.
• KP/L contains the keying panel as well as controls for working with
animation and scene layers. See Keying Parameters in the Keying
Panel on page 141, Layering Animation on page 149, and Scene
Layers on page 49.
• MAT is the material panel. It provides similar controls to the
texture layer editor, but in a different arrangement. See Texture
Layers on page 266.
Collapsing MCP Panels
You can collapse panels in the MCP by right-clicking
on their main menu buttons. To expand
a collapsed panel, simply right-click on it again.
This is useful when working on small monitors,
like on laptops.
Tearing Off Menus
To tear off a menu, click on the dotted line at the top
of a menu or submenu and drag to any area in the
interface.
The menu is loaded into a floating window for the
current session.
Hotkeys: Sticky or Supra
Using hotkeys, tools can be activated in either of two
modes:
• Sticky: Press and release the key quickly. The
tool stays active until you press and release the
same key, activate a different tool, or press Esc.
• Supra: Press and hold the key to temporarily override the current
tool. The new tool stays active only while the key is held down.
When you release the key, the previous tool is reactivated.
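The sticky/supra behavior amounts to timing the key press. Here is a minimal sketch of the idea, assuming an invented `ToolManager` class and hold-time threshold (XSI’s real implementation is not documented here); toggling a sticky tool off with a second tap of the same key is omitted for brevity:

```python
class ToolManager:
    """Illustrative model of sticky vs. supra tool hotkeys."""

    STICKY_THRESHOLD = 0.3  # seconds; a quick press-and-release is sticky (invented value)

    def __init__(self, default_tool="select"):
        self.default_tool = default_tool
        self.active_tool = default_tool
        self._suspended_tool = None  # tool to restore after a supra press
        self._press_times = {}

    def key_down(self, key, tool, time):
        # Pressing a tool key activates the tool immediately; whether the
        # activation is sticky or supra is decided on release.
        self._press_times[key] = time
        if tool != self.active_tool:
            self._suspended_tool = self.active_tool
            self.active_tool = tool

    def key_up(self, key, tool, time):
        held = time - self._press_times.pop(key, time)
        if held < self.STICKY_THRESHOLD:
            # Sticky: quick tap, so the tool stays active until Esc
            # or until another tool is activated.
            self._suspended_tool = None
        elif self._suspended_tool is not None:
            # Supra: the key was held; restore the previous tool on release.
            self.active_tool = self._suspended_tool
            self._suspended_tool = None

    def escape(self):
        self.active_tool = self.default_tool
        self._suspended_tool = None
```

For example, tapping o activates the Orbit tool stickily, while pressing and holding z temporarily switches to Pan/Zoom and returns to Orbit on release.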
Repeating Commands and Tools
Press . (period) to repeat the last command, and press , (comma)
to reactivate the last tool (other than selection, navigation, or
transformation).
Setting Values for Properties
Property editors are where you’ll find an element’s properties. They are
a basic tool that you use constantly to define and modify elements in a
scene. Select an object or property and press Enter to open its property
editor, or click its icon in an explorer. In addition to property editors,
you can enter values in many of the text boxes in the main command
panel, such as the Transform panel, and use virtual sliders to change
values for marked parameters in the explorer.
A property editor includes the following parts:
• Name area: Shows the name of the element being edited. Displays
multi when multiple elements are selected for editing.
• Animation controls: Set keys on parameters in the property editor,
auto key, and move between keys for animated parameters.
• Navigation arrows: Move among the sequence of property editors
(up a level, previous, and next).
• Focus, recycle, and lock icons: Focus the property editor on
properties of the same type, recycle the property editor with the
currently selected element, and lock the property editor for the
currently selected element.
• Property sets: Can be expanded and collapsed. To get help on
parameters, click the corresponding ?.
• Property page tabs: Switch between sets of grouped parameters
within a property set.
• Animation icon: Shows if and how the parameter is animated.
Right-click it to access animation commands for a single parameter.
• Connection icon: Links the parameter value to a shader, weight map,
or texture map which modulates it.
You can enter and change values in several ways:
• Type a numerical value in a text box to change the parameter’s
values. You can sometimes enter values beyond the slider range.
• Drag a slider to change values.
• Drag the mouse in a circular motion over the text box to change
values (scrubbing). Scrub clockwise to increase and
counterclockwise to decrease.
• Increment values using [ and ]. Ctrl and Shift change the increment
size. For example, press Ctrl+] to increment by 10. You can also
press Ctrl or Shift with the arrow keys to change values by
increments.
• Enter relative values with the addition (+), subtraction (-),
multiplication (*), and division (/) symbols. For example, 2-
decreases the value by 2.
• With multiple elements, use l(min, max) for a linear range,
r(min, max) for random values, and g(mean, var) for a normal
distribution.
• Toggle a check box to turn options on or off.
• Click a color box to open the color editors, from which you can pick
or define the colors you want. Click the label below the box to
change the color space for the sliders. To change values for all three
color sliders simultaneously, press Ctrl while dragging. You can
copy colors by dragging and dropping one color box onto another.
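The relative and multi-element entry formats described above follow a simple grammar. As an illustration only (this is not XSI’s actual parser; `evaluate_entry` is an invented helper, and g(mean, var) is interpreted with the variance converted to a standard deviation), they could be evaluated like this:

```python
import random
import re

def evaluate_entry(entry, current, index=0, count=1, rng=None):
    """Evaluate a value-box entry against a current parameter value.

    Supports plain numbers, relative entries like "2-" (subtract 2),
    and multi-element forms l(min,max), r(min,max), g(mean,var).
    index/count identify the element within a multi-selection.
    """
    rng = rng or random.Random(0)
    entry = entry.strip()
    # Relative entries: a number followed by +, -, * or /.
    m = re.fullmatch(r"([-+]?\d+(?:\.\d+)?)([-+*/])", entry)
    if m:
        n, op = float(m.group(1)), m.group(2)
        return {"+": current + n, "-": current - n,
                "*": current * n, "/": current / n}[op]
    # Multi-element entries: l(min,max), r(min,max), g(mean,var).
    m = re.fullmatch(r"([lrg])\(([^,]+),([^)]+)\)", entry)
    if m:
        kind, a, b = m.group(1), float(m.group(2)), float(m.group(3))
        if kind == "l":  # linear range across the selection
            t = index / (count - 1) if count > 1 else 0.0
            return a + (b - a) * t
        if kind == "r":  # uniform random value in [min, max]
            return rng.uniform(a, b)
        return rng.gauss(a, b ** 0.5)  # g(mean, var): normal distribution
    return float(entry)  # plain absolute value
```

For example, `evaluate_entry("2-", 10.0)` yields 8.0, and `evaluate_entry("l(0,10)", 0.0, index=2, count=5)` yields 5.0, the midpoint of the linear range for the third of five selected elements.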
Virtual Sliders
Virtual sliders let you do the job of a slider without having to open up
a property editor. Select one or more objects, mark the desired
parameters, then press F4 and middle-drag in a 3D view. Use Ctrl,
Shift, and Ctrl+Shift to change increments, and Alt to extend beyond
the slider’s display range.
Color Editors
Instead of using the RGB color sliders, you can click on a color box to
open a color editor. The color editor includes a color spectrum and
slider bar, current color and color preview boxes, a button that changes
the color model, a color picker that selects a color from anywhere
within XSI’s window, and a button that opens the full color editor.
Working with Views
Views provide a window into the current scene, whether they display a
3D view of the geometric objects such as in the Camera view or a
hierarchical view of the data such as in the explorer. Views can be
displayed docked in a viewport, or floating in separate windows.
Views Docked in the Viewports
There are four viewports in the view manager at the center of the
default XSI layout. Each viewport is identified by a letter. When you
start XSI, viewport A (top left) shows the Top orthographic view,
viewport B (top right) shows the Camera perspective view, viewport C
(bottom left) shows the Front orthographic view, and viewport D
(bottom right) shows the Right orthographic view.
Switching Views in the Viewports
You can change the view displayed by a viewport using the menu on
the left of its title bar. Select a view to display in a viewport from the
Views menu. Middle-click to display the previous view.
The 3D views show the geometry of your scene and include:
• Any cameras that are present in your scene.
• Any spotlights that are present in your scene.
• The orthographic Top, Front, and Right views.
• The User view, which is not a real camera but an extra perspective
view that you can navigate in without modifying your main camera
setup.
• The Object view, which shows the selected object in isolation.
See Working in 3D Views on page 22.
The other views include alternative representations of your scene data
such as the explorer or the schematic views (see Exploring Your Scene
on page 31), as well as tools for specialized tasks.
Resizing Viewports
Viewports can be resized, maximized, or expanded vertically and
horizontally. Drag the horizontal and vertical splitter bars (or their
intersection) to resize the viewports. Middle-click the bars to reset
them.
Use the Resize icon at the right of a viewport’s toolbar to maximize,
expand, and restore:
• Left-click to maximize a viewport, or restore a maximized viewport.
Alternatively, press F12 while the pointer is over the viewport.
• Middle-click to expand or restore horizontally.
• Ctrl+middle-click to expand or restore vertically.
• Right-click on the Resize icon to open a menu.
Viewport Presets
Instead of switching views and resizing viewports
manually, you can use the buttons at the lower left to
display various preset combinations.
Muting and Soloing Viewports
The letter identifier in the upper-left corner of the title bar allows you
to mute and solo viewports:
• Middle-click the letter to mute the viewport. A muted
viewport does not update until you un-mute it. You can do
this to increase playback performance in the other
viewports. The letter of a muted viewport is displayed in orange.
Middle-click the letter again to un-mute the viewport.
• Click the letter to solo the viewport. Soloing a viewport
mutes all the others. The letter of a soloed viewport is
displayed in green. Middle-click the letter again to un-solo
the viewport.
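The solo-mutes-the-others rule can be modeled in a few lines. This is a toy illustration (the `ViewportStates` class is invented, not XSI code, and un-soloing is simplified to a repeated toggle):

```python
class ViewportStates:
    """Toy model of viewport muting and soloing."""

    def __init__(self, ids=("A", "B", "C", "D")):
        self.muted = {vp: False for vp in ids}
        self.soloed = None

    def toggle_mute(self, vp):
        # Muting stops a viewport from updating until it is un-muted.
        self.muted[vp] = not self.muted[vp]

    def toggle_solo(self, vp):
        # Soloing one viewport mutes all the others.
        if self.soloed == vp:
            self.soloed = None
            for other in self.muted:
                self.muted[other] = False
        else:
            self.soloed = vp
            for other in self.muted:
                self.muted[other] = other != vp

    def updates(self, vp):
        """A viewport redraws only while it is not muted."""
        return not self.muted[vp]
```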
Floating Views
You can open views as floating windows using the first group of
submenus on the Views menu. Some floating views also have shortcut
keys. Depending on the type of view, you can have multiple windows of
the same type open at the same time.
You can adjust floating windows in the usual ways:
• To move a window, drag its title bar.
• To resize a window, drag its borders.
• To bring a window to the front and display it on top of other
windows, click in it.
• To close a window, click x in the top right corner.
• To minimize a window, click _ in the top right corner.
You can cycle through all open windows, whether minimized or not,
using Ctrl+Tab. Use Shift+Ctrl+Tab to cycle backwards.
You can collapse a floating view by double-clicking on its title bar.
When collapsed, only the title bar is visible and you can still move it
around by dragging. To expand a collapsed view, double-click on the
title bar again; the view is restored at its current location.
A Word about the Active Window
The active window is always the one directly under the mouse
pointer—it’s the one that has “focus” and accepts keyboard and mouse
input even if it is not on top.
For example, you can move the mouse pointer over the script editor
window, type commands, then move the pointer over the camera
viewport and press g to toggle the grid display. If you pressed g while the
pointer was still over the script editor, you would have typed the letter g
into the editing pane.
Be careful that you don’t accidentally send commands to the wrong
window.
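This focus-follows-mouse behavior can be sketched as a simple hit test over windows kept in front-to-back order (a hypothetical helper, not XSI internals):

```python
def window_under_pointer(windows, x, y):
    """Return the name of the topmost window containing the pointer.

    windows is a list of (name, left, top, width, height) tuples in
    front-to-back order. Under focus-follows-mouse, the window found
    here receives keyboard input even when it is not the frontmost
    window overall.
    """
    for name, left, top, w, h in windows:
        if left <= x < left + w and top <= y < top + h:
            return name
    return None  # pointer is over no window
```

With a script editor floating over a viewport, keystrokes go to whichever one is under the pointer, not to the last window clicked.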
Working in 3D Views
3D views are where you view, edit, and manipulate the geometric
elements of your scene. Each 3D view includes the following controls
and indicators:
• Eye icon menu: Includes commands for specifying whether or not
scene elements and their components are visible in the viewports.
• Camera icon menu: Includes commands for navigating and framing
elements in the scene.
• Memo cams: Memorize up to four camera settings for quick access
later.
• XYZ buttons: Switch the viewport view to top/bottom/front/back/
left/right.
• Display Type menu: Specifies how visible items in the viewports are
displayed.
• Axes: Show the global X, Y, and Z directions.
• Floor grid: Indicates the scene origin and the relative size of objects.
Types of 3D Views
There are many ways to view your scene in the 3D views. These viewing
modes are available from the Views menu in viewports and from the
View menu in the object view.
Except for camera views, all of the viewing modes are “viewpoints”.
Like camera views, viewpoints show you the geometry of objects in a
scene. They can be previewed in the render region, but they cannot be
rendered to file like camera views.
Camera Views
Camera views let you display your scene in a 3D view from the point of
view of a particular camera. The Render Pass view is also a camera
view: it shows the viewpoint of the particular camera associated to the
current render pass. Only a camera associated to a render pass is used
in a final render.
Spotlight Views
Spotlight views let you select from a list of spotlights available in the
scene. Selecting a spotlight from this list switches the point of view in
the active 3D view relative to the chosen spotlight. The point of view is
set according to the direction of the light cone defined for the chosen
spotlight.
Top, Front, and Right Views
The Top, Front, and Right views are orthographic, which orients the
camera so it is perpendicular (orthogonal) to specific planes:
• The Top view faces the XZ plane.
• The Front view faces the XY plane.
• The Right view faces the YZ plane.
You cannot orbit the camera in an orthographic view.
The Top, Front, and Right views are parallel projection views, called
such because the object’s projection lines do not converge in these
views. Because of this, the distance between an object and the camera
has no influence on the scale of the object. If one object is close to the
camera, and an identical object is farther away, both appear to be the
same size.
User View (Viewports Only)
The User view is a viewpoint that shows objects in a scene from a
virtual camera’s point of view, but is not actually linked to a scene
camera or spotlight.
The User point of view can be placed at any position and at any angle.
You can orbit, dolly, zoom, and pan in this view. It’s useful for
navigating the scene without changing the render camera’s position
and zoom settings.
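The difference between parallel projection and perspective views can be shown numerically. Here is a minimal sketch, assuming a pinhole-style model with an invented `apparent_size` helper:

```python
def apparent_size(size, distance, focal_length=1.0, orthographic=False):
    """Projected size of an object of a given world-space size.

    In an orthographic (parallel projection) view, distance from the
    camera has no effect on the projected size; in a perspective view,
    apparent size falls off as 1/distance.
    """
    if orthographic:
        return size
    return size * focal_length / distance
```

Two identical objects at distances 2 and 10 project to the same size orthographically, but to sizes in a 5:1 ratio in perspective.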
The Object View
The object view is a 3D view that displays only the selected scene
elements. It has standard display and show menus, and works the same
way as any 3D view in most respects. Selection, navigation, framing, and
so on work as they do in any viewport. There are also some custom
viewing options, available from the object view’s View menu, that make
it easier to work with local 3D selections.
To open the object view, do one of the following:
• From any viewport’s views menu, choose Object View.
or
• From the main menu, choose View > General > Object View.
The object view includes the following controls:
• View menu: Like the viewports’ Views menu, but includes special
viewing controls for the object view.
• Show menu (equivalent to the eye icon menu): Includes commands
for specifying whether scene elements and their components are
visible in the viewports.
• Lock/Update buttons: Lock the object view on the current selection
and update a locked view to the current selection, respectively.
Navigating in 3D Views
In 3D views, a set of navigation controls and shortcut keys lets you
control the viewpoint. You can use these controls and keys to zoom in
and out, frame objects, as well as orbit, track, and dolly among other
things.
Activating Navigation Tools
Most navigation tools have a corresponding shortcut key so you can
quickly activate them from the keyboard. However, some tools are only
available from a viewport’s camera icon menu. In either case, activating
a navigation tool makes it the current tool for all 3D views, including
object views which do not have an equivalent to the camera icon menu.
After you activate a tool, check the mouse bar at the bottom of the XSI
interface to see which mouse button does what.
The navigation tools and commands are:
• Navigation (s): Combines the most common navigation tools: pan
(track) with the left mouse button, dolly with the middle mouse
button, and orbit with the right mouse button. In your Tools >
Camera preferences, you can change the order of the mouse buttons
as well as remap this tool to the Alt key.
• Pan/Zoom (z): Moves the camera laterally, or changes the field of
view: pan (track) with the left mouse button, zoom in with the
middle mouse button, and zoom out with the right mouse button.
In your Tools > Camera preferences, you can activate Zoom On
Cursor to center the zoom wherever the mouse pointer is located.
• Rectangular Zoom (Shift+z): Zooms onto a specific area. Draw a
diagonal with the left mouse button to fit the corresponding
rectangle in the view, or draw a diagonal with the right mouse
button to fit the current view in the corresponding rectangle. In
perspective (non-orthographic) views, rectangular zoom activates
pixel zoom mode, which offsets and enlarges the view without
changing the camera’s pose or field of view.
• Orbit (o): Rotates a camera, spotlight, or user viewpoint around its
point of interest. This is sometimes called tumbling or arc rotation.
Use the left mouse button to orbit freely, the middle mouse button
to orbit horizontally, and the right mouse button to orbit vertically.
In your Tools > Camera preferences, you can set Orbit Around
Selection.
• Dolly (p): Moves the camera forward and back. Use the different
mouse buttons to dolly at different speeds. In orthographic views,
dollying is equivalent to zooming.
• Roll (l): Rotates a perspective view along its Z axis. Use the different
mouse buttons to roll at different speeds.
• Frame (f): Frames the selected elements in the view under the
mouse pointer.
• Frame (All Views) (Shift+f): Frames the selected elements in all
open views.
• Frame All (a): Frames the entire scene in the view under the mouse
pointer.
• Frame All (All Views) (Shift+a): Frames the entire scene in all open
views.
• Center (Alt+c): Centers the selected elements in the view under the
mouse pointer. Centering is similar to framing, but without any
zooming or dollying. The camera is tracked horizontally and
vertically so that the selected elements are at the center of the
viewport.
• Center (All Views) (Shift+Alt+c): Centers the selected elements in
all open views.
• Reset (r): Resets the view under the mouse pointer to its default
viewpoint.
In addition to the above, there are other tools available on the camera
icon menu, such as pivot, walk, fly, and so on.
Undoing Camera Navigation
As you navigate in a 3D view, you may want to undo one or more
camera moves. Luckily, there is a separate camera undo stack that lets
you undo navigation in 3D views.
To undo a camera move, press Alt+z. To redo an undone camera move,
press Alt+y.
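A separate camera undo stack behaves like any two-stack undo/redo history. Here is an illustrative sketch (the `CameraUndoStack` class is invented; XSI’s internals may differ):

```python
class CameraUndoStack:
    """Separate undo/redo history for camera navigation (Alt+z / Alt+y)."""

    def __init__(self, initial_pose):
        self.pose = initial_pose
        self._undo = []
        self._redo = []

    def move(self, new_pose):
        # Each navigation move pushes the old pose; a new move
        # invalidates any previously undone moves.
        self._undo.append(self.pose)
        self._redo.clear()
        self.pose = new_pose

    def undo(self):  # Alt+z
        if self._undo:
            self._redo.append(self.pose)
            self.pose = self._undo.pop()

    def redo(self):  # Alt+y
        if self._redo:
            self._undo.append(self.pose)
            self.pose = self._redo.pop()
```

Because this stack is separate from the scene’s edit history, undoing camera moves never undoes modeling or animation changes.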
Display Types
You can display scene objects in different ways by choosing various
display modes from a 3D view’s Display Type menu.
Display Type menu
26 • SOFTIMAGE|XSI
Wireframe
Shows a geometric object as its edges, drawn as lines resembling a model made of wire. All edges and contour lines are displayed, without hiding occluded parts or filling surfaces. This is the default display type in the viewports.
Depth Cue
Applies a fade to visible objects, based on their distance from the camera, in order to convey depth. You can set the depth cue range to the scene, the selection, or a custom start and end point. Objects within the range fade as they near the edge of the range, while objects completely outside the range are made invisible.
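The depth-cue fade rule can be sketched as a small function. This is an illustrative model only: the text does not specify the falloff curve, so a linear fade toward the far edge of the range is assumed, with objects nearer than the start left fully visible.

```python
def depth_cue_opacity(distance, start, end):
    """Illustrative opacity for depth-cue display.

    Assumptions (not specified in the manual): the falloff is linear,
    and objects nearer than `start` remain fully visible. Objects beyond
    `end` are invisible; objects in between fade toward the far edge.
    """
    if distance <= start:
        return 1.0
    if distance >= end:
        return 0.0
    return (end - distance) / (end - start)
```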
Bounding Box
Reduces all scene objects to simple cubes. This speeds up redrawing of the scene because fewer details are calculated in each screen refresh.
Hidden Line Removal
Shows only the edges of objects that face the camera. Lines that would normally be hidden from view by the surface in front of them are not displayed, unlike in a "see-through" wireframe.
Working in 3D Views
Constant
Ignores the orientation of surface normals and instead treats them as pointing directly toward an infinite light source. This results in an object that appears to have no shading.
Textured Decal
This is like the Textured viewing mode, but textures are displayed with constant lighting. The net effect is a general "brightening" of your textures and an absence of shadow, which lets you see a texture on any part of an object regardless of how well that part is lit.
This mode is useful when you want to work on textures because no shading attributes interfere with the texture's definition. It is also useful for previewing rotoscoped images.
Shaded
Provides an OpenGL hardware-shaded view of your scene that closely approximates its realistic "look" but does not show shadows, reflections, or transparency. Wireframes of geometric objects are superimposed over their shaded surfaces, showing most display options such as lines, points, tags, and centers. This makes it easy to manipulate points, lines, tagged points, and so on. You can also view light (point and spot) and camera icons.
Textured
Displays textures, lighting, and basic surface effects like transparency. When objects are selected, their wireframes are superimposed on their textured surfaces, showing you most components (lines, points, tags, centers, and so on). This makes it easy to manipulate points, lines, tagged points, and so on.
Realtime Shaders
Displays all realtime shader attributes for objects that have been textured using realtime shaders. In the example shown here, a different texture is used to control the object's OpenGL realtime rendering, so the result is different from what it would be in the Textured or Textured Decal viewing modes.
Memo Cams
Memo cams let you save and recall up to four camera settings for quick access later.
• Middle-click a memo cam box to store the current view settings. If the memo cam already has stored view settings, the new settings are not saved; Ctrl+middle-click to overwrite the stored settings.
• Left-click a memo cam box to switch to its stored view settings.
• Right-click a memo cam box to clear its settings.
Rotoscopy
Rotoscopy is the use of images in the background of the 3D views. You can use rotoscopy in different 3D views (Front, Top, Right, User, Camera, and so on) and any display type (Wireframe, Shaded, and so on). Furthermore, you can use a different image for each view.
• Single images are useful as guides for modeling in the orthographic views.
XYZ Viewpoint Buttons
The X, Y, and Z buttons are displayed in the viewports’ and object
view’s menu bars.
• Click these buttons to switch to the right, top, or front viewpoints.
• Middle-click these buttons to switch to the left, bottom, or back
viewpoints.
• Click again to return to your original viewpoint.
• Image sequences or clips are useful for matching animation with footage of live action in the perspective views.
To load an image in a view, choose Rotoscope from the Display Type menu and select an image and other options.
There are two types of rotoscoped images:
• By default, rotoscoped images in perspective views have Image Placement set to Attached to Camera. This means that they follow the camera as it moves and zooms so that you can match animation with live-action plates.
• On the other hand, rotoscoped images that are displayed in the
orthographic views (Front, Top, and Right) have the Image
Placement option set to Fixed by default. This allows you to
navigate the camera while modeling without losing the alignment
between the image and the modeled geometry.
Fixed images are sometimes called image planes, and they can be
displayed in all views, not just the one for which they were defined.
Navigating with Images Attached to the Camera
Normally, when a rotoscoped image or sequence is attached to the camera, it is fully displayed in the background no matter how the camera is zoomed, panned, or framed. However, you can activate Pixel Zoom mode if you need to maintain the alignment between objects in the scene and the background, for example when you want to temporarily zoom into a portion of the scene.
In Pixel Zoom mode, you can:
• Zoom (z + middle or right mouse button, s + middle mouse
button)
• Pan (z + left mouse button, s + left mouse button)
• Frame (f for selection, a for all)
The original view is restored when you exit Pixel Zoom mode. Be
careful not to orbit, dolly, roll, pivot, or track because these actions
change the camera’s transformations and will not be undone when you
deactivate Pixel Zoom.
Setting Viewing Options and Preferences
There are several places you can go to set options and preferences related to viewing.
Colors
You can modify scene, element, and component colors (such as the viewport background) by choosing Scene Colors from any viewport's camera icon menu. For instance, by default a selected object is displayed in white and an unselected object is displayed in black; points are displayed in blue, knots are displayed in pink, and so on.
Object Display
You can control how individual objects are displayed in a 3D view. Giving an object or objects different display characteristics is particularly useful for heavily animated scenes. For example, if you want to tweak a static object within a scene that has a complex animated character, you could set the character to wireframe display mode while adjusting the lighting of your static object in shaded mode.
You can open an object's Display property editor from the explorer by clicking the Display icon in the object's hierarchy.
Camera and 3D Views Display
You can set display options to control how cameras and views display
scene objects. These camera display options can be set for individual
3D views, or for all 3D views at once.
• To open an individual 3D view’s Camera Display property editor,
choose Display Options from any viewport or object view’s Display
Type menu.
• To open the Camera Display property editor for all 3D views,
choose Display > Display Options (all cameras) from the main
menu.
Object Visibility
Each object in the scene has its own set of visibility controls that allow
you to control how objects appear in the scene, or whether they appear
at all, as well as how shadows, reflections, transparency, final gathering,
and other attributes are rendered.
For example, you may wish to temporarily exclude objects from a
render but retain them in the scene. This can come in handy when you
are working with complex objects and want to reduce lengthy refresh
times.
You can open an object’s Visibility property editor from the explorer by
clicking the Visibility icon in the object’s hierarchy.
Exploring Your Scene
Three of the most important views for exploring your scene are the explorer, schematic, and spreadsheet.
The Explorer
The explorer displays the contents of your scene in
a hierarchical structure called a tree. This tree can
show objects as well as their properties as a list of
nodes that expand from the top root. You
normally use the explorer as an adjunct while
working in XSI, for example, to find or select
elements.
Keeping Track of Selected Elements
If you have selected objects, their nodes are
highlighted in the explorer. If their nodes are not
visible, choose View > Find Next Selected Node.
The explorer scrolls up or down to display the first
object node in the order of its selection. Each time
you choose this option, the explorer scrolls up or
down to display the next selected node. After the
last selected item, the explorer goes back to the
first.
The explorer's interface (as annotated in the illustration):
• Scope button: sets the scope of elements to view.
• View menu: viewing and sorting options.
• Filters menu: filters for displaying element types.
• Lock and Update buttons.
• Search field: search by name, type, or keyword.
• Expand and collapse branches of the tree; press Shift to expand or collapse all from that point.
• Click a node's icon to open its property editor.
• Click a node's name to select it; use Shift to select ranges and Ctrl to toggle-select. Click twice to rename. Right-click for a context menu.
Choose View > Track Selection if you want to automatically scroll the
explorer so that the node of the first selected object is always visible.
Setting the Scope of the Explorer
The Scope button determines the range of elements to display. You can display the entire scene, specific parts of it, and so on.
The Selection option in the explorer’s scope menu isolates the selected
object. If you click the Lock button with the Selection option active,
the explorer continues to display the property nodes of the currently
selected objects, even if you go on to select other objects in other views.
When Lock is on, you can also select another object and click Update
to lock on to it and update the display.
The current scope is indicated by the button label, and it is also bulleted in the list. The bold item in the menu indicates the last selected scope; middle-click the Scope button to quickly switch to it.
Filtering the Display
Filters control which types of nodes are displayed in the explorer. For example, you can choose to display objects only, or objects and properties but not clusters or parameters, and so on. By displaying exactly the types of elements you want to work with, you can find things more quickly without scrolling through a forest of nodes.
The basic filters are available on the Filters menu (between the View menu and the Lock button). The label on the menu button shows the current filter. The filters that are available on the menu depend on the scope. For example, when the scope is Scene Root, the Filters menu offers several preset combinations of filters, followed by specific filters that you can toggle on or off individually.
Other Explorer Views
You can view other, smaller versions of the explorer (pop-up explorers) elsewhere in the interface. They are used to view the properties of selected scene elements. Click outside a pop-up explorer to close it.
Select Panel Explorer
Explorer filter buttons in the Select panel offer a shortcut by instantly displaying filtered information on specific aspects of the currently selected objects. For example, click the Selection filter button to display a pop-up explorer showing all property nodes associated with the selected object.
The Explore button opens a pop-up menu of additional filters for specifying the type of information you wish to obtain on the scene.
Object Explorers
You can quickly display a pop-up explorer for a single object: just select the object and press Shift+F3. If the object has no synoptic property or annotation, you can simply press F3. Click outside the pop-up explorer or press those keys again to close it.
The Schematic View
The schematic view presents the scene in a hierarchical structure so
that you can analyze the way a scene is constructed. It includes
graphical links that show the relationships between objects, as well as
material and texture nodes to indicate how each object is defined.
• Press the spacebar to click and select nodes. Use the left mouse
button for node selection, the middle mouse button for branch
selection, and the right mouse button for tree selection.
• Press m to click and drag nodes to new locations. The schematic
remembers the location of nodes, so you can arrange them as you
please.
• Press s or z to pan and zoom.
Relationships between elements are displayed as lines called links. You
can display or hide links for different types of relationship using the
Show menu.
You can also click a parent-child link to select the child. This is useful if
you have located the parent but can’t find the child in a jumbled
hierarchy. Again, use the left, middle, or right mouse buttons to select
the child in node, branch, or tree modes.
When other types of link are displayed, you can click and drag across
the link to select the corresponding operator, such as a constraint or
expression. When a link is selected, you can press Enter to open the
property editor related to the associated relationship (if applicable), or
press Delete to remove the operator.
The schematic view's interface (as annotated in the illustration):
• Use the scope menu to select what to display: the entire scene, the current layer, or the selected objects.
• Use the command bar to access display, selection, and navigation commands.
• Set filters that specify which elements to display in the schematic view.
• Use the memo cams to save and recall views.
• Use the display area to view and edit scene elements and their properties.
• Lock and refresh the view (when displaying the selection scope).
• Set various viewing options.
• To select a node, click its label. To open a node's property editor, click its icon or double-click its label. To open a context menu for a node, Alt+right-click (Ctrl+Alt+right-click on Linux) on it.
• Alt+right-click (Ctrl+Alt+right-click on Linux) in an empty area to quickly access a number of viewing and navigation commands.
The Spreadsheet
The spreadsheet view displays scene
information about elements and their
parameters in a grid. This information
is filtered and organized by queries that
you run to show specific aspects of your
scene in combination with sorting
operations you can perform based on
object data. You can then perform
operations on many elements or
parameters at once.
Each row represents a scene element. Click a row heading to select all of an element's properties; right-click the row heading to select objects in your scene. Each column represents a parameter. Click a column heading to select that parameter on all of the displayed objects; right-click a heading to quickly sort elements and mark parameters for animation.
You can execute a query by using one of
the predefined queries found in the
spreadsheet’s Query menu, or you can
choose Query > Open to load a custom
query file.
Once you have executed a query and
the spreadsheet displays the data you
have requested, you can further
organize the information by sorting the
table. Right-click any column heading
to sort the table entries based on the
column’s entries.
The spreadsheet display is not updated as the scene or the current
frame is modified in XSI. To update the spreadsheet to reflect current
scene information, click Execute. To update only currently listed
elements, click the Update button.
Generally, editing one cell changes all highlighted cells of the same type
when you press Enter. That is, if you change a numeric cell, all
highlighted numeric cells are also changed; any non-numeric
highlighted cells remain unchanged. Cells with dark gray contents
cannot be edited.
The intersection between a row and a column is called a cell; each cell holds one value. You can select many cells at once and modify them all simultaneously.
How spreadsheet cells are edited depends on the mouse button you use:
• Left-click edits a highlighted cell.
• Middle-click or right-click edits a cell without changing the value
of other highlighted cells.
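The multi-cell editing rule described above can be modeled as a small function. This is an illustrative sketch, not XSI code: committing a value in one highlighted cell applies it to every highlighted cell of the same type (numeric or non-numeric), leaving the others unchanged.

```python
from numbers import Number

def edit_highlighted(cells, highlighted, edited_key, new_value):
    """Illustrative model of the spreadsheet editing rule.

    `cells` maps a cell key to its value; `highlighted` is the set of
    highlighted cell keys. The new value is applied to every highlighted
    cell whose numeric/non-numeric type matches the edited cell.
    """
    numeric = isinstance(cells[edited_key], Number)
    for key in highlighted:
        if isinstance(cells[key], Number) == numeric:
            cells[key] = new_value  # same type: take the new value
        # cells of the other type are left unchanged
    return cells
```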
Sample Content
XSI ships with a sample database containing scenes, models, presets,
scripts, and other goodies. There is an HTML interface for this
database that you can access by opening a Net View (Alt+5). You can
drag and drop content from Net View directly into your scene.
Another place to find sample content is at XSI Net:
http://www.softimage.com/xsinet/
The Net View command bar (as annotated in the illustration):
• Browse back and forth among previously displayed pages.
• Stop loading a page, running a script, or playing an animated image.
• Refresh the current page.
• Display the home page as defined in Start > Settings > Control Panel > Internet Options or in Internet Explorer options.
• Display the XSI Web start page as defined in your XSI preferences.
• Display your list of favorites as defined in Internet Explorer.
• Browse to open a page or display preferences.
• Hide or display the command bar.
• Enter the address of a page in the address box.
• The body pane displays HTML files. Click a link to open a new page, download a file, or run a script. Right-click to display the Internet Explorer context menu.
Section 2
Elements of a Scene
This section provides a guide to the objects,
properties, and components you will find in XSI
scenes, and describes some of the workflows for
working with them.
What you’ll find in this section ...
• What’s Inside a Scene
• Selecting Elements
• Objects
• Properties
• Components and Clusters
• Parameter Maps
What’s Inside a Scene
Scenes contain objects. In turn, objects can have components and
properties.
Objects
Objects are elements that you can put in your scene. They have a
position in space, and can be transformed by translating, rotating, and
scaling. Examples of objects include lights, cameras, bones, nulls, and
geometric objects. Geometric objects are those with points, such as
polygon meshes, surfaces, curves, particles, hair, and lattices.
Properties
Properties control how an object looks and behaves: its color, position, selectability, and so on. Each property contains one or more parameters that can be set to different values.
Properties can be applied to elements directly, or they can be applied at a higher level and passed down (propagated) to the child elements in a hierarchy.
Components
Components are the subelements that define the shape of geometric objects: points, edges, polygons, and so on. You can deform a geometric object by moving its components. Components can be grouped into clusters for ease of selection and other purposes.
Points on different geometry types: polygon mesh, curve, surface, and lattice.
Element Names
All elements have a name. For example, if you choose Get >
Primitive > Polygon Mesh > Sphere, the new sphere is called sphere by
default, but you can rename it if you want. In fact, it’s a good idea to get
into the habit of giving descriptive names to elements to keep your
scenes understandable. You can see the names in the explorer and
schematic views, and you can even display them in the 3D views.
You can typically name an element when you create it. You can rename
an object at any time by choosing Rename from a context menu or
pressing F2 in the explorer.
XSI restricts the valid characters in element names to a–z, A–Z, 0–9, and
the underscore (_) to keep them variable-safe for scripting. You can also
use a hyphen (-) but it is not recommended. Invalid characters are
automatically converted to underscores. In addition, element names
cannot start with a digit; XSI automatically adds an underscore at the
beginning. If necessary, XSI adds a number to the end of names to keep
them unique within their namespace.
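These naming rules can be sketched as a sanitizing function. This is an illustrative model of the rules as described; the exact replacement and numbering scheme XSI uses internally may differ.

```python
import re

def sanitize_name(name, existing=()):
    """Illustrative sketch of XSI's element-naming rules as described above.

    - Characters outside a-z, A-Z, 0-9, underscore, and hyphen (allowed
      but not recommended) are converted to underscores.
    - A leading digit gets an underscore prepended.
    - A number is appended if needed to keep the name unique in `existing`.
    """
    name = re.sub(r"[^A-Za-z0-9_-]", "_", name)
    if name and name[0].isdigit():
        name = "_" + name
    candidate, n = name, 0
    while candidate in existing:
        n += 1
        candidate = f"{name}{n}"
    return candidate
```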
Selecting Elements
Selecting is fundamental to any software program. In XSI, you select
objects, components and other elements to modify and manipulate
them.
In XSI, you can select any object, component, property, group, cluster,
operator, pass, partition, source, clip, and so on; in short, just about anything that can appear in the explorer. The only things that you can't select are individual parameters; parameters are marked for animation rather than selected.
Select Panel
The Select menu gives access to a variety of selection tools and commands. The Select icon activates the most recently used selection tool and filter.
Overview of Selection
To select an object in a 3D or schematic view, press the space bar and
click on it. Use the left mouse button for single objects (nodes), the
middle mouse button for branches, and the right mouse button for trees
and chains.
To select components, first select one or more geometric objects, then
press a hotkey for a component selection mode (such as t for rectangle
point selection), and click on the components. Use the middle mouse
button for clusters.
For elements with no predefined hotkey, you can manually activate a
selection tool and a selection filter.
In all cases:
• Shift+click adds to the selection.
• Ctrl+click toggle-selects.
• Ctrl+Shift+click deselects.
• Alt lets you select loops and ranges. You can use Alt in combination with Shift, Ctrl, and Ctrl+Shift.
The Select panel also provides (as annotated in the illustration):
• Group/Cluster button: selects groups and clusters.
• Filter buttons: select objects or their components, such as points, curves, and so on.
• Object Selection and Sub-object Selection text boxes: enter the names of the objects and components you want to select. You can use * and other wildcards to select multiple objects and properties.
• Explore menu or explorer filter buttons: display the current scene hierarchy, the current selection, or the clusters or properties of the current selection. These buttons are particularly useful because they display pre-filtered information but don't take up a viewport.
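The modifier-key behavior above can be modeled as simple set operations. This is an illustrative sketch, not XSI code; `clicked` stands for whatever element is under the pointer.

```python
def modify_selection(selection, clicked, shift=False, ctrl=False):
    """Illustrative model of XSI's selection modifiers.

    Plain click replaces the selection; Shift+click adds;
    Ctrl+click toggle-selects; Ctrl+Shift+click deselects.
    """
    selection = set(selection)
    if ctrl and shift:
        selection.discard(clicked)   # Ctrl+Shift: deselect
    elif ctrl:
        selection ^= {clicked}       # Ctrl: toggle-select
    elif shift:
        selection.add(clicked)       # Shift: add to selection
    else:
        selection = {clicked}        # plain click: replace selection
    return selection
```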
Hierarchy navigation buttons in the Select panel let you select an object's sibling or parent.
Selection Hotkeys
• space bar: Select objects with the Rectangle selection tool, in either supra or sticky mode.
• e: Select edges with the Rectangle selection tool, in either supra or sticky mode.
• t: Select points with the Rectangle selection tool, in either supra or sticky mode.
• y: Select polygons with the Rectangle selection tool, in either supra or sticky mode.
• u: Select polygons with the Raycast selection tool, in either supra or sticky mode.
• i: Select edges with the Raycast selection tool, in either supra or sticky mode.
• ' (apostrophe): Select hair tips with the Rectangle selection tool, in either supra or sticky mode.
• F7: Activate the Rectangle selection tool using the current filter.
• F8: Activate the Lasso selection tool using the current filter.
• F9: Activate the Freeform selection tool using the current filter.
• F10: Activate the Raycast selection tool using the current filter.
• Shift+F10: Activate the Rectangle-Raycast selection tool using the current filter.
• Ctrl+F7: Activate the Object filter with the current selection tool.
• Ctrl+F8: Activate the Point filter with the current selection tool.
• Ctrl+F9: Activate the Edge filter with the current selection tool.
• Ctrl+F10: Activate the Polygon filter with the current selection tool.
• Alt+space bar: Activate the last-used selection filter and tool.
Selection Tools
To select something in the 3D views, a selection tool must be active. XSI offers a choice of several selection tools, each with a different mouse interaction: Rectangle, Lasso, Raycast, and others. The choice of selection tool is partly a matter of personal preference, and partly a matter of what is easiest or best to use in a particular situation. They are all available from the Select > Tools menu or via hotkeys.
Rectangle Selection Tool
Rectangle selection is sometimes called marquee selection. You select elements by dragging diagonally to define a rectangle that encompasses the desired elements.
Raycast Selection Tool
The Raycast tool casts rays from under the mouse pointer into the scene; elements that are hit by these rays as you click or drag the mouse are affected. Raycast never selects elements that are occluded by other elements.
Lasso Selection Tool
The Lasso tool lets you select one or more elements by drawing a free-form shape around them. This is especially useful for selecting irregularly shaped sets of components.
Freeform Selection Tool
The Freeform tool lets you select elements by drawing a line across them. This is particularly useful for selecting a series of edges when modeling with polygon meshes, or for selecting a series of curves for lofting or creating hair from curves, as well as in many other situations.
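The rectangle (marquee) test at the heart of rectangle selection boils down to a point-in-rectangle check in screen space. The following is an illustrative sketch, not XSI's actual picking code; the `(name, x, y)` tuple representation is invented for the example.

```python
def rectangle_select(points, corner1, corner2):
    """Illustrative marquee selection: dragging a diagonal between two
    corners defines a rectangle, and every point inside it is selected.

    `points` is a list of hypothetical (name, x, y) tuples in screen space.
    """
    (x1, y1), (x2, y2) = corner1, corner2
    # Normalize so the drag direction doesn't matter.
    xmin, xmax = min(x1, x2), max(x1, x2)
    ymin, ymax = min(y1, y2), max(y1, y2)
    return {name for name, x, y in points
            if xmin <= x <= xmax and ymin <= y <= ymax}
```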
Rectangle-Raycast Tool
The Rectangle-Raycast selection tool behaves like a mixture of the Rectangle and Raycast tools. You select by dragging a rectangle to enclose the desired elements, just like the Rectangle tool. Elements that are occluded behind others in the Hidden Line Removal, Constant, Shaded, Textured, and Textured Decal display modes are ignored, just like with the Raycast tool.
Paint Selection Tool
The Paint selection tool lets you use a brush to select components. It is limited to selecting points (on polygon meshes and NURBS), edges, and polygons. The brush's radius controls the size of the area selected by each stroke; you can adjust it interactively by pressing r and dragging to the left or right. Note that the Paint selection tool uses the left mouse button to select and the right mouse button to deselect. Press Ctrl to toggle-select.
Selection and Hierarchies
You can select objects in hierarchies in several ways: node, branch, and tree.
Node Selection
Left-click to node-select an object. Node selection is the simplest way in which an object can be selected. When you node-select an object, only that object is selected. If you apply a property to a node-selected object, the property is not inherited by its descendants.
Effect of node-selecting an object.
Selection Filters
Selection filters determine what you can select in the 3D and schematic
views. They allow you to restrict the selection to a specific type of object,
component, or property. Pressing Shift while activating a new filter
keeps the current selection, allowing you to select a mixture of
component types.
Selection filter buttons (as annotated in the illustration) select objects or their components in the 3D views. The component buttons are contextual: they change depending on what type of object is currently selected. Click the triangle for additional filters, and click the bottom button to re-activate the last filter.
Branch Selection
Middle-click to branch-select an object. When you branch-select an object, its descendants "inherit" the selection status and are highlighted in light gray. You would branch-select an object when you want to apply a property that gets inherited by all the object's descendants.
Effect of branch-selecting an object.
Tree Selection
Right-click to tree-select an object. This selects the object's topmost ancestor in branch mode. For kinematic chains, right-clicking selects the entire chain.
Effect of tree-selecting an object.
Selecting Ranges and Loops of Components
Use the Alt key to select ranges or loops of components. XSI tries to find a path between two components that you pick. In the case of ranges, it selects all components along the path between the picked components. In the case of loops, it extends the path, if possible, and selects all components along the entire path.
• For polygon meshes, you can select ranges or loops of points, edges, or polygons. Several strategies are used to find a path, but priority is given to borders and quadrilateral topology.
• For NURBS curves and surfaces, you can select ranges or loops of points, knots, or knot curves. Points and knots must lie in the same U or V row. In addition, paths and loops stop at junctions between subsurfaces on assembled surface meshes.
Range Selection
Alt+click to select a range of components using any selection tool (except Paint). This allows you to select the interconnected components that lie on a path between two components you pick.
1. Select the first "anchor" component normally.
2. Alt+click on the second component. Note that the anchor component is highlighted in light blue as a visual reference while the Alt key is pressed.
All components between the two components on a path become selected.
3. Use the following key and mouse combinations to further refine the selection:
- Use Shift to add individual components to the selection as usual. If you want to add additional ranges using Alt+Shift, the last component added to the selection is the new anchor. If you want to start a new range anchored at the end of the previous range, you must reselect the last component by Shift+clicking or Alt+Shift+clicking. Once you have selected a new anchor, you can Alt+Shift+click to add another range to the selection.
- Use Ctrl to toggle-select. Once you have selected a new anchor, you can Alt+Ctrl+click to toggle the selection of a range.
- Use Ctrl+Shift to deselect. Once you have selected a new anchor, you can Alt+Ctrl+Shift+click to deselect a range.
Loop Selection
Alt+middle-click to select a loop of components using any selection tool (except Paint). When you select a loop of components, XSI finds a path between two components that you pick. It then extends the path in both directions, if possible, and selects all components along the extended path.
1. Do one of the following:
- Select the first "anchor" component normally, then Alt+middle-click on the second component. Note that the anchor component is highlighted in light blue as a visual reference while the Alt key is pressed.
or
- Alt+middle-click to select two adjacent components in a single mouse movement.
All components on an extended path connecting the two
components become selected.
Note that for edges, the direction is implied so you only need to
Alt+middle-click on a single edge. However, for parallel edge loops,
you still need to specify two edges as described previously.
2. Use the following key and mouse combinations to further refine
the selection:
- Use Shift to add individual components to the selection as usual. The last component added to the selection becomes the anchor for any new loop. Once you have selected a new anchor, you can Alt+Shift+middle-click to add another loop to the selection.
- Use Ctrl to toggle-select. Once you have selected a new anchor,
you can Alt+Ctrl+middle-click to toggle the selection of a loop.
- Use Ctrl+Shift to deselect. Once you have selected a new anchor,
you can Alt+Ctrl+Shift+middle-click to deselect a loop.
Modifying the Selection
The Select menu has a variety of commands you can use to modify the
selection. For example, among many other things, you can:
• Invert the selection.
• Grow or shrink a component selection (polygon meshes only).
• Select adjacent points, edges, or polygons.
Defining Selectability
You can make an object unselectable in the 3D and schematic views by
opening up its Visibility properties and turning off Selectability. This
can come in handy and speed up your workflow if you are working in a
very dense scene and there are one or more objects that you don’t wish
to select.
Unselectable objects are displayed in dark gray in the wireframe and
schematic views. Regardless of whether an object’s Selectability is on or
off, you can always select it using the explorer or using its name.
The selectability of an object can also be affected by its membership in
a group or layer.
44 • SOFTIMAGE|XSI
Objects
Objects can be duplicated, cloned, and organized into hierarchies,
groups, and layers.
Duplicating and Cloning Objects
Duplicating an object creates an independent copy: modifying the
original after duplication has no effect on the copy. Cloning creates a
linked copy: modifying the geometry of the original affects the clone,
but you can still make additional changes to the clone without affecting
anything else. All the related commands can be found in Edit >
Duplicate/Instantiate.
When an object is cloned, editing the original object affects all the
clones, but editing only one clone has no effect on the others. When an
object is duplicated, the original and its duplicates can be modified
separately with no effect on each other.
Cloning Objects
You can clone objects using the Clone commands on the Edit >
Duplicate/Instantiate menu.
Clones are displayed in the explorer with a cyan c superimposed on the
model icon. In the schematic view, they are represented by trapezoids
with the label Cl.
Duplicating Objects
To duplicate an object, select it and choose Edit > Duplicate/
Instantiate > Duplicate Single or press Ctrl+d. The object is duplicated
using the current options and the copy is immediately selected. You
may need to move it away from the original. By default, any
transformation you apply is remembered for the next duplicate.
To make multiple copies, Edit > Duplicate/Instantiate > Duplicate
Multiple or press Ctrl+Shift+d. Specify the number of copies and the
incremental transformations to apply to each one.
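Conceptually, Duplicate Multiple applies the same incremental transformation cumulatively to each successive copy. A sketch of the idea in plain Python (not the XSI API), using only a Y rotation and a vertical lift per copy, as in the spiral-stairs example below:

```python
import math

def duplicate_multiple(copies, d_roty_deg, d_ty, radius=3.0):
    """Place each copy at an accumulating rotation/height, like spiral stairs."""
    positions = []
    for i in range(1, copies + 1):
        angle = math.radians(d_roty_deg * i)   # rotation accumulates per copy
        x = radius * math.cos(angle)
        z = radius * math.sin(angle)
        y = d_ty * i                           # translation accumulates too
        positions.append((round(x, 3), round(y, 3), round(z, 3)))
    return positions

steps = duplicate_multiple(5, 30.0, 1.0)  # 5 copies, 30 degrees and 1 unit each
```

The `radius` parameter is a hypothetical stand-in for the step center being offset to one side, as the note in the example describes.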
Example: Applying multiple transformations to duplicated objects
1. Select the object (a step) to be duplicated and transformed.
2. With the step selected, press Ctrl+Shift+d. Specify 5 copies and a
transformation to apply to each.
3. Result: Five copies of the original step are generated, with each
duplicate translated, rotated, and scaled to give the appearance of a
flight of spiral stairs.
Note: The center of the step was repositioned to the right so that the
step could be rotated along its right edge.
Other commands in the Edit > Duplicate/Instantiate menu let you
duplicate symmetrically, from animation, and so on.
Hierarchies
Hierarchies describe the relationship between objects, usually using a
combination of parent-child and tree analogies, as you do with a family
tree. Objects can be associated with each other in a hierarchy for a
number of reasons, such as to make manipulation easier, to propagate
applied properties, or to animate children in relation to a parent. For
example, the parent-child relationship means that any properties
applied to the parent (in branch mode) also affect the child.
In a hierarchy there is a parent, its children, its grandchildren, and so
on:
• A root is a node at the base of either a branch or the entire tree.
• A tree is the whole hierarchy of nodes stemming from a common
root.
• A branch is a subtree consisting of a node and all its descendants.
• Nodes with the same parent are called siblings.
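The parent/child/branch terminology maps directly onto a simple tree structure. A minimal sketch (illustrative only, not the XSI object model):

```python
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)

    def branch(self):
        """A branch is this node plus all of its descendants."""
        nodes = [self]
        for child in self.children:
            nodes += child.branch()
        return nodes

root = Node("scene_root")
parent = Node("parent", root)
a, b = Node("child_a", parent), Node("child_b", parent)  # siblings
print([n.name for n in parent.branch()])  # ['parent', 'child_a', 'child_b']
```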
Creating Hierarchies
You can create a hierarchy by selecting an object and activating the
Parent tool from the Constrain panel (or pressing the / key). Click on
another object to make it the child of the selected object, or
middle-click to make the selected object the child of the picked object.
Continue picking objects or right-click to exit the tool.
You can also create hierarchies by dragging and dropping in the
explorer: make the ball_child a child of the ball_parent by dropping its
node onto the ball_parent’s node. The ball_child is now under the
ball_parent’s node.
In the schematic, you can create a hierarchy by pressing Alt while
dragging a node onto a new parent.
Deleting an Object in a Hierarchy
If you delete an object with children, it is replaced by a null with the
same name in order to preserve the hierarchy structure. Deleting this
null just replaces it with another one. To get rid of it completely, first
cut its children if you want to keep them, or branch-select the object to
remove it together with its children.
Groups
You can organize 3D objects, cameras, and lights into groups for the
purpose of selection, applying operations, assigning properties and
shaders, and attaching materials and textures.
For example, you can add several objects to a group, and then apply a
property like Display, Geometry Approximation, or a material to the
group. The group’s properties override the members’ own ones.
Besides being able to organize objects into groups, you can also create a
group of groups. An object can be a member of more than one group.
Groups, however, can’t be added in hierarchies. They can only live
immediately beneath the scene root or a model.
In XSI, groups are a tool for organizing and sharing
properties. If you are coming from another software package
and want to control transformations, for example, in a
character rig, use transform groups instead.
Cutting Links in a Hierarchy
You will often need to cut the hierarchical links between a parent and
its child or children in a hierarchy of objects. If the child is also a
parent, the links to its own children are not affected.
Select the child and click Cut in the Constrain panel, or press Ctrl+/. A
cut object becomes a child of its model. If an object is cut from its
model, it becomes a child of the parent model.
Creating Groups
To create a group, select some objects and click Group in the Edit
panel or press Ctrl+g. In the Group property editor, enter a name for
your group and select the different View and Render Visibility,
Selectability, and Animation Ghosting options.
All selected objects are grouped together. In the explorer, you can see
the group with all its members within it.
Selecting Groups
You can select groups in the 3D and schematic views using the Group
selection button or the = key. Note that the Group button changes to
the Cluster button when a component filter is active.
You can also use the Explore button on the Select panel to list all
groups in the scene.
Once a group is selected, you can select its members using Select >
Select Members/Components. The members of the group are selected
as multiple objects.
Adding and Removing Elements from Groups
To add objects to a group, select the group and add the objects you
want to the selection. In the Edit panel, click the + button (next to the
Group button). You can also drag objects onto a group in an explorer
view.
If an object is a member of just one group, you can ungroup it by
selecting it and clicking the – button (next to the Group button). If an
object is a member of multiple groups, you must select the group to
remove it from before selecting the object. Alternatively, right-click on
the name of the object within the group in the explorer and choose
Remove from Group.
Removing Groups
You can remove a group by selecting it and pressing Delete. When you
delete groups, only the group node and its properties are deleted, not
the objects themselves.
Scene Layers
Scene layers are containers — similar to groups or render passes — that
help you organize, view, display, and edit the contents of your scene.
For example, you can put different objects into each scene layer and
then hide a particular layer when you don’t want to see that part of
your scene. Or you might want to make a scene layer’s objects
unselectable if the scene is getting too complex to select objects
accurately. You can create as many layers as your scene requires.
The Layer Control
The layer control is a grid-style view from which you can quickly view
and edit all of the layers in a scene. Because it is the only view that
contains, or provides access to, all of the layer-related tools and
options, the layer control is the recommended view for editing layers.
You can use the layer control to do things like add objects to — or
remove them from — layers, create new scene layers, toggle scene layer
attributes, select objects in a scene layer, and so on.
Scene Layer Attributes
Each scene layer has four main attributes: viewport visibility,
rendering visibility, selectability, and animation ghosting. You can
activate or deactivate each of these attributes for every layer in the scene.
Scene layers can also have custom properties such as wireframe color
and geometry approximation.
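The four main attributes behave as independent per-layer toggles. A conceptual sketch of that data model (not the XSI SDK):

```python
from dataclasses import dataclass

@dataclass
class SceneLayer:
    """Toy model of a scene layer and its four main attributes."""
    name: str
    viewport_visibility: bool = True
    render_visibility: bool = True
    selectability: bool = True
    animation_ghosting: bool = False

    def toggle(self, attribute: str):
        setattr(self, attribute, not getattr(self, attribute))

fx = SceneLayer("fx_layer")
fx.toggle("selectability")        # make the layer's objects unselectable
print(fx.selectability)           # False
```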
Scene Layers in the Explorer
You can view and edit scene layers in the explorer. This is most useful
when you wish to move several objects between layers, since you can
quickly drag and drop them from one layer to another.
In the layer control, a check mark indicates that an attribute is active
for the layer, and the default layer is highlighted in green.
Properties
A property is a set of related parameters
that controls some aspect of objects in a
scene.
How Properties Are Propagated
Objects can inherit properties from many different sources. This
inheritance is called propagation.
For some properties, such as Display and Geometry Approximation, an
object can have only one at a time. If it inherits the same property from
more than one source, the source with the highest “strength” is used.
In increasing order of strength, the possible sources of property
propagation are:
• Scene Default: This is the weakest source. If an object does not
inherit a property from anywhere else, it uses the scene’s default
values. For example, if an object has never had a material applied to
it, it uses the scene default material.
• Branch: If a parent has a property applied when it is branch-selected,
its children all inherit the property.
• Local: If a child inherits a branch property from its parent, but has
the same property applied directly to it, it uses its local values.
• Cluster: Materials, textures, and other properties applied to a
cluster take precedence over those applied to the object.
• Group: If an object is a member of a group, then any properties
applied to the group take precedence over local and branch
properties. Similarly, if a cluster is a member of a group, any
properties applied to the group take precedence over those applied
directly to the cluster.
• Layer: Any properties applied to an object’s layer take precedence
over group, local, and branch properties.
• Partition: Properties applied to a partition of a render pass have the
highest priority of all when that render pass is current.
For other types of properties, an object can have many at the same
time. For example, an object can have several local annotations as well
as several annotations inherited from different ancestors, groups, and
so on.
Applying Properties
You can apply many properties using the Get > Property menu of any
toolbar. This applies the default preset of a property’s parameter values
to the selected objects, possibly replacing an existing version of the
same property.
Editing Properties
To edit an existing property, open its property editor by clicking on the
property node in an explorer. A handy way to do this is to press F3 to
see a mini-explorer for the selected object, or click the Selection button
at the bottom of the Select menu. You can also right-click on Selection
to display properties according to type.
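For single-instance properties, the strength order above amounts to picking the value from the highest-priority source that supplies one. A sketch of that resolution rule (illustrative only, not the actual XSI code):

```python
# Weakest to strongest, as described above.
STRENGTH = ["scene_default", "branch", "local", "cluster",
            "group", "layer", "partition"]

def resolve(property_sources: dict):
    """property_sources maps source name -> value; the strongest source wins."""
    best = max(property_sources, key=STRENGTH.index)
    return property_sources[best]

value = resolve({"scene_default": "grey", "branch": "checker", "local": "blue"})
print(value)  # blue -- local overrides branch and the scene default
```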
Scene’s Default Material
In this sphere hierarchy, each sphere is parented to the one above it. No
materials have been applied yet, so all spheres share the default scene
material.
Simple Propagation
The larger sphere was branch-selected and given a checkerboard
material. Because it was applied in branch mode, the material is
inherited by all the descendants.
Branch Propagation
One sphere was branch-selected and given a cloud texture. The
remaining sphere retains the checkerboard texture because it is on
another branch.
Local Material/Texture Application
One sphere was single-selected and given a blue surface. This applies a
local material/texture to the selected object only, and none of its
children; the sphere’s children still inherit the checkerboard texture,
despite the local texture assigned to their parent.
Viewing Propagation in the Explorer
In the explorer, properties that are applied
in branch-mode, and therefore
propagated, are noted with a B symbol.
An object that has a shared material (such
as sphere5, above) displays its shared
material in italics. The material’s source
(where it’s propagated from) is shown in
parentheses.
If no source is shown, then it is inherited
from the scene root.
You can also set the following options in the explorer’s View menu:
• Local Properties displays only those properties that have been
applied directly to an object.
• Applied Properties shows all properties that are active on an object,
no matter how they are propagated.
Components and Clusters
Components are elements, like points and edges, that define the shape of
3D objects, and clusters are named groups of components.
Displaying Components
You can display the various component types in a
specific 3D view using the individual options available
from its eye icon (Show menu) or in all open 3D views
using the options on the Display > Attributes menu on
the main menu bar.
For more options, you can set the visibility options in the Camera
Visibility property editor: click a 3D view’s eye icon (Show menu) and
choose Visibility Options, or Display > Visibility Options for all open
3D views.
Note that when you activate a component selection filter, the
corresponding components are automatically displayed in the 3D
views.
Creating Presets of Property Settings
You can save property settings as a preset. Presets are data files with a
.preset file extension that contain property information. Presets let you
work more efficiently because you can save the modified properties
and reuse them as needed, as well as transfer settings between scenes.
For quick access, you can also place presets on a toolbar.
To save or load a preset, click the Save/Load Presets button at the top
of a property editor. The saved preset contains values for only the
parameter set currently selected on the property set tabs in the
property editor. For materials and shaders, it also contains parameter
settings for any connected shaders. Presets do not contain any
animation—only the current parameter values are stored. If there is a
render region open when you save a preset, it will be used as a
thumbnail.
Clusters
A cluster is a named set of components that are grouped together for a
specific modeling, animation, or texturing purpose. Grouping and
naming components makes it easier to work with those same
components again and again. For example, by grouping all points that
form an eyebrow, you can easily deform the eyebrow as an object
instead of trying to reselect the same points each time you work with
it. You can also apply operators like deformations or Cloth to specific
clusters instead of an entire object.
You can define as many clusters on an object as you like, and the same
component can belong to a number of different clusters.
You can define clusters for points, edges, polygons, subsurfaces, and
other components. Each cluster can contain one type of component.
For example, a cluster can contain points or polygons, but not both.
Clusters may shift if you edit an operator in an object’s
construction history and add components before the position
where the cluster was created.
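A cluster thus behaves like a named set restricted to a single component type. A minimal sketch of that constraint (illustrative only):

```python
class Cluster:
    def __init__(self, name, component_type):
        self.name = name
        self.component_type = component_type   # e.g. "point", "edge", "polygon"
        self.members = set()

    def add(self, component_type, index):
        """Reject components of any type other than the cluster's own."""
        if component_type != self.component_type:
            raise TypeError("a cluster holds a single component type")
        self.members.add(index)

eyebrow = Cluster("eyebrow", "point")
for i in (12, 13, 14):
    eyebrow.add("point", i)
# eyebrow.add("polygon", 2) would raise TypeError
```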
Adding and Removing Components from Clusters
To add components to a cluster, select the cluster
and add the components you want to the
selection. In the Edit panel, click the + button
(next to the Cluster button).
To remove components from a cluster, select the
cluster, add the components to remove to the
selection, and click the – button.
When you add components to an object, any new
components that are completely surrounded by similar
components in a cluster are automatically added to the
cluster.
Creating Clusters
To create a cluster, select some components and click Cluster on the
Edit panel (Cluster changes to Group when objects are selected). As
soon as the cluster is created, you can press Enter to open its property
editor and change its name.
To create a cluster whose components aren’t already in other clusters,
choose Edit > Create Non-overlapping Cluster instead. You can also use
Edit > Create Cluster with Center to make a cluster with a null “center”
that you can transform and animate. If you prefer to use a different
object as a center, simply create a cluster and apply Deform > Cluster
Center manually.
Selecting Clusters
You can select clusters using the Clusters button at the bottom of the
Select panel, or in any other explorer.
You can also select clusters in a 3D view when a component selection
filter is active. Simply activate the Cluster button at the top of the Select
panel, or press =, or use the middle mouse button while clicking on any
component in the cluster.
Removing Clusters
To remove a cluster, select it and press Delete. Removing a cluster
removes the group, but does not remove the individual components
from the object.
Manipulating Components and Clusters
Not every type of component or cluster can be directly manipulated in
XSI. You can select and manipulate points, edges, and polygons in the
3D views, and you can select and manipulate texture UV coordinates
(samples) in the texture editor.
• You can transform points, edges, and polygons in 3D space. This is
a fundamental part of modeling an object’s shape.
• You can apply deformations to deform points, edges, and polygons
in the same way that you apply them to objects.
• You cannot animate component and cluster transformations
directly. Instead, you can use a deformer such as a cluster center or
volume deformer and animate it, or you can use shape animation.
Parameter Maps
Certain parameters are mappable—you can vary the parameter’s value
across an object’s geometry by connecting a weight map, texture map,
vertex color property, or other cluster property. This allows you to, for
example, control the amplitude of a deformation or the emission rate
of a particle system across an object’s surface.
About Mappable Parameters
Mappable parameters have a connection icon in their property editors
that allows you to drive the value using a map.
What Parameters Are Mappable?
Almost any parameter with a connection icon in its property editor is
mappable. These parameters include:
• Certain deformation parameters, such as Amplitude in the Push
operator or Strength in the Smooth operator.
• The Multiplier parameter in the Polygon Reduction operator.
• Edge and vertex crease values.
• Various simulation parameters, such as the rate, speed, and color of
particles, the length and density of hair, the stiffness of cloth, and so
on.
• Shapes in the animation mixer.
What Can You Connect to Mappable Parameters?
You can connect just about any cluster property to a mappable
parameter. The most useful properties include the following:
• Weight maps allow you to start from a base map such as a constant
value or gradient, and then paint values on top.
• Texture maps consist of an image file or sequence, and a set of UV
coordinates. They are similar to ordinary textures, but are
connected to parameters instead of shaders.
• Vertex color properties are color values stored at each polynode of a
geometric object.
In addition to the attributes listed above, you can connect mappable
parameters to other cluster properties, including UV coordinates
(texture projections), shapes, user normals, and envelope weights.
While these may not always be useful for driving modeling and
simulation parameters, the ability to connect to these properties may
be useful for custom developers.
Connecting Maps
No matter what type of map you want to connect to a parameter, the
basic procedure is the same. In a property editor, click on the
connection icon of a mappable parameter and choose Connect. A
pop-up explorer opens—navigate through the explorer and pick the
desired map:
• Weight maps are found under the appropriate cluster.
• Texture maps are properties directly under the object. They can
also be found under the appropriate cluster. Make sure you don’t
accidentally select the texture projection.
• Vertex color properties are also found under the appropriate
cluster.
The connection icon changes to show that a map is connected. When a
map is connected, you can click on this icon to open the map’s
property editor.
If you connect a map that has multiple components, like an RGBA
color, to a parameter that has a single dimension, like Amplitude, you
can use the options in the Map Adaptor to control the conversion.
To disconnect a weight map, right-click on the connection icon of a
connected parameter and choose Disconnect.
• To connect maps to hair parameters, you must first
transfer the maps from the emitter to the hair object.
• In the case of weight maps and deformations, you can
simply select the weight map and then apply the
deformation instead of manually connecting it. Since the
weight map is selected by default as soon as you create it,
this technique is quick and easy.
Weight Maps
Weight maps are properties of point clusters on geometric objects.
They associate each point in a cluster with a weight value. Each cluster
can have multiple weight maps, so you can modulate different
parameters on different operators in different ways.
Each weight map has its own operator stack. When you create a weight
map, a WeightMapOp operator sets the base map, which can be
constant or one of a variety of gradients. Then when you paint on the
weight map, the strokes are added to a WeightPainter operator on top
of the WeightMapOp in the stack. Like other elements with operator
stacks, you can freeze a weight map to discard its history and simplify
your scene data.
Weight Map Workflow
This section presents a quick overview of the workflow for using
weight maps.
1. Start with an object.
2. Optionally, select some points or a cluster.
3. Apply a weight map using Get > Property > Weight Map. A blank
weight map is created, ready for painting.
4. Press w to activate the Paint tool, then use the mouse to paint on
the weight map.
- Press r and drag the mouse to control the brush radius.
- Press e and drag the mouse to control the opacity.
- Press Ctrl+w to open the Brush properties to set other
parameters.
In the default paint mode (normal, also called additive), use the left
mouse button to add paint and the right mouse button to remove
weight. Press Alt to smooth.
5. Connect the weight map to drive the value of a parameter—for
example, the Amplitude of a Push deformation. A slight Push is all
that’s needed.
6. You can reselect the weight map and continue to paint on it to
modify the effect further.
If your object has multiple maps, you may need to select the
desired one before you can paint on it. You can do this easily
using Explore > Property Maps from the Select panel.
Freezing Weight Maps
Weight maps can be frozen to simplify your scene’s data. Freezing
collapses the weight map generator (the base constant or gradient map
you chose when you created the weight map) together with any strokes
you have applied.
To freeze a weight map, select it and click the Freeze button on the Edit
panel. After you have frozen a weight map, you can still add new
strokes but you cannot change the base map or delete any strokes you
performed before freezing.
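The operator-stack behavior described here (a base map plus painted strokes, collapsed by freezing) can be sketched as follows. This is a toy model of the concept, not the WeightMapOp/WeightPainter implementation:

```python
class WeightMap:
    def __init__(self, n_points, base=0.0):
        self.base = [base] * n_points      # like WeightMapOp: constant base map
        self.strokes = []                  # like WeightPainter: painted deltas

    def paint(self, point, delta):
        self.strokes.append((point, delta))

    def values(self):
        """Evaluate the stack: base map plus all strokes, in order."""
        out = list(self.base)
        for point, delta in self.strokes:
            out[point] += delta
        return out

    def freeze(self):
        """Collapse base + strokes into plain values; the history is discarded."""
        self.base, self.strokes = self.values(), []

wm = WeightMap(4)
wm.paint(1, 0.5)
wm.paint(2, 1.0)
wm.freeze()
print(wm.values())  # [0.0, 0.5, 1.0, 0.0]
```

After `freeze()` the evaluated values are unchanged, but the individual strokes are no longer available to edit or delete, matching the behavior described above.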
Texture Maps
Texture maps consist of an image file or sequence, and a set of UV
coordinates. They are similar to ordinary textures, but are used to
control operator parameters instead of surface colors.
HDR images are fully supported. Floating-point values are
not truncated.
Creating Texture Maps
To create a texture map, you select the texture projection method and
then link an image file to it.
1. Apply a texture projection and texture map to the object by doing
one of the following:
- If the object already has a set of UV coordinates (texture
projection) that you want to use, select it and choose Get >
Property > Texture Map > Texture Map.
This creates a blank texture map property for the object and
opens a blank Texture Map property editor in which you need to
set the texture projection and select an image that will be used as
the map (as described in the next steps).
or
- To create a new texture projection for the map, select the object
and choose Get > Property > Texture Map > projection type
(such as Cylindrical, Spherical, UV, or XZ) that is appropriate for
the shape of the object.
This creates a texture map property and texture projection for
the object, but doesn’t open the Texture Map property editor.
Now you must open the Texture Map property editor to associate
the image to this projection to use as the map (in the explorer,
click the Texture Map property under the object).
2. In the Clip section of the Texture Map property editor, select an
image or sequence to use as the map. If there isn’t already a clip for
the desired image, click New to create one.
3. In the UV Property area beneath the image, select an existing
texture projection or create a New texture projection (if there isn’t
already one) that is appropriate to the shape of the object or how
you want to project the mapped image.
Editing Texture Maps
To edit the UV coordinates of a texture map’s projection, select the
object and open the texture editor. If necessary, use the Clips menu to
display the correct image and the UVs menu to display the correct
projection.
If you do this, you should make sure that the operator connected to the
texture map is above the modeling region of the construction history,
for example, in the animation region. Otherwise, the UV edits are
“above” the operator and appear to have no effect. You can move the
operator back to the modeling region when you are done.
Section 3
Moving in 3D Space
Working in 3D space is fundamental to
SOFTIMAGE|XSI. You will use the transformation
tools constantly as you model and animate objects
and components.
What you’ll find in this section ...
• Coordinate Systems
• Transformations
• Center Manipulation
• Freezing Transformations
• Resetting Transformations
• Setting Neutral Poses
• Transform Setup
• Transformations and Hierarchies
• Snapping
Coordinate Systems
SOFTIMAGE|XSI uses coordinate systems, also called reference frames,
to describe the position of objects in 3D space.
Cartesian Coordinates
One essential concept that a first-time
user of 3D computer graphics should
understand is the notion of working
within a virtual three-dimensional space
using a two-dimensional user interface.
XSI uses the classical Euclidean/
Cartesian mathematical representation
of space. The Cartesian coordinate
system is based on three perpendicular
axes, X, Y, and Z, intersecting at one point. This reference point is
called the origin. You can find it by looking at the center of the grid in
any of the 3D windows.
XYZ Coordinates
With the Cartesian coordinate system, you can locate any point in
space using three coordinates. Positions are measured from the origin,
which is at (0, 0, 0). For example, if X = +2, Y = +1, Z = +3, a point
would be located to the right of, above, and in front of the origin.
XYZ Axes
XSI uses a “Y-up” system, where the Y direction represents height. This
is different from some other software packages, which are “Z-up”. This
is worth keeping in mind if you are familiar with other software or are
trying to import data into XSI.
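When importing data from a Z-up package into a Y-up system, the usual fix is a rotation about the X axis. A sketch, assuming both systems are right-handed (conventions vary by package, so verify against your source data):

```python
def zup_to_yup(point):
    """Rotate -90 degrees about X: (x, y, z) in Z-up -> (x, z, -y) in Y-up."""
    x, y, z = point
    return (x, z, -y)

print(zup_to_yup((2.0, 3.0, 1.0)))  # (2.0, 1.0, -3.0)
```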
A small icon representing the three axes and their directions is shown
in the corner of 3D views. The icon’s three axes are represented by
color-coded vectors: red for X, green for Y, and blue for Z.
An easy way to remember the color coding is RGB = XYZ.
This mnemonic is repeated throughout XSI: object centers,
manipulators, axis controls on the Transform panel,
and so on.
XZ, XY, YZ Planes
Since you are working with a two-dimensional interface, spatial planes are used
to locate points in three-dimensional space.
The perpendicular axes extend as spatial
planes: XZ, XY, and YZ. In the 3D views,
these planes correspond to three of the
parallel projection windows: Top, Front, and
Right. Imagine that the XZ, XY, and YZ
planes are folded together like the top, front, and right side of a box.
This helps you keep a sense of orientation when you are working
within the parallel projection windows.
Global and Local Coordinate Systems
The location of an object in 3D space is defined by a point called its
center. This location can be described in more than one way or
according to more than one frame of reference. For example, the global
position is expressed in relation to the scene’s origin. The local position
is expressed in terms of the center of the object’s parent.
The center of an object is only a reference—it is not necessarily in the
middle of the object because it can be relocated (as well as rotated and
scaled). The position, orientation, and scaling (collectively known as
the pose) of the object’s center defines the frame of reference for the
local poses of its own children.
Softimage Units
Throughout XSI, lengths are measured in Softimage units. How big is a
Softimage unit? It is an arbitrary, relative value that can be anything
you want: a foot, 10 cm, or anything else.
However, it is generally recommended that you avoid making your
objects too big, too small, or too far from the scene origin. This is
because rounding errors can accumulate in mathematical calculations,
resulting in imprecision or even jittering in object positions. As a
general rule of thumb, an entire character should not fit within 1 or 2
units, nor exceed 1000 units.
The Softimage units used for objects also matter for creating dynamic
simulations where objects have mass or density and are affected by
forces such as gravity. For simulations, XSI assumes that 1 unit is 10 cm
by default, but you can change this by changing the strength of gravity.
Transformations
Transformations are fundamental to 3D. They include the basic
operations of scaling, rotating, and translating: scaling affects an
element’s size, rotation affects an element’s orientation, and translation
affects an element’s position. Transformations are sometimes
called SRTs.
Transforming Interactively
You transform by selecting an object or components, activating a
transform tool, then clicking and dragging a manipulator in a 3D view.
1. Select objects or components to transform and activate a tool:
- Scale (press x)
- Rotate (press c)
- Translate (press v)
2. Set the manipulation mode. See Manipulation Modes on page 63.
3. If desired, specify the active axes. See Specifying Axes on page 65.
4. If desired, set the pivot. See Setting the Pivot on page 65.
5. Click and drag on the manipulator. See Using the Transform
Manipulators on page 66.
Local versus Global Transformations
There are two types of transformation values that can be stored for
animation: local and global. Local transformations are stored relative
to an object's parent, while global ones are stored relative to the origin
of the scene's global coordinate system. The global transformation
values are the final result of all the local transformations that are
propagated down the object hierarchy from parent to child.
You can animate either the local or the global transformation values.
It's usually better to animate the local transformations: this lets you
move the parent while all objects in the hierarchy keep their relative
positions rather than staying in place.
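The way local values propagate down a hierarchy to produce global values can be sketched in plain Python (an illustrative 2D model, not the XSI API):

```python
import math

# Illustrative sketch (not the XSI API): a 2D pose as (angle_degrees, (x, y)).
# An object's global pose is its local pose propagated through its ancestors.

def compose(parent, local):
    """Apply a parent's global pose to a child's local pose."""
    pa, (px, py) = parent
    la, (lx, ly) = local
    r = math.radians(pa)
    gx = px + lx * math.cos(r) - ly * math.sin(r)
    gy = py + lx * math.sin(r) + ly * math.cos(r)
    return (pa + la, (gx, gy))

def global_pose(chain):
    """chain: local poses from the scene root down to the object."""
    pose = (0.0, (0.0, 0.0))  # the scene origin
    for local in chain:
        pose = compose(pose, local)
    return pose

# Parent translated to (10, 0) and rotated 90 degrees; child at local (5, 0):
angle, (x, y) = global_pose([(90.0, (10.0, 0.0)), (0.0, (5.0, 0.0))])
# the child ends up near (10, 5) in global space
```

Animating the local values means animating one link of this chain, so moving the parent carries the whole hierarchy along.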
Manipulation Modes
When you transform interactively, you always do so using one of
several modes set on the Transform panel: View, Local, Global, and so
on. The mode determines the axes and the default pivot used for
manipulation. If an object isn't transforming as you expected, you may
need to change the manipulation mode. It is important to remember
that the mode does not affect the values stored for animation (local
versus global); it only affects your interaction with the transform tool.
View
View translations and rotations are performed with respect to the 3D
view. The plane in which the object moves depends on whether you are
manipulating it in the Camera, Top, Front, Right, or other view. The
object is transformed using the axes of the 3D view as the reference.
If you are using the SRT manipulators in a perspective view
like Camera or User, View mode uses the global scene axes.
Global
Global translations and rotations are performed along the scene's global
axes. The object is transformed using the global axes as the reference.
Local
Local transformations are performed along the axes of the object's local
coordinate system as defined by its center; the object is transformed
using its own local axes as the reference. This is the only true mode
available for scaling: scaling is always performed along an object's own
axes.
Par
Par, or parent, translations and rotations use the axes of the object's
parent; the object is transformed using the local space of its parent as
the reference. For translation, this is the only mode where the axes of
interaction correspond exactly to the coordinates of the object's local
position for the purpose of animation. When you activate individual
axes on the Transform panel, the corresponding local position
parameters are automatically marked. To activate Par for rotations,
activate Add and press Ctrl.
Par mode is not available for components. In its place, Object
mode uses the local coordinates of the object that “owns”
the components.
Add
Add, or additive, mode is available only for rotation. It lets you directly
control the object's local X, Y, and Z rotations as stored relative to its
parent. This mode is especially useful when animating bones and other
objects in hierarchies.
For rotations, this is the only mode where the axes of interaction
correspond exactly to the coordinates of the object's local orientation
for the purpose of animation. When you activate individual axes on the
Transform panel, the corresponding local rotation parameters are
automatically marked.
Uni
Uni, or uniform, is available only for scaling. It is not really a mode but
a modifier that changes the way objects are scaled locally. It scales along
all active local axes at the same time with a single mouse button. You
can activate and deactivate axes as described in Specifying Axes on
page 65. You can also temporarily turn on Uni by pressing Shift while
scaling.
Vol
Like Uni, Vol, or volume, is available only for scaling and is a modifier
rather than a mode. It scales along one or two local axes while
automatically compensating the other axes so that the volume of the
object's bounding box remains constant.
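The compensation that Vol performs can be sketched numerically in plain Python (an illustration of the idea, not the XSI implementation):

```python
import math

# Sketch of volume-preserving ("Vol") scaling: scale one local axis and
# compensate the two others so the bounding-box volume stays constant.
def vol_scale(size, axis, factor):
    """size: (x, y, z) bounding-box dimensions; scale `axis` by `factor`."""
    comp = 1.0 / math.sqrt(factor)   # compensation for the other two axes
    return tuple(s * factor if i == axis else s * comp
                 for i, s in enumerate(size))

before = (2.0, 2.0, 2.0)             # volume 8
after = vol_scale(before, 0, 4.0)    # X * 4; Y and Z * 0.5
# the volume of `after` is still 8
```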
Ref
Ref, or reference, mode lets you translate an object along the X, Y, and
Z axes of another element or an arbitrary reference plane. Right-click
on Ref to set the reference. The object is transformed using the local
space of a picked object as the reference.
Plane
Plane mode lets you drag an object along the XZ plane of another
element or an arbitrary reference plane. Right-click on Plane to choose
the plane. The object is transformed using the local space of a
user-defined plane in space.
Specifying Axes
When transforming interactively, you can specify which axes are active
using the x, y, and z icons in the Transform panel. For example, you can
activate rotation in Y only, or deactivate translation only in Z. Active
icons are colored, and inactive icons are gray.
• Click an axis icon to activate it and deactivate the others.
• Shift+click an axis icon to activate it without affecting the others.
• Ctrl+click an axis icon to toggle it.
• Click the All Axes icon to activate all three axes.
• Ctrl+click the All Axes icon to toggle all three axes.
If Allow Double-click to Toggle Active Axes is on in the Transform
preferences, then you can also specify transformation axes by
double-clicking in the 3D views while a transformation tool is active:
• Double-click on a single axis to activate it and deactivate the others.
• If only one axis is currently active, double-click on it to activate all
three axes.
• Shift+double-click on an axis to toggle it on or off individually. (If
it is the only active axis, it is deactivated and both of the other two
axes are activated.)
Setting the Pivot
When transforming elements interactively, you can set the pivot by
pressing the Alt key while a transformation tool is active. The pivot
defines the position around which elements are rotated or scaled (the
center of transformation). When translating and snapping, the pivot is
the position that snaps to the target.
1. Make sure that Transform > Modify Object Pivot is set to the
desired value:
- Off (unchecked) to set the tool pivot, used for interactive
manipulation only. This is useful if you are simply moving
elements into place. The tool pivot is normally reset when you
change the selection. However, you can lock and reset the
position manually.
- On (checked) to modify the object pivot. The object pivot acts like a
center for the object's local transformations. It is used when playing
back animated transformations, and is also the object's default pivot
for manipulation. You can animate the object pivot, for example, to
create a rolling cube.
2. Activate a transform tool.
3. Do any of the following:
- Alt+drag the manipulator's center, or one of its axes, to change
the position of the pivot manually. You can use snapping, as well
as change manipulation modes on the Transform panel.
- Alt+click in a geometry view. The pivot snaps to the closest point,
edge midpoint, polygon midpoint, or object center among the
selected objects. This lets you easily rotate or scale an object
about one of its components.
- Alt+middle-click to reset the pivot to the default.
You can lock the tool pivot by pressing Alt, clicking the triangle below
the Pivot icon, and choosing Lock. The tool pivot then remains at its
current location, even if you change the selection.
Using the Transform Manipulators
Translate Manipulator
Click and drag on a single axis to translate along it. Click and drag
between two axes to translate along the corresponding plane. Click
and drag on the center to translate in the viewing plane.
Rotate Manipulator
Click and drag on a single ring to rotate around that axis. Click and
drag on the silhouette to rotate about the viewing axis (this does not
work in Add mode). Click and drag on the ball to rotate freely (this
also does not work in Add mode).
Scale Manipulator
Click and drag on a single axis to scale along it. Click and drag along
the diagonal between two axes to scale both those axes uniformly.
Click and drag the center left or right to scale all active axes uniformly.
In addition to dragging the handles, you can:
• Middle-click and drag anywhere in the 3D views to translate along
the axis that most closely matches the drag direction.
• Click and drag anywhere in the 3D views (except on the
manipulator) to perform different actions, depending on the
setting for Click Outside Manipulator in the Tools > Transform
preferences.
• Right-click on the manipulator to open a context menu, where you
can set the manipulation mode and other options.
If you are familiar with SOFTIMAGE|3D and prefer its
method of transforming, turn off Transform > Enable
Transform Manipulators.
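The role the pivot plays as the center of transformation can be sketched in plain Python (a 2D illustration, not the XSI API):

```python
import math

# Sketch: rotating a point about a pivot, as the tool pivot does during
# interactive rotation. The pivot is the fixed center of the motion.
def rotate_about_pivot(point, pivot, degrees):
    """Rotate a 2D point around a pivot by the given angle."""
    r = math.radians(degrees)
    dx, dy = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + dx * math.cos(r) - dy * math.sin(r),
            pivot[1] + dx * math.sin(r) + dy * math.cos(r))

# Rotating (2, 0) by 90 degrees about the pivot (1, 0) gives (1, 1);
# rotating about a different pivot gives a different result.
```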
Setting Values Numerically
As an alternative to transforming objects interactively, you can enter
numerical values in the boxes on the Transform panel:
• In Global mode, values are relative to the scene origin.
• In Ref mode, values are relative to the active reference plane.
• In View mode, values can be either global or relative to the object's
parent, depending on what's set in your preferences.
• In all other modes, values are relative to the object's parent.
Transformation Preferences
Transform > Transform Preferences contains several settings that
affect the display, interaction, and other options of the transformation
tools. Since you will be spending a great deal of your time transforming
things, it's a good idea to explore these and find the settings that are
most comfortable for you.
Hierarchical (Softimage) versus Classic Scaling
Hierarchical (Softimage) scaling uses the local axes of child objects
when their parent is branch-selected and scaled. This maintains the
relative shape of the children without shearing if they are rotated with
respect to their parent.
When this option is off, the result is called classic scaling: children are
scaled along their parent's axes and may be sheared by non-uniform
scaling. Classic scaling is recommended if you are exchanging data with
other applications, such as game engines, motion capture systems, or
3D applications that do not understand Softimage scaling.
Parent and child branch-selected before scaling; scaled in Y using
hierarchical scaling; scaled in Y using classic scaling.
You specify which method to use for each child in its Local Transform
property. You can also set the default value used for all new objects.
To specify hierarchical or classic scaling
1. Select one or more child objects and open their Local Transform
property editor.
2. On the Scaling tab, turn Hierarchical (Softimage) Scaling on or off.
If it is off, classic scaling is used.
To set the default scaling mode used for all new objects
1. Choose File > Preferences from the main menu bar.
2. Click General.
3. Toggle Use Classical Scaling for Newly Created Objects.
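The shear that classic scaling can introduce is visible in the matrix algebra. A plain-Python 2D sketch (an illustration of the principle, not the XSI implementation):

```python
import math

# Sketch: why classic scaling can shear a rotated child. The columns of a
# 2x2 matrix are the images of the child's axes; if they stop being
# perpendicular, the child is sheared.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rot(degrees):
    r = math.radians(degrees)
    return [[math.cos(r), -math.sin(r)], [math.sin(r), math.cos(r)]]

S = [[1.0, 0.0], [0.0, 2.0]]      # non-uniform scale: Y * 2 in parent axes
R = rot(45.0)                     # child rotated 45 degrees in the parent

classic = matmul(S, R)            # parent-axis scale applied to rotated child
dot = sum(classic[i][0] * classic[i][1] for i in range(2))
# dot != 0: the axes are no longer perpendicular (shear)

hierarchical = matmul(R, S)       # scale along the child's own axes
dot_h = sum(hierarchical[i][0] * hierarchical[i][1] for i in range(2))
# dot_h == 0: the child keeps its shape
```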
Center Manipulation
Center manipulation lets you move the center of an object without
moving its points. This changes the default pivot point used for
rotation and scaling. You can manipulate the center by using Center
mode interactively, or by using commands on the Transform menu
(Move Center to Vertices and Move Center to Bounding Box).
It's important to note that center manipulation is actually a
deformation. As the center is moved, the geometry is compensated to
stay in place. Because it is a deformation, you cannot manipulate the
center of non-geometric objects. This includes nulls, bones, implicit
objects, control objects, and anything else without points.
Resetting Transformations
The Transform > Reset commands return an object's local scaling,
rotation, and translation to the default values. This effectively
removes transformations applied since the object was created or
parented, or since its transformations were frozen.
If you want an object to return to a pose other than the origin of its
parent's space when you reset its transformations, set a neutral pose
for it.
Freezing Transformations
The Transform > Freeze commands reset an object's size, orientation,
or location to the default values without moving the object's geometry
in global space. For instance, freezing an object's translation moves its
center to (0, 0, 0) in its parent's space without visibly displacing its
points.
Like center manipulation, freezing transformations is actually a
deformation. As the center is transformed, the geometry is
compensated to stay in place.
If a neutral pose exists when you freeze an object's
transformations, the object's center moves to the neutral pose
instead of to the origin of its parent's space. If you want the
object's center to be at the origin, remove the neutral pose in
addition to freezing the transformations. You can perform
these two operations in either order.
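The compensation that freezing performs can be sketched in plain Python (an illustrative model of the bookkeeping, not the XSI API):

```python
# Sketch: freezing translation moves the center to the parent's origin
# while compensating the stored points so nothing moves globally.

def freeze_translation(center, points):
    """center: object's local position; points: stored relative to center."""
    frozen_center = (0.0, 0.0, 0.0)
    frozen_points = [tuple(p[i] + center[i] for i in range(3)) for p in points]
    return frozen_center, frozen_points

center = (3.0, 0.0, 0.0)
points = [(1.0, 0.0, 0.0)]        # global position: (4, 0, 0)
c, pts = freeze_translation(center, points)
# the center is now at the origin, but the point is still at (4, 0, 0) globally
```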
Setting Neutral Poses
The Transform > Set Neutral commands “zero out” an object’s
transformations. This is useful if you want an object to return to a pose
other than the origin of its parent’s space when you reset its
transformations. For example, you can set the neutral pose of a chain
bone so that it returns to a “natural” position when you reset it.
Neutral poses are also useful for visualizing the transformation
values—it’s easier to imagine a rotation from 0 to 45 degrees than from
78.4 to 123.4 degrees.
The neutral pose acts as an offset for the object’s local transformation
values, as if there was an intermediate null between the object and its
parent in the hierarchy. The neutral pose values are stored in the
object’s Local Transform property, and can be viewed or modified on
the Neutral Pose tab of that property editor.
When you set the neutral pose, any existing animation of the local
transformation values is interpreted with respect to the new pose. This
may give unexpected results when you play back the animation. You
should set the neutral pose before animating the transformations of an
object.
If you remove the neutral pose using Transform > Remove Neutral
Pose, the neutral pose values are added to the local transformation
before being reset to the defaults. The object does not move in global
space as a result.
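The offset behavior of a neutral pose can be sketched for a single rotation channel in plain Python (illustration only; not the XSI API):

```python
# Sketch: a neutral pose acts as an offset for local transformation values,
# shown here for one rotation channel, using the example angles from above.

NEUTRAL_ROT_Y = 78.4              # pose captured by Set Neutral

def stored_value(actual):
    """Local value as stored/animated, relative to the neutral pose."""
    return actual - NEUTRAL_ROT_Y

def actual_value(stored):
    """Pose actually applied: the neutral offset plus the stored value."""
    return NEUTRAL_ROT_Y + stored

# A swing from 78.4 to 123.4 degrees is stored as 0 to 45 degrees, and
# Transform > Reset (stored value 0) returns the object to 78.4 degrees.
```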
Transform Setup
The Transform Setup property lets you define a preferred
transformation for an object. When you select that object, its preferred
transformation tool is automatically activated. Of course, you can still
choose a different tool and change transformation options manually if
you want to.
You apply a Transform Setup property by choosing Get > Property >
Transform Setup from any toolbar and then setting all the options. You
can modify the options later by opening the property from the
explorer.
Transform setups are particularly useful when building animation rigs
for characters. If you are using an object to control a character's head
orientation, you can set its preferred transformation to rotation. If you
are using another object to control the character's center of gravity
(COG), you can set its preferred transformation to translation. When
you select the head control, the Rotate tool is automatically activated,
and when you select the COG control, the Translate tool is
automatically activated.
While transform setups are useful for many tasks, like animating a rig,
at other times you don't want the current tool to keep changing as you
select objects. In these cases, you can ignore transform setups for all
objects in your scene by turning off Transform > Enable
Transformation Setups. Turn it back on to resume using the preferred
tool of each object.
Transformations and Hierarchies
Transformations are propagated down hierarchies. Each object's local
position is stored relative to its parent. It's as if the parent's center is the
origin of the child's world.
Basics of Transforming Hierarchies
Objects in hierarchies behave differently when they are transformed,
depending on whether the objects are node-selected or branch-selected.
By default:
• If an object is node-selected, then children with local animation
follow the parent. This is because the local animation values are
stored relative to the parent's center. However, what happens to
non-animated children depends on the ChldComp (Child
Transform Compensation) option on the Constrain panel.
• If an object is branch-selected, then its children are transformed as
well. You can change this behavior by modifying the parent
constraint on the Options tab of the child's Local Transform
property editor.
Child Transform Compensation
The ChldComp option on the Constrain panel controls what happens
to non-animated children if an object is node-selected and
transformed.
• If this option is off, all children with an active parent constraint
follow the parent. You cannot move the parent without moving its
children.
• If this option is on, the children are not visibly affected. Their local
transformations are compensated so that they maintain the same
global position, orientation, and size.
Child Transform Compensation does not affect what happens when a
child has local animation on the corresponding transformation
parameters, nor when the parent is branch-selected.
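The compensation can be sketched for the translation case in plain Python (an illustration of the idea, not the XSI API):

```python
# Sketch: child transform compensation keeps a child's global position
# fixed by adjusting its local translation when the parent moves.

def compensate(child_local, parent_move):
    """New local translation after the parent translates by parent_move."""
    return tuple(child_local[i] - parent_move[i] for i in range(3))

child_local = (2.0, 0.0, 0.0)     # global (5, 0, 0) with the parent at (3, 0, 0)
new_local = compensate(child_local, (1.0, 0.0, 0.0))
# parent is now at (4, 0, 0); local (1, 0, 0) keeps the child at global (5, 0, 0)
```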
Snapping
Snapping lets you align components and objects when moving or adding
them. You can snap to targets like objects, components, and the viewport
grids, or you can snap by increments.
Snapping to Targets
Use the Snap panel to activate snapping to targets:
• Activate or deactivate snapping. Use Ctrl to temporarily toggle the
current state.
• Specify the type of target: points, curves/edges, facets, or the grid.
Right-click to select various sub-types.
• Set a variety of options from the menu.
Incremental Snapping
When translating, rotating, and scaling elements, you can snap
incrementally. Instead of snapping to a target, elements jump in
discrete increments from their current values. This is useful if you want
to move an element by exact multiples of a certain value, but keep it
offset from the global grid.
To snap incrementally:
• Press Shift while rotating or translating an element.
• Press Ctrl while scaling (Shift is used for scaling uniformly).
You can set the Snap Increments using Transform > Transform
Preferences.
The grid used for snapping depends on the manipulation mode:
• Global, Local, Par, Object, and Ref use the Snap Increments set in
the Transform > Transform Preferences. They do not use the
visible floor/grid displayed in 3D views.
• View mode uses the Floor/Grid Setup set in the Camera Visibility
property editor (Shift+s over a specific 3D view, or Display >
Visibility Options (All Cameras)).
• Plane mode uses the Snap Size set in the Reference Plane property
editor.
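Incremental snapping amounts to quantizing the drag to multiples of the increment while preserving the starting offset, as this plain-Python sketch shows (illustration only; not the XSI implementation):

```python
# Sketch: incremental snapping jumps in discrete steps from the starting
# value, preserving any offset from the global grid.

def snap_increment(start, dragged, increment):
    """Quantize the drag delta to whole multiples of `increment`."""
    steps = round((dragged - start) / increment)
    return start + steps * increment

# An object starting at 0.3 with a 1-unit increment lands on 0.3, 1.3, 2.3, ...
# rather than on the grid values 0, 1, 2, ...
```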
Section 4
Organizing Your Data
Working in XSI involves saving and retrieving files
between systems. A typical project in XSI contains
many files that need to be easily accessible to you or
members of your workgroup. XSI provides data
management features, capabilities, and integrations
that help you optimize your production pipeline.
What you’ll find in this section ...
• Where Files Get Stored
• Scenes
• Projects
• Models
• Importing and Exporting
Where Files Get Stored
There are two types of files in SOFTIMAGE|XSI: project files and
application data files.
Project files include scenes as well as any accompanying files such as
texture images, referenced models, cached simulations, rendered
pictures, and so on. They are stored in various subfolders of a main
project folder.
Application data files are not specific to a single project. They include
presets and various customizations you can make or install, such as
commands, keyboard mappings, toolbars, shelves, views, layouts,
plug-ins, add-ons, and so on. The application data files can be stored in
various subfolders at one of three locations:
• User is the location for your personal customizations. Typically, it
is C:\users\username\Softimage\XSI_6.0 on Windows or
~/Softimage/XSI_6.0 on Linux.
• Workgroup is the location for customizations that are shared
among a group of users working on the same local area network.
• Installation (Factory) is the location for presets and sample
customizations that ship with SOFTIMAGE|XSI. It is located in the
directory where the XSI program files are installed. It is not
recommended that you store your own customizations here.
Whenever you use an XSI browser to access files on disk, you can
quickly switch among your project, user, workgroup, and installation
locations using the Paths button.
Setting a Workgroup
Workgroups provide a method for easily sharing customizations
among a group of people working on the same project. Simply set your
workgroup path to a shared location on your local network, and you
can take advantage of any presets, plug-ins, add-ons, shaders, toolbars,
views, and layouts that are installed there.
The workgroup is usually created by a technical director or site
supervisor. To connect to an existing workgroup, choose File > Plug-in
Manager, click the Workgroups tab, click Connect, and specify the
location.
Managing Project Content With NXN alienbrain
If you have the NXN alienbrain integration for XSI, you have seamless
access to the data management and version control options of NXN
alienbrain within the XSI environment. You can work easily on XSI
scenes and referenced assets that are protected and managed by NXN
alienbrain.
Scenes
A scene file contains all the information necessary to identify and
position all the models and their animation, lights, cameras, textures,
and so on for rendering. All the elements of a scene are compiled into a
single file with an .scn extension.
The title bar identifies the name of the current scene and the project in
which it resides.
The File menu contains most of the commands for creating, opening,
and managing scenes:
• A new scene is automatically generated when you start XSI or
create a new project. You can also create a new scene at any time
while you work. Every new scene is created in the active project and
its name appears as "Untitled" in the XSI title bar. Choose Edit >
Delete All from the Edit panel in the main command panel, or
press Ctrl+Delete, to clear the workspace before creating a new
scene.
• Open a scene, or open a recently used scene.
• Save or Save As to update the existing scene or save it to a new
name in the current project.
• Manage scenes and their associated projects using the Project
Manager. You can also create, open, and save scenes to different
projects from here.
• Import and export scenes from and to other 3D or CAD/CAM
programs in the dotXSI™, DirectX, IGES, OBJ, and 3DS formats.
Merging scenes combines objects from any number of XSI scenes.
When you merge a scene into the current scene, it is automatically
loaded as a model. Press the Ctrl key as you drag and drop a scene
(*.scn) file from an external window into a 3D view to merge it as a
model under the scene root.
You can also drag and drop a scene (*.scn) file from an external
window into a 3D view to open the scene. Note that you cannot drag
and drop scenes from external windows on Linux systems.
Choose Preferences > Data Management to set options for backing up,
autosaving, recovering, and debugging your scenes.
When you open a scene file, a temporary "lock" file is created. Anyone
else who opens the file in the meantime must work on a "shared copy",
and any changes to the scene must be saved under a different file name.
The lock file is deleted when you close the scene.
Managing External Files in Scenes
Scenes can reference many external files such as referenced models,
texture images, action sources, and audio clips. Some of these
referenced files may be located outside of your project structure. When
you save a scene, the path information that lets XSI locate and refer to
these external files is saved as well.
As you develop the scene, you'll probably need to perform some
clean-up and management operations on its external files. For
example, you might need to update some paths or locate a missing
image. You can do all this, as well as perform other file management
tasks, using the external files manager. Choose File > External Files to
open it.
In the external files manager, the left pane lets you choose whether to
show all external files used by the scene, or only those used by a
particular model. The grid lists all of the external files for the scene or
model specified in the left pane, of the type specified in the File Type
list. The buttons at the top of the right pane provide various controls
for viewing and managing external files, including refreshing the list.
Selected files are highlighted in green, and files with invalid paths are
highlighted in red.
Displaying Scene Information
You can obtain important statistics for your scene by choosing Edit >
Info Scene from the Edit panel or by pressing Ctrl+Enter. This
information can be helpful when evaluating a scene's complexity for
the purpose of optimization.
Getting and Setting Data in the Scene TOC
Scene files can also be modified via the scene TOC. The scene TOC
(scene table of contents) is an XML-based file that contains scene
information. It has an .scntoc extension, with the same name and in
the same folder as the corresponding scene file.
By default, the scene TOC is created automatically when you save a
scene. When you open a scene file, XSI looks for a corresponding scene
TOC file. If it is found, XSI automatically applies the information it
contains.
This lets you use a text editor or XML editor to change the path for
external files such as referenced models or texture images, change
render options, change the current render pass, and so on.
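Because the scene TOC is plain XML, it can be edited with any XML tooling. A minimal Python sketch, using a hypothetical .scntoc layout (the element and attribute names below are assumptions for illustration, not the documented schema):

```python
# Sketch: rewriting an external file path in a scene TOC with a standard
# XML parser. The <xsi_file>/<ExternalFiles>/<File path="..."> structure
# here is invented for illustration; check a real .scntoc for the schema.
import xml.etree.ElementTree as ET

SCNTOC = """<xsi_file type="SceneTOC">
  <ExternalFiles>
    <File path="C:/old_project/Pictures/wood.jpg"/>
  </ExternalFiles>
</xsi_file>"""

root = ET.fromstring(SCNTOC)
for f in root.iter("File"):
    f.set("path", f.get("path").replace("C:/old_project", "C:/new_project"))

updated = ET.tostring(root, encoding="unicode")
```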
Projects
Projects
In XSI, you always work within the structure of a project. A project is a
system of folders that contain the scenes you build and the external files
referenced by those scenes.
Projects are used to keep your work organized and provide a level of
consistency that can simplify production for a workgroup. A project
can exist locally on your machine or can be shared from a network
drive.
When you open XSI for the first time, an untitled scene is created in the
XSI_SAMPLES factory project. You can set your own project as the
default project that opens with XSI. The project name in the title bar at
the top of the XSI interface is the active project.
Project lists are text-based files with an .xsiprojects file name extension.
You can build, manage and distribute your project lists among
members of your workgroup using the Project Manager.
The Project Manager
The Project Structure
The project manager is a tool for managing
multiple projects and scenes. You can create
new projects and scenes, open
existing projects and scenes,
scan your system for projects,
delete projects, as well as add
and remove projects from the
project list.
A set of subfolders are
created in every new
project folder. They store
and organize the different
elements of your work
such as rendered pictures,
scenes, material libraries,
external action sources,
etc.
Select a project from
your project list.
Scan for projects in a specified
path and add them to the
project list.
Export the list of projects and
have all members of the
workgroup import it.
Sort projects by Name, Origin
(factory [F], user [U], and
workgroup [W]), or none.
Location of your project folder.
Set the default project that opens
automatically when you start XSI.
Set the selected project as the
active project.
Models
Models are like “mini scenes” that can be easily reused in scenes and
projects. They act as a container for objects, usually hierarchies of
objects, and many of their properties. Models contain not just the
objects’ geometry but also the function curves, shaders, mixer
information, groups, and other properties. They can also contain
internal expressions and constraints; that is, those expressions and
constraints that refer only to elements within the model’s hierarchy.
“Club bot” model structure
contains many things that
define the character.
Models and Namespaces
Each model defines its own namespace. This means that each object in
a model’s hierarchy must have a unique name, but objects in different
models can have the same name. For example, two characters in the
same scene can both have chains named left_arm and right_arm if they
are in different models.
All models exist in the namespace of the scene. This means that each
model must have its own unique name, even if it is within the hierarchy
of another model.
Namespaces let you reuse animations that have been stored as actions.
If an action contains animation for one model’s left_arm chain, you
can apply the action to another model and it automatically connects to
the second model’s left_arm. If your models contain elements with
different naming schemes, for example, LeftArm and L_ARM, you can
use connection mapping templates to specify the proper connections.
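The namespace-based retargeting described above can be sketched in plain Python (an illustration of the naming scheme; the `retarget` helper and parameter paths are invented, and this is not the XSI API):

```python
# Sketch: model namespaces let the same object name exist in different
# models, so an action's targets can be remapped by swapping the model
# prefix; `mapping` stands in for a connection mapping template.

def retarget(param_path, target_model, mapping=None):
    """Remap a 'Model.object.param' path to another model, optionally
    renaming the object (e.g. {'left_arm': 'L_ARM'})."""
    model, obj, param = param_path.split(".", 2)
    if mapping:
        obj = mapping.get(obj, obj)
    return ".".join((target_model, obj, param))

retarget("Hero.left_arm.kine.local.rotx", "Sidekick")
# -> "Sidekick.left_arm.kine.local.rotx"
retarget("Hero.left_arm.kine.local.rotx", "Robot", {"left_arm": "L_ARM"})
# -> "Robot.L_ARM.kine.local.rotx"
```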
There are two types of models:
• Local models are specific to a single scene.
• Referenced models are external files that can be reused in many
scenes.
Creating Local Models
To create a model in your scene, select the elements you want it to
contain and choose Create > Model from the Model toolbar.
At this point, the model has its own namespace and its own mixer, so it
can share action sources with other models in the same scene. It can
also be instantiated or duplicated within the same scene. If that's all
you need a model for, you do not need to export and import it.
You can add elements to the model by parenting them to the model
hierarchy. To remove elements, cut them from the hierarchy.
Exporting Models
Use File > Export > Model to export models created in XSI for use in
other scenes. Using models to export objects is the main way of sharing
objects between scenes.
When you export a model, a copy is saved as an independent file. The
file names of exported models have an .emdl extension.
The original model remains in the scene. If you ever need to modify the
model, you can change it in the original scene, and then re-export it
using the same file name. If other scenes use that file as a referenced
model, they will update automatically when you open them. If you
imported the file into another scene as a local model, you must delete
the model from that scene and re-import it from the file to obtain the
updated version.
Importing Local Models
When you import a model locally instead of as a referenced model, its
data becomes part of your scene. It is as if the model was created
directly in the scene—there is no live link to the .emdl file. You can
make any changes you want to the model and its children.
To import a model locally, choose File > Import > Model from the
main menu. You can also drag an .emdl file from a browser or a link on
a Net View page and drop it onto the background of a 3D view. On
Windows, you can also drag an .emdl file from a folder window.
Importing Referenced Models
Referenced models are models that are imported using File > Import >
Referenced Model or converted to referenced using Edit > Model >
Convert to Referenced. Their data is not stored in the scene—it is
referenced from an external .emdl or .xsi file. Changes made to the
external model are reflected in your scene the next time you open the
scene or update the reference.
For example, let’s say that you’re modeling a car that will be used in
various scenes, but the animator needs to start animating with the car
on another computer before you can finish the details. You export the
car as porsche.emdl, which the animator can import into her scene
while you continue your work. Any changes that the animator makes to
the car, such as setting keys or expressions, are automatically stored in
the model’s delta in the scene.
When you’re done modeling the car, you can re-export using the same
file name. Now when the animator loads the scene or updates the
referenced model, all the changes you made are automatically reflected
in the car in her scene. After the model is updated, XSI reapplies the
changes stored in the delta to the model within the animator’s scene.
Referenced models also let you work at different levels of detail. You
can have a low-resolution model for fast interaction while animating, a
medium-resolution model for more accurate previewing, and a
high-resolution model for the final results. Use the Modify > Model
menu on the Model toolbar to set the current resolution, or to
temporarily offload models.
Referenced models are indicated in the explorer by a white man icon.
The default name of this node depends on the name of the external file,
but you can change it if you want. The name of the active resolution
appears in square brackets after the model’s name. The name of a delta’s
target model appears after the delta’s name.
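The delta mechanism described above, where local changes are stored separately and reapplied after the referenced model is updated, can be sketched generically. This is an illustrative Python sketch, not XSI’s actual data model; the parameter names are hypothetical:

```python
def apply_delta(exported_params, delta):
    """Overlay locally stored changes (the delta) onto freshly
    re-imported parameters from the referenced model file."""
    result = dict(exported_params)   # start from the updated export
    result.update(delta)             # reapply local modifications
    return result

# Hypothetical car model: the modeler re-exports with new wheels...
v2 = {"wheel_radius": 0.35, "body_length": 4.2}

# ...while the animator's scene stores her own tweak in the delta.
delta = {"body_length": 4.5}

# The animator keeps her tweak and still receives the new wheels.
print(apply_delta(v2, delta))
```

The key property is that the re-export always wins for untouched values, while local modifications stored in the delta always win for the values they cover.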
Parameters display a white lock icon, but they can still be modified
and animated.
You can change a referenced model’s parameter values, animate them,
apply new properties, and so on. These changes are stored in the clip
and reapplied when the model is updated. There are some changes you
can’t make, such as adding an object to the hierarchy or deleting a
property.
Whatever changes you make, ensure that they are selected in the
clip’s Local Modifications to Save property; otherwise, they will be lost
the next time the model is updated. Not all types of changes are enabled
by default.
Basics • 77
Section 4 • Organizing Your Data
Instantiating Models
An instance is an exact replica of a model. Any type of model can be
instanced. You can create as many instances as you like using the
commands on the Edit > Duplicate/Instantiate menu, and position
them anywhere in your scene. When you modify the original “master”
model, all instances update automatically.
Instances are useful because they require very little memory: only the
transformations of the instance root are stored. However, you cannot
modify, for example, an instance’s geometry or material.
Instantiation has the following advantages:
• Instances use much less disk space than duplicates or clones
because you’re not duplicating the geometry.
• Editing multiple identical objects is very simple because you only
have to edit the original.
• Wireframe, shading, and memory operations are much faster.
Instances are displayed in the explorer with a cyan i superimposed on
the model icon. In the schematic view, they are represented by
trapezoids with the label I.
Instance in the explorer. Instance in the schematic view.
Importing and Exporting
In any production pipeline, you will need to import and export scene
data for reuse in other scenes or software packages.
XSI provides a number of importers and exporters available from the
File > Import and File > Export menus. XSI also supports many other
file types such as audio, video, various graphics and middleware
formats, as well as specialized scene elements such as function curves,
actions, and motion capture data.
Importing and Exporting with Crosswalk
Crosswalk is a set of plug-ins and converters that lets you transfer assets
such as scenes and models between XSI and other programs in your
pipeline. You can access the Crosswalk converters in XSI from File >
Crosswalk. You can download the latest version of Crosswalk from
www.softimage.com to use with XSI or as a standalone.
Collada and dotXSI
You can use Crosswalk in XSI to import and export scenes and models
in Collada (.dae, .xml) and dotXSI (.xsi) formats.
3ds Max and Maya
Crosswalk plug-ins for Maya and 3ds Max allow you to import and
export dotXSI files in those programs. This allows you to share assets
back and forth with XSI.
Crosswalk SDK
You can use the templates and examples provided in the Crosswalk
SDK to create converters to import dotXSI files into your own custom
format, such as for games content.
Importing and Exporting with Point Oven
Point Oven is a suite of plug-ins, available from within XSI, that lets
you simplify your XSI scenes by baking in vertex and function curve
data. These plug-ins also streamline your pipeline by transferring data
between different applications that use Point Oven.
The XSI Point Oven plug-ins let you load and save various types of
data: you can import and export Lightwave Object (LWO2) files, bake
vertices to MDD files, import and export Point Oven scenes (PSC),
export Lightwave scenes (LWS), export Messiah scenes (FXS), and
import MDD files.
You can access the Point Oven plug-ins from the File > Import > Point
Oven and File > Export > Point Oven menus.
Importing and Exporting Obj Files
You can import and export Wavefront Obj files to transfer data back
and forth with other programs that support this format, such as
ZBrush, using File > Import > Obj File and File > Export > Obj File.
Importing and Exporting Other Formats
In addition to the formats explicitly mentioned here, XSI supports a
large number of other formats for scenes, animation, motion capture,
images, and so on.
Section 5
General Modeling
Modeling is the task of creating the objects that you
will animate and render. No matter what type of
object you are modeling, the same basic concepts
and techniques apply. This section explores the
aspects of modeling that aren’t specific to curves,
polygon meshes, or NURBS surfaces.
What you’ll find in this section ...
• Overview of Modeling
• Geometric Objects
• Accessing Modeling Commands
• Starting from Scratch
• Operator Stack
• Modeling Relations
• Attribute Transfer (GATOR)
• Manipulating Components
• Deformations
Overview of Modeling
1. Start with a basic object, such as a primitive cube.
2. Add more subdivisions to work with.
3. Rough out the basic shape of the object.
4. Iteratively refine the object, moving points and adding more detail
where required.
5. Once the modeling is done, the object is ready to be textured and
animated. If changes are necessary, you can still perform modeling
operations on the animated, textured object.
Geometric Objects
By definition, geometric objects have points. The set of these points
and their positions determine the shape of an object and are often
called the object’s geometry. The number of points and how they are
connected is called its topology.
A subdivision surface created from a cube.
No matter what the type of geometry, XSI allows you to select,
manipulate, and deform points in the same way.
Types of Geometry
The main types of renderable geometry in XSI are polygon meshes and
NURBS surfaces. In addition, there are other types of geometry that
you can use for specialized purposes.
Polygon Meshes
Polygon meshes are quilts of polygons joined at their edges and
vertices. One advantage of polygon meshes is that they allow for almost
arbitrary topology—you are not limited to rectangular patches and
you can add extra points for more detail where needed.
A disadvantage of polygon meshes is that they are poor at representing
organic shapes—you may require a very heavy geometry (that is, many
points) to obtain smoothly curved objects. However, you can subdivide
them to create “virtual” geometry that is smoother.
A polygon mesh sphere.
NURBS Surfaces
Surfaces are two-dimensional NURBS (non-uniform rational B-spline)
patches defined by intersecting curves in the U and V directions. In a
cubic NURBS surface, the surface is mathematically interpolated
between the control points, resulting in a smooth shape with relatively
few control points.
The accuracy of NURBS makes them ideal for smooth, manufactured
shapes like car and aeroplane bodies. One limitation of surfaces is that
they are always four-sided.
NURBS surfaces allow for smooth geometry
with relatively few control points.
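The smooth interpolation that gives NURBS their economy of control points can be illustrated with the simplest case, a single uniform (non-rational) cubic B-spline segment. This is a generic math sketch, not XSI’s actual surface evaluator:

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one uniform cubic B-spline segment at t in [0, 1].
    The curve stays near, but does not pass through, the control points."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    # The four basis weights always sum to 1 (partition of unity),
    # which is what keeps the curve inside the control hull.
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Four control points define one smooth 2D segment.
pt = cubic_bspline_point((0, 0), (1, 2), (3, 2), (4, 0), 0.5)
```

Because each point on the curve is a weighted blend of only four nearby control points, moving a single control point reshapes the curve smoothly and locally.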
Curves
In XSI, curves are one-dimensional NURBS of linear or cubic degree.
Cubic curves with Bézier knots can be manipulated as if they were
Bézier curves.
Curves have points but they are not renderable because they have no
thickness. Nevertheless, they have many uses, such as serving as the basis
for constructing polygon meshes and surfaces, acting as paths for objects
to move along, controlling deformations like deform by curve and
deform by spine, and so on.
A simple cubic NURBS curve.
Particles
Particles are disconnected points emitted by a cloud during a
simulation. You can use them to create a variety of effects, such as fire,
water, and smoke.
Hair
Hair objects let you use guide hairs to control a full head of render
hairs. You can style the hairs manually as well as apply a dynamic
simulation.
Lattices
Lattices are a hybrid between geometric objects and control objects.
Although they have points, they do not render and are used only to
deform other geometric objects.
Density
Density refers to the number of points on an object. Part of the art of
modeling is controlling the balance of density. Generally speaking, you
need more density in areas where an object has high detail or needs to
deform smoothly. However, too much density means that an object
will be unnecessarily slow to load, update, and render.
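The cost of density grows quickly because point count scales with the square of the subdivision level on a surface. A generic arithmetic sketch (not XSI’s internal bookkeeping) makes the trade-off concrete:

```python
def grid_point_count(subdiv_u, subdiv_v):
    """A grid with the given number of subdivisions per direction
    has (subdivisions + 1) rows and columns of points."""
    return (subdiv_u + 1) * (subdiv_v + 1)

# Doubling the subdivisions roughly quadruples the point count,
# which is why excess density slows loading, updating, and rendering.
print(grid_point_count(8, 8), grid_point_count(16, 16))
```

This is why the advice above is to concentrate density only where an object has high detail or must deform smoothly.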
Normals
On polygon meshes and surfaces, the control points form bounded
areas. Normals are vectors perpendicular to these closed areas on the
surface and their purpose is to indicate the visible side of the object
and its orientation to the camera. Normals are computed to optimize
shading between surface triangles.
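The “vector perpendicular to the surface” idea can be shown with the standard cross-product construction for a single triangle. This is a generic geometry sketch, not XSI’s shading code:

```python
def triangle_normal(a, b, c):
    """Unit vector perpendicular to triangle (a, b, c),
    following the right-hand rule on the vertex order."""
    ux, uy, uz = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    vx, vy, vz = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    # Cross product of the two edge vectors is perpendicular to both.
    nx, ny, nz = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

# A triangle lying in the XY plane faces +Z; reversing the
# vertex order flips (inverts) the normal.
print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))
```

Reversing the winding order of the vertices negates the result, which is the geometric reason inverting normals flips which side of the surface is considered visible.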
Normals are represented by thin blue lines. To display or hide them,
click the eye icon (Show menu) of a 3D view and choose Normals.
When normals are oriented in the wrong direction, they cause
modeling or rendering problems. You can invert them using Modify >
Surface > Invert Normals or Modify > Poly. Mesh > Invert Normals
on the Model toolbar.
If an object was generated from curves, you can also invert its normals
by inverting one or more of its generator curves with Modify >
Curve > Inverse.
Normals should point toward the camera.
Right
Wrong
Accessing Modeling Commands
The modeling tools can be found, not surprisingly, on the Model
toolbar. In addition, the context menu also contains many of the most
useful modeling commands that apply to the current selection.
Model Toolbar
You’ll find the Model toolbar at the far left of the screen. These
commands are also available from the main menu.
To display the Model toolbar, click the toolbar title and choose Model,
or press 1 at the top of the keyboard. If the Palette or Paint panel is
currently displayed, first click the Toolbar icon or press Ctrl+1.
• Get commands: Create generic elements, including primitive
objects, cameras, and lights (also available on the Animate, Render,
and Simulate toolbars).
• Create commands: Draw new objects or generate them from
existing ones.
• Modify commands: Change an object’s topology or deform its
geometry.
Context Menus
Many modeling commands are available from context menus. The
context menu appears when you Alt+right-click in the 3D views
(Ctrl+Alt+right-click on Linux).
• If you click a selected object, the menu items apply to all selected
objects. On Windows, you can also press the context-menu key
(next to the right Ctrl key on some keyboards).
• If you click an unselected object, the menu items apply only to that
object.
• When components are selected, you can right-click anywhere on
the object that “owns” the selected components. The items on the
context menu apply to the selected components.
• If you click over an empty area of a 3D view, the menu items apply
to the view itself.
Starting from Scratch
When modeling, you need to start somewhere. You can:
• Get a basic shape from the Primitive menu.
• Create text.
• Generate an object from a curve.
Primitives
Primitives are basic shapes like cubes, grids and
spheres. You can add them to a scene and then
modify them as you wish. For example, you can start
with a sphere and move points to create a head. You
can then attach eyeballs and ears to the head and put
the whole head on a model of a character.
There are several different primitive shapes for each geometry type.
Each primitive shape has parameters that are particular to it—for
example, a sphere has a radius that you can specify, a cube has a length,
a cylinder has both height and radius, and so on.
There are also several parameters that are common to all or to several
primitive shapes: Subdivisions, Start and End Angles, and Close End.
Getting Primitives
You add a primitive object to the scene by choosing an option from the
Get > Primitive menu on any of the toolbars at the left of the main
window.
1. Choose Get > Primitive.
2. Choose an item from the submenus:
- Curve displays a submenu from which you can choose an
available NURBS curve shape.
- Polygon Mesh displays a submenu from which you can choose an
available polygon mesh shape.
- Surface displays a submenu from which you can choose an
available NURBS surface shape.
3. Set the parameters as desired. The geometric primitives (curves,
polygon meshes, and surfaces) have certain typical controls:
- The shape-specific page contains the basic characteristics of the
shape. Each shape has different characteristics; for example, a
sphere has one radius and a torus has two.
- The Geometry page controls how the implicit shape is subdivided
when converted into a surface. More subdivisions yield more
points, resulting in greater detail but heavier geometry.
Text
You can create text in XSI, as well as import it from RTF (rich text
format) files. Text is not a type of geometric object in XSI; instead, text
information is immediately converted to curves. After that, the curves
can optionally be converted to planar or extruded polygon meshes.
Creating Text
• Choose one of the following commands from the Model toolbar:
- Create > Text > Curves creates a Text primitive and converts it to
a curve object.
- Create > Text > Planar Mesh creates a Text primitive, converts it
to a curve object, and then converts the curve to a polygon
mesh with the Extrusion Length set to 0. The curve object is
automatically hidden.
- Create > Text > Solid Mesh creates a Text primitive, converts it to
a curve object, and then finally converts the curve to a polygon
mesh with the Extrusion Length set to 0.5 by default. Once again,
the curve object is automatically hidden.
In each case, a property editor is displayed with pages where you can
enter text and font properties, convert text to curves, and optionally
convert the curves to polygon meshes.
Objects from Curves
You can generate polygon meshes and surfaces from curves using the
first group of commands in the Create > Surf. Mesh menu or the
Create > Poly. Mesh menu on the Model toolbar. The commands and
the general procedures on these two menus are the same—the only
difference is the type of object that is created.
1. Select the first input curve, then add the remaining input curves (if
any) to the selection.
Different commands require different numbers of input curves. For
example, Revolution Around Axis requires only one curve, while
Loft allows any number of profile curves to define the cross-section.
You are not limited to curve objects. You can also select curves on
surfaces, including any combination of isolines, knot curves,
boundaries, surface curves, and trim curves. For example, you can
create a loft surface that joins two surface boundaries while passing
through other curves.
2. Choose one of the commands from the first group in the Create >
Surf. Mesh or the Create > Poly. Mesh menu on the Model toolbar.
3. In the property editor that opens, adjust the parameters as desired.
For more information, refer to the XSI Reference by clicking the
? in the property editor.
Example of extruding a curve along another curve:
1 Profile curve, 2 Guide curve.
Operator Stack
The operator stack (also known as the modifier stack or construction
history) is fundamental to modeling in SOFTIMAGE|XSI. Every time
you perform a modeling operation, such as modifying the topology or
applying a deformation, an operator is added to the stack. Operators
propagate their effects upward through the stack, with the output of
one operator being the input of the next. At any time, you can go back
and modify or delete operators in the stack.
Viewing and Modifying Operators
You can view the operator stack of an object in an explorer if Operators
is active in the Filters menu. The operator stack is under the first
subnode of an object in the explorer, typically named Polygon Mesh,
NURBS Surface Mesh, NURBS Curve List, and so on.
For example, suppose you get a primitive polygon mesh grid, apply a
twist, then randomize the surface. The operator stack shows the
operators that have been applied. You can open the property page of
any operator by clicking its icon, and then modify values. Any
changes you make are passed up through the history and reflected in
the final object.
Click the icon to open the property editor. Click the name to select the
operator; then you can press Enter to open the editor, or press Delete
to remove the operator.
For example, you can:
• Change the size of the grid.
• Change the angle, offset, and axis of the twist in Twist Op.
• Change the random displacement parameters in Randomize Op.
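The grid-twist-randomize example can be sketched as a tiny evaluation loop, where each operator’s output feeds the next and editing any operator’s parameters re-propagates through everything above it. This is a generic illustration of the stack idea, not XSI’s actual data model:

```python
import random

def make_grid(size):
    return [(x, 0.0) for x in range(size)]

def twist(points, angle):               # stand-in deformation
    return [(x, y + angle * x) for x, y in points]

def randomize(points, amount, seed=1):  # stand-in deformation
    rng = random.Random(seed)
    return [(x, y + rng.uniform(-amount, amount)) for x, y in points]

# The stack: a generator at the bottom, deformations above it.
stack = [("grid", {"size": 4}),
         ("twist", {"angle": 0.5}),
         ("randomize", {"amount": 0.1})]

def evaluate(stack):
    ops = {"twist": twist, "randomize": randomize}
    points = make_grid(stack[0][1]["size"])
    for name, params in stack[1:]:      # each output feeds the next input
        points = ops[name](points, **params)
    return points

before = evaluate(stack)
stack[1][1]["angle"] = 1.0   # edit an operator lower in the stack...
after = evaluate(stack)      # ...and the change propagates upward
```

The point of the sketch is that the final geometry is never stored directly; it is always recomputed from the base object through the chain of operators, which is why editing an early operator updates everything built on top of it.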
To quickly open the last operator in the selected object’s stack,
press Ctrl+End or choose Edit > Properties > Last Operator in
Stack.
If you modify specific components, then go back earlier in
the stack and change the number of subdivisions, you’ll
probably get undesirable results because the indices of the
affected points have changed.
Construction Modes and Regions
The construction history is divided into four regions: Modeling, Shape
Modeling, Animation, and Secondary Shape Modeling. The purpose
of these regions is to keep the construction history clean and well
ordered by allowing you to classify operators according to how you
intend to use them.
For example, when you apply a deformation, you might be building the
object’s basic geometry (Modeling), or creating a shape key for use
with shape animation (Shape Modeling), or creating an animated
effect (Animation), or creating a shape key to tweak an enveloped
object (Secondary Shape Modeling).
The regions are:
• Modeling: Create the basic shape and topology of an object. Use
Freeze M to freeze this region.
• Shape Modeling: Define shapes for animation.
• Animation: Apply envelopes or other animated deformations.
• Secondary Shape Modeling: Define shapes on top of envelopes,
e.g., muscle bulges.
Here is a quick overview of the workflow for using construction modes:
1. Set the current construction mode using the selector on the main
menu bar.
2. Continue modeling objects by applying new operators. New
deformations (operations that only change the positions of points)
are applied at the top of the current region, and new topology
modifiers (operators that change the number of components) are
always applied at the top of the Modeling region. If you apply a
deformation in the wrong region, you can move it by dragging and
dropping in the explorer.
3. At any time as you work, you can display the final result (the result
of all operators in all regions) or just the current mode (the
result of all operators in the current region and those below it) by
selecting an option from the Construction Mode Display submenu
of the display type menu at the top right of a viewport:
- Result (top) always shows the final result of all operators, no
matter which construction mode is current.
- Sync with construction mode shows the result of the operators in
the current construction region and below.
You can even have different displays in different views so, for
example, you can see and move points in one view in Modeling
mode while you see the results after enveloping and other
deformations in another view.
Changing the Order of Operators
You can change the order of operators in an object’s stack by dragging
and dropping them in an explorer view. You must always drop the
operator onto the operator or marker that is immediately below the
position where you want the dragged operator to go.
Be aware that you might not always get the results you expect,
particularly if you move topology operators or move other operators
across topology operators, because operators that previously affected
certain components may now affect different ones. In addition, some
deformation operators like MoveComponent or Offset may not give
expected results when moved because they store offsets for point
positions whose reference frames may be different at another location
in the stack.
When you try to drag and drop an operator, XSI evaluates the
implications of the change to make sure it creates no dependency cycles
in the data. If it detects a dependency, it will not let you drop the
operator in that location. Moving an operator up often works better
than moving it down—this is because of hidden cluster creation
operators on which some operators depend.
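Why reordering can change the result is easy to see with two simple stand-in operators; the same pair of operations applied in a different order produces a different final position. This is a generic illustration, not XSI code:

```python
def translate(p, d):
    """Offset a point by a vector."""
    return tuple(a + b for a, b in zip(p, d))

def scale(p, s):
    """Scale a point uniformly about the origin."""
    return tuple(a * s for a in p)

point = (1.0, 0.0, 0.0)

# Same two operators, different stack order, different result:
a = scale(translate(point, (1, 0, 0)), 2.0)   # translate first
b = translate(scale(point, 2.0), (1, 0, 0))   # scale first
print(a, b)  # (4.0, 0.0, 0.0) vs (3.0, 0.0, 0.0)
```

Operators that store offsets relative to a reference frame, like the MoveComponent and Offset operators mentioned above, are sensitive to exactly this kind of non-commutativity.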
Collapsing Deformation Operators
Sometimes it is useful to “freeze” certain operators in the stack
without freezing earlier operators that are lower in the stack. For
example, you might have many MoveComponent operators that are
slowing down your scene, but you don’t want to lose an animated
deformation or a generator (if your object has a modeling relation that
you want to keep).
In these cases, you can collapse several deformation operators into a
single Offset operator. The Offset operator is a single deformation that
contains the net effect of the collapsed deformations at the current
frame. Simply select the deformation operators in an explorer and
choose Edit > Operator > Collapse Operators.
Freezing the Operator Stack
When you are satisfied with an object, you can freeze all or part of its
operator stack. This removes the current history—as a result, the
object requires less memory and is quicker to update. However, you
can no longer go back and change values.
• To freeze the entire stack, select the object and click Freeze on the
Edit panel.
• To freeze just the modeling region, select the object and click
Freeze M.
• To freeze from a specific operator down, select the operator in an
explorer and click Freeze.
• Freezing removes any animation on the modeling operators (such
as the angle of a Twist deformation). The values at the current
frame are used.
• For hair objects, the Hair Generator and Hair Dynamics operators
are never removed.
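The “net effect” idea behind collapsing several point-offset deformations into a single Offset can be sketched generically. This is an illustrative Python sketch of the concept, not XSI’s Offset operator itself:

```python
# Several move-component style edits, each stored as a per-point offset,
# keyed by point index.
edits = [
    {0: (0.1, 0.0, 0.0), 2: (0.0, 0.2, 0.0)},
    {0: (0.0, 0.3, 0.0)},
    {1: (0.5, 0.0, 0.0), 2: (0.0, 0.0, -0.1)},
]

def collapse(edits):
    """Sum all offsets per point into one net-offset 'operator'."""
    net = {}
    for edit in edits:
        for idx, (dx, dy, dz) in edit.items():
            ox, oy, oz = net.get(idx, (0.0, 0.0, 0.0))
            net[idx] = (ox + dx, oy + dy, oz + dz)
    return net

# One operator now replaces three, with the same combined effect.
print(collapse(edits))
```

Replacing many small deformations with one combined offset is exactly why collapsing speeds up a heavy stack without discarding the operators below it.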
Modeling Relations
When you generate an object from other objects, a modeling relation is
established. For example, if you create a surface by extruding one curve
along another curve, the resulting surface is linked to its generator
curves. If you modify the curves, the surface updates automatically.
The modeling relation is sometimes called construction history.
You can modify the generated object in any way you like, for example, by
moving points or applying a deformation. When you modify the
generators, the generated object is updated while any modifications you
have made to it are preserved.
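The behavior described above, where the generated object is re-derived from its generators while your own tweaks are preserved, can be sketched generically. This is an illustrative Python sketch, not XSI’s modeling-relation machinery:

```python
def extrude(profile, depth):
    """Toy generator: turn 2D profile points into a 3D 'surface'."""
    return ([(x, y, 0.0) for x, y in profile] +
            [(x, y, depth) for x, y in profile])

def evaluate(profile, depth, tweaks):
    """Re-derive the surface from its generator, then reapply the
    user's local point edits (stored like operators above the generator)."""
    points = extrude(profile, depth)
    for idx, (dx, dy, dz) in tweaks.items():
        x, y, z = points[idx]
        points[idx] = (x + dx, y + dy, z + dz)
    return points

profile = [(0, 0), (1, 0)]
tweaks = {0: (0.0, 0.5, 0.0)}           # a point the user moved

v1 = evaluate(profile, 2.0, tweaks)
profile[1] = (1, 1)                     # edit the generator curve...
v2 = evaluate(profile, 2.0, tweaks)     # ...the surface updates, tweak kept
```

The generated object is never a frozen copy; it is a live function of its inputs, with local modifications layered on top.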
If you delete the input objects, the generated object is
removed as well. To avoid this, freeze the generated object or
at least the generator operator before deleting the inputs. If
you use the Delete button in the Inputs section of the
generator’s property editor, the generator is automatically
frozen first.
You can display the modeling relations:
• In a 3D view, click the eye icon (Show menu) and make sure that
Relations is on.
• In a schematic view, make sure that Show > Operator Links is on.
Modeling relation: the road was created by extruding a cross-section
along a guide. When the original guide was deformed into a loop, the
road was updated automatically.
If the selected object has a modeling relation, it is linked to its input
objects by lines. A label on the line identifies the type of relation (such
as wave or revolution) and the name of the input object. You can click
the line to select the corresponding operator.
Attribute Transfer (GATOR)
You can transfer and merge clusters with properties from object to
object. The cluster properties that you can transfer in this way include
materials, texture UVs, vertex colors, property weight maps, envelope
weights, and shape animation.
Attributes can be transferred in two ways:
• If you are generating a polygon mesh object from others, for
example using Merge or Subdivision, use the controls in the
generator’s property editor to transfer attributes from the input
objects to the generated objects.
• Otherwise, select the target object, choose Get > Property > GATOR,
pick one or more input objects, and right-click to end the picking
session. You can use any combination of polygon meshes and NURBS
surfaces.
Transfer and merge surface attributes, animation attributes, or
specific attributes manually.
Manipulating Components
Tweak Component is the main tool for moving components. It allows
you to translate, rotate, and scale points, polygons, and edges. You can
use it in two ways:
• Click and drag components for fast, uninterrupted interaction.
• Select a component and then use the manipulator for more
controlled interaction.
To use the Tweak Component tool
1. Select a geometric object.
2. Activate the Tweak Component tool by pressing m or choosing
Modify > Component > Tweak Component Tool from the Model
toolbar.
Note that if a curve is selected, pressing m activates the Direct
Manipulation tool instead. However, you can still use Tweak
Component with curves by choosing it from the toolbar menu.
3. Move the mouse pointer over the object in any geometry view. As
the pointer moves, the component under the pointer is
highlighted.
The Tweak Component tool will not highlight backfacing
components, or components that are occluded by parts of the same
object. When there are multiple types of components within the
picking radius, priority is given first to points, then to edges, and
finally to polygons.
4. Do one of the following:
- Click+drag to perform a simple transformation on the
highlighted component. If all axes are active on the Transform
panel, translation occurs in the viewing plane and scaling is
uniform in local space. If one or more axes have been toggled off,
translation and scaling use the current manipulation mode and
active axes set on the Transform panel. For example, to translate
along a point’s normal, activate Local and the Y axis only.
Rotation uses the current manipulation mode and the Y axis by
default, but you can select a different axis by deactivating the
others.
- Click and release the mouse button to select the highlighted
component. A manipulator appears (unless you’ve toggled it off).
You can use the manipulator to transform the selection, or if you
prefer you can first modify the selection, change the pivot, and set
other options.
5. The Tweak Component tool remains active, so you can repeat steps
3 and 4 to manipulate other components.
When you have finished, deactivate the tool by pressing Esc,
pressing m again, or activating a different tool.
The Tweak Component tool uses the Ctrl, Shift, and Alt modifier
keys with the left and middle mouse buttons to perform different
functions—look at the mouse/status line at the bottom of the XSI
window for brief descriptions, or read the rest of this section for the
details. The right mouse button opens a context menu.
Switching between Translation, Rotation, and Scaling
The Tweak Component tool lets you translate, rotate, or scale
components. Select the desired transformation using the v, c, and x
keys—press and release a key to change the transformation (sticky
mode) or press and hold a key to temporarily override the current
transformation (supra mode).
• To translate, press v or choose Translate from the context menu.
Drag the center to translate freely in the viewing plane, or drag an
axis to translate in the corresponding direction.
• To rotate, press c or choose Rotate from the context menu. Drag an
axis to rotate in the corresponding direction.
• To scale, press x or choose Scale from the context menu. Drag the
center to scale uniformly, or drag an axis to scale in the
corresponding direction.
The mouse pointer updates to reflect the current action. You can also
press Tab to cycle through the three actions, or Shift+Tab to cycle in
reverse order.
To activate the standard Translate, Rotate, or Scale tools, you must
either deactivate the Tweak Component tool before pressing v, c, or x,
or use the t, r, or s buttons on the Transform panel.
Setting Manipulation Modes
The Tweak Component tool uses the manipulation modes shown on
the Transform panel. They affect the axes and pivot used for the
transformation.
• Global transformations are performed along the scene’s global
axes.
• Local transformations use the component’s own reference frame.
In this mode, Y is the normal direction.
• View transformations are performed with respect to the viewing
plane of the 3D view.
• Object transformations are performed in the local coordinate
system of the object that contains the components.
• Ref, or reference, mode lets you transform elements using another
component or object as the reference frame. See Setting the Pivot on
page 96.
• Plane mode is similar to Ref. It uses the same axes as Ref but the
object center as the pivot.
Activating Axes
You can activate or deactivate axes on the Transform panel:
• Click an axis icon to activate it and deactivate the others.
• Shift+click an axis icon to activate it without affecting the others.
• Ctrl+click an axis icon to toggle it.
• Click the All Axes icon to activate all three axes.
• Ctrl+click the All Axes icon to toggle all three axes.
Alternatively, if the Tweak manipulator is displayed, you can activate a
single axis by double-clicking it. Double-click the same axis again to
reactivate all axes, or double-click a different one to activate it instead.
Selecting Components
The Tweak Component tool lets you select components in a similar
way to the standard selection tools, but there are some differences.
Selecting, Deselecting, and Extending the Selection
Use the following keyboard and mouse combinations for selection:
• Click a component to select it.
• Shift+click a component to add it to the selection.
• Shift+middle-click to toggle-select a component.
• Ctrl+Shift+click to deselect a component.
• To quickly deselect all components, click anywhere outside the
object.
Note that you can only multi-select components of the same type. You
cannot select a heterogeneous collection of points, edges, and
polygons.
Selecting Loops and Ranges
Use the Alt key to select loops or ranges of components.
To select loops or ranges of components
1. Click to select the first or “anchor” component.
2. Do one of the following:
- Alt+click on a second component to select all components on a
path between the two components.
- Alt+middle-click on a second component to select all
components in the loop that contains both components.
3. To select additional loops or ranges, use Shift+click to specify a new
anchor and then Alt+Shift+click for a new range or
Alt+Shift+middle-click for a new loop.
Note that for edge loops, the direction is implied, so you can simply
Alt+middle-click on an edge to select the loop and then
Alt+Shift+middle-click to select additional loops. However, to select
parallel edge loops, you still need to specify two components as
described above.
Selecting by Type
The Tweak Component tool allows you to manipulate points, edges,
and polygons, but you can limit it to a particular type of component if
you desire. Use the context menu to activate Tweak All, Points, Edges,
Polygons, or Points + Edges.
Setting the Pivot
You can quickly set the pivot by middle-clicking on a component. For
example, to rotate a polygon about one of its edges, simply click to
select the polygon and then middle-click to specify the edge as the
reference. The manipulator does not react to middle-clicks unless Shift
is pressed, so you can pick a component even if the manipulator is
covering it in a view.
96 • SOFTIMAGE|XSI
Middle-clicking temporarily switches to Ref manipulation mode. As
soon as you select a new component, the previous manipulation mode is
restored. If you want to transform several components about the same
reference one after another, you should manually switch to Ref mode and
then middle-click to specify the reference. In this way, the reference
frame does not revert to the default when you select a new component to
manipulate.
Manipulating Components
Using Proportional Modeling
When you manipulate points, edges, and polygons, you can use
proportional modeling. When this option is on, neighboring
components are affected as well, with a falloff that depends on distance.
Proportional modeling is sometimes known as “magnet” or “soft
selection”.
(Figure: the same edit with proportional modeling off and on.)
To activate proportional modeling, click the Prop button on the
Transform panel.
Components that are affected by the proportional falloff are
highlighted, and the Distance Limit is displayed as a circle. You can
change the Distance Limit interactively when proportional modeling is
active by pressing and holding r while dragging the mouse left or right.
To change other proportional settings, right-click on Prop.
Sliding Components
You can slide components with the Tweak Component tool. This helps
to preserve the contours of objects as you tweak them.
Sliding an edge moves its endpoints along the adjacent edges by an
equal percentage. Sliding a point or a polygon clamps the associated
points to the nearest location on the surface of the mesh, as if they had
been shrinkwrapped to the original untweaked object. Sliding works
only on polygon mesh components.
(Figure: a selected edge loop, the effect of sliding, and the effect of
ordinary translation for comparison.)
To activate or deactivate sliding:
• While the Tweak Component tool is active, do one of the following:
- Press j. Press and release the key to toggle sliding on or off (sticky
mode) or press and hold it to temporarily override the current
behavior (supra mode).
- Click the on-screen Slide Components button at the bottom of
the view.
- Right-click and choose Slide Components.
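The distance-based falloff behind proportional modeling can be sketched in a few lines of Python. This is an illustration only, not XSI's implementation: the linear falloff profile and the function names are assumptions for this sketch.

```python
import math

def proportional_weights(points, picked, distance_limit):
    """Return a falloff weight for every point: 1.0 at the picked point,
    fading linearly to 0.0 at the Distance Limit."""
    px, py, pz = points[picked]
    weights = []
    for (x, y, z) in points:
        d = math.dist((x, y, z), (px, py, pz))
        # Points beyond the Distance Limit are unaffected.
        weights.append(max(0.0, 1.0 - d / distance_limit))
    return weights

def translate_proportionally(points, picked, offset, distance_limit):
    """Translate the picked point and drag its neighbors along with falloff."""
    w = proportional_weights(points, picked, distance_limit)
    return [(x + offset[0] * wi, y + offset[1] * wi, z + offset[2] * wi)
            for (x, y, z), wi in zip(points, w)]
```

For example, with a Distance Limit of 2, a point one unit away from the picked point receives half the translation, and a point three units away does not move at all.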
Snapping
You can use the Ctrl key to snap while using the Tweak Component
tool:
• Press Ctrl to toggle snapping to targets on or off (depending on its
current setting on the Snap panel) while translating.
• Press Ctrl to snap by increments while scaling.
For more information about snapping options, see Snapping on
page 70.
Welding Points
You can interactively weld pairs of points on polygon meshes while
using the Tweak Component tool. Welding merges points into a
single vertex.
To weld points
1. While the Tweak Component tool is active, toggle Weld Points on
by doing one of the following:
- Press l. Press and release the key to toggle welding on or off
(sticky mode) or press and hold it to temporarily override the
current behavior (supra mode).
- Click the on-screen Weld Points button at the bottom of the view.
- Right-click and choose Weld Points.
2. Click and drag a point. As you move the mouse pointer, the point
snaps to points within the region.
3. Release the mouse button over the point you want to weld to.
Note that interactive welding uses the same snapping region size as
the Snap tool. You can modify the region size using the Snap menu
or Ctrl+mouse-wheel.
4. Repeat steps 2 and 3 to weld more points, if desired. When you
have finished welding, toggle Weld Points off.
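Conceptually, a weld merges two vertices into one by remapping every polygon that referenced the removed vertex. The sketch below uses hypothetical helper names and is not XSI's API; the real weld also merges attributes and removes fully degenerate polygons.

```python
def weld_points(points, polygons, source, target):
    """Merge point `source` into point `target`: the source position is
    dropped, and every polygon index is remapped accordingly."""
    remap = {}
    new_points = []
    for i, p in enumerate(points):
        if i == source:
            continue
        remap[i] = len(new_points)
        new_points.append(p)
    # The source vertex now resolves to the target's new index.
    remap[source] = remap[target]
    new_polys = []
    for poly in polygons:
        mapped = [remap[i] for i in poly]
        # Collapse the repeated vertex that the merge may create,
        # so no point is used twice in the same polygon.
        dedup = []
        for v in mapped:
            if not dedup or dedup[-1] != v:
                dedup.append(v)
        if len(dedup) > 1 and dedup[0] == dedup[-1]:
            dedup.pop()
        new_polys.append(dedup)
    return new_points, new_polys
```

Welding one corner of a quad into an adjacent corner, for instance, leaves a triangle.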
Hiding the Manipulator
If you don’t like working with the
manipulator, you can hide or unhide it by
Toggle Manipulator button
clicking the on-screen button at the bottom
of the view or by choosing Toggle
Manipulator from the context menu.
When the manipulator is off, the Tweak Component tool is always in
click-and-drag mode:
• If all axes are active on the Transform panel, translation occurs in
the viewing plane and scaling is uniform in local space. If one or
more axes have been toggled off, translation and scaling use the
current manipulation mode and active axes set on the Transform
panel.
• Rotation uses the current manipulation mode and the Y axis by
default, but you can select a different axis by deactivating the
others.
Manipulating Components Symmetrically
Symmetrical manipulation lets you move points and other components
while maintaining the symmetry of an object. Any manipulation
performed on components on one side is mirrored to the corresponding
components on the other side. Components that lie directly on the plane
of symmetry are “locked down”; they can be translated or moved only
along the plane of symmetry itself.
There are two ways to do this in XSI:
• To move components symmetrically in “live” mode, simply activate
Sym on the Transform panel. XSI automatically finds symmetrical
components (within a small tolerance) and moves them, too.
• If you will need to maintain a correspondence between points even
after an object is no longer symmetrical, you first need to apply a
symmetry map (Get > Property > Symmetry Map) while the
object is still symmetrical. This allows you to manipulate
components symmetrically after a character has been enveloped
and posed, for example.
To specify the plane of symmetry or set other options, right-click
on Sym.
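The matching step of "live" symmetry can be sketched as a search for the mirrored counterpart of each point within a tolerance. The YZ plane of symmetry, the tolerance value, and the function names below are assumptions for this illustration, not XSI's implementation.

```python
def symmetrical_counterpart(points, index, tolerance=1e-4):
    """Find the index of the point mirrored across the YZ plane
    (x -> -x), or None if there is no match within the tolerance."""
    x, y, z = points[index]
    mirrored = (-x, y, z)
    for i, p in enumerate(points):
        if all(abs(a - b) <= tolerance for a, b in zip(p, mirrored)):
            return i
    return None

def translate_symmetrically(points, index, offset):
    """Move a point and mirror the move onto its counterpart."""
    ox, oy, oz = offset
    j = symmetrical_counterpart(points, index)
    pts = list(points)
    x, y, z = pts[index]
    if j == index:
        # The point lies on the plane of symmetry: it is its own
        # counterpart, so only the in-plane part of the move applies.
        pts[index] = (x, y + oy, z + oz)
        return pts
    pts[index] = (x + ox, y + oy, z + oz)
    if j is not None:
        mx, my, mz = pts[j]
        pts[j] = (mx - ox, my + oy, mz + oz)
    return pts
```

Note how a point on the plane of symmetry is "locked down": the x component of the offset is simply discarded.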
Alternatives to the Tweak Component Tool
In addition to the Tweak Component tool, XSI provides many other
ways to manipulate components. For example, you could use the
regular selection and transformation tools, or some of the other tools
on the Modify > Component menu.
Deformations
Deformations are operators that change the shape of geometric objects.
XSI provides a large variety of deformation types, available from the
Modify > Deform menu of the Model and Simulate toolbars as well as
the Deform > Deform menu of the Animate toolbar.
Some deformations, like Bend and Twist, are very simple. Others, like
Lattice and Curve, use additional objects to control the effect.
Deformations can be used either as modeling tools or animation tools.
Depending on the type of deformation, you can animate the
deformation’s own parameters, such as the amplitude of a Push, or the
properties of a controlling object, such as the center of a Wave.
Examples of Deformations
Here are just some examples of the many types of deformation and
their possible uses.
(Figures: a Lattice deformation; circular and planar Wave
deformations; deformation by curve, showing the object and curve
before and after the deformation is applied.)
Muting Deformations
All deformations can be muted, which temporarily disables their effect.
To mute a deformation, activate Mute in its property editor.
Alternatively, right-click on its operator in an explorer and choose Mute.
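To make the operator idea concrete, here is a minimal Twist-style deformation written as a point-by-point function. The choice of the Y axis and the linear angle-per-height profile are assumptions for this sketch; it is not XSI's implementation.

```python
import math

def twist(points, angle_per_unit_y):
    """Rotate each point around the Y axis by an angle (in degrees)
    proportional to its height, like a simple Twist deformation."""
    out = []
    for x, y, z in points:
        a = math.radians(angle_per_unit_y * y)
        out.append((x * math.cos(a) - z * math.sin(a),
                    y,
                    x * math.sin(a) + z * math.cos(a)))
    return out
```

Animating the angle parameter over time would animate the deformation, which is exactly the "animate the deformation's own parameters" case described above.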
Section 6
Curves
XSI provides a full set of tools for creating and
editing curves in 3D space. Although they can’t be
rendered by themselves, curves form the basis for a
lot of modeling and animation techniques.
What you’ll find in this section ...
• About Curves
• Drawing Curves
• Manipulating Curve Components
• Modifying Curves
• Inverting Curves
• Importing EPS Files
About Curves
In SOFTIMAGE|XSI, you can use curves:
• To build objects, for example, by revolving, extruding, or using
Curves to Mesh.
• To deform objects, for example, using curve or spine deformations.
• As paths and trajectories for animation.
Curves are linear (degree 1) or cubic (degree 3) NURBS (Non-Uniform
Rational B-Splines). NURBS are a class of curves that computers can
easily manipulate, allowing for a great deal of flexibility in modeling.
Curve Components
Curves have many components. You can display these components
using the options on a viewport’s Show menu (eye icon) and select
them using the filters on the Select panel.
(Figures: a linear curve, and cubic curves with knots of multiplicity 1,
2, and 3 (Bézier). Knots lie on the curve. NURBS boundaries show the
beginning of the curve (U = 0). Cubic curves are interpolated between
points. Segments are the spans between knots. Hulls join points.)
On a cubic curve, each knot can have a multiplicity of 1, 2, or 3. This
value refers to the number of control points associated with the knot. In
general, knots with higher multiplicity are less smooth but provide
more control over the trace of the curve. A knot with multiplicity 3 is
like a Bézier point, with one control point at the position of the knot
and the other two control points acting as the tangent handles.
The Tweak Curve tool allows you to manipulate these knots in a
Bézier-like manner; see Manipulating Curve Components on page 105.
Whether the back and forward tangents remain aligned depends on
how you manipulate them; it is not a property of the knot itself.
(Figure: broken tangents create a sharp corner.)
Bézier knots also allow you to create straight segments by rotating the
tangents to point at adjacent knots, so that four control points are lined
up in a row. Again, whether the control points remain lined up depends
on how you manipulate the adjacent knots; it is not a property of the
segment. See Drawing a Combination of Linear and Curved Segments on
page 104.
(Figure: four control points create a straight segment when they are
lined up.)
Drawing Curves
SOFTIMAGE|XSI has tools and commands that let you draw and
manipulate curves in a variety of ways.
In XSI, you can draw and manipulate two types of curve: linear and
cubic. Linear curves are composed of straight segments, and cubic
curves are composed of curved segments.
You can draw cubic or linear curves by clicking to place control points
or to place knots. Use one of the following commands from the
Create > Curve menu of the Model or Animate toolbar:
• Draw Linear allows you to draw lines of connected straight
segments (sometimes called polylines). The straight segments meet
at the locations you click.
• Draw Cubic by CVs allows you to place control points (also known
as control vertices or CVs). The curve does not pass through the
locations you click but is a weighted interpolation between the
control points. As you add more points, the existing knot positions
may change but the point positions do not.
• Draw Cubic by Bézier-Knot Points allows you to place knots of
multiplicity 3. The curve passes through the points you click. As
you add more knots, the positions of the control points are
automatically adjusted to ensure maximum smoothness of the
curve as it passes through the existing knot positions.
• Draw Cubic by Knot Points allows you to place knots of
multiplicity 1. Again, the curve always passes through the locations
you click, and the positions of the control points are automatically
adjusted as you add more knots.
To add points or knots to an existing curve, use the corresponding
commands on the Modify > Curve menu. To remove points or knots,
select them and press Delete.
The choice between linear, cubic Bézier, and cubic non-Bézier drawing
tools depends on the situation. When creating profiles for modeling,
linear curves give a good sense of the final result. For paths, you’ll want
cubic curves: non-Bézier curves are smoother, but you may find
Bézier curves easier to control. Bézier curves also give you the ability to
have sharp corners, and to mix curved and straight segments. The
choice between placing control points or placing knots to draw cubic
non-Bézier curves is simply a matter of personal preference.
While drawing a curve:
• To add a point at the end of the curve, use the left mouse button.
• To add a point between two existing points, use the middle
mouse button.
• To add a point before the first point, first right-click and choose
LMB = Add at Start and then use the left mouse button. To return
to adding points at the end of the curve, first right-click and choose
LMB = Add at End.
• Other useful commands are available on the context menu when you
right-click: Open/Close, Invert, Start New Curve, and, of course, Exit
Tool.
Before you release the mouse button, you can drag the mouse to adjust
the point’s location. Snapping can also be very useful for controlling
the position of points and knots. While drawing, you can move any
point or knot by pressing and holding m while dragging to activate the
Tweak Curve tool in supra mode.
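A knot of multiplicity 3 corresponds to a standard cubic Bézier segment, with the knots at the ends and the two middle control points acting as tangent handles. A minimal evaluation sketch in the usual Bernstein form (an illustration of the math, not XSI code):

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate one cubic Bézier segment at parameter t in [0, 1].
    p0 and p3 are the knots; p1 and p2 are the tangent handles."""
    s = 1.0 - t
    # Cubic Bernstein basis weights.
    b = (s**3, 3 * s**2 * t, 3 * s * t**2, t**3)
    return tuple(b[0] * a + b[1] * c + b[2] * d + b[3] * e
                 for a, c, d, e in zip(p0, p1, p2, p3))
```

Note that when the four control points are lined up, every evaluated point lies on that line, which is why lined-up control points produce a straight segment.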
If you will be using curves as profiles for modeling, you should
draw them in a counterclockwise direction. This ensures that
the normals of any surface or polygon mesh you create from the
curves will be oriented correctly. If you will be using curves as
paths for animation or extruding, you should draw them from
beginning to end. Otherwise, you may need to invert the curves
or generated objects later.
Drawing a Combination of Linear and Curved Segments
Although XSI does not support having linear and cubic NURBS
segments in the same subcurve, you can use Bézier knots to obtain
straight segments on a cubic curve:
• If you have already begun drawing a linear curve, make it cubic
using Modify > Curve > Raise Degree and then use Modify >
Curve > Add Point Tool by Bezier-Knot Points to draw curved
sections. Press Shift while adding knots to preserve the existing
trace if you want the last-drawn segment to remain straight.
• If you have already begun drawing a cubic curve, place the knots
where you want them and then straighten the desired segments as
described in Creating Straight Segments on page 107.
Straight segments are not inherently linear. Whether they remain
straight depends on how you manipulate them. Using the Tweak Curve
tool to move a knot preserves the linearity, but it will break if you move
a tangent or use another tool.
Setting Knot Multiplicity
You can change the multiplicity of a knot to suit your needs. For
example, reducing the multiplicity makes a curve smoother, but
increasing the multiplicity to 3 allows you to use Bézier controls and
make sharp angles.
1. Select one or more knots on a cubic curve. To affect all knots on
one or more curves, select the curve objects instead.
2. Choose one of the following commands from the Modify > Curve
menu of the Model toolbar:
- Make Knots Bezier sets the multiplicity of the selected knots to 3.
- Make Knots Non-Bezier sets the multiplicity of the selected knots
to 1.
- Set Knots Multiplicity opens the Set Crv Knot Multiplicity Op
property editor, where you can set the multiplicity of the selected
knots to 0, 1, 2, or 3. Setting it to 0 is equivalent to removing the
knot.
Manipulating Curve Components
The main tool for manipulating curve components is Tweak Curve.
It allows you to manipulate curves in a Bézier-like manner. In addition
to Bézier knots, you can manipulate non-Bézier knots, control points,
and isopoints.
1. Select a curve and activate the Tweak Curve tool by pressing m or
choosing Modify > Curve > Tweak Curve from the Model toolbar.
Note that pressing m when a curve is not selected will activate the
Tweak Component tool instead.
2. As you move the mouse pointer close to a knot, the manipulator
jumps to it. Click and drag the manipulator’s handles to adjust the
knot’s position, tangent angle, or tangent length.
Handle on a Bézier knot:
• Drag the round handle to rotate the tangent without changing its
length. Use the middle mouse button to rotate one side
independently. If the handles have been broken and you want to
maintain their relative angle while rotating them, right-click on the
manipulator and choose LMB Binds Broken Tangents.
• Drag the square handle to move the tangent freely. Use the middle
mouse button to drag one side independently. Once the tangent is
broken in this way, the handles always move independently until
you align them again.
• Shift+drag to scale the tangent length without affecting the slope.
Again, use the middle mouse button to scale one side independently.
• Drag the central knot to move it freely. The tangent handles
maintain their relative positions to the knot, unless an adjacent
segment is linear (four control points lined up). In that case, the
tangent handles are automatically adjusted to maintain the
linearity of the segment.
• Use the middle mouse button to drag the central knot while
leaving the tangent points in place.
Handle on a non-Bézier point:
• Drag the round handle to rotate the tangent without changing
its length.
• Drag the square handle to move the tangent freely. Press Shift to
scale the tangent length without affecting the slope.
• Drag the knot (or isopoint) to move it freely.
• Drag a control point to move it and affect the trace of the
curve indirectly.
You can also:
- Click and drag a control point to move it to a new location.
- Select an isopoint by clicking on a curve segment between knots.
A manipulator appears at the isopoint. To select an isopoint that
is very close to a knot, you can click on the curve farther away
and then slide the mouse pointer closer before releasing the
button.
- Right-click on a knot or isopoint manipulator to access a context
menu containing commands that affect that point, as well as
other tool options.
Note that if you right-click on a selected knot (or on another part
of the curve while knots are selected), the context menu is
different (although many of the same items are available on both
menus). In this case, the commands apply to all selected knots
and not just the one under the mouse pointer.
- Click and drag a rectangle across one or more knots to select
them. Use Shift to add to the selection, Ctrl to toggle, or
Ctrl+Shift to deselect. This allows you to apply commands to
multiple selected knots using the context menu or the Modify >
Curve menu.
3. The Tweak Curve tool remains active, so you can repeat step 2 as
often as you like. When you have finished, exit the tool by pressing
Esc, pressing m again, or activating a different tool.
Breaking and Aligning Bézier Tangents
On a Bézier knot, the back and forward tangents can have different
orientations. When the tangents are “broken” or “unlinked” in this
way, the result is a sharp corner.
(Figure: broken tangents compared to aligned tangents.)
Breaking Tangents
To break Bézier tangents and adjust the handles independently of each
other, use the middle mouse button while using the Tweak Curve tool.
Note that if you move an isopoint that is adjacent to Bézier knots, the
tangents will break. If desired, first add a Bézier knot at the isopoint’s
location to preserve continuity.
Aligning Tangents
After tangent handles have been broken, they can be realigned to make
the curve smooth again at that point. Select one or more Bézier knots
and choose one of the following commands from the Modify > Curve
menu on the Model toolbar:
• Align Bezier Handles sets the slopes of both tangents to their
average orientation.
• Align Bezier Handles Back to Forward sets the slope of the back
tangent equal to that of the forward tangent.
• Align Bezier Handles Forward to Back sets the slope of the forward
tangent equal to that of the back tangent.
“Back” and “forward” are considered in terms of the curve’s
parameterization from start to end point.
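The averaging step can be sketched as plain 2D vector math. On a smooth knot the back tangent points opposite the forward one, so the back direction is flipped before averaging; the function name and the 2D restriction are assumptions for this sketch, not XSI's implementation.

```python
import math

def align_bezier_handles(back, forward):
    """Return new back/forward tangents that share one slope (their
    average orientation) while keeping each tangent's original length."""
    lb = math.hypot(*back)
    lf = math.hypot(*forward)
    # Average the forward direction with the flipped back direction.
    dx = forward[0] / lf - back[0] / lb
    dy = forward[1] / lf - back[1] / lb
    # n is zero only in the degenerate case where both tangents
    # point the same way; that case is not handled in this sketch.
    n = math.hypot(dx, dy)
    ux, uy = dx / n, dy / n
    return (-ux * lb, -uy * lb), (ux * lf, uy * lf)
```

After alignment the two handles lie on one line through the knot, which is exactly what makes the curve smooth again at that point.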
Creating Straight Segments
You can create straight segments on curves using the commands
available on the Modify > Curve menu of the Model toolbar, or on the
context menu of the Tweak Curve tool. XSI creates Bézier knots, if
necessary, and rotates the appropriate tangents to point at the adjacent
knots. Once a straight segment has been created this way, the Tweak
Curve tool maintains the linearity when you move the adjacent knots.
However, the segment will revert to a curve if you adjust the tangent
handles, or if you use a different tool to move control points.
To straighten segments adjacent to a knot
1. Select a curve.
2. Activate the Tweak Curve tool (press m).
3. Move the mouse pointer over an unselected knot.
4. Right-click and choose one of the following commands from the
context menu:
- Make Adjacent Knot Segments Linear straightens both segments
connected to the knot.
- Make Fwd Knot Segment Linear straightens the forward
segment.
- Make Bwd Knot Segment Linear straightens the back segment.
“Back” and “forward” are considered in terms of the curve’s
parameterization from start to end point.
To straighten segments between knots
1. Select the knots at both ends of each segment you want to
straighten. You must do this individually for each segment you
want to straighten, even if segments are consecutive.
2. Choose Modify > Curve > Make Knot Segments Linear from the
Model toolbar. The segments between selected knots become
straight.
Alternatives to the Tweak Curve Tool
In addition to the Tweak Curve tool, XSI provides many other ways to
manipulate components. For example, you could use the regular
selection and transformation tools, or some of the other tools on the
Modify > Component menu.
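The condition behind a straight segment is simple collinearity of the four control points with the chord between the knots. A 2D sketch (hypothetical helper name and tolerance, for illustration only):

```python
def is_straight_segment(p0, p1, p2, p3, tolerance=1e-6):
    """True if the four control points of a cubic segment are lined up,
    which makes the traced segment a straight line."""
    dx, dy = p3[0] - p0[0], p3[1] - p0[1]
    for p in (p1, p2):
        # 2D cross product of (p - p0) with the chord (p3 - p0):
        # zero means p lies on the chord's line.
        cross = (p[0] - p0[0]) * dy - (p[1] - p0[1]) * dx
        if abs(cross) > tolerance:
            return False
    return True
```

This is the test that breaks as soon as a tangent handle is nudged off the chord, which is why a straightened segment reverts to a curve when you adjust its handles.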
Modifying Curves
The Modify > Curve menu of the Model toolbar contains a variety of
commands you can use to modify curves in various ways. Two of the
more common modifications are inverting and opening/closing, but
there are other operations you can perform as well.
Opening and Closing Curves
Modify > Curve > Open/Close opens a closed curve and closes an open
one.
(Figure: an open curve and a closed curve.)
Inverting Curves
Modify > Curve > Invert switches the start and end points of a curve.
The result is as if you had drawn the curve clockwise instead of
counterclockwise or vice versa.
For example, if an object uses the curve as a path, it moves in the
opposite direction once you invert the curve. Similarly, if a surface has
been built from the curve and its operator stack was not frozen, its
normals become reversed.
Creating Curves from Other Objects
Many of the commands on the Create > Curve menu of the Model
toolbar allow you to create curves based on other objects in your scene.
The illustrations here give you an idea of just some of the possibilities:
• Extracting curve segments (figure: original curve, extracted
segment).
• Fitting curves onto curves (figure: original sketched curve, new
curve fitted onto the sketched curve).
• Creating curves from intersecting surfaces (figure: intersection
between two surfaces).
• Blending curves (figure: original curves, new blend curve).
• Filleting curves (figure: intersecting curves, fillet between them).
Creating Curves from Animation
If you have animated the translation of an object, you can use Tools >
Plot > Curve from the Animate toolbar to plot the motion of its center
and generate a curve. For example, this can be used to create a
trajectory curve. You can also plot the movement of a selected point
or cluster.
Importing EPS Files
Use File > Import > EPS File from the main menu to import curves
saved as EPS (Encapsulated PostScript) and AI (Adobe Illustrator) files
from a drawing program. Once in XSI, you can convert them to
polygon meshes using Create > Poly. Mesh > Curves to Mesh to create
planar or extruded logos.
Preparing EPS and AI Files for Import
There are some restrictions on the files you can import. Follow these
guidelines:
• Make sure the file contains only curves. Convert text and other
elements to outlines.
• Save or export as version 8 or earlier.
• Do not include a TIFF preview header.
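The plotting step described under Creating Curves from Animation amounts to sampling the animated position once per frame and stringing the samples into curve points. The callback-based sketch below is an illustration only; XSI's actual plot command operates on scene objects, not Python callbacks.

```python
def plot_trajectory(position_at_frame, start, end, step=1):
    """Sample an animated position once per frame and return the list
    of points for a trajectory curve.

    `position_at_frame` is a hypothetical callable standing in for
    whatever evaluates the object's animated translation at a frame.
    """
    return [position_at_frame(f) for f in range(start, end + 1, step)]
```

Fitting a curve through the sampled points would then give a smooth, editable trajectory.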
Section 7
Polygon Mesh Modeling
Polygon meshes are one of the basic renderable
geometry types in SOFTIMAGE|XSI. They are
ideally suited for modeling non-organic objects with
hard edges and corners, but they can also be used to
approximate smooth, organic objects. Polygon
meshes are particularly used for games development
because of the requirements of most game engines.
Polygon meshes are also the basis of subdivision
surfaces.
What you’ll find in this section ...
• Overview of Polygon Mesh Modeling
• About Polygon Meshes
• Converting Curves to Polygon Meshes
• Drawing Polygons
• Subdividing
• Drawing Edges
• Extruding Components
• Removing Polygon Mesh Components
• Combining Polygon Meshes
• Symmetrizing Polygons
• Cleaning Up Meshes
• Reducing Polygons
• Subdivision Surfaces
Overview of Polygon Mesh Modeling
There are three basic approaches to modeling with polygon meshes.
Box Modeling
Box modeling starts with a primitive like a cube, then adds subdivision
and shapes it by deforming, adding edges, extruding, and so on.
Modeling with Curves
When you model with curves, you begin with curves outlining the
basic shape of your object and convert them to polygon meshes. You
can then continue to add detail using any techniques you like.
Polygon-by-polygon Modeling
With polygon-by-polygon modeling, you draw each polygon directly.
About Polygon Meshes
When working with polygon meshes, there are some basic concepts
you should understand.
Polygons
A polygon is a closed 2D shape formed by straight edges. The edges
meet at points called vertices. There are exactly the same number of
vertices as edges. The simplest polygon is a triangle.
(Figure: a triangle, a quad, and an N-gon.)
Polygons are classified by the number of edges or vertices. Triangles and
quadrilaterals (or quads) are the most commonly used for modeling.
Triangles have the advantage of always being planar, while quads give
better results when used as the basis of subdivision surfaces. Certain
game engines may require that objects be composed entirely of triangles
or quads.
Polygons that are very long and thin, or that have extremely sharp
angles, can give poor results when deforming or shading. Polygons that
are regularly shaped, with all edges and angles almost equal, generally
give the best results.
Polygon Meshes
A polygon mesh is a 3D object composed of one or more polygons.
Typically these polygons share edges to form a three-dimensional
patchwork.
(Figure: a polygon mesh sphere.)
However, a single polygon mesh object can also contain discontiguous
sections that are not connected by edges. These disconnected polygon
“islands” can be created by drawing them directly or by combining
existing polygon meshes.
Types of Polygon Mesh Components
Polygon meshes contain several different types of component: points
(vertices), edges, and polygons.
(Figure: a polygon, an edge, and a point.)
• Points are the vertices of the polygons. Each point can be shared by
many adjacent polygons in the same mesh.
• Edges are the straight line segments that join two adjacent points.
Edges can be shared by no more than two polygons. Edges that are
not shared represent the boundary of the polygon mesh object and
are displayed in light blue if Boundaries and Hard Edges are visible
in a 3D view.
• Polygons are the closed shapes that make up the “tiles” of the mesh.
Planar and Non-planar Polygons
When an individual polygon on a polygon mesh is completely flat, it is
called planar. All its vertices lie in the same plane, and are thus
coplanar. Planar polygons give better results when rendering.
(Figures: a planar polygon on the ground plane with normals visible;
a non-planar polygon created by moving a point below the ground
plane.)
Triangles are always planar because any three points define a plane.
However, quadrilaterals and other polygons can become non-planar,
particularly as you move vertices around in 3D space. When objects are
automatically tessellated before rendering, non-planar polygons are
divided into triangles. However, other applications such as game
engines may not support non-planar polygons properly.
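The planarity test described under Planar and Non-planar Polygons is a small piece of vector math: build a plane from three vertices and check that the rest lie on it. A sketch with a hypothetical helper name (it assumes the first three vertices are not collinear):

```python
def is_planar(vertices, tolerance=1e-6):
    """True if all vertices of a polygon lie in one plane.
    Triangles are always planar; quads and n-gons may not be."""
    if len(vertices) <= 3:
        return True
    p0 = vertices[0]
    # Build a plane normal from the first three vertices.
    u = tuple(a - b for a, b in zip(vertices[1], p0))
    v = tuple(a - b for a, b in zip(vertices[2], p0))
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    # Every remaining vertex must sit on that plane.
    for p in vertices[3:]:
        w = tuple(a - b for a, b in zip(p, p0))
        if abs(sum(a * b for a, b in zip(n, w))) > tolerance:
            return False
    return True
```

Moving one corner of a flat quad off its plane is enough to make the test fail, mirroring the figure described above.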
Valid Meshes
SOFTIMAGE|XSI has strict rules for valid polygon mesh structures
and won’t let you create an invalid mesh. Some of the rules are:
• Every point must belong to at least one polygon.
• Every edge must belong to at least one polygon.
• A given point can be used only once in the same polygon.
• All edges of a single polygon must be connected to each other.
Among other things, this means that you cannot have a hole in a
single polygon. To get a hole in a polygon mesh, at least two
polygons are required. (Figure: a hole in a polygon mesh.)
• Edges cannot be shared by more than two polygons. Tri-wings are
not supported. To connect three polygons in this way, a double
edge is required.
• XSI does support one case of non-manifold geometry: a single point
can be shared by two otherwise unconnected parts of a single mesh
object. (Figure: a non-manifold geometry that is valid in XSI.)
If you export geometry from XSI, remember that such geometry
may not be considered valid by other applications.
Controlling Shading on Meshes
Use the mesh’s Geometry Approximation property to control whether
the shading is smooth or faceted across polygons. If the object doesn’t
already have a Geometry Approximation property, choose Get >
Property > Geometry Approximation from any toolbar.
The Discontinuity parameters on the Polygon Mesh page of the
Geometry Approximation property editor control whether the objects
are faceted or smooth at the edges. Faceted polygons are appropriate
for geometric shapes like dice; smooth polygons are appropriate for
organic shapes like faces.
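Two of the Valid Meshes rules above (edges shared by at most two polygons, and unshared edges forming the boundary) can be checked mechanically by counting how many polygons touch each edge. A sketch with hypothetical helper names, not XSI's validator:

```python
from collections import Counter

def edge_polygon_counts(polygons):
    """Count how many polygons share each edge. Edges are stored as
    sorted vertex pairs so (a, b) and (b, a) are the same edge."""
    counts = Counter()
    for poly in polygons:
        for k in range(len(poly)):
            a, b = poly[k], poly[(k + 1) % len(poly)]
            counts[tuple(sorted((a, b)))] += 1
    return counts

def invalid_edges(polygons):
    """Edges shared by more than two polygons (tri-wings) break the rule."""
    return [e for e, n in edge_polygon_counts(polygons).items() if n > 2]

def boundary_edges(polygons):
    """Unshared edges form the boundary (drawn in light blue in XSI)."""
    return [e for e, n in edge_polygon_counts(polygons).items() if n == 1]
```

Two triangles sharing one edge are valid and have four boundary edges; adding a third triangle on the shared edge creates a tri-wing, which the check flags.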
The illusion of smoothness is created by averaging the normals of
adjacent polygons. When normals are averaged in this way, the shading
is a smooth gradient along the surface of a polygon. When normals are
not averaged, there is an abrupt change of shading at the polygon
edges.
(Figures: flat edges with normals averaged give smooth shading; sharp
edges with normals not averaged look faceted.)
Automatic discontinuity lets you turn off the averaging of normals for
sharper edges, and the Discontinuity Angle lets you specify how sharp
edges must be before they appear faceted. If the dihedral angle (the
angle between normals) of two adjacent polygons is less than the
Discontinuity Angle, the normals are averaged; otherwise, they are not
averaged.
(Figure: dihedral angles. Flatter edges have small angles and sharper
edges have large angles.)
You can achieve different effects by adjusting these two parameters:
• If Automatic is on, then the Angle determines the threshold for
faceted polygons.
• If Automatic is on and Angle is 0, the object is completely faceted.
• If Automatic is off, the object is completely smooth.
Discontinuity on Selected Edges
In addition to setting the geometry approximation for an entire object,
you can make selected edges discontinuous by marking them as “hard”
using Modify > Component > Mark Hard Edge/Vertex from the
Model toolbar. Hard edges are displayed in dark blue when Boundaries
and Hard Edges is checked on a viewport’s Show menu (eye icon).
(Figure: selected edges marked as hard.)
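The dihedral-angle rule above can be sketched directly: measure the angle between the two face normals at a shared edge and compare it to the threshold. The function names are hypothetical; this illustrates the rule, not XSI's shading code.

```python
import math

def dihedral_angle_deg(n1, n2):
    """Angle in degrees between two unit face normals."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

def shared_edge_is_smooth(n1, n2, discontinuity_angle):
    """Average the normals (smooth shading) only when the dihedral
    angle is below the Discontinuity Angle threshold."""
    return dihedral_angle_deg(n1, n2) < discontinuity_angle
```

With a 60-degree threshold, two coplanar faces (angle 0) shade smoothly across their shared edge, while two perpendicular faces (angle 90) appear faceted.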
Converting Curves to Polygon Meshes
Use Create > Poly. Mesh > Curves to Mesh from the Model toolbar to
create a polygon mesh based on the selected curves.
Exterior closed curves become disjoint
parts of the same mesh object.
Interior closed curves can become holes.
Tessellating
Tessellation is the process of tiling the curves’ shapes with polygons.
XSI offers three different tessellation methods:
• Minimum Polygon Count uses the fewest polygons possible but
yields irregular polygons.
• Delaunay generates a mesh composed entirely of triangular
polygons. This method gives consistent and predictable results; in
particular, it will not give different results if the curves are rotated.
• Medial Axis creates concentric contour lines along the medial axes
(averages between the input boundary curves), morphing from one
boundary shape to the next. This method creates mainly quads
with some triangles, so it is well suited for subdivision surfaces.
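For illustration only, the simplest possible tessellation of a convex closed outline is a triangle fan, which produces n − 2 triangles for n boundary points (a toy sketch, not one of the methods XSI uses):

```python
def fan_triangulate(outline):
    # Fan triangulation of a convex outline: connect the first point
    # to every consecutive pair, giving n - 2 triangles.
    return [(outline[0], outline[i], outline[i + 1])
            for i in range(1, len(outline) - 1)]
```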
Other Options
In addition to controlling the tesselation, there are many other options
to control holes, extrusion, beveling, embossing, and so on.
116 • SOFTIMAGE|XSI
Drawing Polygons
Modify > Poly. Mesh > Add/Edit Polygon Tool is a multi-purpose tool
that lets you draw polygons interactively by placing vertices. You can
use it to add polygons to an existing mesh, add or remove points on
existing polygons, or to create a new polygon mesh object.
1. Do one of the following:
- To create a new polygon mesh object, first make sure that no
polygon meshes are currently selected.
or
- To add polygons to an existing polygon mesh object, select the
mesh first.
or
- To add or remove points on an existing polygon in an existing
polygon mesh object, select that polygon.
2. Choose Modify > Poly. Mesh > Add/Edit Polygon Tool from the
Model toolbar or press n.
3. Do one of the following:
- Click in a 3D view to add a point. If necessary, you can adjust the
position by moving the mouse pointer before releasing the
button.
or
- Click an existing point on another polygon in the same mesh to
attach the current polygon to it.
or
- Click an existing edge of another polygon in the same mesh to
attach the current polygon to it.
or
- Left-click and drag on a vertex of the current polygon to move it.
or
- Middle-click a vertex of the current polygon to remove it.
As you move the mouse pointer, the edges that would be created are
outlined in red. To insert the new point between a different pair of
vertices of the current polygon, first move the mouse across the edge
connecting them.
The direction of the normals is determined by the direction in
which you draw the vertices. If the vertices are drawn in a
counterclockwise direction, the normals face toward the camera;
if drawn clockwise, they face away from the camera. As you draw,
red arrows indicate the order of the vertices.
4. When you have finished drawing a polygon, do one of the
following:
- To start a new polygon and automatically share an edge with the
current one, first move the mouse pointer across the desired edge
and then click the middle mouse button. Repeat step 3 as
necessary.
or
- To start a new polygon without automatically sharing an edge,
click the right mouse button. Repeat step 3 as necessary.
or
- When you are finished drawing polygons, exit the Add/Edit
Polygon tool by clicking the right mouse button twice in a row, by
choosing a different tool, or by pressing Esc.
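The winding rule can be checked with the shoelace formula over screen-space points: under the convention above, a positive signed area means the vertices run counterclockwise (normals toward the camera) and a negative one means clockwise. A minimal sketch:

```python
def signed_area(points):
    # Shoelace formula over 2D screen-space points.
    # Positive: counterclockwise winding; negative: clockwise.
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        total += x0 * y1 - x1 * y0
    return total / 2.0

def faces_camera(points):
    # True when the polygon is wound counterclockwise on screen.
    return signed_area(points) > 0
```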
Subdividing
You can subdivide polygon meshes to add more detail where needed.
Subdividing Polygons with Smoothing
You can subdivide and smooth selected polygons using Modify > Poly.
Mesh > Local Subdivision from the Model toolbar.
Subdividing Polygons and Edges Evenly
You can subdivide polygons and edges evenly using Modify > Poly.
Mesh > Subdivide Polygons/Edges from the Model toolbar. Select
specific polygons or edges first, or just select a polygon mesh object to
subdivide all polygons.
For polygons, you can choose different subdivision types: Plus,
Diamond, X, or Triangles.
For edges, you can connect the new points and extend the subdivision
to a loop of parallel edges (that is, the opposite edges of quad
polygons):
Splitting Edges
You can split edges interactively using Modify > Poly. Mesh > Split
Edge Tool from the Model toolbar. Activate this tool then click an edge
to split it. Use the middle mouse button to split parallel edges. Press
Ctrl while clicking to bisect edges evenly.
Other Ways to Subdivide
The Modify > Poly. Mesh menu of the Model toolbar contains many
other tools and commands that can subdivide and add detail to
polygon meshes. For example:
• Add Vertex Tool
• Split Polygon Tool
• Split Edges (with split control)
• Dice Polygons
• Slice Polygons
Edge subdivision examples: Parallel Edge Loop and Connect both off;
Connect on; Parallel Edge Loop on; Parallel Edge Loop and Connect
both on.
Drawing Edges
Choose Modify > Poly. Mesh > Add Edge Tool from the Model toolbar
to split or cut polygons interactively by drawing new edges. You can use
this tool to freeform or redraw your object’s flow lines.
Middle-click to continue
drawing edges from the
previous point.
1. Select a polygon mesh object.
2. Choose Modify > Poly. Mesh > Add Edge Tool from the Model
toolbar or press \ .
3. Start a new edge by clicking on an existing edge or point.
You can also:
- Press Ctrl while clicking or middle-clicking an edge to bisect it
evenly.
- Press Alt while clicking to start in the middle of a polygon and
automatically connect to the nearest edge by a triangle.
4. If desired, click in the interior of a polygon to add a point. You can
repeat this step to add as many interior points as you like, creating a
polyline, before terminating it.
Click inside a polygon to add an
interior point.
5. Terminate the new edge by clicking or middle-clicking on an existing
edge or point.
- Press Shift while clicking or middle-clicking an edge to ensure
that the angle between the new edge and the target edge snaps to
multiples of the Snap Increments - Rotate value set in your
Transform preferences. For example, if Snap Increments - Rotate
is 15, then the new edge will snap at 15 degrees, 30 degrees,
45 degrees, and so on. Angles are calculated in screen space.
- Press Ctrl+Shift while clicking or middle-clicking an edge to
attach the new edge at a right angle to the target edge. The angle
is calculated in object space.
- Press Alt while clicking in the middle of the polygon to add a
point and connect it to the nearest edge by a triangle.
If you are trying to attach a new edge to an existing edge or
vertex, and the target does not become highlighted when you
move the pointer over it, it means that you cannot attach the new
edge at that location because it would create an invalid mesh.
You cannot attach the
edge to this point.
Click to continue drawing edges
from the last point.
6. To continue adding edges starting at a new location, right-click and
then repeat steps 3 to 5.
To exit the Add Edge tool, press Esc or choose a different tool.
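The Shift-snapping behavior described above amounts to rounding the drawn angle to the nearest multiple of the Snap Increments - Rotate value. A simplified sketch (XSI computes the angle in screen space):

```python
def snap_angle(angle_deg, increment=15.0):
    # Round to the nearest multiple of the snap increment,
    # e.g. 15, 30, 45 degrees when the increment is 15.
    return round(angle_deg / increment) * increment
```

For example, with an increment of 15, a drawn angle of 37 degrees snaps to 30 and 52 degrees snaps to 45.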
Extruding Components
You can extrude polygons to create local details, such as indentations
or protuberances like limbs and tentacles. You can extrude polygons,
edges, or points.
1. Select one or more components on a polygon mesh, and then press
Ctrl+d or choose Edit > Duplicate/Instantiate > Duplicate Single.
2. Use the transform tools or the Tweak Component tool to translate,
rotate, and scale the extruded components as desired.
If you want to adjust other properties, open the Extrude Op
property editor in the stack.
Extruding with Options
To display additional options when extruding, select one or more
components and press Ctrl+Shift+d or choose Modify > Polygon
Mesh > Extrude Along Axis. This lets you control whether adjacent
components are extruded separately or together, as well as specify the
subdivisions, inset, transformations, and other values.
Extruding Along a Curve
You can get more control over the shape of an extrusion by using a
curve. Select one or more components, choose Modify > Polygon
Mesh > Extrude Along Curve, and then pick the curve.
Duplicating Polygons
Duplicating is similar to extruding, but the polygons are not connected
to the original geometry. This is useful for building repeating forms
like steps or railings. Choose Modify > Polygon Mesh > Duplicate, or
check Duplicate Polygons in the Extrude Op property editor.
Removing Polygon Mesh Components
There are several different ways to remove polygon mesh components
using different commands from the Modify > Poly. Mesh menu:
Delete Components, Collapse Components, Dissolve Components,
and Dissolve and Clean Adjacent Vertices.
Deleting Polygon Mesh Components
Deleting removes selected components and anything attached to them,
leaving empty holes.
Collapsing Polygon Mesh Components
Collapsing removes selected components and reattaches the adjacent
ones, creating no new holes.
Dissolving Components
Dissolving removes selected components and then fills in the holes
with new polygons.
Dissolving Components and Cleaning Vertices
Cleaning automatically collapses vertices that are shared by only two
edges after dissolving, but were shared by more before. Vertices that
were already shared by two edges before dissolving are not collapsed,
nor are vertices shared by three or more edges.
When components are selected, pressing Delete performs different
actions:
• Points and edges are dissolved and adjacent vertices are cleaned.
• Polygons are deleted.
Combining Polygon Meshes
You can combine two or more polygon mesh objects into a single new
one. Select all the meshes you want to combine, then choose Create >
Poly. Mesh > Blend or Merge from the Model toolbar.
The two commands differ in how they treat boundary edges on
different objects when the boundaries are close to each other.
• With Blend, nearby boundaries on different objects are joined by
new polygons.
• With Merge, nearby boundaries on different objects are merged
into a single edge at the average position.
There is a Tolerance parameter for determining the maximum distance
in Softimage units between boundaries for them to be considered
“nearby”.
Original objects. Blended object: near boundaries are joined, far
boundaries are not. Merged object: near boundaries are merged, far
boundaries are not.
Other Ways of Combining Meshes
You can also combine meshes using the Boolean commands on the
Create > Poly. Mesh and Modify > Poly. Mesh menus.
Symmetrizing Polygons
You can model one half of a polygon mesh object and then symmetrize
it. This creates new polygons that mirror the geometry on the original
side.
1. Model the polygons on one side of the object. In the example
below, an ornamental curlicue was added to the hilt of the dagger.
2. Prepare the other side of the object for symmetrization. For
example, if you intend to merge the symmetrized portions by
welding or bridging, then you may need to create holes for the new
polygons to fit and add vertices to aid the merge.
3. Select the polygons to be symmetrized. You can symmetrize the
whole object or just a portion.
4. Choose Modify > Poly. Mesh > Symmetrize Polygons from the
Model toolbar.
5. In the Symmetrize Polygon Op property editor, set the parameters
as desired, for example, to specify the plane of symmetry.
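Conceptually, symmetrizing mirrors point positions across the chosen plane of symmetry. For example, across the YZ plane (x = 0), each position (x, y, z) maps to (−x, y, z). A minimal sketch of that mapping:

```python
def mirror_across_yz(points):
    # Reflect 3D positions across the YZ plane (x = 0),
    # a common choice of symmetry plane for characters.
    return [(-x, y, z) for (x, y, z) in points]
```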
Cleaning Up Meshes
You can filter polygon mesh objects to clean them up. Filtering removes
components that match certain criteria, for example, small
components that represent insignificant detail.
Filtering Edges
Modify > Poly. Mesh > Filter Edges on the Model toolbar removes
edges by collapsing them based on either their length or angle. In both
cases, you can protect boundary edges using Keep Borders Edges Intact.
Edge filtering is especially useful for reducing the triangulation on
polygon meshes generated by Boolean operations.
Filtering Points
Modify > Poly. Mesh > Filter Points on the Model toolbar welds
together vertices that are within a specified distance from each other.
• Average position welds each clump of points in the selection
together at their average position.
• Selected point welds each clump of points in the selection together
at the position of the point that is nearest to the average position.
• Unselected point welds each selected point to an unselected point
on the same object.
Filtering Polygons
Modify > Poly. Mesh > Filter Polygons removes polygons based on
their area or their dihedral angles:
• When you filter polygons by angle, adjacent polygons are merged
together if their dihedral angle is less than the threshold you specify.
Small angles correspond to flat areas, so this method preserves sharp
detail.
• When you filter polygons by area, the smallest polygons are
removed. This eliminates small, “noisy” details.
Reducing Polygons
The Modify > Poly. Mesh > Polygon Reduction command on the Model
toolbar lightens a heavy object by reducing the number of polygons,
while still retaining a useful fidelity to the shape of the original
high-resolution version. For example, you can use polygon reduction
to meet maximum polygon counts for game content, or to reduce file
size and rendering times by simplifying background objects.
Polygon reduction also allows you to generate several versions of an
object at different levels of detail (LODs).
Polygon reduction works by collapsing edges into points. Edges are
chosen according to their “energy”, which is a metric based on their
length, orientation, and other criteria. In addition, you have options to
control the extent to which certain features, such as quad polygons, are
preserved by the process.
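The edge-collapse step can be illustrated with a toy sketch in which the “energy” is simply edge length (XSI’s real metric also weighs orientation and other criteria): the lowest-energy edge is collapsed to its midpoint and its endpoints are merged.

```python
import math

def collapse_lowest_energy_edge(vertices, edges):
    # vertices: list of (x, y, z); edges: set of (i, j) index pairs with i < j.
    # Toy energy: edge length only.
    i, j = min(edges, key=lambda e: math.dist(vertices[e[0]], vertices[e[1]]))
    # Move vertex i to the edge midpoint, then remap j onto i everywhere.
    vertices[i] = tuple((a + b) / 2 for a, b in zip(vertices[i], vertices[j]))
    def remap(v):
        return i if v == j else v
    # Keep remapped edges, dropping the collapsed edge itself.
    return {tuple(sorted((remap(a), remap(b))))
            for a, b in edges if {remap(a), remap(b)} != {i}}
```

Repeating this until a target polygon count is reached is the essence of reduction; quality metrics only change which edge is picked next.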
Subdivision Surfaces
Subdivision surfaces (sometimes called “subdees”) allow you to create
smooth, high-resolution polygon meshes from lower-resolution ones.
They provide the smoothness of NURBS surfaces with the local detail
and texturing capabilities of polygon meshes.
Applying Geometry Approximation
You can turn a polygon mesh object into a subdivision surface by
pressing + and – on the numeric keypad. This applies a local Geometry
Approximation property if there isn’t already one, and sets the
subdivision level for render and display. The higher the subdivision
level, the smoother the object.
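As a rough rule of thumb (an approximation, not an XSI formula), Catmull-Clark subdivision splits every quad into four at each level, so the polygon count grows quickly with the subdivision level:

```python
def estimated_polygon_count(base_quads, level):
    # Each Catmull-Clark level splits every quad into four,
    # so the count roughly quadruples per level.
    return base_quads * 4 ** level
```

A cube (6 quads) at level 2 already yields roughly 96 polygons, which is why high render levels get heavy fast.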
The original geometry forms a hull that is used to control the shape of
the smoothed, “proxy” geometry. You can toggle the display of the hull
and the subdivision surface on the Show menu (eye icon).
Subdivision Rules
SOFTIMAGE|XSI gives you a choice of several subdivision rules
(smoothing algorithms): Catmull-Clark, XSI-Doo-Sabin, and linear.
In addition, you have the option of using Loop for triangles when
using Catmull-Clark or linear.
The subdivision rule is set in the Polygon Mesh property editor.
Catmull-Clark
The Catmull-Clark subdivision algorithm produces rounder shapes.
The generated polygons are all quadrilateral.
XSI-Doo-Sabin
The XSI-Doo-Sabin subdivision algorithm is a variation of the
standard Doo-Sabin algorithm. It produces more geometry than
Doo-Sabin, but it works better with cluster properties such as texture
UVs, vertex colors, and weight maps, as well as with creases.
Linear Subdivision
Linear subdivision does not perform any smoothing, so the object’s
shape is unchanged. It is useful when you want an object to deform
smoothly without rounding its contours.
Loop Subdivision
With the Catmull-Clark and linear subdivision methods, you have the
option of using Loop subdivision for triangles. The Loop method
subdivides triangles into smaller triangles, which gives better results
when smoothing and shading.
Catmull-Clark with Loop compared with Catmull-Clark alone.
Creases
Subdivision surfaces typically produce a smooth result because the
original vertex positions are averaged during the subdivision process.
However, you can still create sharp spikes and creases in subdivision
surfaces. This is done by adjusting the hardness value of points or
edges. The harder a component, the more strongly it “pulls” on the
resulting subdivision surface.
Use Modify > Component > Mark Hard Edge/Vertex to make
components completely hard, or Set Edge/Vertex Crease Value to apply
an adjustable value.
Other Methods of Subdividing
• You can create a new object that is a smoother, denser version of an
existing one using Create > Poly. Mesh > Subdivision from the Model
toolbar.
• You can create a new object that is a smoother, denser version based
on the Geometry Approximation settings of an existing object
using Edit > Duplicate/Instantiate > Duplicate Using Geometry
Approx.
Section 8
NURBS Surface
Modeling
NURBS surfaces are one of the basic types of
renderable geometry in SOFTIMAGE|XSI. They are
rectangular patches that allow for very smooth shapes
with relatively few control points. Surfaces can model
precise shapes using less geometry than polygon
meshes and they’re ideal for smooth, manufactured
objects like car and aeroplane bodies.
What you’ll find in this section ...
• About Surfaces
• Building Surfaces
• Modifying Surfaces
• Projecting and Trimming with Curves
• Surface Meshes
About Surfaces
In SOFTIMAGE|XSI, surfaces are NURBS patches. Mathematically,
they are an interconnected patchwork of smaller surfaces defined by
intersecting NURBS curves.
Components of Surfaces
You can display surface components and attributes in the 3D views, as
well as select them for various tasks.
• Points are the control points of the curves that define the surface.
Their positions define the shape of the surface.
• Surface knots are the knots of the curves that define the surface;
they lie on the surface where the U and V curve segments meet.
• Knot curves (sometimes called isoparams or isoparms) are sets of
connected knots along U or V—they are the “wires” shown in
wireframe views. You can select knot curves and use them, for
example, to build other surfaces using the Loft operator.
• NURBS hulls are display lines that join consecutive control points.
It can be useful to display them when working with curves and
surfaces.
• Isolines are not true components. They are, in fact, arbitrary lines
of constant U or V on a surface. You can use the U and V Isoline
selection filter to help you pick isolines for lofting and other
operations.
Building Surfaces
The commands on the Create > Surf. Mesh menu can be used to build
NURBS surfaces in a variety of ways. The first set of commands
generates surfaces from curves—see Objects from Curves on page 88 for
an overview of the basic procedure. Here are a few examples of some of
the other ways you can build surfaces.
Merging Surfaces
Merging two surfaces creates a third surface that spans the originals.
You have the option of also selecting an intermediary curve for the
merged surface to pass through.
Blending Surfaces
Blending creates a new surface that fills the gap between the selected
boundaries on two other surfaces.
Filleting Intersections
A fillet is a surface that smooths the intersection of two others, like a
molding between a wall and a ceiling.
Modifying Surfaces
You can modify surfaces in a variety of ways using the commands in
the Modify > Surface menu of the Model toolbar, for instance, by
adding and removing knot curves. Here are a few examples of some
other ways of modifying surfaces.
Opening and Closing Surfaces
You can open a closed surface and close an open surface. A surface can
be open in both U and V like a grid, closed in both like a torus, or open
in one and closed in the other like a tube.
Inverting Normals
If the normals of a surface are pointing in the wrong direction, you can
invert them.
Extending Surfaces
You can extend a surface from the selected boundary to a curve.
Projecting and Trimming with Curves
You can project curves onto surfaces and then use the result to remove
a portion of the surface, or for any other modeling purpose. This is
useful for modeling manufactured objects like car parts with holes or
for creating smooth surfaces that aren’t four-sided like a standard
NURBS patch.
What Are Surface and Trim Curves?
Both surface and trim curves involve projecting a curve object onto a
NURBS surface. The difference is whether the result is used to remove
a portion of the surface or not.
Surface Curves
If the curve object is just projected and nothing more, the result is
called a surface curve. It is a new component of the surface. This surface
curve can be used like any other curve component of the surface
(isoline, knot curve, and so on) for modeling operations like Loft,
Extend to Curve, and others.
Trim Curves
If you use the curve to remove part of the surface, it is called a trim
curve.
Trimming affects only the visible portion of the surface. All the
underlying points are still there and you can still affect the surface’s
shape by moving points in the trimmed area. You can also still use
trimmed surfaces as collision objects for particle, soft body, and cloth
simulations, and the like.
Projecting or Trimming by Curves
Select a NURBS surface object, choose Modify > Surface > Trim by
Projection from the Model toolbar, and then pick a curve object. The
curves are projected onto the surface and, by default, the surface is
trimmed using all projected curves.
In the Trim Surface by Space Curve property editor, do any of the
following:
• To trim the surface using only some of the projected curves, click
Pick Trims and then pick the desired surface curves. Right-click
when you have finished picking.
• To trim the surface using all the projected curves, click Trim with
All.
• To project the curve onto the surface, click Project All.
• Use Is Boundary to choose whether to trim the inside or the
outside.
• Use Projection Precision to control the precision used to calculate
the projection. If the shape of the projected curve is not accurate,
increase this value. However, high values take longer to calculate
and may slow down your computer. For best performance, set this
parameter to the lowest value that gives good results.
Deleting Trims
Deleting a trim allows you to remove a trim operation even after you
have frozen the surface’s operator stack. Set the selection filter to Trim
Curve, select one or more trim curves on the surface, and choose
Modify > Surface > Delete Trim from the Model toolbar.
Surface Meshes
Surface meshes provide a way to assemble multiple surfaces into a
single object that remains seamless under animation and deformation.
1. Create a collection of separate surfaces. These will become the
surface mesh’s subsurfaces.
Line the surfaces up into a
basic configuration.
This illustration shows a
common configuration for a
leg or arm.
2. Optionally, line up pairs of boundaries by selecting them and
choosing Create > Surf Mesh > Snap Boundary from the Model
toolbar.
Snap opposite
boundaries together to
connect the surfaces
across the junction.
3. Select all the surfaces and choose Create > Surf Mesh > Assemble.
The surfaces are assembled into a single surface mesh. The
continuity manager ensures that the continuity is preserved at the
seams.
Notice how the assembled
surface mesh blends smoothly
across the junctions.
4. You can now deform and animate the surface mesh as desired.
If you ever freeze the assembled surface, you will need to
reapply the surface continuity manager manually using
Create > Surf Mesh > Continuity Manager.
Excluding Points from Continuity Management
All assembled surface meshes have a special cluster called
NonFixingPointsCluster. If a point on a subsurface boundary is in this
cluster, its continuity is not managed by the surface continuity
manager (SCM) when Don’t Fix the Tagged Points is on. The other
points on the same junction are not affected. This lets you create holes
in the surface mesh for mouths, eyes, and so on.
Section 9
Animation
To animate means to make things come alive, and
life is always signified by change: growth,
movement, dynamism. In XSI, everything can be
animated, and animation is the process of changing
things over time. For example, you can make a cat
leap on a chair, a camera pan across a scene, a
chameleon change color, or a face change shape.
What you’ll find in this section ...
• Animating with Keys
• Animating Transformations
• Playing the Animation
• Editing Keys and Function Curves
• Layering Animation
• Constraints
• Path Animation
• Linking Parameters
• Expressions
• Copying Animation
• Scaling and Offsetting Animation
• Plotting (Baking) Animation
• Removing Animation
Bringing It to Life
The animation tools in SOFTIMAGE|XSI let you create animation
quickly, so you can spend your time refining it: editing the
movements, changing the timing, and trying out different techniques.
XSI gives you the control and quick feedback you need to produce
great animation. Basically, if you want to make it move, XSI has the
tools.
What Can You Animate in XSI?
You can animate every scene element and most of their parameters—in
effect, if a parameter exists on a property page, it can probably be
animated.
• Motion: Probably the
most common form of
animation, this involves
transforming an object
by either moving
(translating), rotating, or
scaling (resizing) it.
Special character tools let
you easily animate
humans, animals, and all
manner of fantastical
creatures. You can also use
dynamic simulations to
create movement
according to the physical
forces of nature.
Motion, geometry deformations, and
appearances can all be animated in XSI.
• Geometry: You can animate an object’s geometry by changing
values such as U and V subdivision, radius, length, or scale. You
can also use numerous deformation tools and skeletons to bend,
twist, and contort your object.
• Appearance: Material, textures, visibility, lighting, and
transparency are just some of the parameters controlling
appearance that can be changed over time.
One of the most important features of XSI is its low- and high-level
approach to animation. Low-level animation means getting down to
the parameters of an object and animating their values. Keyframing is
the most common method of direct animation, but you can also use
constraints, linked parameters, and expressions for creating animation
control relationships.
High-level animation means that you are working with animation in a
way that is nonlinear (the animation is independent of the timeline)
and non-destructive (any modifications do not destroy your original
animation data). You store animation or shapes in sources, then use
the animation mixer to edit, mix, and reuse those sources as clips.
Keyframed (low-level) animation can be
contained in action sources, then brought
into the animation mixer as a clip (high level).
To use these levels together, you can animate at a low level by
keyframing a specific parameter, then store that animation and others
into action sources and mix them together in the animation mixer to
animate at a high level. This allows you to easily manage complex
animation yet retain the ability to work at the most granular level.
The Many Techniques of Animating in XSI
XSI provides you with many choices of tools and techniques for
animating: explore and decide which tool lets you animate in the most
effective way. In most projects, you will probably use a combination of
these tools to get the best results.
• The most basic
method of animation
is keying. You set
parameter values at
specific frames, and
then set keys for these
values. The values for
the frames between
the keys are calculated by interpolation.
• Create animation relationships
between objects at the lowest
(parameter) level. These include
constraints, path animation,
linked parameters, expressions,
and scripted operators.
• Character animation tools offer
you control for creating and
animating skeletons. You can
animate them with forward or
inverse kinematics, add an
enveloping model, set up a rig,
and fine-tune the skeleton’s
movements in a myriad of ways to
get just the right motion.
• The animation mixer is
a powerful editing tool
that is nonlinear and
non-destructive. Any
type of animation that
you generate can be stored and reused later, on the same model or a
different one. You can also mix different types of animation
together and weight them against each other.
• Shape animation lets you change
the geometry of an object over time. To
do this, you deform the object into
different shapes using any type of
deformation tool, then store shape keys
for each pose that you want to animate.
• Dynamic simulations let you create
realistic motion with natural forces
acting on rigid bodies, soft bodies,
cloth, and particles. With
simulations, you can create
animation that could be difficult or
time-consuming to achieve with
other animation techniques.
Animation and Models
Models in XSI are data containers (like mini scenes) that make it easy
to organize elements that need to be kept together, such as all the
elements that make up a character.
The main reason for using models for
animation is that they provide the easiest
way to import and export animated
objects between scenes, and to copy
animation between objects.
Models also make it easy to use the
animation mixer. Each model can have
only one Mixer node that contains mixer
and animation data. If you have many
objects in a scene that use the mixer but
they aren’t within models, you can’t copy
animation from one object to another.
Playing the Animation
The first thing you need to do before starting an animation is to set up
your frame rate and format to match the medium in which you will be
saving the final animation. In animation, the smallest unit of time is
the amount required to display a single frame. The speed at which
frames are displayed, or the frame rate, is always determined by how the
final animation will be viewed. If you are compositing your animation
with other film or video footage, it’s usually best for the animation to
be at the same frame rate as the footage.
When you change the timing of the animation, you change the way that
the actions look. This means that the timing that looked correct while
you were previewing it in XSI may not look as good on video or film.
For example, an action that spans 24 frames would take one second on
film; changing the frame rate to suit North American video at 30 fps
would cause the same 24 frames to span 0.8 seconds.
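The timing arithmetic above is simply the frame count divided by the frame rate: 24 frames last one second at film’s 24 fps but only 0.8 seconds at video’s 30 fps.

```python
def duration_seconds(frame_count, fps):
    # Screen time covered by a span of frames at a given frame rate.
    return frame_count / fps
```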
Setting up the timing for your
animation is the first thing you
should do before you start. You
can set the frame rate and frame
format in the Output Format
preferences property editor
(choose File > Preferences).
These settings affect many areas of
XSI, including the timeline and
playback controls.
You can set up the default frame format and frame rate preferences for
your scene using the options in the Output Format preferences
property editor. These settings propagate to many other parts of XSI
that depend on timing. Regardless of whether you enter time code or a
frame number as the frame format, XSI internally converts your entry
into time code.
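The relationship between frame counts, frame rate, and real time described above is simple division. As a plain Python sketch (illustrative only, not part of XSI):

```python
def frames_to_seconds(frames, fps):
    # Duration in seconds of a span of frames at a given frame rate.
    return frames / fps

# The example from the text: 24 frames last one second on film (24 fps),
# but only 0.8 seconds on North American video (30 fps).
film = frames_to_seconds(24, 24)
video = frames_to_seconds(24, 30)
```

This is why an action timed for one medium can feel too fast or too slow when played back at another frame rate.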
138 • SOFTIMAGE|XSI
A big part of the animation process is the constant tweaking and replaying of the animation to see that you get things right. There are different ways of playing back animation in the viewports, but the most common is dragging the playback cursor in the timeline and using the playback controls below the timeline.
Before you start playing back the animation, you should set up the time range, the time display format, and the timeline's start and end frames. These define the range of frames that you can play in the scene.
The timeline displays which frames can be played. The current frame of the
animation is indicated by the playback cursor (the vertical red bar), which you
can drag to different frames. You can set the scene’s length by entering frame
numbers in the Start and End Frame boxes at either end of the timeline.
The time range determines the global range of frames, and the range
slider in it lets you play back a smaller range of frames within the global
range. If you are working with an animation sequence that is very long,
you can focus on just a subsection of frames which you can easily change
and move along the timeline.
The playback controls below the timeline include:

• Playback menu: displays many playback options.
• Play buttons: play or stop the animation, forward or backward.
• Step buttons: move the current frame forward or backward by increments (the default is 1 frame).
• First/last frame buttons: go to the first or last frame of the timeline.
• All/RT: plays back frame by frame (All) or in real time (RT).
• Audio button: plays or mutes audio.
• Loop button: repeats the animation in a continuous loop.
• Current frame box: shows the current frame.
Previewing Animation

You can capture and cache images from an animation sequence and play them back in a flipbook to help you see the animation in real time. Anything that is shown in the viewport you choose is captured—render region, rotoscoped scene with background, or any display type (wireframe, textured, shaded, etc.). For example, you may want to set the display type to Hidden Line Removal for a "pencil test" effect.

You can also include audio files to play back with the flipbook, which is especially useful for lip synching.

How to Create a Flipbook

1 In the viewport whose images you want to capture, set the display options as you like. Then click the camera icon in that viewport and choose Start Capture.

2 In the Capture Viewport dialog box, set the options for the flipbook's file name, image size, format, sequence, padding, and frame rate.

3 View the flipbook in the XSI flipbook or in the native media player on your computer. Open the XSI flipbook by choosing Flipbook from the Playback menu, or choose Flipbook from the Start > Programs > Softimage Products > XSI menu to play it outside of XSI.

You can also export flipbooks in a variety of standard formats, such as AVI and QuickTime.

Ghosting

Animation ghosting, also known as onion-skinning, lets you display a series of snapshots of animated objects at frames or keyframes behind and/or ahead of the current frame. This lets you visualize an object's motion, helping you improve its timing and flow. Ghosting works for any object that moves in 3D space, whether its transformation parameters (scaling, rotation, and translation) are animated in any way, its geometry is changed by shape animation or deformations (including envelopes), or it is driven by dynamic simulations (rigid bodies) and simulated deformations (cloth and soft body).

You can display an object's geometry, points, centers, trails, and velocity vectors as ghosts.

Ghosts in a dark color are displayed on keys that have played before the current frame; ghosts in a lighter color show the interpolation between those keys. Here, ghosting is displayed for a character with motion capture animation stored in an action.
Animating with Keys
Methods of Keying
Keyframing (or “keying”) is the process of animating values over time.
Traditional hand-drawn animation is generally created using
keyframes—an animator draws the extreme (or critical) poses at the
appropriate frames, creating “snapshots” of movement at specific
moments.
There are a number of ways in which you can set keys in XSI, depending on the type of workflow you're used to and the tools you want or need to use for your production. Whichever method you choose, the result is that keyframes are created.

As in traditional animation, a keyframe in XSI is also a "snapshot" of one or more values at a given frame, but unlike traditional animation, XSI handles the in-betweening for you, computing the intermediate values between keyframes by interpolation.

There are three main keying workflows in XSI from which to choose:

• Keyable parameters: Use the keying panel to set keys on all keyable parameters for the selected object or hierarchy. If you're using the QWERTY interaction model, XSI is automatically set up to work in this manner.

• Character key sets: Create sets of an object's parameters for keying. Set the current character key set that you want, then simply key without needing to select the object first. If you're transferring from another 3D software package, you may prefer this method of working.

• Marked parameters (and marking sets): Mark parameters and/or create sets of parameters to set keys on marked parameters on the selected object or hierarchy.
Keys set at frames 1, 50, and 100. Intermediate frames
are interpolated automatically.
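The in-betweening that XSI computes between keys can be illustrated with a minimal linear-interpolation sketch (plain Python, not XSI's implementation; XSI's default fcurve interpolation is actually spline, so this shows only the simplest case):

```python
def inbetween(keys, frame):
    # Linear in-betweening: given keys as {frame: value}, compute the
    # interpolated value at any frame between the first and last key.
    frames = sorted(keys)
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return (1 - t) * keys[f0] + t * keys[f1]
    raise ValueError("frame outside the keyed range")

# Keys at frames 1, 50, and 100, as in the figure:
keys = {1: 0.0, 50: 10.0, 100: 0.0}
value = inbetween(keys, 75)  # halfway between the keys at 50 and 100
```

Every intermediate frame gets its value from the two keys surrounding it, which is why adding or moving a key changes the interpolation only between its neighbors.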
You can set keys for just about anything in XSI that has a value: this includes an object's geometry, colors, textures, lighting, and visibility. You can set keys for any animatable parameter in any order and at any time. When you add a new key, XSI recalculates the interpolation between the previous and next keys. If you set a key for a parameter at a frame that already has a key set for that parameter, the new key overwrites the old one.

When you set keys on a parameter's value, a function curve (or fcurve) is created. An fcurve is a graph that represents the changes of a parameter's values over time, as well as how the interpolation between the keys occurs. When you edit an fcurve, you change the animation.

Set the Keying Preference First

Before you start setting keys, you need to set a preference that determines the way in which you key: with keyable parameters, with character key sets, or with marked parameters. This preference determines which parameters are keyed when you save a key by pressing K, by clicking the keyframe icon in the Animation panel, or by choosing the Save Key command from the Animation menu.

Click the Save Key preference button in the Animation panel, then select an option from the menu.
Keying Parameters in the Keying Panel

Using the keying panel (click the KP/L tab on the main command panel), you can quickly and easily change values and set keys for specific parameters of a selected object. The parameters that are displayed in the keying panel are called keyable parameters.

Once you have set up the object's keying panel with the keyable parameters you want, you simply select that object and press K or click the keyframe icon to set a key on whatever is in its keying panel.

1 Set the Save Key preference to Key All Keyable.

2 Select an object and open the keying panel (click the KP/L tab).

3 If you need to add other keyable parameters to the keying panel, select them in the keyable parameters editor.

4 Go to a frame where you want to set a key.

5 Change the values for the selected object's keyable parameters.

6 Set a key for the keyable parameters.

Keying with Character Key Sets

Character key sets are sets of keyable parameters that you create for an object or hierarchy for quick and easy keying. Once you have created key sets, you don't need to select an object first to key its parameters—just press K or click the keyframe icon and whatever is in the current character key set is keyed.

Character key sets let you keep the same set of parameters available for any object or hierarchy for easy keying, such as only the rotation parameters for the upper body control in a rig.

1 Create a character key set that includes the parameters you want to key on an object.

2 Set the current character key set. If you just created a character key set, it is set as the current one.

3 Set the Save Key preference to Key Character Key Set.

4 Go to a frame where you want to set a key.

5 Change the values for the parameters in the set.

6 Set a key for the parameters in the current character key set.
Keying Marked Parameters

Marking Parameters

Marking parameters is a way of identifying which parameters you want to use for a specific animation task, such as keying. By keying only the marked parameters, you can keep the animation information small and specific to the selected object.

You can mark parameters by clicking them in the marked parameter list (in the lower-right of the interface), a property editor, the explorer, or the keying panel. Marked parameters are highlighted in yellow.

1 Set the Save Key preference to Key Marked Parameters.

2 Select the object you want to animate and go to the frame at which you want to set a key.

3 Mark the parameters you want to key. Transformation parameters are automatically marked when you activate a transformation tool.

4 Set the marked parameter values for the selected object.

5 Set a key for the marked parameters at this frame.

Keying with Marking Sets

You can also create marking sets for keying. Marking sets are lists of an object's parameters that you want to keep handy for keying. You can have only one marking set per object at a time. Marking sets make it easy to key in hierarchies because each object within that structure can have its own marking set, such as a marking set of rotation parameters for bones, or a marking set of translation parameters for IK effectors.

• To create a marking set, select an object and mark the parameters you want to keep in the set. Then press Ctrl+Shift+M.
• To key marking sets, select one or more objects with a marking set, press Ctrl+M to activate the marking set, and then set a key by pressing K or clicking the keyframe icon.
You can also press Alt+K to set a branch key. This searches all
nodes of the selected object and its children for custom parameter
sets named “MarkingSet” and keys all the marking sets it finds.
This is useful for working with characters and other hierarchies.
Setting Keys on Individual Parameters

In addition to the three main keying workflows, you can also set keys directly on individual parameters in these ways. These methods don't consider the keying preference that you have selected.

• Click the keyframe icon to set keys on, or remove keys from, all or only the marked parameters on the property page.

• Click a parameter's animation icon to set or remove keys for only that parameter. You can also right-click it and choose Set Key or Remove Key.

• In an explorer, right-click a parameter's animation icon and choose Set Key or Remove Key.

• Click the auto button to automatically set a key each time you change a parameter's values.

• Choose Animation > Set Keys at Multiple Frames to set keys for the parameters' current values at the multiple frames that you enter.

Animating Transformations

Animating the transformations (scaling, rotation, and translation) of objects is something that you will do frequently. It is one of the most fundamental things to animate in XSI.

You can find the transformation parameters in the object's Kinematics node in the explorer. Kinematics in this case refers to "movement," not to inverse or forward kinematics as used in skeleton animation.

• Within the Kinematics node are the Global Transform and Local Transform nodes, referring to the type of transformation.

• Within each of the Transform nodes are the Pos (position, also called translation), Ori (orientation, also called rotation), and Scl (scale) folders.

• Each of the Pos, Ori, and Scl folders contains the X, Y, and Z parameters corresponding to each axis.

Animating Local or Global Transformations

You can animate objects either in terms of their parents (local animation) or in terms of the scene's world origin (global animation). It's usually better to animate the local transformations because you usually animate relative to the object's parent rather than relative to the world origin. Animating locally lets you branch-select an object's parent and move it while all objects in the hierarchy keep their relative positions. If you animate both the local and the global transformations, the global animation takes precedence.
Manipulation Modes versus Transformation Values
When you transform an object interactively
in a 3D view, you use one of several modes
that determine which coordinate system to
use for manipulation. The manipulation
mode affects the interaction only, the
resulting values of which you see in the
Transform panel.
This is important to know, particularly for
understanding the Local manipulation
mode: the values shown in the Transform
panel while using a transformation tool may
not be the same as the local transform values
that are stored for the object: that is, the
values that you animate.
Manipulation modes for
current transformation
(in this case, translation)
So, how do you manipulate an object so that
the values on the Transform panel are the
same as the stored values for local
animation? You rotate in Add mode or translate in Par mode. These are
the only two manipulation modes that transform in the same way as
local animation: they are both relative to the object’s parent. Of course,
you can set and animate the values as you like directly in the object’s
Local or Global Transform property editor.
Marking Transformation Parameters
When you activate any of the transformation tools, all three of their
corresponding local transformation parameters (X, Y, Z) are
automatically marked.
To have only specific X, Y, or Z axes marked for local animation, you
can rotate in Add mode or translate in Par mode.
You can also choose Transform > Automark Active Transform Axes:
then when you click a transformation’s specific axis button (such as the
Rotation’s Y button) on the Transform panel, only that axis is marked,
regardless of the current manipulation mode.
When you rotate in Local
mode, all three rotation axes
are marked automatically,
even if only one rotation axis
is selected.
To have only specific axes
marked, rotate in Add mode
or translate in Par mode, or
choose Automark Active
Transform Axes.
Remembering Transformation Tools for an Object
When you’re manipulating or animating an object, you often use the
same transformation tool for it, such as always using the Rotate tool for
bones in a skeleton. You can create a transform setup property (choose
Get > Property > Transform Setup) for an object so that the same
transformation tool is automatically activated when you select that
object.
This is very useful for
working quickly with control
objects in a character rig—
for example, when you select
the head’s effector, the
Translate tool is
automatically activated.
With a transform setup
property, the Translate
tool is automatically
activated when this
head’s effector is
selected.
Animating Transformations in Hierarchies

Transformations are propagated down through hierarchies so that each object's local position is stored relative to its parent. Objects in hierarchies behave differently when they are transformed, depending on whether the objects are node-selected (left-click) or branch-selected (middle-click). By default:

• When you branch-select a parent object and animate its transformation, the animation is propagated to its children.

• When you node-select a parent and animate its transformation, its children are not transformed unless their respective local transformations are animated. For example, suppose the child's local translation is animated but its rotation isn't: if you translate the parent, the child follows; however, if you rotate the parent, the child stays put. This is because animation on the local transformations is stored relative to the parent's center. You can make unanimated children follow the parent with the Child Transform Compensation command (or ChldComp button) on the Constrain panel.

• When you animate a child object, its animation is always done relative to its parent (local animation).

• When you animate anything globally, it's always done in relation to the world origin, whether or not your objects are in a hierarchy. Nothing is inherited if you have global transformation keys because they override any parent-to-child inheritance.

Skeleton chains are an exception to these hierarchy animation rules because the end location of one element always determines the start location of the next one in the chain.

Animating Rotations

When you animate rotations in XSI, you normally use three separate function curves connected to the X, Y, and Z rotation parameters. These three rotation parameters are called Euler angles. Euler interpolation works well when the axis of interpolation coincides with one of the XYZ rotation axes, but it is not as good at interpolating arbitrary orientations. Euler angles can also suffer from gimbal lock, the phenomenon of two rotation axes aligning with each other so that they both point in the same direction.

To solve this, you can change the order in which the rotation axes are evaluated (by default, it's XYZ), which changes where the gimbal lock occurs. As well, you can convert Euler fcurves to quaternion. Quaternion interpolation provides smooth interpolation with any sequence of rotations. The XYZ angles are treated as a single unit to determine an object's orientation, so they are not restricted to a particular order of rotation axes. Quaternions interpolate the shortest path between two rotations. You can create quaternion fcurves either by setting quaternion keys or by converting Euler fcurves to quaternion using the Animation > Convert commands in the Animation panel. And you can always convert back to Euler fcurves in the same way.

Cone rotated 90 degrees in X and Y. Euler interpolation of the rotation values takes a detour before reaching the final point; quaternion interpolation takes a direct path to the final point.
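The shortest-path behavior of quaternion interpolation can be sketched with a standard slerp (spherical linear interpolation) in plain Python. This is a generic illustration of the technique, independent of XSI's converter:

```python
import math

def slerp(q0, q1, t):
    # Spherical linear interpolation between two unit quaternions (w, x, y, z).
    # Flipping the sign of one quaternion when the dot product is negative
    # makes the interpolation follow the shortest path between the rotations.
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:
        q1 = tuple(-c for c in q1)
        dot = -dot
    theta = math.acos(min(dot, 1.0))  # angle between the two quaternions
    if theta < 1e-9:
        return q0                     # rotations are (nearly) identical
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# Identity vs. a 90-degree rotation about Y, as unit quaternions.
q_start = (1.0, 0.0, 0.0, 0.0)
q_end = (math.cos(math.pi / 4), 0.0, math.sin(math.pi / 4), 0.0)

# Halfway along the arc is a 45-degree rotation about Y.
q_mid = slerp(q_start, q_end, 0.5)
angle_mid = 2 * math.degrees(math.acos(q_mid[0]))
```

Because the whole orientation is interpolated as one unit, there is no per-axis detour and no dependence on a rotation order, which is the advantage described above.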
Editing Keys and Function Curves
After you have set keys to animate a parameter’s value, you can edit the
keys and the function curve (or fcurve) to edit the animation. An fcurve
is a graph that represents the changes of a parameter’s values over time,
as well as how the interpolation between the keys occurs.
Editing Keys in the Timeline

You can view and edit keys in the timeline similar to how you do in the dopesheet. The advantage of doing this in the timeline, of course, is that you don't need to open a separate editor: the keys are right there. This lets you keep the object that you're animating in full view at all times.

Once you have selected an animated object, you can easily move its keys, cut or copy and paste its keys, and scale a region of keys, all within the timeline. This is especially useful for blocking out rough animations before you do more detailed editing. You can also select single keys and move, cut, copy, and paste them.

• Keys are displayed as red lines. You can also display an audio waveform in the timeline.

• Right-click in the timeline to open a menu of options for displaying and editing the keys.

• Shift+drag to draw a region, then drag it to a new area on the timeline. Press Ctrl while dragging to copy the keys, or choose Copy and Paste from the right-click menu.

• You can scale a region by dragging either of its ends.

• Shift+click to select a single key, then you can move it, cut or copy it, and paste it.

Editing Keys in the Dopesheet

The dopesheet provides you with a way of viewing and editing key animation. Similar to a cel animator's dopesheet, it shows your entire animated sequence, frame by frame. Because you can see your whole animation in the dopesheet, it makes an ideal tool for editing overall motion and timing. For example, if you wanted to change a 100-frame sequence to 200 frames, you would simply stretch (scale) the animation segment on the track to be 200 frames long.

You can modify your animation sequences by editing regions of keys on the tracks with standard operations such as moving, scaling, copying, cutting, and pasting. You can delete them, shift them left and right, scale them—all with or without a ripple. Summary tracks help you see the animation for the whole scene or just the selected objects.

• The animation explorer displays the parameters of the selected elements.

• The tracks display the keys, shown as colored blocks, and let you manipulate them. You can expand and collapse tracks to view exactly what you want.

• Draw regions (press Q) to edit keys, including moving them, copying/cutting and pasting them, and muting their animation. You can also edit (move, copy, paste) individual keys on tracks.
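The stretch described in the dopesheet example amounts to remapping key frames about the start of a region. A minimal sketch of that retiming (plain Python, not XSI's implementation):

```python
def scale_region(keys, start, end, factor):
    # Retime keys: frames inside [start, end] are scaled about the region's
    # start, stretching (factor > 1) or compressing (factor < 1) the
    # animation. Keys outside the region are left alone (no ripple).
    # Scaled frames come back as floats.
    retimed = {}
    for frame, value in keys.items():
        if start <= frame <= end:
            frame = start + (frame - start) * factor
        retimed[frame] = value
    return retimed

# Stretch a 100-frame sequence to 200 frames, as in the example above:
keys = {0: 0.0, 50: 5.0, 100: 0.0}
slowed = scale_region(keys, 0, 100, 2.0)  # keys now at frames 0, 100, 200
```

The key values are untouched; only their frames move, which is why scaling a region changes timing without changing poses.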
Editing Function Curves
When you set keyframes to animate a parameter, a function curve, or
fcurve, is created. An fcurve is a representation of the animated
parameter’s values over time. You can edit fcurves in the fcurve editor,
which lives in the animation editor and is its default editor. The fcurve
editor is an ideal tool to help you control the animation’s speed and
interpolation, as well as easily adding and deleting keys.
Press the 0 (zero) key to open the animation editor in a floating
window, or you can open it in any viewport.
The graph in the fcurve editor is where you manipulate the fcurve: time
is shown along the graph’s X axis (horizontal), while the parameter’s
value is plotted along the graph’s Y axis (vertical).
The shape of the fcurve shows how the parameter’s value changes over
time. On the fcurve, keyframes are represented by key points (also
referred to as keys) and the interpolation between them is represented
by segments of the curve linking the key points. You can change the
interpolation for each segment or for the whole fcurve.
The slope of the curve between keys determines the rate of change in
the animation, while the handles at each key let you define the fcurve’s
slope in the same way that control points define Bézier curves.
• The animation explorer displays the parameters of the selected elements. Selected fcurves are white.

• Values for the parameter are shown on the graph's Y (vertical) axis; time is shown on the graph's X (horizontal) axis.

• Keyed values on fcurves are indicated by keys. Selected keys are red.

• The slope handles (tangents) at each key indicate the rate at which an fcurve's value changes at that key.

• By default, fcurves use spline interpolation to calculate intermediate values. The curves ease into and out of each key, resulting in a smooth transition.

• Linear interpolation connects keys by straight line segments. This creates a constant speed with sudden changes at each key.

• Constant interpolation repeats the value of a key until the next one. This creates sudden changes at keys and static positions between keys, such as for animating a cut from one camera to another.
Ways of Editing Function Curves and Keys

When you select one or more fcurves, any modifications you perform are done only to them. You can also select keys on fcurves, including regions of keys, to edit only them.

• Move fcurves and keys in X (horizontally) to change the time, or in Y (vertically) to change the values.

• Create regions (press Q) of keys for editing. Drag the region up or down to move the keys, or drag the region's handles to scale.

• Add or delete keys on an fcurve.

• Copy and paste an fcurve and keys. You can also set paste options to control how keys are pasted—whether they replace the selection or are added to it.

• Scale fcurves or regions of keys. When you shorten the length, you speed up the animation; increasing the length slows it down. Scaling vertically changes the values.

• Cycle the fcurves for repetitive motions. You can create basic cycles, or you can have relative cycles that are progressively offset, such as when creating a walk cycle.

Editing a Function Curve's Slope

The fcurve's slope determines the rate of change in the animation. By modifying the slope, you change the acceleration or deceleration in or out from a key, making the animation change rapidly or slowly, or even reversing it.

You can change the slope of any fcurve that uses spline interpolation by using the two handles (called slope handles) that extend out from a key. Slope handles are displayed on each selected key. By modifying the handles' length and direction, you can define the way the curve moves into and out from each key. You can change the length and angle of each handle in unison or individually.

The slope handles are tangent to the curve at their key when Unified Slope Orientation is on. This keeps the acceleration and deceleration smooth, but you can also turn off this option to "break" the slope at a certain point. This creates a sudden animation acceleration or deceleration, or change of direction altogether.

The same fcurve with short and long slope handles. Notice how the length of the handle changes the shape of the curve.
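The effect of handle length on a spline segment can be sketched with a cubic Hermite curve, a standard formulation for tangent-based interpolation (plain Python; this is a generic illustration, not XSI's fcurve internals):

```python
def hermite(p0, p1, m0, m1, t):
    # Cubic Hermite segment: p0/p1 are the key values, m0/m1 are the slopes
    # (tangents) at those keys, and t runs from 0 to 1 between the keys.
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

# Same two keys (0 -> 10), sampled a quarter of the way along the segment.
flat_out = hermite(0.0, 10.0, 0.0, 0.0, 0.25)    # flat tangent: slow ease out
steep_out = hermite(0.0, 10.0, 30.0, 0.0, 0.25)  # long/steep outgoing tangent
```

Both curves still pass exactly through the keys; only the tangents (the handles) reshape the interpolation between them, which is what lengthening or rotating a slope handle does.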
Layering Animation

Animation layering allows you to have two or more levels of animation on an object's parameters at the same time. You usually want to layer animation when you need to add an offset to the main animation on an object, but you don't want to change that animation.

Layering lets you add keys on top of the existing base animation, which can be either action clips or fcurves. You can easily add keys on top of the action clip that is currently in the mixer without needing to actually work in the mixer, or add keys on top of existing fcurves.

Animation layers are non-destructive, meaning that they don't alter your base animation in any way: the keys in the layers always remain a separate entity. Layering allows you to experiment with different effects on your animations and build several variations of a move, each in its own layer.

For example, let's say that you've imported a mocap action clip of a character running down a flight of stairs. However, in your current scene, the stairs are shallower than those used for the mocap session, so the character steps "through" the stairs instead of on them. To fix this problem, you create an animation layer, offset the contact points for the character's feet so that they step on the stairs, then set keys. The result is an offset animation that sits on top of the mocap data: you don't need to touch the original mocap clip at all. You can then easily edit the fcurves for the animation layer, tweaking it as you like.

Animation layers are actually controlled and managed in the animation mixer, but you don't need to access the mixer for creating and setting keys in layers. You can use the Animation Layers panel (click the KP/L tab on the main command panel) to do this. However, you may want to use the animation mixer for added control over each layer, such as setting each layer's weight.

There are different ways in which you can work with animation layers in XSI, but here's a simple overview just to get you started:

1 Make sure the objects are in a model structure.

2 Animate the objects. This animation is in the base layer.

3 Create an animation layer in the Animation Layer panel.

4 Select the animated objects, change their values, and set keys for them in the layer you created.

5 Edit the layer's fcurves.

6 Collapse the layer to combine its animation with the base layer.
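The additive, non-destructive idea behind layers and collapsing can be sketched in plain Python (illustrative only; the function and variable names are mine, not XSI's):

```python
# Base animation: an fcurve sampled at a few keyed frames.
base = {1: 0.0, 10: 5.0, 20: 0.0}

# A layer stores offsets, not absolute values, so the base stays untouched.
layer = {10: 2.0}

def evaluate(frame, base, layer, weight=1.0):
    # The result is the base value plus the weighted layer offset; frames
    # with no layer key contribute no offset.
    return base.get(frame, 0.0) + weight * layer.get(frame, 0.0)

def collapse(base, layer, weight=1.0):
    # Collapsing bakes the layered result into a single new set of keys,
    # leaving the original base dictionary unmodified.
    return {f: evaluate(f, base, layer, weight) for f in base}
```

Because the layer holds only offsets, deleting it (or setting its weight to 0) restores the base animation exactly, which is what makes layering safe to experiment with.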
Constraints

Constraining is a way of increasing the speed and efficiency with which you animate. It lets you animate one object "via" another one's animation. You can constrain different properties, such as position or direction, of one object to those of an animated object. Then when the animated object moves, the constrained object follows in the same way.

How to Constrain Objects

1 Select the object to be constrained.

2 Choose the constraint command from the Constrain menu.

3 Pick the constraining (control) object. The constraint is created between the objects.

4 Adjust the constraint in its property editor that opens.

Radar dish constrained by direction to the plane: the X axis of the radar dish continually points in the direction of the plane's center.

There are a number of types of constraints in XSI:

• Constraining transformations: in position, orientation, direction, scaling, pose (all transformations), and symmetry.

• Constraining in space: by distance, or between 2, 3, or any number of points.

• Constraining to objects: to clusters, surfaces and curves, bounding volumes, and bounding planes.

For many of the constraints, you can add tangency or up-vector directions to the mix. The tangency and up-vector constraints are properties of several constraint types that determine the direction in which the constrained object should point. For example, if you apply a Direction constraint to an object, you can also add an up-vector (Y axis) to control the "roll" of the direction-constrained object.

You can see constraint information if you click the eye icon in a viewport and select Relations.
Creating Offsets between Constrained Objects
When you constrain an object, you often need to offset it in some way
from the constraining object. This could be an offset in position,
orientation, or scaling. For example, if you position-constrain one
object to another without an offset, both objects end up sharing the
same position (“on top” of each other), so you need to offset them.

Position constraint without offset: the position of the constrained
object’s center matches that of the constraining object’s center.
(Constraining object: magnet; constrained object: airplane.)

Position constraint with offset: an offset is applied to the position of
the constrained object’s center.

With almost all types of constraints, you can set offsets using the
controls in their property editors. The offset is set between the centers
of the constrained and constraining objects on any axis.

To set an offset interactively, you can use the CnsComp button
(Constraint Compensation) on the Constrain panel. With
compensation, you can interactively offset the constrained object from
the constraining object and animate it independently while keeping
the constraint.

When you create an offset for a constraint, you can set the coupling
behavior between constrained objects. Coupling can be either soft or
rigid: the difference between them is like the difference between a car
with a trailer versus a pickup truck. With soft coupling, the trailer
follows the car but still has a limited range of motion; with rigid
coupling, the truck bed is welded to the truck.

Blending Constraints

You can blend multiple constraints on an object with each other, as
well as blend constraints with other animation on the constrained
object. You set the Blend Weight parameter’s value in each constraint’s
property editor to blend the weight (or “strength”) of one constraint
against the others. And, of course, you can animate the blending to
have it change over time.

Blending is done in the order in which you applied the constraints,
from the first-applied constraint to the last. Each constraint takes the
previous result and gives a new one based on the value you set.

For example, if you have three position constraints on an object, you
can have the object placed exactly in the center of them.

The cone has 3 blended position constraints:
1. First to A with a blend weight of 1
2. Next to B with a blend weight of 0.5
3. Lastly to C with a blend weight of 0.333
This keeps the cone positioned in the middle of the triangle formed
by A, B, and C.

You can see the order of the constraints as well as their blend weight
values if you click the eye icon in a viewport and select Relations and
Relations Info.
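The cone example works out numerically: each constraint linearly interpolates the previous result toward its own target by its blend weight. This is a minimal Python sketch of that math only, not XSI code; `blend_constraints` is a hypothetical helper name:

```python
def blend_constraints(start, targets_and_weights):
    """Apply position constraints in application order; each one blends
    the previous result toward its target by its blend weight."""
    pos = start
    for target, w in targets_and_weights:
        pos = tuple(p * (1 - w) + t * w for p, t in zip(pos, target))
    return pos

# Three targets forming a triangle, weighted 1, 0.5, and 1/3 in order:
A, B, C = (0.0, 0.0, 0.0), (6.0, 0.0, 0.0), (3.0, 0.0, 6.0)
center = blend_constraints((9.9, 9.9, 9.9), [(A, 1.0), (B, 0.5), (C, 1 / 3)])
# Weight 1 snaps to A, 0.5 averages in B, 1/3 averages in C,
# which lands exactly on the centroid of the triangle: (3, 0, 2).
```

This is why the manual's weights of 1, 0.5, and 0.333 keep the cone in the middle: the chain of interpolations reduces to an equal average of the three targets.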
Basics • 151
Section 9 • Animation
Path Animation

A path provides a route in global space for an object to follow in order
to get from one point to another. The object stays on the path because
its center is constrained to the curve for the duration of the animation.

This plane uses path animation. Its position is measured as a
percentage along the curve. A triangle represents a locked-path key; a
square represents a key saved on the path.

The dotted line is connected to the center of the constraining curve.
You can select the line and press Enter to open the PathCns or
TrajectoryCns property editor.

This plane uses trajectory animation. It jumps from knot to knot at
each frame.

Want to convert a path animation to translation? Plot the position of
the path-animated object, then apply the result to the object or as an
action in the animation mixer.

You can create path animation in XSI using a number of methods, each
one having its own advantages:

• The quickest and easiest way of animating an object along a path is
by using the Create > Path > Set Path command and picking the
curve to be used as the path. There’s no need to set keyframes—just
set the start and end frames. The object is automatically
constrained to the path and animated along the percentage of the
curve’s length.

• Constrain an object to a curve using the Curve (Path) constraint
and manually set keys for the percentage of the path traveled.

• Choose the Create > Path > Set Trajectory command and pick a
trajectory curve to use its knots as indicators of the object’s
position at each frame.

• Move an object about your scene and save path keys with the
Create > Path > Save Key on Path command at different
positions—the path curve is created automatically as you go.

• Convert the existing movement of an object into a path using the
Create > Path > Convert Position Fcurves to Path command.
152 • SOFTIMAGE|XSI
A circle represents a key set directly
from a property page or the
animation editor. These are the only
type of keys found on trajectories.
You can see path information if
you click the eye icon in a
viewport and select Relations.
After you’ve created path animation, you can modify the animation by
changing the timing of the object on the path (choose the Create >
Path > Path Retime command), or by moving, adding, or removing
points on the path curve as you would to edit any curve.
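The "percentage along the curve" idea can be illustrated with a polyline standing in for the path. This is only a sketch of the concept (real XSI paths are curves, and `point_at_percentage` is a hypothetical name, not an XSI command):

```python
import math

def point_at_percentage(points, pct):
    """Return the position at pct percent of the total length of a
    polyline, a linear stand-in for a percentage-along-curve constraint."""
    segments = list(zip(points, points[1:]))
    lengths = [math.dist(a, b) for a, b in segments]
    target = sum(lengths) * pct / 100.0
    for (a, b), length in zip(segments, lengths):
        if target <= length:  # the target distance falls in this segment
            t = target / length
            return tuple(pa + t * (pb - pa) for pa, pb in zip(a, b))
        target -= length
    return points[-1]

# An L-shaped path of total length 20; 50% is the corner point:
path = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
point_at_percentage(path, 50.0)  # -> (10.0, 0.0)
```

Keying that percentage value over time is what the Curve (Path) constraint method above amounts to.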
For example, using the Path Retime command, you can shorten (and
therefore speed up) a path animation that originally ran from frames 1
to 100 so that it runs from frames 20 to 70. You can even reverse the
animation—for example, enter 100 as the start frame and 1 as the end
frame.
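Retiming is a linear remap from the old frame range onto the new one. A minimal Python sketch of the arithmetic (the `retime` helper is illustrative, not an XSI command):

```python
def retime(frame, old_start, old_end, new_start, new_end):
    """Map a frame in the old range onto the new range. Swapping
    new_start and new_end reverses the animation."""
    t = (frame - old_start) / (old_end - old_start)
    return new_start + t * (new_end - new_start)

# Compress frames 1-100 into 20-70 (the animation plays twice as fast):
retime(1, 1, 100, 20, 70)    # -> 20.0
retime(100, 1, 100, 20, 70)  # -> 70.0

# Reverse by entering 100 as the start and 1 as the end:
retime(1, 1, 100, 100, 1)    # -> 100.0
```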
Linking Parameters

When you link parameters, also known as driven keys, you create a
relationship between them in which one parameter depends on the
animation state of another. In XSI, you can create simple one-to-one
links with one parameter controlling another, or you can have multiple
parameters controlling one parameter.

You can link any animatable parameters together—from translation to
color—to create some very interesting or unusual animation
conditions. For example, you could create a chameleon effect so that
when object A approaches object B, it changes color. Basically, if you can
animate a parameter, you can link it.

There are three basic ways in which you can link parameters. You can:

• Create simple one-to-one links with one parameter driving one or
more other parameters. When you link one parameter to another, a
relationship is established that makes the value of the linked
parameter depend on the value of the driving parameter.

• Drive a single parameter with the combined animation values of
multiple parameters. This allows you to create more complex
relationships, where many parameter values are interpolated to
create an output value for one parameter.

• Drive a single parameter with the whole orientation of an object.

After you link parameters, you set the values that you want the
parameters to have, relative to a certain condition (when A does this, B
does that).

A Venus flytrap eyes its victim. Its jaw’s rotation Z parameter is linked
to (driven by) the position X parameter of the fly that is animated
along a path.

How to Link Parameters

1 Open the parameter connection editor.

2 Select an object, then select one or more of its parameters in the
Driven Target explorer. These are the parameters whose values will be
controlled by the driving parameter.
Click the lock icon to prevent this explorer from changing when you
select other objects.

3 Select an object, then select one of its parameters in the Driving
Source explorer. This is the parameter whose values will control the
linked parameters.

4 Select Link With from the link list.

5 Click the Link button. A link relationship is established between the
parameters. The animation icon of the linked parameter displays an
“L” to indicate this.

6 Set the driving and linked parameters’ values as you want them to be
relative to each other, then click Set Relative Values. Repeat this step
for each relative state you want to set at each frame.
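Conceptually, the relative values you set act as sample points, and the linked parameter is interpolated between them as the driver changes. A hedged Python sketch of that idea (linear interpolation assumed for simplicity; `driven_value` is an illustrative helper, not part of XSI):

```python
def driven_value(driver, samples):
    """Interpolate a linked parameter from (driver, driven) relative-value
    pairs, clamping outside the sampled range."""
    samples = sorted(samples)
    if driver <= samples[0][0]:
        return samples[0][1]
    if driver >= samples[-1][0]:
        return samples[-1][1]
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        if x0 <= driver <= x1:
            t = (driver - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# The fly's posx drives the jaw's rotz: closed when far away, open up close.
jaw_samples = [(-10.0, 0.0), (0.0, 45.0)]
driven_value(-5.0, jaw_samples)  # -> 22.5 (halfway between the two states)
```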
Expressions
Expressions are mathematical formulas that you can use to control any
parameter that can be animated, such as translation, rotation, scaling,
materials, colors, or textures. Expressions are useful for creating regular
or mechanical movements, such as oscillations or rotating wheels. As
well, they allow you to create almost any connection you like between
any parameters, from simple “A = B” relationships to very complex
ones using predefined variables, standard math functions, random
number generators, and more.
However you use expressions, you will find that they are very powerful
because they allow you to animate precisely, right down to the
parameter level. Once you’re more experienced using them, you can
create all sorts of custom setups, like character rigs and animation
control systems.
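For instance, a regular oscillation like the ones mentioned above boils down to a sine function of the current frame. This Python sketch only illustrates the math behind such an expression (the amplitude and period values are made up for the example):

```python
import math

def oscillation(frame, amplitude=2.0, period=25.0):
    """The math behind an oscillation expression such as
    'amplitude * sin(frame * 360 / period)' driving a position parameter."""
    return amplitude * math.sin(math.radians(frame * 360.0 / period))

oscillation(0.0)    # at rest at the start of the cycle
oscillation(6.25)   # -> 2.0, the peak, a quarter of the way through
```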
How to write an expression

1 Select an object and open the expression editor.

2 Select the target, which is the parameter controlled by the
expression.

3 Enter the expression in the expression pane by typing directly or by
choosing items from the Function, Object, and Param menus.

4 Validate and apply the expression. The editor shows the value of the
expression at the current frame.

How to create a simple equal (=) expression: 3 ways

In a property editor, drag an unanimated parameter’s animation icon
onto another parameter’s animation icon. This animation icon shows
an equal sign, and its value is made to be equal to the first parameter.

OR

In the explorer, drag the name of an unanimated parameter and drop
it on another parameter’s name.

OR

In the parameter connection editor, set up the Driving Source and
Target parameters, then select Equals (=) Expression.
You can also enter parameter
names by typing their script
names and then pressing F12.
This prompts you with a list of
possible parameters in context.
You can copy, cut, and paste in the
expression pane using standard
keyboard shortcuts (Ctrl+c, Ctrl+x,
and Ctrl+v, respectively).
The message pane updates as
you work, letting you know
whether the expression is valid.
Press Ctrl+G to switch between
this pane and a graph of the
resulting expression values.
For a complete
description and syntax
of all the functions and
constants available,
refer to the Expression Function
Reference (choose Help > XSI
Guides).
Copying Animation

There are different levels at which you can copy animation in XSI:
between parameters, between objects, or between models. Here are
some of the main ways to do this.

• You can copy animation between any parameters in the explorer or
a property editor in a number of ways:

In the explorer, drag the name of an animated parameter and drop
it on another parameter’s name.

In a property editor, drag the animation icon of an animated
parameter and drop it on another parameter’s animation icon.

In either the explorer or a property editor, right-click the animation
icon of an animated parameter and choose Copy Animation. Paste
this on another parameter in the same way.

In the explorer, you can drag an entire folder from one object onto
another object’s folder of the same name, such as the Pos folder,
which contains translation parameters.

• You can copy any type of animation between selected objects,
models, or parameters using the Copy Animation commands from
the Animation menu in the Animation panel.

• You can copy keys between parameters or objects in the dopesheet,
or copy function curves and keys between parameters or objects in
the fcurve editor.

In the dopesheet, you can copy animation from one model to
another, or from one hierarchy of objects to another within the
same model. For example, you can paste a walk cycle animation
from the Bob model to the Fred model, as long as Fred has the same
parameter names as Bob.

• Store an object’s animation in an action source and copy it between
models, which is especially useful for exchanging animation
between scenes.
Scaling and Offsetting Animation

If you find that your whole animation is a bit too long or too short, or
you just want to offset it by a few frames, you can do so with the
Sequence Animation commands from the Animation menu in the
Animation panel. They give you control over animation by offsetting
or scaling (shortening or lengthening) the motion of all objects,
selected objects, or just the marked parameters of selected objects. You
can offset or scale all animation sources, function curves, and clips in
the animation mixer. You can scale and offset using explicit values, or
else you can retime an animation by fitting it into a specified frame
range. You can also reverse an animation easily.

You can also use the dopesheet to offset or scale animation for an
object or even the scene, especially using its summary tracks.

The selected fcurve (white) has been scaled to twice its length; the
ghosted fcurve (black) shows the original fcurve’s size.

The selected fcurve has been offset by about 20 frames.

The selected fcurve has been retimed so that a range of 125 frames in
the middle of it has been compressed into a range of 80 frames.

Plotting (Baking) Animation

When you plot the animation on an object using the commands in the
Tools > Plot menu on the Animate toolbar, the animation is evaluated
frame by frame and function curves are created.

Plotting is useful for generating function curves from any type of
animation or simulation, such as from the simulation of a spring-based
tail on a dog, or plotting mocap animation from a rig. You can also plot
the animation of a constrained object and then remove its constraints
so that only the plotted animation remains on the object.

The animation of an object constrained between two points is plotted.

Plotting is done by first creating an action source. You can choose to
either keep or delete this action source after the animation has been
plotted:

• You can apply the plotted animation (fcurves) immediately to the
object and delete the action source.

• You can keep the action source of the plotted animation (fcurves)
but not have it applied to the object immediately. This may be
useful for creating a library of action sources that can be applied to
the same or even a different object.

• You can apply the plotted animation (fcurves) to the object and
also keep them stored in an action source. This may be useful if
you’re using the animation mixer.
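The frame-by-frame evaluation that plotting performs can be sketched in a few lines of Python. This is only a conceptual model (the `plot` helper and the dict-as-fcurve representation are illustrative, not XSI's actual data structures):

```python
def plot(evaluate, start, end):
    """Evaluate any animation (constraint, simulation, expression...)
    frame by frame and bake the result into explicit keys, one per frame."""
    return {frame: evaluate(frame) for frame in range(start, end + 1)}

# Bake a procedurally animated parameter into an fcurve-like mapping:
fcurve = plot(lambda frame: frame * 0.5, 1, 100)
fcurve[10]  # -> 5.0
```

Once the values are baked like this, the original constraint or simulation can be removed and only the explicit keys remain, which is exactly why plotting is used before removing constraints.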
Removing Animation

There are different levels at which you can remove animation in XSI:
from parameters, from objects, or from models. Here are some of the
main ways to do this.

• You can remove any type of animation from selected objects,
models, or parameters using the Remove Animation commands
from the Animation menu in the Animation panel.

• You can remove animation from parameters in the explorer or a
property editor:

In either the explorer or a property editor, right-click the animation
icon of an animated parameter and choose Remove Animation.

In a property editor, right-click the keyframe icon and choose
Remove Animation to remove animation from all or marked
animated parameters on the property page.

• You can remove all keys from parameters or objects in the timeline
or in the dopesheet, or remove fcurves or all keys from parameters
or objects in the fcurve editor.

• When you remove keys from an fcurve, a flat (static) fcurve
remains. To remove the static fcurve, choose Remove Animation >
from All Parameters, Static Fcurves from the Animation menu.

• In the dopesheet, you can easily remove all animation from a
model or from a hierarchy of objects using its summary tracks.
Section 10
Character Animation
Character animation is all about bringing your
characters to life, whether it’s some guy dancing in a
club, a dog catching a frisbee, or a simple bouncing
ball with personality to spare. Even though you’re
working in a virtual environment, your job is to
make these characters seem believable in their
movements and expression.
In XSI, you’ll find everything you need to make any
type of character come alive: from envelopes and
skeletons to control rigs and inverse kinematics.
What you’ll find in this section ...
• Character Animation in a Nutshell
• Setting Up Your Character
• Building Skeletons for Characters
• Enveloping
• Rigging a Character
• Animating Characters
• Walkin’ the Walk Cycle
• Motion Capture
Character Animation in a Nutshell
XSI has many tools to help you create and animate your characters.
Some of them are tools designed for character animation, such as
inverse kinematics, while others are part of the standard XSI tool set,
such as modeling and keying tools.
The following outline gives you an idea of which steps to take and
which tools to use for developing and animating characters in XSI.

1 Good animation starts with a story. From it, create character
sketches and even sculptured models to work out your ideas. These
sketches are transformed into the character design model sheets for
model and skeleton building in XSI.

2 Develop the story by creating sketches showing the character in
different scenes. This storyboard should show the main actions,
timing, camera angles, and transitions.

3 Set up the timing (convert time to frames) for the scene and create
a 2D animatic. You can scan the storyboard frames and import them
into XSI, then apply them as textures onto a grid, and create a
flipbook.
4
Model the body geometry to be used as the
envelope, also known as skin, that will cover
the skeleton. Envelopes can be deformed
according to how the skeleton moves.
You can use either a low- or high-resolution
version of the envelope. A low-res
envelope lets you work out the
animation with it as a reference,
but doesn’t hinder the refresh speed.
5
Build a skeleton to provide a
framework for a character, and to
pose or deform it intuitively.
With the envelope as a guide,
create the bones for the skeleton
and assemble them into a
hierarchy.
6
Apply the envelope to the
skeleton. This also involves setting how
the different parts of the envelope are
weighted to the different bones in the
skeleton.
You should also save a reference pose
of the envelope before you start
animating for a home base to which you
can return.
7 Create a control rig to help you to pose and animate the character
more quickly and accurately than without a rig. While simple
characters may not require a rig, a character that is complex or needs
to do complicated movements will need one.

Skeleton chains are manipulated and animated using inverse
kinematics (IK) and forward kinematics (FK). IK is a goal-oriented
way of posing a skeleton, while FK animates a bone’s rotations.

8 When you have all the elements (envelopes, skeleton, rig controls)
for the character ready, select them and put them into a model
structure.

9 Animate the character in different poses at the key frames. You can
also save the skeleton poses as action sources, which you can then
bring into the animation mixer to block out a rough animation.

10 Adjust the envelope’s weighting for each pose. Any area where
limbs join the body always requires additional tweaking, especially
with extreme poses.

11 Adjust the animation by editing keys in the timeline, keys in the
dopesheet, fcurves in the fcurve (animation) editor, or action sources
in the animation mixer. You can fix foot sliding, add some variation to
a regular cycle, or add a progressive offset to a walk cycle.
Setting Up Your Character
How you set up your character determines its destiny in many different
ways. Here are some issues to think about while you’re planning out
your character animation.
Putting the Character’s Elements into Models
Models in XSI are containers that make it easy to organize scene
elements that need to be kept together. A character’s skeleton hierarchy,
rig controls, envelope geometry, and groups are often kept together
within a model. The main reason for using models with character
animation is that they provide the easiest way to import and export
characters between scenes and to copy animation between characters.
You can refine your rigs
and character models
over the course of a
production without fear
of lost animation. For
example, character
animators can start
roughing out animation
with a simple rig and low
resolution proxy model
while the other creative
work is still being worked
out. As long as you keep
the rig controls’ names
and their coordinate space consistent, all the animation is kept and can
be reapplied as the character and rigging both get more complex.
Another reason to work with models is to easily use the animation
mixer. Each model can only have one Mixer node. If you have many
characters in a scene but they aren’t within models, you have only one
Mixer node for the whole scene (under the scene root, which is
technically a model) which means that you can’t copy animation from
one character to another.
When working with characters and the mixer, it’s best to create models
at the character level (or higher); that is, don’t create models for each
hand, foot, leg, etc. Otherwise, it becomes difficult to animate the character as a
whole in the mixer because you won’t have a high-level view of all of
your character’s animation.
Organizing Your Character into Scene Layers and
Groups
Scene layers let you divide up different scene elements into groupings
whose visibility, selectability, renderability, and ghosting can be
controlled. Press 6 to open the scene layer manager and set up the
layers. You can use layers to break a character down into sections so
that you can quickly change selectability and visibility for each layer.
For example, you can separate the
character’s envelope (geometry), its
skeleton, and its control objects for
the rig each into different layers.
Layers, however, live only at the scene
level, so if you’re importing and
exporting models between scenes, they’re not going to include any
layer information. This is where groups can be of help.
Groups let you keep certain character
elements together, such as all objects that
are to be enveloped. Groups are properties
of a model, so you can export them with
your character model.
Groups allow you to select multiple objects
at a time and are important for sharing
materials and setting up texture supports
for many objects at a time.
To create a group, select all elements you
want in the group and press Ctrl+G.
Tools for Easy Viewing and Selecting
When you’re animating a skeleton, you may want to work with a low-resolution version of the envelope on the skeleton. This helps you get a
sense of how the animation will work with the final envelope. However,
working with enveloped skeletons can make it difficult to view or select
chain elements. To help you with this, XSI has several viewing and
selection options, with the most common ones shown here.
X-ray shading lets you see and
select the underlying chains while
still seeing the shaded surface of
the envelope.
You can display the chains
in screen (bones inside) or
overlay (bones on top)
modes.
You can set up a character synoptic view for other members of your
team, allowing them to use your character easily. Synoptic views allow
you and others to quickly access commands and data related to a
specific object or model. They consist of a simple HTML image map
stored as a separate file outside of the XSI scene file. The HTML file is
then linked to a scene element.
Clicking on a hot spot in the image either opens another synoptic view
or runs a linked script. You can include all sorts of information about
the character, set up hotspots for selecting body parts, setting keys on
different elements, running a script, etc.
Synoptic views are easy to set
up and let you do things like
quickly selecting skeleton
elements, keying them, or
applying set poses.
Click on a hot spot on the
synoptic image to run the script
that is linked to that image.
Shadow icons are displayed here as
cylinders for many bones. These shadows
have been resized and offset from the
bone to make them easy to see and grab.
You can also color-code the shadows to
identify different groups of controls.
You can also change the shape, color, and
size of the chain elements themselves
(such as resizing the bones), including
having no chain element displayed at all.
Building Skeletons for Characters
Skeletons provide an intuitive way to pose and animate your character.
A well-constructed skeleton can be used for a wide variety of poses and
actions. Skeletons in XSI are made up of bones that are linked together
by joints that can rotate. The combination of bones and joints is
referred to generically as a chain in XSI because you can use chains for
animating any type of object, not just humans or creatures. Chains
have several elements, each of which has an important part to play, as
shown below.
Anatomy of a skeleton
The bones are connected by joints. A bone always
rotates about its joint, which is at its top. The first bone
rotates around the root.
A root is a null that is the starting point
on the chain. It is the parent of all other
elements in the chain.
The first bone in the chain is a child of the root, and all
other bones are children of their preceding bones.
Keying the rotation of bones is how you animate with
forward kinematics (FK).
Because the first joint is local to the root,
the root’s position and rotation determine
the position and rotation of the rest of the
chain.
The effector is a null that is the last part of
a chain. Moving the effector invokes
inverse kinematics (IK), which modifies
the angles of all the joints in that chain.
When you create a chain, the effector is a
child of the root, not the preceding bone.
A joint is the connection between elements in a chain:
between bones in the chain, between the root and the
first bone, and between the last bone and the effector.
By default, joints are not shown but you can easily
display them if you like.
• In a 2D chain, the joints act as hinges, restricting
movement so that it’s easier to create typical limb
actions, such as bending an arm or leg. Only its first
joint at the root acts as a ball joint, allowing a free
range of movement: when using IK, the rest of the
2D chain’s joints rotate only on the root’s Z axis, like
hinges. Of course, you can rotate the joints of a 2D
chain in any direction with FK, but this is overridden
as soon as you invoke IK.
• In a 3D chain, the joints can move any which way
they like. All of its joints are like ball joints that can
rotate freely on any axis, allowing you to animate
wiggly objects like a tail or seaweed.
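With FK, the effector's position falls out of the accumulated joint rotations down the hierarchy. Here is a minimal planar sketch of that idea (illustrative math only; `fk_positions` is a hypothetical helper, not part of XSI):

```python
import math

def fk_positions(root, lengths, angles_deg):
    """Forward kinematics for a planar chain: accumulate each joint's
    rotation down the hierarchy and return every joint position,
    ending with the effector."""
    x, y = root
    heading = 0.0
    positions = [(x, y)]
    for length, angle in zip(lengths, angles_deg):
        heading += math.radians(angle)  # child inherits the parent's rotation
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((x, y))
    return positions

# Two bones of length 1: rotate the first 90 degrees, bend the second -90.
fk_positions((0.0, 0.0), [1.0, 1.0], [90.0, -90.0])
# joint positions (0,0) and (0,1), effector at (1,1)
```

IK works the other way around: you place the effector, and the solver computes the joint angles that reach it.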
Creating Skeletons
Drawing chains is pretty simple in XSI: you choose the Draw 2D Chain
or 3D Chain command and click where you want the root, joints, and
effector to be. Here are some tips to help you draw chains:
• To place the chain elements exactly where you want, use snapping
as you draw the chains.

• Draw the chains in relation to the default pose of the envelope that
you’re planning to use. This means you don’t have to spend as
much time adjusting each bone’s size and position later.

• Draw the chain with at least a slight bend to determine its direction
of movement when using IK. Drawing bones in a straight line can
result in unpredictable bending.

• If you want two chains to be mirrored, such as a character’s arms or
legs, you can draw one and have the other one created at the same
time. Just activate symmetry (Sym) mode and then draw a chain.

After you have created the chains for a character’s skeleton, you need to
organize them in a hierarchy. Hierarchies are parent-child relationships
that make it easy to animate the skeleton. There are many different
ways in which you can set up a hierarchy, depending on the skeleton’s
structure and the type of movements that the character needs to make.

A skeleton with part of her hierarchy structure shown in the schematic
view. In this case, the spine root is the parent of the leg roots, spine,
and spine effector. These elements are, in turn, parents of the legs,
neck, shoulders, spine, and so on.
How to draw a chain

1 Choose the Skeleton > Draw 2D Chain or Draw 3D Chain
command.

2 Click once to create the root and first joint. The bone and joint are
not created until you let go of the mouse button.

3 Click again to create the first bone and second joint.
Tip: You can try out the joint’s location by keeping the mouse button
held down as you drag.

4 Click once more to create another bone and joint.

5 When you’re ready to finish, right-click to create the effector and
end the chain.

How to create a hierarchy

In an explorer, drag the nodes you want to be children and drop them
onto the node that will be the parent.

OR

Select the node you want to be the parent, click the Parent button,
and then pick the elements that will be its children. Right-click to end
the parenting mode.
Hold That Pose!

When you’re creating a skeleton, it’s a good idea to save it in a default
position (pose) before it’s animated or enveloped. This way you have a
solid reference point to revert to when enveloping and animating the
skeleton. This pose is known as the neutral pose, reference pose, or base
pose, and is usually set up so that the character has outstretched arms
and legs (a T-pose), making it easy to weight the envelope and adjust its
textures.

To save the skeleton in a pose, you can create an action source using the
Skeleton > Store Skeleton Pose command. To return to this pose at any
time, you apply it to your character with the Skeleton > Apply Skeleton
Pose command.

Because this pose is saved in an action source, you can pop it into the
animation mixer to do nonlinear animation. For example, you could
use this pose, as well as other stored action poses, to block out a rough
animation for the character in the mixer.

The character in his neutral pose for weighting and texturing. If you
store a skeleton pose of this position, it’s easy to return to it at any
point of your character development.

Neutral Poses for Easy Keying

When you’re creating a character, you set its reference or neutral pose
before enveloping or animating it. However, a pose that’s useful for
envelope weighting isn’t always the best for animating because some
limbs don’t always have local transformation values that are easy to
key. For example, if you load the default skeleton that comes in XSI and
you want to key the rotation of the finger bones, you’ll see that the
bones’ local rotation values are large, difficult numbers to use for
keying.

To solve this problem, pose your character how you want for its neutral
pose and then simply choose the Skeleton > Create Neutral Pose
command. This creates a neutral pose that uses zero for its local
transformation values (0 for rotation and translation, 1 for scaling).
Basically, this neutral pose acts as an offset for the object’s current local
transformation values. To return to this neutral pose, you can enter
zero in the Transform panel (“zero out” the values).

Then when you key the character’s values, they reflect the relative
difference from zero, and not a number that’s difficult to use. For
example, when you key a hand bone at coordinates (0, 3, 0), you know
that it’s 3 units in the Y axis above the neutral pose.
Branch-selected hand bone
in neutral pose at 0.
Hand bone rotated and keyed. Notice how
the rotation values are easy to understand
because they’re using 0 as a reference.
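The neutral-pose offset amounts to subtracting the neutral values from the current ones, so keyed values read as distances from the neutral pose. A tiny Python sketch of that arithmetic (the values and the `to_neutral_space` helper are illustrative, not XSI data):

```python
def to_neutral_space(current, neutral):
    """Values keyed after Create Neutral Pose are offsets from the neutral
    pose, so a keyed value of 0 means 'back at the neutral pose'."""
    return tuple(c - n for c, n in zip(current, neutral))

# A hand bone whose raw local position would be awkward to key:
to_neutral_space((10.0, 48.0, 3.0), (10.0, 45.0, 3.0))
# -> (0.0, 3.0, 0.0): 3 units in Y above the neutral pose
```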
Making Adjustments to a Skeleton
Even though you’ve created your skeleton with the envelope in mind,
you always need to resize bones, chains, or a whole skeleton to achieve
the exact structure you want. As well, you may need to add or remove
bones to the skeleton.
It’s usually better to modify a skeleton before you apply the envelope to
it so that you don’t have to reweight the envelope to the bones.
However, you can change the skeleton after it’s been enveloped, and
decide whether to have the envelope adjust to the skeleton or not.
Resizing bones

The easiest way to resize bones is to use the Move Joint tool
(press Ctrl+J). This tool lets you interactively resize bones by moving
any chain element to a new location. The bones that are immediately
connected to that chain element are resized and rotated to fit the
chain element’s new location.

Use the Move Joint tool to move the knee joint to a new position: the
bones connected above and below this joint are resized. Moving the
knee joint using Move Branch instead resizes only the bone above it:
this joint’s children are moved as a group but are not resized.

Adding bones

You can add bones to a chain using the Skeleton > Add Bone to Chain
command. Click at the point where you want the new bone to end,
and the new bone is added between the last bone and the effector.
Keep on adding as many bones as you like, then right-click to end the
mode.

Removing bones

You can’t select and delete individual bones from a chain because of
their hierarchy dependencies, but you can branch-select (middle-click)
a chain and then delete it. If there are children in that chain that you
want to keep, make sure to cut their links before deleting the chain,
and then reparent them to the modified chain.
Modifying bones for an
enveloped skeleton
If you resize or add bones to a skeleton that’s
already enveloped, the envelope automatically
adjusts to the new skeleton. This means that
you may need to adjust the weighting on the
envelope.
If you want to resize bones without having the
envelope adjust to the new size, you set a new
reference pose with the Deform > Envelope >
Set Reference Pose command.
Enveloping
An envelope is an object that deforms automatically, based on the pose
of its skeleton or other deformers. In this way, for example, a character
moves as you animate its skeleton. The process of setting up an
envelope is sometimes called skinning or boning.
Every point in an envelope is assigned to one or more deformers. For
each point, weights control the relative influence of its deformers. Each
point on an envelope has a total weight of 100, which is divided
between the deformers to which it is assigned. For example, if a point is
weighted by 75 to the femur and 25 to the tibia, then the femur pulls on
the point three times more strongly than the tibia.
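The weighted blend described above can be sketched as a small function. This is an illustrative Python sketch of linear blend skinning, not XSI’s scripting API; the function and deformer names are hypothetical:

```python
def deform_point(rest_pos, influences):
    """Blend a point's position from its deformers (linear blend skinning).

    influences: list of (weight, transform) pairs, where weight is on the
    0-100 scale used by XSI envelopes and transform maps the rest position
    into the deformer's current space.
    """
    total = sum(w for w, _ in influences)  # 100 for a fully weighted point
    x = y = z = 0.0
    for weight, transform in influences:
        px, py, pz = transform(rest_pos)
        f = weight / total
        x += f * px
        y += f * py
        z += f * pz
    return (x, y, z)

# A point weighted 75 to the femur and 25 to the tibia: if the femur's motion
# alone would carry the point to (4, 0, 0) and the tibia's to (0, 4, 0),
# the femur pulls three times more strongly on the result.
femur = lambda p: (p[0] + 4.0, p[1], p[2])
tibia = lambda p: (p[0], p[1] + 4.0, p[2])
print(deform_point((0.0, 0.0, 0.0), [(75, femur), (25, tibia)]))  # (3.0, 1.0, 0.0)
```

The 75/25 split lands the point three quarters of the way toward where the femur alone would put it.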
Setting Envelopes
1. Make sure the envelope and deformers are in the reference pose (sometimes called a bind pose). The reference pose determines how points are initially assigned and weighted. It’s best to choose a reference pose that makes it easy to see and control how points will be assigned.
2. Select the objects, hierarchies, or clusters to become envelopes.
3. Choose Deform > Envelope > Set Envelope from the Animate toolbar.
If the current construction mode is not Animation, you are
prompted to apply the envelope operator in the animation region
of the operator stack anyway. In most cases, this is probably what
you want.
4. Pick the objects that will act as deformers. You are not restricted to skeleton bones; you can pick any object. Left-click to pick individual objects and middle-click to pick branches. You can also pick groups in the explorer—this is equivalent to picking every object in the group individually. If you make a mistake, Ctrl+click to undo the last pick.
5. When you have finished picking deformers, right-click to terminate
the picking session. Each deformer is assigned a color, and points
that are weighted 50% or more toward a particular deformer are
displayed in the same color.
Use the Automatic Envelope Assignment property editor to adjust the basic settings.
6. Move the deformers to see how the envelope deforms. If necessary, you can now change the deformers to which points are assigned, as well as modify the envelope weights using the methods described in the next few sections.
If you ever need to reopen the Automatic Envelope Assignment property editor, you can find it in the envelope weight stack in an explorer.
The Weight Paint Panel
The weight paint panel is very useful when modifying weights. It combines several features from the weight editor, brush properties, and the Animate toolbar. To display the weight paint panel, press Ctrl+3 or click the weight paint panel icon at the bottom of the toolbar.
The panel’s controls let you:
- Choose a paint mode.
- Activate the Paint tool.
- Set the weight assignment of selected points to the current deformer numerically, and set numeric weight assignment options.
- Set the paint density and brush size.
- Update continuously (on) or only when the mouse button is released (off).
- Pick a deformer for painting from the 3D views, or click in the deformer list to pick it there; right-click for other options.
- Smooth weights on the object or on selected points.
- Reassign points to other deformers.
- Freeze the initial weight assignment and any modifications.
- Open the weight editor.
- Display only the current deformer’s weight map.
Painting Envelope Weights
You can use the Paint tool to adjust envelope weights. This lets you use
a brush to apply and remove weights on points in the 3D views.
1. Select an envelope.
2. Activate the Paint tool using the weight paint panel or by pressing w.
3. Pick a deformer for which you want to paint weights by selecting it in the list in the weight paint panel or by pressing d and picking it in a 3D view. You can also change the display color of the current deformer.
4. If desired, set the paint mode. Most of the time you will be using Add (additive), but Smooth, Erase, and Abs (absolute) are also sometimes useful.
5. If desired, adjust the brush properties:
- Use the r key to change the brush radius interactively.
- Use the e key to change the opacity interactively.
- Set other options in the Brush Properties editor (Ctrl+w).
6. Click and drag to paint on points on the envelope. In normal (additive) paint mode:
- To add weight, use the left mouse button.
- To remove weight, either use the right mouse button or press Shift+left mouse button.
- To smooth weight values between deformers, press Alt+left mouse button.
7. Repeat steps 3 to 6 for other deformers and points until you are satisfied with the weighting.
If your envelope has multiple maps, for example, a weight map in addition to an envelope weight map, then you may need to select the envelope weight map explicitly before you can paint on it. A quick way is to select the enveloped geometry object, then choose Explore > Property Maps from the Select panel and select the map to paint on.
Reassigning Points to Specific Deformers
You can reassign points to specific deformers. This is useful when the automatic assignment did not assign the points to the desired bones.
1. Select points on the envelope.
2. Choose Deform > Envelope > Reassign Locally on the Animate toolbar, or click Local Reassign on the weight paint panel.
3. Pick one or more of the original deformers.
Smoothing Envelope Weights
In addition to painting in Smooth mode, you can select an envelope or
specific points and click Apply Smooth on the weight panel. This
applies a Smooth Envelope Weight operator with several options.
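The idea behind a smoothing pass can be sketched as averaging each point’s weights with those of its neighbors and renormalizing. This is a conceptual Python sketch under assumed data structures, not the actual Smooth Envelope Weight operator:

```python
def smooth_weights(weights, neighbors, strength=0.5):
    """One smoothing pass over per-point deformer weights.

    weights:   list of dicts {deformer_name: weight}, one per point (0-100 scale)
    neighbors: list of neighbor-index lists, one per point
    strength:  0 = no change, 1 = replace each point by its neighbor average
    """
    out = []
    for i, w in enumerate(weights):
        nbrs = neighbors[i]
        # average the neighbors' weights per deformer
        avg = {}
        for j in nbrs:
            for d, wt in weights[j].items():
                avg[d] = avg.get(d, 0.0) + wt / len(nbrs)
        # blend toward the average, then renormalize so the total is 100
        blended = {d: (1 - strength) * w.get(d, 0.0) + strength * avg.get(d, 0.0)
                   for d in set(w) | set(avg)}
        total = sum(blended.values())
        out.append({d: 100.0 * v / total for d, v in blended.items()})
    return out
```

For a point fully weighted to deformer B whose neighbors are split between A and B, a half-strength pass pulls some of its weight over to A, softening the boundary between the two deformers.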
Mirroring Envelope Weights Symmetrically
You can mirror the envelope weighting symmetrically. This lets you set up the weighting on one half of your character and then copy the weights to the corresponding points and deformers on the other half.
First, you must establish the correspondence between symmetrical
points and deformers using Deform > Envelope > Create Symmetry
Mapping Template from the Animate toolbar. Then, you can select
properly weighted points and copy their values to the other side using
Deform > Envelope > Mirror Weights.
These points are incorrectly assigned to this deformer.
Setting Weights Numerically
The weight editor allows you to modify envelope weight assignments
numerically. You can open the weight editor by pressing Ctrl+e or by
clicking Weight Editor on the weight panel.
The weight editor’s features include:
- Control the display of enveloped objects, points, and deformers.
- Transfer the cell selection to the 3D views.
- Reassign points to other deformers.
- Smooth weights on the object or on selected points.
- Freeze the envelope operator stack.
- Limit the number of deformers per point.
- Lock weights, and set weight assignment options.
- Deformers are listed in columns. Right-click for display options; drag a column border to resize.
- Points are listed in rows. Click to select, right-click for display options; drag a row border to resize. Points that aren’t fully weighted are shown in red, and points with more deformers than the limit are shown in yellow.
- Multiple envelopes are listed; double-click to expand and collapse, or right-click for more options. If some points aren’t fully weighted, the envelope’s name is shown in red; hover the mouse pointer over the name to see how many points aren’t fully weighted. Envelopes with points over the deformer limit are shown in yellow.
- Set the weight of selected cells. Selected cells are highlighted, and non-zero weights are shaded.
Locking Envelope Weights
You can lock or “hold” the values of envelope weights using the weight editor, the Envelope menu of the Animate toolbar, or the context menu in the deformer list of the weight panel. Locking prevents you from accidentally modifying points that you have carefully adjusted when you are working on other points. It is also useful for setting exact numeric values while keeping Normalize on so that points don’t inadvertently become partially weighted to no deformer. If you need to modify locked points later, you must first unlock them. Points that are locked for all deformers are drawn in black in the 3D views.
Using Envelope Presets
You can use the commands on the File menu of the weight editor to save and load presets of envelope weights. This can be useful if you want to experiment with modifying weights—you can save the current weights and reload them later if you don’t like the results.
To share presets between different envelopes, the envelopes must meet the following conditions:
• They must have exactly the same topology. This includes both the number of points and their connections. If you added points after you created a preset, and then reapply the preset to the modified geometry, the new points are not weighted to any deformer until you assign them manually.
• Their deformers must have the same names.
The easiest way to meet these conditions is to simply duplicate a model containing an envelope and its deformers.
Freezing Envelope Weights
When you freeze envelope weights using Freeze Weights on the weight paint panel, the weight map’s operator stack is collapsed, removing the original Automatic Envelope Assignment property along with any Weight Painter, Modify Envelope Weight, and Smooth Envelope Weight operators that have been applied. This reduces the amount of stored data and increases performance, but also has a number of other effects:
• The initial envelope weights can no longer be recalculated—it’s as if the envelope was imported as is.
• If you change the reference pose, you can no longer change the initial envelope weights based on the new pose.
• If you add a deformer to an envelope, you can no longer recalculate the weights automatically. The envelope points are all weighted 0 to the new deformer, and you must assign weights manually.
However, you can still add new paint strokes, smooth weights, and edit weights numerically after freezing. In addition, you can still reassign points locally to other deformers.
Changing Reference Poses
After an envelope has been assigned, you can change the reference pose
of the envelope. The reference pose is the stance that the envelope and
its deformers return to when you use the Reset Actor command. It is
also the pose that determines the initial weighting of points to
deformers based on proximity.
First mute the envelope, then adjust the positions of the envelope and
deformers. Next, select both the envelope and deformers and choose
Deform > Envelope > Set Reference Poses from the Animate toolbar.
Finally, unmute the envelope.
Adding and Removing Deformers
After you have applied an envelope, you can add and remove deformers. To add deformers, select the envelope, choose Deform > Envelope > Set Envelope from the Animate toolbar, pick the new deformers, and right-click when you have finished. If the envelope weights have been frozen or if Automatically Reassign Envelope When Adding Deformers is off, no points are weighted to the new deformers, so you must do that manually. Otherwise, the initial weight assignments are recalculated and any modifications you made to them are preserved.
To remove deformers, simply choose Deform > Envelope > Remove Deformers from the Animate toolbar, pick the deformers to remove, and right-click when you are finished.
Limiting the Number of Deformers per Point
You can limit the number of deformers to which each point’s weight is assigned. This can be especially important for game characters, because some game engines have a limit on the number of deformers.
If a point’s weight is assigned to more than this number of deformers, its row is shown in yellow in the weight editor. If an envelope has any such points, its row is shown in yellow, too.
1. Set the maximum number of deformers on the weight editor’s command bar.
2. To try to fix these points automatically, click Enforce Limit. A Limit Envelope Deformers operator is applied, and its property page is opened automatically. By default, the limit is the one you set on the command bar, but you can change it for individual operators.
If a point has more than the maximum number of deformers, the operator unassigns the deformers with the lowest weights and then normalizes the weight among the remainder. However, it will respect locked weights—locked weights are never changed, even if other deformers have greater weight. If there aren’t enough unlocked weights to modify, then the total weight might not add up to 100%.
Modifying Enveloped Objects
Sometimes, after carefully assigning weights manually, you discover that you need to make a substantial change to the enveloped object, such as adding points. Luckily, you do not need to redo all your weighting—you can add and move points after enveloping.
When you add a point to an enveloped object, it is automatically weighted based on the surrounding points. It is better to add new points before removing old ones—this means that there is more weight information for the new points. You can assign the new points to specific deformers and modify weights as with any point on the envelope.
If you want to apply a deformation or move points on an enveloped object, make sure to first set the construction mode based on what you want to accomplish. For example:
• If you want to modify the base shape of the envelope, set the construction mode to Modeling.
• If you want to author shape keys on top of the envelope, for example, to create muscle bulges, set the construction mode to Secondary Shape Modeling.
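The pruning behavior described for the Limit Envelope Deformers operator can be sketched in a few lines. This is an illustrative Python sketch of that logic, assuming a simple per-point weight dictionary, not the operator itself:

```python
def enforce_limit(weights, locked, limit):
    """Limit a point to `limit` deformers: unassign the lowest unlocked
    weights, then renormalize the unlocked remainder so the total
    approaches 100. Locked weights are never changed.

    weights: {deformer: weight} on the 0-100 scale
    locked:  set of deformer names whose weights must not change
    """
    if len(weights) <= limit:
        return dict(weights)
    # locked deformers are always kept
    kept = {d: w for d, w in weights.items() if d in locked}
    free = sorted(((w, d) for d, w in weights.items() if d not in locked),
                  reverse=True)
    for w, d in free[:max(0, limit - len(kept))]:
        kept[d] = w
    locked_total = sum(w for d, w in kept.items() if d in locked)
    free_total = sum(w for d, w in kept.items() if d not in locked)
    if free_total > 0:
        # distribute the remaining budget among the unlocked weights;
        # if locked weights alone exceed 100, the total cannot reach 100,
        # matching the behavior the manual describes
        scale = (100.0 - locked_total) / free_total
        for d in kept:
            if d not in locked:
                kept[d] *= scale
    return kept
```

For example, a point weighted 40/30/20/10 across four deformers with a limit of 2 keeps the top two and rescales them to 57.1/42.9, while a locked weight of 10 stays at exactly 10 and only the unlocked survivors absorb the difference.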
Rigging a Character
Control rigs allow for “puppeteering” a character, helping you easily
pose and animate it. Once a control rig is set up properly, you can
animate more quickly and accurately than without one.
There are a number of tools in XSI to help you create a rig for your character: tools to create control objects and constrain them to the skeleton, and tools to help you easily create shadow rigs and manage the constraints between them and their parent rigs.
You can also use the prefab guides and rigs in XSI to help you get going
quickly. These are available for biped, dog-leg biped, and quadruped
characters. The rigs are skeletons that include control objects that you
can position and orient to animate the various parts of the character’s
body.
Ready-made (prefab) biped
rig that comes with XSI
You can customize the prefab rigs so that they contain only the
elements you need, exactly as you need them. The guides and rigs can
be used as a starting point for different rigging styles, and technical
directors can write their own proportioning script to attach their own
rig to a guide.
Shadow Rigs and Exporting Animation
Shadow rigs are simpler rigs that are constrained to your more
complex main rig that is used for animating the character. Shadow rigs
are usually used for exporting animation, such as to a games or crowd
engine or other 3D software programs.
You can load a basic shadow rig with the Get > Primitive > Model >
Biped - Box command. You can also create a shadow rig from a guide
with the Character > Hierarchy from Guide command, or generate a
shadow rig at the same time that you create a prefab rig.
To transfer the animation from the complex (animated) rig to its
shadow rig, you plot the animation while the shadow rig is still
constrained to the complex rig. Then you can export the shadow rig or
just its animation.
Features of the prefab rigs include the following:
- You can create either a quaternion or regular chain spine and head.
- Separate controls for the chest, upper body, and hips let you position and rotate each area individually.
- Volume indicators help you work with envelopes.
- Feet have three controls to allow for complex angles and foot rolls.
Animation is transferred from the animated main rig to the shadow rig while the shadow rig is constrained to the main rig.
Creating Your Own Rig
There are a number of tools in XSI to help you create a rig for your character. You can create primitive control objects (such as spheres and cubes) or sophisticated control elements (such as spines and spring-based tails) and constrain them to the skeleton. Expressions and scripted operators on these controls allow you to have ultimate control over your character’s animation. There are also tools to help you easily create shadow rigs and manage the constraints between them and their parent rigs.
1. Create control objects out of primitive objects or curves for each skeleton element you want to control. You can also create your own objects to look like the body parts you’re controlling, such as the feet, hands, head, or hips.
2. Constrain the control object to its skeleton element using constraints from the Constrain menu. The pose constraint is often used because it constrains all transformations (SRT) of the control object to its skeleton element.
You can create a simple but flexible spine with the Skeleton > Create Spine command. This creates a quaternion-blended spine for controlling a character the way you like. You constrain the top and bottom vertebrae to hip and chest control objects that you create.
Use up-vector constraints for controlling the resolution plane of the arms and legs when using IK. Put the control objects behind the legs or arms and constrain them to the thigh or upper-arm bones using the Skeleton or Constrain > Chain Up Vector command.
3. Create an object, such as a null, and make it the parent of all skeleton and rig control objects. Also make sure that all the rig control objects are within a model. You can also create a Transform Group, in which a null becomes an invisible parent of all selected objects.
Create spring-based tail or ear controls using the Skeleton > Create Tail command. Spring-based controls use dynamics to make them react to motion, such as bouncing in response to a character running or jumping.
Using Prefab Guides and Rigs
You can use the prefab guides and rigs in XSI to get going quickly.
These are available for biped, dog-leg biped, and quadruped characters.
The rigs are skeletons that include control objects that you can position
and orient to animate the various parts of the character’s body.
These guides and rigs can be used as a starting point for different
rigging styles, and technical directors can write their own
proportioning script to attach their own rig to a guide.
1. Create a guide by choosing Character > Biped Guide (or quadruped or biped dog-leg) and adjust it to fit your character’s envelope.
Drag the red cubes to resize the different parts of the body. You can use symmetry to resize the limbs on both sides of the body at the same time.
You can also create tail, ear, and belly controls that are driven by springs. This lets you create secondary animation on these body parts using dynamics.
To start, you first create a proportioning guide and drag its cubes to
resize it to fit your character’s envelope. Then you can create the actual
rig based on this customized guide.
You can customize each element in the rig so that it is exactly what you
need. You can also create volume controls to help with enveloping.
The guides have synoptic views to help you select and animate the rig
controls: select any control and press F3. There are also preset character
key sets and action sources to help you animate the rig.
2. When the guide is fitted to the envelope, create a rig based on it by choosing Character > Rig from Biped Guide. The rig is a skeleton that also includes standard XSI objects as control objects.
3. Apply the body geometry as an envelope to the rig, using the envelope_group in the rig’s model to apply it to the correct parts of the rig.
4. Position and rotate the rig controls and key them to animate the various parts of the skeleton.
Animating Characters
Skeletons provide an intuitive way to pose and animate your model. A well-constructed skeleton can be used for a wide variety of poses and actions, in much the same way as the skeletons in our bodies can. How parts of the skeleton move relative to each other is determined by the way your skeleton hierarchy is built and by whether and how objects are constrained to each other.
Before you start animating your character, it is important to understand how animating transformations works in XSI. There are several issues related to local and global animation, as well as animating transformations in skeleton hierarchies.
You animate skeletons using inverse kinematics (IK) and forward kinematics (FK). The method you choose depends on what type of motion you’re trying to achieve. Of course, you can animate with both IK and FK on the same chain and then blend between them, allowing you the flexibility to animate as you like.
Animating with Forward Kinematics
Forward kinematics, or FK as it is usually known, allows for complete control of the chain’s behavior. When you animate with FK, you rotate a bone into position, which sets the angle of its joint, and then key the bone’s rotation values (its orientation). Each movement needs to be planned to create the resulting animation. For example, to bend an arm, you start from the “top” and move down by rotating the upper arm bone, then the forearm bone, and finally the hand bone.
With FK, you can:
• Key the exact orientation (in X, Y, Z) of a joint. This prevents any surprises from occurring when 2D chains flatten on their resolution plane.
• Control certain joints that are difficult to animate, such as shoulders and arms.
• Have a movement properly “follow through”, such as giving a good, hard kick to a football.
Forward kinematics: bones in the arm are rotated and keyed in order from the upper arm down to move from an outstretched position to a raised position with a flexed wrist.
To animate with FK
1. Select a bone.
2. Click the Rotate (r) button in the Transform panel or press C.
3. Rotate the bone into position on any axis (X, Y, Z).
4. Key the bone’s rotation values.
You could also animate with FK by first translating the chain’s effector
(invoking IK) to move the bones into position, and then tweaking
each bone’s rotation as necessary.
When things are in the position you like, choose Skeleton > Key All
Bone Rotations to set rotation keys for all the bones in that chain.
To help make keying easier, you can create a character key set that
contains all the rotation parameters for the bones. Then you can
quickly key using this set. In a similar way, you can use the keying
panel to key only the rotation parameters that you have set as “keyable”
for the bones.
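The “rotate each bone from the top down” idea of FK can be shown numerically: every joint angle is relative to its parent, and positions follow from accumulating those rotations along the chain. A minimal planar Python sketch (illustrative, not XSI’s solver):

```python
import math

def fk_chain(bone_lengths, joint_angles_deg):
    """Forward kinematics for a planar chain: each joint angle is
    relative to its parent bone, mirroring the rotate-and-key workflow.
    Returns the joint positions from the root to the effector."""
    x = y = 0.0
    heading = 0.0
    points = [(x, y)]
    for length, angle in zip(bone_lengths, joint_angles_deg):
        heading += math.radians(angle)   # child rotations accumulate
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y))
    return points

# Upper arm 3 units, forearm 2 units: rotate the upper arm 90 degrees
# and keep the elbow straight, and the effector ends up at (0, 5).
tip = fk_chain([3.0, 2.0], [90.0, 0.0])[-1]
print(round(tip[0], 6), round(tip[1], 6))  # 0.0 5.0
```

Rotating only the elbow instead (angles 0 and 90) leaves the upper arm along X and folds the forearm upward, placing the effector at (3, 2).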
Animating with Inverse Kinematics
Inverse kinematics, usually referred to as simply IK, is a goal-oriented
way of animating: you define the chain’s goal position by placing its
effector where you want, then XSI calculates the angles at which the
previous joints in the chain must rotate so that the chain can reach that
goal.
IK is an intuitive way of animating because it’s how you probably think
of movement. For example, when you want to grab an apple, you think
about moving your hand to the apple (goal-oriented), not rotating
your shoulder first, then your arm, and then your hand.
Inverse kinematics: the leg’s effector is branch-selected (middle-clicked) and translated to move the leg from a standing position to doing the can-can.
With IK, you can:
• Easily try out different poses. Dragging an effector to reach a goal is
intuitive for certain types of actions.
• Quickly animate simple movements, including 2D chains that have
a limited range of movement.
• Easily set up poses for a chain by positioning the effector, then
keying either the effector’s translation (IK) or the bones’ rotation
values (FK).
Translation values on effectors of chains created in XSI are local to the
effector’s parent (by default, the chain root). By not having the effector
tied to its preceding bone, you are free to create local animation on the
effector that can be translated with its parent. However, many
animators prefer to constrain effectors and bones to a separate
hierarchy of control objects (control rigs) so that they never animate
the skeleton itself directly.
To help make keying easier, you can create a character key set that
contains all the translation parameters for the effector. Then you can
quickly key using this set. In a similar way, you can use the keying
panel to key only the translation parameters that you have set as
“keyable” for the effector.
To animate with IK
1. Select the chain’s effector.
2. Click the Translate (t) button in the Transform panel or press V.
3. Move the effector so that the chain is in the position you want.
4. Key the effector’s translation values.
You could also constrain the effector to a curve with the Constrain > Path command and animate it with path animation. The chain is solved in the same way as if you keyed the effector’s positions.
Basic Concepts for Inverse Kinematics
There are two fundamental concepts you should understand when
working in IK: the chain’s preferred angle and its resolution plane.
When you draw a chain, you usually draw it with a bend to be able to
predict its behavior when using IK. This bend is called the chain’s
preferred angle. When you move the effector, the chain’s built-in solver
computes a solution that considers these angles and the effector’s
position.
Preferred angle
The chain is drawn with a slight bend to determine its direction of movement when using IK. This determines the preferred angle of rotation for each bone’s joint.
You can change the joint’s preferred angle to get the correct skeleton structure for the animation that you want to create. This solves the IK in a new way, affecting the movement of the whole chain. You can also reset a bone’s rotation to the value of its preferred rotation, which resets the chain to its pose when you created it.
With 2D chains, the preferred axis of a chain (the X axis, by default) is perpendicular to the plane in which XSI tries to keep the chain when moving the effector. This plane is referred to as the general orientation or resolution plane of a chain. It is in the space of this plane that the IK system resolves the joints’ rotations when you move the effector.
Resolution plane
The resolution plane of this skeleton’s leg is shown with a gray triangle connecting the root, the effector, and the knee joint. This plane is defined by the first joint’s XY plane, and any joint rotations stay aligned with this plane. When the first joint is rotated, the resolution plane rotates accordingly, and all joint rotations remain on the resulting resolution plane.
The plane passes through three points: the first point (joint 1 at the chain root), the second point (the effector), and a third point (a null constrained by an up-vector constraint).
Constraining the chain to prevent flipping
Using an up-vector constraint for chains, you can constrain the orientation of a chain to prevent it from flipping when it crosses certain zones. The up-vector constraint forces the Y axis of a chain to point to a constraining object so that the solver knows exactly how to resolve the chain’s rotations. You add up-vector constraints to the first bone of a chain because that is the bone that determines the resolution plane.
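Since the resolution plane passes through the chain root, the effector, and the up-vector object, its orientation can be described by the normal of that triangle, which is one cross product. A small Python sketch of that geometry (illustrative only):

```python
def resolution_plane_normal(root, effector, up_object):
    """Unit normal of the triangle (root, effector, up-vector object)
    that defines a chain's resolution plane. Moving the up-vector object
    re-orients this plane, which is how the constraint prevents flipping."""
    ax, ay, az = (effector[i] - root[i] for i in range(3))
    bx, by, bz = (up_object[i] - root[i] for i in range(3))
    # cross product of the two triangle edges
    nx, ny, nz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)
```

For a leg rooted at the origin with the effector straight down at (0, -2, 0) and the up-vector null in front of the knee at (0, -1, 1), the plane’s normal points along -X, so the knee bends within the YZ plane.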
Blending between FK and IK Animation
When you’re animating a skeleton, you may need to use both FK and IK animation on the same chain. For example, you want to use IK to have the hand grab at something, but to get a more convincing swing from the shoulder, you need to use FK.
In XSI, it’s easy to blend between FK and IK using the Blend FK/IK slider in the Kinematics Chain property editor. This slider controls the influence that IK and FK both have on a chain, smoothly blending the results of bone rotation and effector translation.
By blending, you can animate with rotations to get a good “whip” effect (FK), and then blend in specific grabbing/punching/kicking (goal-oriented IK) movements, or mix goal-oriented movements (IK) against motion capture data (FK).
1. Animate the chain in FK (key the bone’s rotation parameters), as well as in IK (key the effector’s position). The ghost above the arm shows the chain at full FK; the ghost below the arm shows the chain at full IK.
2. Drag the Blend FK/IK slider to set the value you want between FK (0) and IK (1). The chain interpolates smoothly between its IK and FK positions.
3. Set keys for the Blend FK/IK values at the appropriate frames where you want the blend to start and finish.
To help you see how the chain is blending, you can use ghosting. Ghosts are shown for the full FK and IK positions of the chains.
Solving the Dreaded Gimbal Lock
When you’re setting up a character, you should consider how the bones will be rotating for each body part so that you can choose the proper rotation order for them.
While the default rotation order of XYZ works for some body parts, there are certain body parts or movements for which this order can cause gimbal lock. Gimbal lock is a state that Euler angles go through when two rotation axes overlap. The angle values can change drastically when rotations are interpolated through it.
When you change the rotation order, you can solve the gimbal lock. You can change the order in which an object is rotated about its parent’s axes by selecting a Rotation > Order in the bone’s Local Transform > SRT property page (select the bone and press Ctrl+k).
You can also convert the rotation angles from Euler to quaternion using the Animation > Convert to Quaternion command in the Animation panel. Quaternion rotation angles produce a smooth interpolation, which helps to prevent gimbal lock.
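The overlap of axes that causes gimbal lock can be checked numerically: with XYZ order and the middle (Y) rotation at 90 degrees, the X and Z axes coincide, so only the sum of the X and Z angles matters and one degree of freedom is lost. A minimal Python sketch with plain rotation matrices:

```python
import math

def rot_x(a):
    c, s = math.cos(math.radians(a)), math.sin(math.radians(a))
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(math.radians(a)), math.sin(math.radians(a))
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(math.radians(a)), math.sin(math.radians(a))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_xyz(x, y, z):
    # XYZ rotation order: first Z, then Y, then X is applied to a vector
    return mul(rot_x(x), mul(rot_y(y), rot_z(z)))

# With Y pinned at 90 degrees, (30, 90, 10) and (15, 90, 25) produce the
# same orientation, because only x + z = 40 survives: gimbal lock.
a = euler_xyz(30, 90, 10)
b = euler_xyz(15, 90, 25)
print(all(abs(a[i][j] - b[i][j]) < 1e-9 for i in range(3) for j in range(3)))  # True
```

Moving Y away from 90 degrees makes the two angle sets produce different orientations again, which is why choosing a rotation order that keeps the middle axis away from its 90-degree zone avoids the problem.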
Walkin’ the Walk Cycle
A walk cycle is probably the most common task you’re going to do as an animator. In XSI, you can do this with traditional tools such as keying and the fcurve editor, but XSI provides other excellent tools to help you animate your character. These include all the tools shown in this section, as well as the animation mixer.
You can use rotoscoped images of models to act as a template on which to base the character’s poses to be keyed.
1. Key the position and rotation of the character’s arms, legs, and hips on one side of the body. Key the 5 basic poses at frames 1, 5, 9, etc., or frames 1, 6, 11, depending on your character’s stride. The start and end poses must match so that the motion can be properly cycled in the animation mixer.
Tip: It helps to make the arms and legs of the left and right side different colors. Here, the right leg and arm are in black.
2. Repeat the same poses for the other side of the body on frames 21, 25, 29, and 33 (the first pose is the same as the last pose of the side you just did).
3. If the feet slide when they’re on the ground, you can fix it by making the fcurve interpolation flat between the pose keys. Open the animation (fcurve) editor, select the keys on the fcurves, and choose Keys > Zero Slope Orientation.
4. Save the finished walk cycle in an action source using the Action > Store > Fcurves command.
5. Open the animation mixer, and load the action source into it by right-clicking on a green track and choosing Insert Source. This creates an action clip for the walk cycle on that track.
6. Cycle the walk clip in the mixer by dragging one of the clip’s lower corners. You can also quicken or slow down the walk pace, blend it with another action, or create a transition to yet another action, such as to a run cycle. Use the cid clip effect variable to add a progressive forward offset to a stationary cycle.
You can store the walk cycle in an action source, then bring that source into the mixer to cycle it. Once in the mixer, you can also reverse it, stretch it out or compress it to change the timing, move it around in time, mix it with other actions, and more—all in a nondestructive way.
You’ll need to tweak your character’s walk afterward to make it look natural and appropriate for the character. The fcurve editor is the tool to help you fine-tune the walk’s fcurves in many ways.
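The progressive forward offset that the cid clip effect variable adds to a stationary cycle can be sketched as simple arithmetic: find which repeat of the clip the current frame falls in, evaluate the stationary curve at the local time, and add one stride per completed cycle. An illustrative Python sketch (the names are hypothetical, not mixer API):

```python
def cycled_translation(base_fcurve, t, cycle_length, stride):
    """Evaluate a cycled walk clip at global time t.

    base_fcurve:  function of local time over one cycle (a stationary walk)
    cycle_length: clip length in frames
    stride:       forward distance covered by one cycle
    Each repeat adds one stride of forward offset, like driving a
    translation parameter with the cid (cycle index) clip effect variable.
    """
    cid = int(t // cycle_length)           # which repeat of the clip we are in
    local_t = t - cid * cycle_length       # time within the current repeat
    return base_fcurve(local_t) + cid * stride

# A stationary cycle that sways around 0; each repeat marches 1.5 units forward.
sway = lambda lt: 0.1 * (lt - 12.5) / 12.5
print(cycled_translation(sway, 60.0, 25.0, 1.5))  # about 2.98: cycle 2 plus the sway
```

Without the `cid * stride` term the character walks in place; with it, the flat-footed poses keyed in step 3 line up from one repeat to the next instead of popping back to the start position.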
Motion Capture
Motion captured animation (usually known as mocap) offers a way to
animate a character based on motion that is electronically gathered
from a human or animal. This is useful for animating actions that are
particularly difficult to do well with keyframing or other methods of
animation creation. In XSI, you can import mocap data and apply it
onto rigs, as well as retarget animation from BVH or C3D mocap files
to rigs.
Importing Acclaim and Biovision Mocap Data
You can import motion capture information into XSI using the File >
Import > Acclaim and Biovision commands. Once the files are
imported, you can constrain the skeletons to a rig and plot the mocap
data into fcurves so that you can edit the animation.
Acclaim Skeleton files
(ASF) contain information
about the hierarchy and
base pose of the skeleton.
This information is used to
create a skeleton hierarchy
(nulls or bones) when
imported into XSI. The
animation for this skeleton
is saved in an
accompanying Acclaim
Motion Capture (AMC)
file.
Biovision (BVH) files contain information about the hierarchy of the skeleton. This information is used to create a skeleton hierarchy (nulls or bones) when imported into XSI. BVH files also contain two action sources to apply to the skeleton: the base pose and the motion.
Mocap files with hierarchy imported as nulls. Mocap files with hierarchy imported as bone chains.
Retargeting Animation
Retargeting allows you to transfer any type of animation between
characters, regardless of their size or proportions. Retargeting involves
first tagging (identifying) the elements of a rig, then transferring
animation from another rig or a mocap data file to the target rig. The
animation is retargeted to the new rig as it’s transferred. The retargeted
animation is “live” on the rig, controlled by the retargeting operators
that live on the tagged rig elements. Because of this, you can adjust the
animation on the rig at any time so that the motion is exactly as you
like. If you want to commit the retargeted animation to fcurves, you
can plot it on the rig.
While you can retarget any type of animation between characters, it is
especially useful for reusing motion capture data to animate many
different characters with the same movements, such as you would for a
game. For example, you can reuse a basic run mocap file for many
characters and then adjust the animation for each one as you like by
adding offsets in different animation layers. Using the retargeting and
layering tools in XSI, you can quickly test out many variations of
animation on the characters.
Before you start tagging the character elements or retargeting
animation, make sure that the skeleton or rig is in a model. Retargeting
can work only within model structures.
Using the commands in the Tools > MOTOR menu on the Animate
toolbar, you can perform all of these tasks:
• Tag rig elements so that animation can be retargeted onto them.
• Retarget any type of animation from one rig to another.
• Retarget animation from BVH (Biovision) or C3D mocap files to a
rig.
• Adjust the retargeted animation on the rig, such as by setting
position and rotation offsets for the whole rig or just certain
elements.
• Save any type of retargeted animation in a normalized motion
format (.motor file) so that it can be loaded and retargeted on any
tagged rig. This makes it easy to build up libraries of animation that
can be used across all your rigs.
• Plot the retargeted animation on a rig into fcurves so that you can
edit the animation.
Retargeting animation between rigs
When you retarget animation between rigs, the retargeting
operator figures out which rig elements match based on their tags.
Then it maps and generates the animation that is transferred to the
target rig.
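As a rough sketch (plain Python with a hypothetical rig layout, not XSI's actual data model), tag-based retargeting matches elements by tag name and scales the motion for the target's proportions:

```python
# Hypothetical data: each rig is a dict mapping tag names (hips, chest, ...)
# to per-frame translation samples, plus an overall size used to scale
# the motion for characters of different proportions.
def retarget(source_rig, target_rig, tags=("hips", "chest", "left_foot")):
    """Copy animation tag by tag, scaling by the size ratio so a short
    character takes proportionally shorter steps than a tall one."""
    scale = target_rig["size"] / source_rig["size"]
    for tag in tags:
        if tag in source_rig["anim"] and tag in target_rig["anim"]:
            target_rig["anim"][tag] = [
                (x * scale, y * scale, z * scale)
                for (x, y, z) in source_rig["anim"][tag]
            ]

source = {"size": 2.0, "anim": {"hips": [(0.0, 2.0, 0.0), (1.0, 2.0, 0.0)]}}
target = {"size": 1.0, "anim": {"hips": []}}
retarget(source, target, tags=("hips",))
print(target["anim"]["hips"])  # → [(0.0, 1.0, 0.0), (0.5, 1.0, 0.0)]
```

The real retargeting operators are live and adjustable rather than a one-time copy, but the tag-matching and proportion-scaling idea is the same.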
Tagging a rig’s elements
Tagging tells XSI which part is which on
your character, such as its hips, chest,
legs, root, and so on. You tag the rig
controls or skeleton parts that you use to
animate the character. These tags are
used to create a map (template) for that
character.
Select a rig and choose the MOTOR > Tag Rig command to tag its elements. Once you have tagged a rig, you can use it for retargeting with another rig or with mocap data.
Select the source rig, then press Ctrl and select the target rig. Then choose the MOTOR > Rig to Rig command to retarget the animation from the source to the target rig.
The animation between the two rigs is a live link that allows for interaction. If you want to save the animation on the target rig, you must plot (bake) it into fcurves.
Retargeting mocap data from a file to a rig
You can retarget mocap data from either C3D or
BVH files to a tagged rig.
Animated source rig
Choose the MOTOR >
Mocap to Rig
command to load either
a *.C3D or Biovision
file and apply it to a rig
in XSI.
You can then save the mocap
animation on the rig in a
.motor file so that you can
apply it to any tagged rig of
the same structure.
Biovision rig
C3D rig
Adding Offsets to Mocap Data
Working with High-density Fcurves
It’s inevitable: the director took a look at the mocap animation for this
character. It looks good but now he has some comments and wants to
make a few changes. This can be problematic when the change affects a
key pose or move because many other moves and poses are usually
linked to it.
When you import motion capture data, the fcurves often have many keys, usually one per frame. A high-density fcurve is difficult to edit because if you change even a few keys, you have to adjust many other keys to retain the overall shape of the curve.
Club-bot with a mocap
run action clip in the
animation mixer.
The left leg and arm are
rotated a bit and then keyed
as an offset to the clip.
Luckily, in XSI you can easily add non-destructive offsets to mocap
data in any of these ways:
• Creating animation layers: Create a layer of keys as an offset to
mocap animation. Layers let you keyframe as you would normally
in XSI, but those keys are kept in a separate layer of animation so
that they don’t affect the base mocap animation. After you’ve added
one or more layers of keys and you’re happy with the results, you
can collapse the layers to “bake” them into the base layer of
animation.
• Mixing fcurves with an action clip: Normally, when there is an
action clip in the mixer, it overrides any other animation on that
object that covers the same frames. However, you can blend fcurves
directly with an action clip over the same frames. This allows you to
blend mixer animation with scene level animation.
• Creating action clip effects in the mixer: Clip effects let you adjust
the animation in an action clip without affecting the original
animation in the action source. Clip effects add values “on top” of a
clip, such as noise or offsets.
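The layering idea can be sketched in plain Python (illustrative names, not XSI's API): the base mocap curve is never touched, and each layer contributes a removable offset.

```python
def eval_with_layers(frame, base, layers):
    """Final value = base mocap animation plus the sum of layer offsets.

    base: function frame -> value (the dense mocap curve).
    layers: list of functions frame -> offset; each layer is keyed
            independently, so removing a layer never touches the base.
    """
    return base(frame) + sum(layer(frame) for layer in layers)

mocap = lambda f: 10.0 + 0.1 * f                   # stand-in for a dense mocap curve
arm_fix = lambda f: 0.5 if 10 <= f <= 20 else 0.0  # small keyed offset layer

print(eval_with_layers(5, mocap, [arm_fix]))   # base only: 10.5
print(eval_with_layers(15, mocap, [arm_fix]))  # base + offset: 12.0
```

Collapsing the layers corresponds to baking `base + offsets` back into a single curve once the result is approved.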
Because editing these fcurves is not always easy, there are tools in the
fcurve editor that can help you work with them: the HLE (high-level
editing) tool and the curve processing tools (for smoothing,
resampling, and fitting curves).
The HLE tool lets you shape an fcurve in an
overall fashion, like lattices shaping an object’s
geometry.
The HLE tool creates a
sculpting curve that has
few keys, but each one
refers to a group of
points on the dense
fcurve.
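A rough sketch in plain Python (not the actual HLE implementation) of the sculpting-curve idea: a few high-level control points are interpolated into offsets that reshape many dense keys at once.

```python
def apply_sculpt(dense_keys, sculpt_points):
    """High-level-editing-style reshaping of a dense fcurve.

    dense_keys: list of (frame, value) pairs, typically one per frame.
    sculpt_points: a few (frame, offset) control points; offsets are
    linearly interpolated across frames and added to the dense keys,
    so moving one sculpt point reshapes a whole group of keys.
    """
    def offset_at(frame):
        pts = sorted(sculpt_points)
        if frame <= pts[0][0]:
            return pts[0][1]
        for (f0, v0), (f1, v1) in zip(pts, pts[1:]):
            if f0 <= frame <= f1:
                t = (frame - f0) / (f1 - f0)
                return v0 + t * (v1 - v0)
        return pts[-1][1]

    return [(f, v + offset_at(f)) for f, v in dense_keys]

dense = [(f, 0.0) for f in range(5)]                  # flat dense curve, 5 keys
reshaped = apply_sculpt(dense, [(0, 0.0), (4, 2.0)])  # only two sculpt keys
print(reshaped)  # → [(0, 0.0), (1, 0.5), (2, 1.0), (3, 1.5), (4, 2.0)]
```

Two sculpt keys were enough to reshape all five dense keys, which is why the HLE tool makes high-density mocap curves manageable.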
Section 11
Shape Animation
Shape animation is the process of deforming an
object over time. You take “snapshots” called shape
keys of the object in different poses, then you blend
these poses to animate them.
XSI offers a number of tools in which you can create
shape animation so that you can work in any way
that you feel comfortable.
What you’ll find in this section ...
• Different Tools for Animating Shapes
• Shape Animation on Clusters
• Using Construction Modes for Shape
Animation
• Creating and Animating Shapes in the Shape
Manager
• Selecting Shape Keys
• Storing and Applying Shape Keys
• Using the Animation Mixer for Shape
Animation
• Mixing the Weights of Shape Keys
Things are Shaping Up
With shape animation, you can change the shape of an object over
time. To do this, you animate the geometrical shape (deformation) of
an object using clusters of points and store shape keys for each pose.
In XSI, all shape animation is done on clusters. This means that you
can have multiple clusters animated at the same time on the same
object, such as a cluster for each eyebrow, one for the upper lip, one for
the lower lip, etc. Or you can treat a complete object as one cluster,
such as a head, and store shape keys for it.
Different Tools for Animating Shapes
Shape animation in XSI uses the animation mixer under the hood to
do its work. You can also use the animation mixer to do your shape
work, but there are other methods too. You can:
• Use the shape manager to easily create and animate shape keys.
This is probably the fastest and easiest way to work.
• Select shape keys from a group of target shapes (sometimes called
morphing or blend shapes).
• Store and apply shape keys at different frames.
Shape animation is done for this face by simply moving the points in
different clusters on the head object, then storing a shape key for each
cluster’s pose.
You could also treat the whole head object as a cluster and deform its points
in the same way, then store shape keys for each pose for the object.
You can use surface or polygon objects to create shape animation, or
even curves, particles, and lattices—any geometry that has a static
number of points.
You can create shape keys from any kind of deformation to produce
shape animation. For example, you can store shape keys for clusters on
an object by moving points or by deforming by spline, such as for facial
animation and lip-syncing. Or you can create a shape key for an
object’s overall deformation using envelopes, lattices, or any of the
standard deform operators (Bend, Bulge, Twist, etc.).
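The underlying arithmetic of shape keys can be sketched in plain Python (illustrative, assuming simple linear blending): each key is stored as the difference from the base shape, scaled by a weight and added back.

```python
def blend_shapes(base, targets, weights):
    """Linear blend shapes: each shape key is the difference between a
    target pose and the base geometry, mixed in by its weight.

    base: list of (x, y, z) point positions (static point count).
    targets: list of shape poses, each with the same point count as base.
    weights: one weight per target shape.
    """
    out = []
    for i, (bx, by, bz) in enumerate(base):
        dx = sum(w * (t[i][0] - bx) for t, w in zip(targets, weights))
        dy = sum(w * (t[i][1] - by) for t, w in zip(targets, weights))
        dz = sum(w * (t[i][2] - bz) for t, w in zip(targets, weights))
        out.append((bx + dx, by + dy, bz + dz))
    return out

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]       # both points moved up 1 unit
print(blend_shapes(base, [smile], [0.5]))        # half-weight smile
```

This is also why the geometry must keep a static number of points: every shape key stores one delta per point of the base shape.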
You can use the animation mixer with any of these methods. It is a
powerful tool that gives you a high degree of flexibility in reworking
your shape animation in a nonlinear way. Because shape animation is
essentially pose-based, you can easily reorder the poses in time, reuse
the same pose several times, and mix the poses together as you like, in
the animation mixer. You can even add audio clips to the mixer to
synchronize your shape animation to sound, such as for lip synching.
Shape Animation and Models
Before you start to animate shapes, it’s a good idea to create a model
containing the object that is to be shape-animated. This puts the object
under a Model node and creates a Mixer node for that model that
contains all its shape keys. This way, the shape keys are stored within
the model rather than within the entire scene.
You can then reuse the model with
its shape animation in another
scene, import and export the
model with all its shapes and mixer,
or duplicate the model with its
shape animation.
Shape Animation on Clusters
Shape Sources and Clips
All shape animation is done on clusters. You can have multiple clusters on
the same object, or you can treat an object as one cluster. You can even
store shape keys for tagged points that are not saved as a cluster.
Whole object. A cluster including all points on the head is automatically created when you store a shape key.
Object with cluster
A shape source is the shape that you have stored
and is usually referred to as a shape key. By storing
several shapes for an object, you can build up a
library of sources. Shape sources are stored in the
model’s Mixer > Sources > Shape folder.
A shape clip is an instance of that source on a track in the animation mixer. Even if you don't use the mixer for shape animation, a clip is always created when you create a shape key.
Object with tagged
points. A cluster of these
points is automatically
created when you store a
shape key.
Shape Reference Modes
Shape reference modes control how the
shape behaves when the base shape is
deformed in Modeling mode.
You should select a reference mode
before you store shape keys on a cluster.
Click the Clusters button on the
Select panel to see a list of the
object’s
clusters.
Always store shape keys using the same cluster of points. When you
deform an object, but store a shape key only for a cluster of points on
that object, the deformed points that don’t belong to that cluster snap
back to their original position when you change frames.
To make it easier to use the same cluster, rename the cluster with a
descriptive name as soon as you create it.
Shape key on a single cluster
Local Relative Mode
Shape deforms with
object.
Object Relative
Mode: Shape deforms
with object but keeps
original orientation.
Absolute Mode
Shape stays locked in
place as object deforms.
Using Construction Modes for Shape Animation
When you’re creating shapes, you can use any number of deformation
operators, including envelopes, as the tools for sculpting the shapes.
Because you can use these deformation operators for tasks other than
shape animation, you need to let XSI know how you want to use them.
For example, when you apply a deformation, you could be building the
object’s basic geometry (modeling), or creating a shape key for use
with shape animation (shape modeling), or creating an animated
deformation effect (animation).
1
Select one of the four
construction modes from
the list in the main menu at
the top of the XSI window.
To tell XSI how you’re using the deformation, you need to select the
correct construction mode: Modeling, Shape Modeling, Animation, or
Secondary Shape. The mode puts the deformation operator in one of
four regions in the object’s construction history that corresponds to
that mode. These regions keep the construction history clean and well
ordered by allowing you to classify operators according to how you
want to use them.
Here is a quick overview of how you can use the four different
construction modes for doing shape animation:
In Modeling mode, create and deform the object to be shape-animated.
This is the base shape for the
object, which is a result of all the
operators in the Modeling region
of the object’s construction history.
When you create shape keys, they
are stored as the difference of
point positions from this base
shape’s geometry.
2 If the object is to be an envelope for a skeleton, switch to Animation mode and apply it as an envelope. In this case, the jaw bone is rotated to help deform the envelope for lip synching.
3 Switch to Shape Modeling mode to create shape keys. These shape keys are set in reference to the object's base shape.
Markers in the explorer divide up the object's construction history into regions that correspond to the four construction modes. Deformation operators are kept in their appropriate region.
4 To fix any geometry problems due to the envelope's animation, switch to Secondary Shape mode and create shape keys in reference to the animated envelope's geometry. For example, you can fix up the shape in the corner of the mouth in relation to the jaw moving and deforming the envelope.
Creating and Animating Shapes in the Shape Manager
The shape manager provides you with an environment for creating,
editing, and animating shapes. To help you work efficiently, the shape
manager has a viewer that immediately displays the results of the
changes as you make them to the object.
1 Open the shape manager in a viewport or in a floating window (choose View > Animation > Shape Manager).
2 With an object selected, select Shape or an existing shape in the shape list. Duplicate the shape and rename it.
When you create a new shape in the shape manager, a shape key is added to the object's Mixer > Sources > Shape list and shape clips are created for the object in the animation mixer.
3 Deform the object or cluster into a new shape in the shape viewer.
4 Repeat these two steps to create a library of different shapes for this object.
5 On the Animate tab, set the values of the shape weight sliders until you get the shape you want. Notice that the object updates in the shape viewer as you change the slider values. Set a key at this frame.
6 Go to the next frame at which you want to set a key, change the values of the weight sliders, and set another key.
Selecting Shape Keys
Selecting shape keys (also known as morphing or blend shapes) lets you
deform a target object using a series of objects that are deformed in
different shapes. These objects must have the same type of geometry
and the same topology (number and arrangement of points) as the
target object that they’re shape-animating. The easiest way to do this is
to duplicate the target object that you want to shape-animate, and then
deform each of the copies in a different way that will correspond to a
shape key.
Selecting shape keys sets up a relation between the target object and the shape keys, allowing you to fine-tune the shape keys and have those adjustments appear on the target object. For example, if your client thinks that the nose is too long on one of the shapes, just change the nose for that shape and it updates the nose on the target object. You can also choose to break the relationship between the target object and its shape keys to keep performance optimal.
1 Create the target object in a neutral pose. This is the object to be deformed with shape keys.
2 Duplicate the target object and deform the copies into different shapes, such as for phonemes. Move them out of the way of the camera.
3 Select Shape Modeling from the Construction Mode list.
4 Select the target object and choose Deform > Shape > Select Shape Key. Then pick each of the deformed shapes in the order that you want to create shape keys for the target object. For each shape you pick, a shape key is added to the model's Mixer > Sources > Shape folder.
5 Name the first shape created in the Name text box, such as face. The other shapes use this name plus a number, such as face1, face2, etc.
6 To create the animation, set the values for each shape key's weight slider in the animation mixer or in the Shape Weights custom parameter set. In either the mixer or the parameter set, click the weight slider's animation icon to key this value at this frame.
Storing and Applying Shape Keys
When you store and apply shape keys, you create a shape source in the
model’s Mixer > Sources > Shape folder, as well as a shape clip in the
animation mixer.
You can then animate the shape weights in the Shape Weights custom
parameter set that is automatically created for you. This custom
parameter set contains a proxy of each shape key’s weight slider.
If you want to use the mixer for doing your shape animation, this is an
easy way to work because the clips are set up for you. In the mixer, you
can then change the length of the clips, create transitions between clips,
change the weight of the clips, and so on.
You can also simply store shape keys and then apply them to the object
or cluster later. When you store shape keys, a shape key is created for
the current shape and added to the model’s list of shape sources, but it
does not create a shape clip in the mixer. Storing shape keys is a good
way to build up a library of shapes: then when you’re ready to apply the
shape keys, you can load them into the animation mixer to create shape
clips. Or if you don’t want to use the mixer, you can simply apply the
shape keys to the object or cluster at different frames.
If you don’t want to use the mixer, storing and applying shape keys is
also an easy way to work because everything is set up “under the hood”
in the mixer for you.
1 Select a cluster of points or the whole object (this creates a cluster for the object).
2 Select Shape Modeling from the Construction Mode list.
3 Go to the frame at which you want to set a shape key.
4 Deform the cluster or object into a shape that you want to store, then choose Deform > Shape > Store and Apply Shape Key.
When you store and apply, the shape key is applied to the cluster or object at the current frame. A shape clip for this shape key is also created in the animation mixer.
5 Go to the next frame at which you want to set a shape key, deform the cluster or object, and store and apply another shape key.
6 You can edit the shape animation in the mixer. You can resize and layer the clips, and add transitions between the clips for a smooth change between shapes. You can also animate the weight of each shape clip against each other in the mixer or in the Shape Weights custom parameter set.
Using the Animation Mixer for Shape Animation
Once you have created shape keys, you can use the animation mixer to
sequence and mix them as shape clips. This lets you easily move shape
clips around in a nonlinear way and change the weighting between two
or more clips where they overlap in time.
The first step to using shape keys in the mixer is to add them as shape
clips to a shape track. If you stored and applied shape keys or selected
shape keys, this is automatically done for you.
Shape clips do not actually contain animation—they are simply static
poses. This is why you need to create transitions between them and/or
weight their shapes against each other to animate. This creates smooth
and more complex shape animation than is possible with shape keys
simply set at different frames with no transitions or weighting.
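A transition between two static shape clips can be sketched in plain Python (a simple linear crossfade; this is only one possible interpolation):

```python
def transition(pose_a, pose_b, frame, start, end):
    """Linear transition between two static shape poses: before `start`
    the result is pose_a, after `end` it is pose_b, and in between the
    weight shifts gradually from one pose to the other."""
    if frame <= start:
        t = 0.0
    elif frame >= end:
        t = 1.0
    else:
        t = (frame - start) / (end - start)
    return [(1 - t) * a + t * b for a, b in zip(pose_a, pose_b)]

neutral = [0.0, 0.0]   # two point values in the first static pose
smile = [1.0, 0.5]     # the same points in the second static pose
print(transition(neutral, smile, 15, 10, 20))  # halfway: [0.5, 0.25]
```

Because the clips themselves hold no animation, all of the movement comes from this interpolation between poses.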
Once you have added shape clips to the animation mixer, you can use
any of the mixer’s features to move, reorder, copy, scale, trim, and
blend them.
Notice how the shape interpolates over time, from clip to clip.
To add a shape key as a clip to a track in the
mixer, right-click on a blue shape track and
choose Insert Source, then pick the source
you’ve stored.
You can also drag a shape key from the
model’s Mixer > Sources > Shapes
folder in the explorer and drop it on a
blue shape track.
You can make composite shapes
by creating compound clips for
different clusters on the same
frames of different tracks.
You can easily reorder the shape clips in time on the tracks, or duplicate a
clip to repeat a shape several times over the animation. Because each shape
clip refers to the source, you don’t need to duplicate the source.
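The instancing idea can be sketched in plain Python (illustrative classes, not XSI's object model): clips hold a reference to the source, so editing the source changes every clip that uses it.

```python
class Source:
    """Shared animation data: a list of per-frame values."""
    def __init__(self, frames):
        self.frames = frames

class Clip:
    """An instance of a source placed on a track: it stores only a
    reference plus a start offset, never a copy of the data."""
    def __init__(self, source, start):
        self.source = source
        self.start = start

    def value_at(self, frame):
        local = frame - self.start
        if 0 <= local < len(self.source.frames):
            return self.source.frames[local]
        return None  # outside the clip's range

pose = Source([0.0, 1.0, 0.0])
a = Clip(pose, start=0)
b = Clip(pose, start=10)  # the same shape repeated later, no duplication
pose.frames[1] = 2.0      # editing the source updates every clip
print(a.value_at(1), b.value_at(11))  # → 2.0 2.0
```

This is the sense in which mixer edits are nondestructive: clips can be moved, repeated, and scaled without ever copying or altering the stored source.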
For example, one compound clip
could drive the eyebrow cluster
of a character while another clip
drives the mouth cluster.
Create a sequence of shapes by creating clips one after another, using transitions to smooth the spaces between them.
Mixing the Weights of Shape Keys
Shape clips don’t contain any animation—they are simply static poses.
As a result, the way to create animation with shapes is to animate the
weight of each shape. Weighting is always done in relation to another
shape key. This means that shape keys have to be overlapping in time
with at least one other shape key to be weighted.
The higher the weight value, the more strongly a clip contributes to the
combined animation. For example, if you set the weight’s value to 1,
the clip’s contribution to the animation is 100% of its weight.
No matter which tool you use, the basic process is the same: go to the
frame you want, set each shape weight’s value, then click the keyframe
or animation icon to set a key. You can then edit the resulting weight
function curve in the animation editor as you would any other fcurve.
How to Mix and Key Action Weights in the Mixer
You can mix shape key weights in different ways, depending on how you created the shape keys in the first place and on how you like to work. You can mix shape key weights:
• Using the shape manager.
• Using the animation mixer.
• Using a custom parameter set, either the Shape Weights one or one you set up yourself.
The advantage of having a custom control panel is that you can have all the sliders in one property editor that you can easily move around in the workspace. As well, you can key all the sliders' values at once by clicking the property set's keyframe icon.
1 Put clips on different tracks and overlap them where you want to mix them. In most cases, this is for the whole duration of the scene.
2 Move to the frame at which you want to set a key.
3 Set a weight value for each clip at this frame. Red curves in the clip display its weight values.
4 Click the Shape Weights icon beneath the shape-animated object in the explorer to open the custom parameter set.
5 Click each weight's animation icon to set a key for this value at this frame.
After you are done setting keys for the weights, you can edit the resulting weight fcurves. Right-click the weight's animation icon and choose Animation Editor.
Normalized or Additive Weighting
One of the most important things to understand about weighting is to
know whether weights are normalized (averaged) or additive. You can
control how the weights of clips are combined, depending on whether
or not you select the Normalize option in the Mixer Properties.
You’ll know that shapes are normalized if they seem to average or
“smooth” each other out, or if different clusters on the same object
affect each other when they shouldn’t (such as an eyebrow affecting the
mouth shape). You may want to use the normalized mode if you’re
mixing together shapes for a whole object.
In many cases, you will probably want the weight to be additive instead
of normalized, such as if you’re mixing different clusters on one face
over the same frames. This adds the shapes together but doesn’t
“blend” them together.
Additive mix of Shapes 1 and 2 (Shape 1 + Shape 2). The shapes are literally added together to create a composite result. You can also exaggerate shapes by setting weight values higher than 1.
Normalized mix of Shapes 1 and 2. The shapes are averaged, resulting in a combination of the shapes. The total weight value of the two shapes equals 1.
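The difference between the two modes can be sketched in plain Python (illustrative, using one scalar delta per shape):

```python
def mix_weights(shape_deltas, weights, normalize):
    """Combine per-shape deltas either additively or normalized.

    Additive: deltas are simply summed, so weights above 1 exaggerate
    a shape and separate clusters never dilute each other.
    Normalized: weights are divided by their total, so overlapping
    shapes average each other out and effectively always sum to 1.
    """
    if normalize:
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
    return sum(d * w for d, w in zip(shape_deltas, weights))

deltas = [1.0, 1.0]  # two shapes that each push a point up 1 unit
print(mix_weights(deltas, [1.0, 1.0], normalize=False))  # additive: 2.0
print(mix_weights(deltas, [1.0, 1.0], normalize=True))   # normalized: 1.0
```

This is why an eyebrow cluster can appear to "affect" the mouth in normalized mode: raising one shape's weight lowers every other shape's effective contribution.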
Section 12
Actions and the Animation Mixer
Actions are “packages” of low-level animation, such
as function curves, expressions, constraints, and
linked parameters. By creating a package that
represents the animation, you can work at a higher
level of animation that is not restricted by time.
The animation mixer is the tool that lets you work
with actions, all in a nonlinear and non-destructive
way.
What you’ll find in this section ...
• What Is Nonlinear Animation?
• Overview of the Animation Mixer
• Storing Animation in Action Sources
• Working with Clips in the Animation Mixer
• Mixing the Weights of Action Clips
• Modifying and Offsetting Action Clips
• Sharing Animation between Models
• Adding Audio to the Mix
What Is Nonlinear Animation?
Nonlinear animation is a way of animating that does not restrict you to
a fixed time frame. You store animation into a package called an action
source, then load this package in the animation mixer. In the mixer, you
can layer and mix the animation sequences at a higher level in a
nonlinear and non-destructive way. You can reuse and fine-tune
animation you’ve created with keyframes, expressions, constraints, and
shape animation (shape keys stored in shape sources). You can even
add audio clips to the mixer to help synchronize it with the animation.
And at any time, you can go back and modify the animation data at the
lower levels, without needing to begin again and redo all your work.
When you bring an action source into the animation mixer, it becomes
a clip. In the mixer, you can move an action clip around anywhere in
time, squeeze or stretch its length as you like, apply one action after
another in sequences, and combine two or more actions together to
create a new animation. On the frames “covered” by the clip, the data
stored in the source drives the object’s animation.
If you’re modifying someone else’s animation, you don’t really have to
deconstruct their work—just add a layer with your own animation.
You can even modify the existing animation with a clip effect, acting as
a separate and removable layer on top of the original animation.
Models and the Mixer
Models provide a way of organizing the objects in a scene, like a mini
scene. You should always put your object structures within a model so
that you have a Mixer node for it, because each model can have only
one Mixer node. This node contains mixer data, such as action sources,
mixer tracks, clips, transitions, and compounds.
If the characters in the scene aren’t within models, you have only one
Mixer node for the whole scene (in the scene root, which is technically
a model) which means that you can’t easily copy animation from one
model to another.
Club_bot model structure contains
many elements, including a Mixer
node that has its action sources.
The animation mixer is well-suited for editing existing material and
bringing together all the pieces of an animation. In it, you can assemble
all the bits and pieces you’ve imported from different scenes and
models and help you build them into a final animation.
There are a number of ways in which you can share animation between
models, whether they are in the same scene or different scenes. You can
copy action sources, clips, compound clips, and even a model’s whole
Mixer node between models. And when you duplicate a model, all
sources and clips and mixer information are also duplicated.
Overview of the Animation Mixer
The animation mixer gives you high-level control over animation because you can layer and mix sequences in a nonlinear and non-destructive way, making it the ideal tool to use for complex animation.
The animation mixer looks like a digital video editor, but instead of
editing video sequences, you create animation sequences, transitions,
and mixes. It helps you reuse and fine-tune animation you’ve created
with keyframes, expressions, and constraints.
You can use the animation mixer with animation data (action sources) and shape animation data (shape keys as shape sources), and you can add audio files for synchronization. Once you have a library of action sources created, you bring them into the mixer as action clips.
created, you bring them into the mixer as action clips.
You can display the animation mixer
in a viewport, or display it in a
floating window by pressing Alt+0
(zero) or choosing View >
Animation > Animation Mixer
on the main menu bar.
Icons indicate the type of track
and let you select the track.
Each clip is an instance of the action source. With clips, the original
animation data stays untouched, making it easy to experiment with the
animation without fear of destroying anything. You can always go back
and change the original data and all your changes will automatically be
applied; or you can add animation on top of the original animation
source, as you may want to do with motion capture data.
On the frames “covered” by the clip, the data stored in the source drives
the animation for the object. The mixer overrides any other animation
that is on the object at that frame, unless you set a special option that
mixes an action clip with fcurves on the object over the same frames.
Multiple tracks let you
overlap clips in time
and mix their weights.
The playback cursor
shows the current
frame on the timeline.
Select an object, then click the
Update icon in the mixer to
see its tracks and clips.
Tracks are the background on
which you add and sequence clips
in the mixer. You can sequence one
clip after another on the same track
or different tracks. To overlap clips
in time for mixing, they must be on
separate tracks.
You can ripple, mute, solo,
and ghost all clips on a track.
Clips appear as colored bars according to
their type. Create sequences of clips on the
same track or on different tracks.
Mix overlapping clips by setting
and animating their weight values
in the weight panel.
Animation (action) tracks
are green.
Shape tracks are blue.
Audio tracks are sand.
To add a track, press Shift+A, Shift+S, or Shift+U to add animation
(action), shape, or audio tracks, respectively.
You can also choose a type from the Track menu.
Basics • 197
Section 12 • Actions and the Animation Mixer
Storing Animation in Action Sources
Action sources are packages of animation that you can use in the
animation mixer. This is where the animation lives. You can package
function curves, expressions, constraints, and linked parameters into a
source, as well as rigid body simulations. You can create an entire library
of actions, like walk cycles or jumps, and then share them among any
number of models.
How to Create Action Sources and Clips
1 Animate an object or model. Each animation sequence will be stored
in its own source.
2 Select the animated object and choose an appropriate command from
the Actions > Store menu on the Animate toolbar. This stores the
animation in an action source.
3 Right-click on a track and choose Insert Source. An action clip is
created. You can also drag a source from the model's Sources folder in
the explorer and drop it on a track.
4 Once the clip is in the mixer, you can manipulate it in many ways.
Here are some ideas ...
When you create an action source, it is saved in the Sources > model
folder for the scene, which you can find in the explorer. This lets you see
all sources for all models in the scene. However, for convenience, a copy
of the source is available in the model's Mixer > Sources > Animation
folder. The name of this source is in italics to indicate that it's a copy of
the original source.
You can composite actions by adding clips for different parameters on
the same frames of different tracks. Here, the top clip drives the legs of
the character while the bottom clip drives the arms.
You can use the mixer as a simple
sequencing tool that lets you position and
scale multiple clips on a single track.
You may find pose-to-pose animation easy to do with the mixer:
save static poses of a character, load the actions onto the tracks in
sequence, and then create transitions between the poses.
Changing What’s in an Action Source
After you have created an action source, you can modify the original
animation data stored in its source, remove items from it, or even add
keys to fcurves in the source. When you modify the source, you change
the animation for all action clips that were created from that source
and refer to it.
Because editing an action source is destructive (you’re changing the
original animation data), you should always make a backup copy of it
before editing. This is also useful to do if you don’t want all action clips
to share the same source (duplicate the source before creating clips
from it).
You can access the animation data in an action source by right-clicking
an action clip and choosing Source, or by right-clicking and choosing
Animation Editor to access the source's fcurves.
If you want to modify an action clip without affecting the
source, you must use clip effects.
Restoring the Animation in a Source Back to an Object
You can return to the original animation stored in an action source at
any time by applying that action source to the object. This is useful for
restoring animation to an object if you removed it when you created an
action source, or you can apply a source to another model.
To apply the action source to a model, you simply select the source in
the model’s Mixer > Sources > Animation folder in the explorer and
choose Actions > Apply > Action from the Animate toolbar.
Select the action source in the model's Mixer > Sources > Animation
node, then choose the Apply Action command to restore it to that
object. You can also deactivate or remove certain parameters in the
source.
From the source's property editor, you can access its fcurves or
constraints (depending on the type of animation in the source). If
expressions are stored in the source, enter information in a Value cell
to edit them. To add keys to a source, use the Action Key button in the
mixer's command bar.
Creating Action Sources from Clips
Because applying works only on sources, you can’t use it on clips. But
what do you do when you want to combine some clips? You can select
the clips and choose Clip > Freeze to New Source or Clip > Merge to
New Source in the mixer to create a new source. You can then apply
this new source to the model with the Actions > Apply > Action
command.
Working with Clips in the Animation Mixer
Clips are represented by boxes on tracks in the mixer that you can
move, scale, copy, trim, cycle, bounce, etc. Clips define the range of
frames over which the animation items in the source are active and play
back. You can also create compound clips which are a way of packaging
multiple clips together so that you can work with larger amounts of
animation data more easily.
Clips are instances of action sources that you have created. While
sources contain data such as function curves, clips don’t actually
contain any animation: they simply reference the animation in the
source and wrap it with timing information. You can create multiple
clips from the same source and modify the clips independently of each
other without affecting the animation data in the source.
To add a clip to a track in the mixer, right-click on a track and choose
Insert Source, then pick the source you've stored. You can also drag a
source from the model's Sources folder in the explorer and drop it on a
track in the mixer.
Select and drag a clip to move it somewhere else on the same track or a
different track of the same type (action, shape, or audio). Press Ctrl
while dragging the clip to copy it. You can copy clips between different
models' mixers this way, one clip at a time.
Drag on either of the clip's upper corners to hold the clip's first or last
frames for any number of frames. Click and drag in the middle of
either end of a clip to scale it. Drag on either of the clip's lower corners
to cycle it, or Ctrl+drag on either lower corner to bounce it.
Transitions interpolate from one clip to the next, making the animation
flow smoothly between clips rather than jerk suddenly at the start of
the next clip. If you're working in a pose-to-pose method of animation
using pose-based action clips, you need transitions to prevent
blocky-looking animation.
Add markers to clips to attach information, such as for synchronizing
action or shape clips with audio clips. Create thumbnails for each clip
to help quickly identify what's in them.
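All of these manipulations (holding, scaling, cycling, bouncing) can be thought of as a remapping of clip-local time into the source's frame range. The following is a minimal illustrative sketch of that idea in Python; the function name, parameters, and formula are assumptions for explanation, not XSI's actual implementation:

```python
def source_frame(global_frame, clip_start, source_length,
                 scale=1.0, extrapolate="cycle"):
    """Map a global frame to a frame inside the clip's source.

    clip_start:   frame where the clip begins on the track.
    scale:        stretches the clip (2.0 plays at half speed).
    extrapolate:  "cycle" repeats the source; "bounce" ping-pongs it.
    Hypothetical helper -- XSI computes this internally.
    """
    local = (global_frame - clip_start) / scale
    period, phase = divmod(local, source_length)
    if extrapolate == "bounce" and int(period) % 2 == 1:
        phase = source_length - phase  # odd repetitions play backwards
    return phase
```

For a 10-frame source, global frame 13 falls 3 frames into the second repetition when cycled, but 7 frames from the end of it when bounced, since bounced repetitions alternate direction.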
Mixing the Weights of Action Clips
One of the most powerful features of the animation mixer is its ability
to mix the weight of clips against each other. When two or more clips
overlap in time and drive the same objects, you can mix them by
setting their weights. By adjusting the weight of a clip, you can control
how much influence it has compared to the other clips in the resulting
animation. The higher the mix weight, the more strongly a clip
contributes to the animation. Mixing compound clips is an easy way to
blend animation at an even higher level.
You can set keys on each clip's weight to animate the changes. When
the weight is animated, a weight fcurve is created that you can adjust
like any other fcurve.
How to Mix and Key Action Clip Weights
1 Put clips on different tracks and overlap them where you want to
mix them.
2 Move to the frame at which you want to set a key.
3 Set a weight value for each clip at this frame. Red curves on the clip
display its weight values. The weight can also be constant for the
duration of the scene.
4 Click each weight's animation icon to set a key for this value at this
frame.
5 After you are done setting keys for the weights, you can edit the
resulting weight fcurves. Right-click the weight's animation icon and
choose Animation Editor.
For the club-bot here, an arm wave action is being mixed with a
dejected turn action.
You can also create a custom parameter set, then drag and drop the
animation icons from each action clip weight in the mixer into the
parameter set to make proxies of those weight sliders.
You can control how the weights of clips are combined using the
Normalize option in the Mixer Properties:
• When Normalize is on, the weight values of the separate clips are
averaged out. This is useful if you're blending similar actions, such as
two leg actions of a character.
• When Normalize is off, mixes are additive, meaning that the weight
values of the separate clips are added on top of each other. This is
useful if you're weighting dissimilar actions against each other, such as
weighting arm and leg actions of a character, or separate clusters on a
face with shape animation.
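The Normalize behavior can be pictured with simple arithmetic. The sketch below is an illustrative model of averaged versus additive mixing, not XSI's exact internal math:

```python
def mix_weights(values, weights, normalize=True):
    """Combine overlapping clip values for one parameter.

    normalize=True:  weighted average -- weights are balanced
                     against each other.
    normalize=False: additive -- each weighted value is stacked
                     on top of the others.
    Illustrative model only; XSI's internal math may differ.
    """
    weighted = sum(v * w for v, w in zip(values, weights))
    if normalize:
        total = sum(weights)
        return weighted / total if total else 0.0
    return weighted
```

With Normalize on, two similar clips at full weight resolve to a sensible in-between value; with it off, the same weights would double the result, which is why additive mixing suits dissimilar actions driving different parts of the character.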
Mixing Fcurves with Action Clips
Normally, when there is an action clip in the mixer, it overrides any
other animation on that object that covers the same frames. However,
by selecting the Mix Current Animation option in the Mixer Properties
editor, you can blend fcurves on the object directly with an action clip
over the same frames.
For example, you can paste a clip in the mixer that contains the final
animation for an object, then blend it with other fcurve animation you
have added to that object, such as a slight offset or a minor adjustment
to a mocap clip.
Being able to mix clips directly with fcurves means that you can easily
create animation using the mixer, as well as use it for blending and
tweaking final animations. You can keep manipulating and setting keys
for the animated object without having to make its animation into a
clip to blend it with another clip.
Open the Mixer Properties editor and select Mix Current Animation.
Then adjust the leg and arm a bit and key it. The Mix Weight value
determines how much influence the fcurve animation has over the
animation in the clip. Key this parameter to blend the fcurves in and
out of the action clips.
Club-bot with a run action clip active in the animation mixer.

Modifying and Offsetting Action Clips
If you want to modify an action clip that contains animation data from
fcurves, you can create a clip effect. A clip effect is a package of any
number of variables and functions that you use to modify the data in
the action source. Each clip effect is an independent package,
associated with its action clip, that sits "on top" of the clip's original
action source animation without touching it.
Because the effect is an independent unit, you can easily activate or
deactivate it, allowing you to toggle between the clip's original
animation and the animation modifications in the clip effect. This
makes it easy to test changes to your animation.
You may need to edit a clip's animation for a number of reasons:
• Add a progressive offset (using the cid variable) to a stationary walk
cycle so that a character moves forward with each cycle.
• Animation coming from a library of stored actions often needs to be
modified to fit a particular goal or environment. For example, you have
a walk cycle, but the character must now step over an obstacle, so you
have to move the leg over the obstacle.
• Animation that was originally created or captured for a given
character must be applied to a different character that has different
proportions.
• Animation with numerous keys, such as motion capture animation,
must be adjusted, but you don't want to touch the original animation
because it can be difficult to edit. Moving a key point in a mocap
fcurve results in a peak in the curve.
How to Add a Clip Effect to a Clip
1 Right-click an action clip and choose Clip Properties.
2 In the Instanced Action property editor, click the Clip Item
Information tab.
3 Enter formulas for any item's expression to create a clip effect.
4 The clip effect is created and displayed as a yellow bar above the
clip.
The cid variable in a clip effect is the cycle ID number. The cycle ID
can be used to progressively offset a parameter in an action, such as
for having a walk cycle move forward. The cycle ID of the current
frame is in the Time Control property editor (click the clip and press
Ctrl+T).
For example, with a clip effect expression like (cid * 10) + this, the
parameter value of the action is used for the duration of the original
clip, then 10 is added for the first cycle, 20 is added for the second
cycle, and so on.

Offsetting Clip Values
Offsetting actions is a task that you will probably perform frequently.
This lets you move an object in local space so that its animation occurs
in a different location from where it was originally defined. In the
example here, the original position has the foot in the ball; the leg
effector is translated to a position where Club-bot is just about to kick
the ball, and an offset key is set.
To offset a clip's values, you can:
• Click the Offset Map button in the mixer's command bar.
• Choose the Set Offset Map - Changed Parameters command, which
compares the current value of all parameters driven by the clip and
sets an offset if there is a difference.
• Choose the Effect > Set Offset Keys - Marked Parameters command,
which is the same as creating a clip effect, except that the clip effect's
offset expression is created for you.
• Choose the Set Pose Offset command to offset all transformations
(scaling, rotation, and translation). All parameters to be offset are
calculated together as a whole instead of as independent entities. The
pose offset is especially useful for offsetting an object's rotation as well
as its position. As with clip effects, pose offsets sit "on top" of a clip's
animation.
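The example expression above can be evaluated by hand. This sketch reproduces the arithmetic of (cid * 10) + this; the function name and parameters are hypothetical, used only to show how the offset grows per cycle:

```python
def clip_effect(base_value, cid, per_cycle_offset=10):
    """Evaluate the example clip effect expression (cid * 10) + this.

    cid is the cycle ID: 0 for the original clip, 1 for the first
    cycle, and so on; base_value stands in for 'this', the
    parameter's stored value in the action source.
    """
    return cid * per_cycle_offset + base_value
```

So a root translation that is 3 units inside the source stays 3 for the original clip, becomes 13 on the first cycle, 23 on the second, moving the walk cycle forward with each repetition.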
Changing Time Relationships (Timewarps)
A timewarp basically defines the speed of the animation in a clip.
Timewarps change the relationship between the local time of the clip
and the time of its parent (either a compound clip or the entire scene)
while taking into account other things like scales, cycles, etc. You can
make a clip speed up, slow down, and reverse itself in a nonlinear way
(such as making a character run or walk backwards).
When you apply a timewarp to a compound clip, it creates an overall
effect that encompasses all clips contained within the compound clip.
If your clip is cycled or bounced, the timewarp can either be repeated
on each cycle or bounce, or encompass the duration of the whole
extrapolated clip (the warp is not repeated with each cycle or bounce).
This means, for example, that the overall animation on a cycled clip
could increase in speed with each cycle.
You can apply a timewarp by right-clicking a clip and choosing Time
Properties, or by selecting a clip and pressing Ctrl+T. The Warp page is
home to both the Do Warp and Clip Warp options. Use the Clip Warp
option for applying a warp over an extrapolated clip to warp its overall
animation.

Sharing Animation between Models
One of the great things about actions is that you can use them again
and again. You can create an action for one model and then use it
again to animate another model in the same or another scene. You can
even use the same action for different objects within the same model.
These two models can share actions easily because they have similar
hierarchies.
There are a number of ways in which you can share animation between
models, whether they are in the same scene or a different scene:
• Copy action sources and compound sources between models in the
same scene.
• Copy action clips and compound clips (which let you combine a
number of clips non-destructively) between models.
• Save an action source as a preset to copy action sources between
models in different scenes.
• Create an external action source in a separate file in different
formats (.xsi or .eani) to be used in other XSI scenes.
• Import and export action sources in different file formats to be
used in other scenes or other software packages.
• Import and export a model's animation mixer as a preset
(.xsimixer) to copy it to models in the same scene or another scene.
Copying Action Sources between Models
If you want to share an action source between models in the same
scene, you can drag-and-drop one from the model’s Mixer > Sources >
Animation folder in the explorer onto the mixer of another model.
This makes a copy of that action source for the model.
1 Open the animation mixer for the model to which you want to copy
the action source (the target).
2 Open an explorer and expand the Model node for the model from
which you want to copy the action source (the original).
3 Drag a source from the original model's Mixer > Sources >
Animation folder in the explorer and drop it on a track in the
animation mixer of the target model.
To copy compound sources between models, press Ctrl while you drag
the compound action source from the model's Mixer > Sources >
Animation folder to a track in the other model's mixer.
You can also create connection-mapping templates to specify the
proper connections between models before you copy action sources
between them. These templates set up rules for mapping the object
and parameter names stored in the action sources, such as when
similar elements have different naming schemes, like L_ARM and
LeftArm.
To create a connection-mapping template, open the animation mixer
and choose Effect > Create Empty Connection Template. A template is
created for the current model and the Connection Map property editor
opens. Once you have created an empty connection-mapping template,
you can add and modify the rules as you like.
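A connection-mapping template amounts to a set of name-rewriting rules applied when a source is connected to a new model. The following is a hypothetical sketch of how such rules might resolve names like L_ARM to LeftArm; it is not the actual template format XSI stores:

```python
def map_connection(name, rules):
    """Resolve an object name stored in an action source to the
    target model's naming scheme using ordered rewrite rules.

    rules: list of (stored_name_fragment, target_fragment) pairs.
    Hypothetical stand-in for XSI's connection-mapping templates.
    """
    for stored, target in rules:
        if stored in name:
            return name.replace(stored, target)
    return name  # unchanged names resolve automatically by namespace

# Example rules for two models with different naming schemes.
rules = [("L_ARM", "LeftArm"), ("R_ARM", "RightArm")]
```

Names that no rule matches fall through unchanged, mirroring how identically named elements connect automatically between models.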
Jaiqua's (on the left) elements are mapped to the corresponding ones
on the Club-bot using a connection-mapping template. This is set up
before action sources are shared between them.
Mapping Model Elements for Sharing
Sharing actions is possible because each model has its own namespace.
This means that each object in a single model's hierarchy must have a
unique name, but objects in different models can have the same name.
For example, if an action contains animation for one model's left_arm,
you can apply the action to another model and it automatically
connects to the second model's left_arm element.
If the names of some of the objects and parameters in the source don't
match when you're copying sources between models, the Action
Connection Resolution dialog box opens, in which you can resolve
how the objects or parameters are mapped.
Adding Audio to the Mix
You can add audio files to your scenes using the animation mixer. This
allows you to adjust the timing of your animations by using the sound
as a reference. For example, you can use an audio file as reference for
lip synching with a shape-animated face, or sync up some special effect
noise with an animation. Or you could load an audio file to do some
previsualization or storyboarding as you’re experimenting with your
animation project.
Sound files are added as audio clips on tracks in the animation mixer,
in the same way as you load action and shape sources as clips on tracks.
Once you have an audio clip in the mixer, you can move it along the
track, copy it, scale it, add markers to it, and mute or solo it.
The following process shows how you can easily load and play sound
files in the animation mixer.
How to Synchronize Audio with Animation
1 Load an audio source file on an audio track in the animation mixer
to create an audio clip. To do this, right-click a tan-colored audio track
and choose Load Source from File.
2 In the Playback panel, click the All button so that RT (real-time
playback) is active. Play the audio clip using the regular playback
controls below the timeline, including scrubbing in the timeline and
looping. You can toggle the sound on and off (On/Muted) by clicking
the headphones icon.
3 Markers let you delimit different portions of the audio clip and give
their wave patterns a meaningful name to help you synchronize more
easily with the animation. Move the playback cursor to the portion of
the audio wave you want to mark, then create markers with the Create
Marker tool in the mixer by pressing the M key and dragging over a
range of frames on the clip.
4 Adjust the animation of the character (such as facial animation) to
match the marked audio waveforms. To help do this, you can view the
audio waveform in the timeline or the fcurve editor to sync with the
animation, or you can create a flipbook to preview the animation with
audio.
5 When you're satisfied with the results, do a final render and use an
editing suite to add the sound to the final animation.
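Conceptually, audio markers are just named frame ranges on a clip. A hypothetical sketch (the data structure is an assumption, not XSI's marker format) of looking up which marked portion of the audio the playback cursor is in:

```python
def marker_at(markers, frame):
    """Return the name of the marker covering a frame, if any.

    markers: list of (start_frame, end_frame, name) tuples
    delimiting portions of an audio clip.
    Hypothetical structure for illustration only.
    """
    for start, end, name in markers:
        if start <= frame <= end:
            return name
    return None

# Two marked portions of a dialogue clip.
dialogue = [(10, 24, "hello"), (30, 52, "laugh")]
```

While keying facial animation, such a lookup tells you which wave pattern the current frame belongs to, which is exactly what named markers give you visually in the mixer.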
Section 13
Simulation
Imagine a scene with an alien climbing out of her
space ship: it has just crashed to the ground after
breaking through fence posts like match sticks,
smoke streaming out of the engine. As she stares at
the burning rubble that was once her home in the
skies, a single tear rolls down her cheek. She
stumbles through a raging snow storm, the
howling wind whipping through her hair and
tearing at her cape.
You can use all the simulation powers in XSI to
create your own compelling scenes—all the tools
are there for you.
What you’ll find in this section ...
• Dynamics and Particle Effects
• Making Things Move with Forces
• Particles
• Hair and Fur
• Rigid Body Dynamics
• Soft Body Dynamics
• Cloth Dynamics
Dynamics and Particle Effects
In XSI, you can simulate almost any kind of natural, or unnatural,
phenomena you can think of. To simulate these phenomena, you must
first make objects into rigid bodies, soft bodies, or cloth, or generate
particles or hair from an emitter. Only these types of objects can be
influenced by forces and collisions to create simulations.
Natural forces make simulated objects move and add realism. As well,
you can create collisions using any type and number of obstacles for
any type of simulated object.
Types of simulated objects you can create in XSI: particles, rigid
bodies, hair, soft bodies, and cloth.

Making Things Move with Forces
Forces make simulated objects move according to different types of
forces in nature. Each force in XSI has a control object that you can
select, translate, rotate, and scale like any other object in a scene. For
example, you can animate the rotation of a fan's control object to create
the effect of a classic oscillating fan. Scaling a force's control object
changes its strength as well as its size.
Each simulated object can have multiple natural forces applied to it,
and the same force can be applied to any number of simulated objects.
Types of Forces
You can use any of these forces with hair, particles, and rigid bodies, but
not all forces work with soft body, cloth, fluid, and explosions.
Gravity is the most common type of force, even though it's actually a
constant. It pulls the simulated objects down ... unless you reverse it!
The wind force controls the effect of wind blowing on the simulated
objects.
The fan creates a "local" effect of wind blowing through a cylinder so
that everything inside the cylinder is affected.
The drag force opposes the movement of simulated objects, as if they
were in a fluid.
The turbulence force builds a wind field to let you imitate turbulence
effects, such as the violent gusts of air that occur when an airplane
lands.
The attractor attracts or repels simulated objects much like a magnet
attracts or repels iron filings.
The vortex simulates a spiralling, swirling movement.
An eddy force simulates the effect of a vacuum or local turbulence by
creating a vortex force field inside a cylinder.
The toric force simulates the effect of a vacuum or local turbulence by
creating a vortex force field inside a torus.

How to Create and Apply a Force
You can apply a force to simulated objects (particles, hair, soft bodies,
and cloth) in one of two ways, as described below. For rigid bodies, the
process is different and simpler: simply create a force and it is applied
to all rigid bodies in the current simulation environment.
A Select the simulated object to which you want to apply the force,
then choose a force from the Get > Force menu on the Simulate
toolbar. The force is automatically applied to the selected object.
OR
B Choose a force from the Get > Force menu on the Simulate toolbar,
select the simulated object to which you want to apply the force, then
apply the force to this object by choosing Modify > Environment >
Apply Force from the Simulate toolbar.
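The way forces combine with an object's motion can be sketched with a single Euler integration step. This is an illustrative model only (F = m*a, with gravity applied as an acceleration and drag opposing velocity), not XSI's actual solver:

```python
def step(position, velocity, mass, dt=1/30,
         gravity=-9.8, drag=0.0):
    """Advance one simulated point along one axis by one step.

    Gravity accelerates every mass equally; the drag force opposes
    velocity and, like other forces, moves heavy objects less
    (a = F / m). Illustrative Euler integration, not XSI's solver.
    """
    force = drag * -velocity          # drag opposes movement
    accel = gravity + force / mass    # gravity is an acceleration
    velocity += accel * dt
    position += velocity * dt
    return position, velocity
```

Doubling the mass halves the effect of drag on the velocity, while leaving the pull of gravity unchanged, which is the behavior the force descriptions above describe informally.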
Particles
The particle simulators make it easy to animate all types of phenomena
that can be based on particles, such as dry ice flowing out of a flask in
an alchemist's laboratory, fireworks bursting in the night sky, or
snowflakes falling gently on a gray winter's day.
There are three separate particle-based simulators in XSI, each special
in the effects it creates and the tasks you can perform with it:
• The Particle simulator is the main "all-purpose" tool that you can
use to create the widest variety of particles. All particle shaders work
with this simulator, and many of the tools available for particles apply
only to it.
• The Fluid simulator specializes in fluid-type effects such as water,
oil, or mud. The Blob shader is applied to the fluid particles by default
to achieve a liquid appearance.
• The Explosion simulator specializes in effects associated with
explosions and fires. This simulator creates its effects by using three
different phases for smoke, flames, and sparks. You can use any
combination of these phases to achieve the effects you want.

What Makes Up a Particle System?
A particle system is an assembly of different parts that work together:
the particle simulator, the cloud, the emitter, the particle type, and the
shader. Natural forces and obstacles for collisions also affect the
particle simulation, but are not directly part of the particle system's
structure.
• The particle cloud represents the simulator that generates the
particles. You can have multiple particle clouds in a scene.
• The emitter is any object from which the particles are emitted, plus
the properties that determine how the particles are emitted. You can
have multiple emitters per particle cloud.
• Particle types are the "recipes" that describe what each group of
particles looks like and how they behave. You can have only one
particle type per emitter at a time.
• Particle shaders define the rendered look of the particles. There are
several special shaders for particles.
Particles are actually points associated with a particle cloud. As a result,
you can tag them, delete them, or create clusters of them—all as you
would points. This is especially useful if you want to deform a particle
cloud, animate a cluster of particles, or constrain objects to clusters of
particles.
When you play a particle simulation, you can have the particles
updated continuously (Live mode), calculate the changes only when
you want (Standard No Caching), or cache the PTP files to play back as
you like (Standard Caching). When you cache files, the playback is
faster, you can play backwards and scrub the simulation, and you can
use a special player to play the PTP files on any machine.
Overview of Basic Particle Workflow
1 Create particles from an emitter object by choosing a Create >
Particles command from the Simulate toolbar.
2 Set up the emission properties to get the basic particle movement,
such as rate, spread, and speed.
3 Set up the particle type to get the basic particle look and behavior,
such as size, lifetime, and basic color.
4 Add forces, such as gravity, and set up obstacles to add realism to
the particles' movement.
5 Edit the default shaders, or change and add more shaders to the
particle's render tree.
6 Use the render region to preview the final rendered results.
Setting Up the Emission and Basic Particle Type
When a particle is born (emitted from the particle emitter object), it
has certain properties that determine how many particles are emitted,
their speed, their spread, and their initial rotation. These are associated
with the particle cloud and the emitter object. In addition to emission
properties, the particles also have particle type properties that
determine their size, mass, length of life, and basic color. Many other
options are tied to the particle type, including noise, events, goals, and
reactions to forces.
Particles can be emitted from different components of the emitter
object: from points, from lines, from a surface, or from a volume.
Particle Rate: Rate is measured by the number of particles emitted per
second.
Particle Speed: Speed is measured by the number of Softimage units
traveled per second.
Particle Size: Size is measured in Softimage units. You can also animate
a size shift using the percentage of a particle's lifetime.
Particle Mass: Mass determines how quickly particles react to forces,
and how they react in a collision. More mass requires greater force to
change their motion. Gravity is a little different because it is a force
directly proportional to the particle mass.
Particle Lifetime: A particle's life is measured in seconds from the
moment it is emitted (born) to the moment it dies. You can also keep
particles alive for the entire duration of a simulation with the Live
Forever option, rather like life support for particles.
To get started with some particle types, choose View > Toolbars >
Shader Presets, then drag and drop a preset from the SimPTypes page
onto any particle cloud. You can also create and save your own particle
type presets.
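The interplay of rate and lifetime can be sketched numerically. Under the simplifying assumptions of continuous emission and a fixed lifetime (both assumptions for illustration, since XSI lets you vary and randomize these), the number of live particles settles at rate times lifetime:

```python
def alive_count(rate, lifetime, t, live_forever=False):
    """How many particles exist at time t (seconds).

    rate:     particles born per second (continuous emission).
    lifetime: seconds each particle lives, unless Live Forever
              keeps everything alive.
    Simplified model for illustration only.
    """
    born = rate * t
    if live_forever:
        return born
    died = rate * max(0.0, t - lifetime)
    return born - died
```

At 100 particles per second with a 2-second lifetime, the count climbs for the first 2 seconds and then holds at 200; with Live Forever on it keeps growing, which is why that option suits static clouds.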
Creating a Particle Event
Particle events are a combination of two things: a trigger and an action.
The trigger determines what causes the event to occur and the action
determines what will happen when the trigger is executed. For
example, a common trigger/action combination is a collision/bounce,
but there are many more options you can choose from. Particle events
are set per particle type, allowing you to have very specific control over
an effect.
How to set up an event
1 Select a particle cloud and choose Modify > Particles > Add Particle
Event from the Simulate toolbar.
2 Select a Trigger and set its value. The trigger is the thing that makes
the event happen. Here, the trigger is "at 30% of the particle's age".
3 Select an Action, which is the thing that happens when the trigger's
value is reached. Here, the action is that a new smoke particle type is
emitted.
A typical event to set up is a collision. You can create collisions with
the Modify > Environment > Set Obstacle command and then pick the
obstacle for the particles. Doing this automatically creates a collision/
bounce event.

Creating Goals for Particles
When you create a goal for particles, the particles are attracted to or
repelled from it, similar to the way in which a magnet attracts or repels
pieces of metal. With goals, you can create a number of particle effects,
such as drops of water forming into a puddle, paint being sprayed over
a surface, or a swarm of bees chasing after an unfortunate bear.
You can set goals for particle clouds, which affect all particle types that
are attached to them, or you can set goals for specific particle types.
You can use more than one goal to affect the particles, then set the
weight of each goal and animate the weight to make the particles move
between the goals.
How to set goals for particles
1 Select the particles.
2 Choose Modify > Particles > Add Goal from the Simulate toolbar,
then pick one or more objects to be the goals.
3 Select a goal behavior on the Goal property page:
• Chase makes particles follow an animated goal.
• Arrive makes particles slowly approach the goal in a circuitous
manner.
• Stick makes particles appear directly on the goal as they are
born—there is no travelling from the particle cloud to the goal object.
• Flee repels the particles from the goal.
• Spring links the particles to the goal on springs and dampers.
Basics • 213
Section 13 • Simulation
Attaching Objects to Particles
Attaching objects to particles allows you to use instances of objects as part of a particle simulation. Using instancing, you can attach any geometric object, a light, or a camera to particles to create many different effects. For example, you could attach a blood cell object to some particles flowing through an artery and take a wild ride through the blood stream!
Create a group of one or more objects, then pick them as the instances on the particle type’s Instancing property page. When the particles are rendered, the instances of objects that are attached to the particles are rendered instead of the particles. The instances can inherit the rotation of the particles, as well as their size.
Another way of attaching objects to particles is to create particle clusters and then constrain an object to the clusters. This is useful if you have particles that aren’t constantly being born and dying, such as a static cloud whose particles live forever.

Deforming Particle Simulations
You can apply almost any of the standard tools from the Deform menu (such as Push, Twist, or Taper) on the Simulate toolbar, or you can use lattices and cage deformers to deform particles.
Deformations provide a powerful way of making complex particle systems, especially when combined with natural forces. For example, you could make a whirling tornado by containing a particle system within a twisted and tapered lattice and then applying a vortex force. You could also have a simple school of fish deform along a curve in a flowing line.
Deformations are done on the particle cloud, not individual particle types, so to have finer control over deformations, you can deform particle clusters. This allows you to deform each particle type individually instead of all the particle types attached to a cloud.
Create a lattice and deform it in any way, such as by transforming its points or by using any of the standard Deform operators on it, such as Twist or Bulge.
Particle Shaders and Rendering
Shaders designed for particles
In many basic ways, rendering particle simulations is similar to
rendering any other object in XSI. You can use all standard lighting
techniques, set shadows, and apply motion blur. However, there are
special shaders designed specifically for particles that let you make
particles look the way you want.
The Billboard, Sphere, and Blob shaders define the basic shape/surface
of the particles. You need to have one of these shaders attached to the
particle type before you can attach other shaders to the particle’s render
tree.
Blob shader with Fluid particles. Make particles transparent by connecting standard XSI transparency shaders or setting the particle shader’s alpha value.
Once you have one of these three basic particle shaders attached to the particle type, you can plug most other standard XSI shaders into a particle’s render tree.
How to attach shaders to a particle’s render tree
1 Select the particles and open a render tree (press 7). This tree shows the default shader connection when you create a particle cloud.
2 Select any particle shader from the Nodes > Particle menu in the render tree. You must first have one of the basic shaders (Billboard, Sphere, or Blob) attached to the particle type.
3 Then you can connect other particle shaders, such as the Shape, Sprite, or Particle Gradient, or any standard XSI shader.
As you’re tweaking the particles, use the render region to preview the final look: just press Q and drag over the particles.
Examples: Billboard shader with a Shape shader attached (the shape used here is a fractal noise); Sprite shader using a bubble image; Sphere shader with a wood texture; a particle flame using the Particle Gradient shader.
Hair and Fur
In XSI you can make all sorts of hairy and furry things—from Lady
Godiva to rabbits, bears, and caterpillars. Hair in XSI is a fully
integrated hair generator that interacts with other elements in the
scene. If you apply dynamics to the hair, the dynamics operator
calculates the movement of the hair according to the velocity of the
emitter object and any natural forces applied to the hair object.
Overview of Growing and Grooming Hair
1 Emit hair from an object, cluster, or curves.
2 Style the guide hairs using tools on the Hair toolbar.
3 View and set up how the render hairs look.
4 Apply dynamics to have hair respond to movement and natural forces.
5 Select obstacles for hair collisions.
6 Adjust the default hair shader or apply another one to the hair.
Hair comes with a set of styling tools that allow you to groom and style the hair, almost as easily as if it were on your head. You can control the styling hairs one at a time, or grab many and style in an overall way.
To control the rendered look, you can use two special shaders designed for hair, or you can use any other XSI shader with hair. And as with all things rendered in XSI, you can use the render region to preview accurate results.
Hair is represented by two types of hairs: guide hairs and render hairs. Guide hairs are segmented curves that act like inverse kinematics (IK) chains and are used for styling, while render hairs are the “filler” hairs that are generated from and interpolated between the guide hairs. Render hairs are the only hairs that are actually rendered.
Guide hairs are shown in white when selected—these are the hairs that you style. The render hairs are interpolated between the guide hairs—these are the hairs that are rendered.
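The guide/render relationship can be pictured with a small sketch. This is a conceptual illustration only; XSI’s actual interpolation is more sophisticated, and the function name is invented:

```python
# Conceptual sketch: a render hair generated between two guide hairs by
# linearly blending their corresponding points.
def interp_hair(guide_a, guide_b, t):
    """Blend two guide hairs (lists of (x, y) points) at parameter t in [0, 1]."""
    return [tuple((1 - t) * a + t * b for a, b in zip(pa, pb))
            for pa, pb in zip(guide_a, guide_b)]

guide_a = [(0.0, 0.0), (0.0, 1.0), (0.5, 2.0)]    # tip leans right
guide_b = [(2.0, 0.0), (2.0, 1.0), (1.5, 2.0)]    # tip leans left
render_hair = interp_hair(guide_a, guide_b, 0.5)  # halfway between them
```

Styling the guide hairs automatically reshapes every render hair blended from them, which is why you only ever groom the guides.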
Basic Grooming
When you’re styling, you always work with the guide hairs: these are the styling hairs that are similar to and behave like segmented chains. In fact, the most intuitive way to style hair is to grab a tip and position it the same way you would position the effector of an IK chain.
You can find all styling tools on the
Hair toolbar (press Ctrl+2).
Because guide hairs are actual geometry, you can use all of the standard
Deformation tools on them to come up with some groovy hairdos!
Lattices, envelopes, deform by cluster center, randomize, and deform
by volume usually produce the best results. However, if you animate
the deformations, you cannot then use dynamics on the hair.
• Use the Brush tool to sculpt hairs with a natural falloff, like proportional modeling.
• Comb the hair in the desired direction, such as in the negative Y direction. Maybe use Puff to give some lift at the roots.
• Translate and rotate specific tips or points of hair.
• Select tips, points, or entire strands of hair to style in any way. Here, just the tips of some hair strands are selected. When you use a styling tool after selecting Tip, press Alt+spacebar to return to the Tip selection tool.
• Use the Clump tool to bring hair strands or points closer together.
• Change the length of the guide hairs using the Cut tool or the Scale tool.
• Copy the style to another hair object.
• You can deform the shape of the hair using any deformation tool, like a lattice. To have smoother animation, activate Stretchy mode to allow the hair segments to stretch along with the deformation.
Making Hair Move with Dynamics
When you apply dynamics to hair, you make it possible for the hair to move according to the velocity of the hair emitter object, like long hair whipping around as a character turns her head quickly. The dynamics calculations also take into account any natural forces applied to hair, such as gravity or wind, as well as any collisions of the hair with obstacles.
You can also use dynamics as a styling tool by freezing the hair when it’s at a state that you like. For example, apply dynamics, apply some wind to the hair, then freeze the hair when it has the right wind-swept look.
How to apply dynamics to hair
1 Select the hair and choose Create > Dynamics on the Hair toolbar.
2 Play through the simulation—you may want to loop it.
3 Animate the hair emitter object’s translation or rotation, or apply a force to the hair to make it move.
4 Adjust the hair’s Stiffness, Wiggle, and Dampening parameters, if necessary.
5 Set the Cache to Read&Write, then play the simulation to cache it to a file for faster playback and scrubbing. Caching also helps for more consistent rendering results.
Tip: Click the Style button on the Hair toolbar to toggle the dynamics state. You can style the hair only when dynamics is off.

Getting the Look with Render Hairs
The render hairs are the “filler” hairs that are generated from and interpolated between the guide hairs. And as their name implies, render hairs are the hairs that are actually rendered. You can change the look of a hair style quite a lot by modifying the render hairs.
• Set the number of render hairs to be rendered, then decide which percentage of this value you want to display. To work quickly, display a low percentage, then display the full amount of hair for the final render.
• Set the render hair root and tip thickness separately.
• Add kink, waves, and frizz to render hairs to change their shape.
• Change the number of segments to change the hair’s resolution. Use a higher amount for curly or wavy hair.
• Set the hair’s density according to a weight or texture map so that you can create some bald spots or sparser growth. You can also use maps for the render hair length (cut map) so that some areas have shorter hair than others.
Hair Shaders and Rendering
Rendering hair is similar to rendering any other object in XSI. You can
use all standard lighting techniques (including final gathering and
global illumination), set shadows, and apply motion blur. Hair is
rendered as a special hair primitive geometry by the mental ray
renderer.
While you can use any type of XSI shader on hair, there are two special
hair shaders (the Hair Renderer and Hair Geo shaders) that give you
the most control over the hair for making it look the way you want. You
can determine different coloring, transparency, and translucency
anywhere along the length of the hair, such as at the roots and tips.
The Hair Renderer shader gives you control over coloring, transparency, and shadows along the hair strands. You can also optimize the render and take advantage of final gathering.
The Hair Geo shader lets you set the coloring, transparency, and translucency using gradient sliders, which give you lots of control over where the shading occurs along the hair strand. You can even add incandescence to make the hair “glow”, for example on the inner part of the hair strand or on its rim. Another example: high translucency values along most of the hair strand, lessening at the root.
How to attach shaders to hair
1 Select the hair and open a render tree (press 7). This tree shows the default shader connection when you create hair.
2 To switch to the Hair Geo shader, choose Nodes > Hair > Hair Geometry Shading and attach it to the hair’s Material node in the same way as the Hair Renderer shader.
3 To connect other XSI shaders to the hair, disconnect the current Hair shader. Then you can load and connect an XSI shader directly to the hair’s Material node. For example, you can attach a Phong shader to the Surface input of the hair’s Material node to change the hair’s color.
To get started with some hair coloring, choose View > Toolbars > Shader Presets, then drag and drop a preset from the Hair page onto a hair object. These presets are based on the Hair Renderer shader.
Connecting a Texture Map to Hair Color Parameters
A texture map is the combination of a texture projection plus an image. Instead of one value being applied over the surface as with a weight map, a texture map applies a color. You create a texture map in which you select the texture projection method, then link up an image file whose pattern of colors you want to map.
When mapping a texture to the hair, the color of the individual strands is derived from the texture color found at the root of the hair, so make sure your map is painted accordingly.
Unlike other geometry in XSI, hair is not a typical surface, so you can’t apply projections directly to it. Instead, you need to create a texture map property for the hair emitter object first, and then transfer it to the hair itself.
To do this, apply a texture map to the hair emitter using one of the Get > Property > Texture Map commands, associate an image to this projection to use as the map, then transfer the texture map from the hair emitter to the hair object itself using the Transfer Map button on the Hair toolbar. You can then change the color of the hair using the texture map connected to the hair shaders’ color parameters.

Rendering Objects (Instances) in Place of Hairs
Replacing hairs with objects allows you to use any type of geometry in a hair simulation. You can replace hair with one or more geometric objects (referred to as instances) to create many different effects. For example, you could instance a feather object for a bird or instance a leaf object to create a jungle of lush vegetation.
The instanced geometry can be animated, such as its local rotation or scaling, or animated with deformations. This allows you to animate the hair without needing to use dynamics, such as instancing wriggling snakes on a head to transform an ordinary character into Medusa!
To render instances for the hairs, simply put the objects you want to instance into a group; each object in the group is assigned to a guide hair when you select options on the Instancing page in the Hair property editor. The instanced geometry is calculated at render time, so you’ll only see the effect in a render region or when you render the frames of your scene.
You can choose whether to replace the render hairs or just the guide hairs. You can also control how the instances are assigned to the hair (randomly or using weight map values), as well as control their orientation by using a tangent map or having the instances follow an object’s direction.
Rigid Body Dynamics
Rigid body dynamics let you create realistic motion using rigid body
objects (referred to as rigid bodies), which are objects that do not
deform in a collision. With rigid body dynamics, you can create
animation that could be difficult or time-consuming to achieve with
other animation techniques, such as keyframing. For instance, you can
easily make effects such as curling rocks colliding and rebounding off
each other, a brick wall crumbling into pieces, or a saloon door
swinging on its hinges.
How to Create a Rigid Body Simulation
You can make a regular object into a rigid body by simply selecting it and choosing a Create > Rigid Body command from the Simulate toolbar. This applies rigid body properties to that object, which include the object’s physical and collision properties, such as its mass or density, center of mass, elasticity, and friction.
The center of mass is the location around which a rigid body spins when dynamics is applied (forces and/or collisions). By default, the center of mass is at the same location as the object’s center, but you can move it to wherever you like. With the center of mass at the default location of the object’s center, a falling box bounces a bit in the middle before falling off the edge; with the center of mass moved to the bottom right corner of the object, the box hits the edge and tumbles more quickly, with more spinning.
1 Select an object and choose either Create > Rigid Body > Active Rigid Body or Passive Rigid Body from the Simulate toolbar. A simulation environment is automatically created in which the rigid body dynamics are calculated.
2 Apply a force to the scene, such as gravity. The force is added to the simulation environment. If a rigid body is animated, you don’t need a force to make it move: just make sure to use its animation as its initial state for the simulation.
3 Have two or more rigid bodies collide—make their geometries intersect at any time other than at the first frame. Here, the floor is set as an obstacle by making it a passive rigid body.
4 Set up the playback for the environment. This includes the duration of the simulation, the playback mode, and caching the simulation.
5 Play the simulation!
Tip: Animation ghosting lets you display a series of snapshots of the rigid bodies at frames behind and/or ahead of the current frame, so you can preview the simulation result without having to run the simulation.
Basics • 221
Section 13 • Simulation
Simulation Environments
Adding Forces to the Environment
All elements that are part of a rigid body simulation are controlled
within a simulation environment. A simulation environment is a set of
connection groups, one for each type of element in the simulation: the
rigid bodies, the rigid constraints, the forces, the dynamics operator,
the simulation time control, and simulations you have cached. A
simulation environment is created as soon as you make an object into a
rigid body. You can also create more environments so that you have
multiple simulation environments in one scene.
When you create a force in a scene, that force is automatically added to
the Forces group in the current simulation environment and the
dynamics solver calculates all active rigid bodies’ movements according
to the force. If there are other simulations in the scene (such as particles
or hair), they are not affected by the force unless you specifically apply
it to them.
The environment keeps track of the relationships between the objects
in the simulation and determines onto which objects the dynamics
operator is applied. The dynamics operator solves the simulation for all
elements that are in this environment. It calculates the moment of
inertia about a rigid body’s center of mass resulting from forces acting
on the rigid body, then does collision detection on the geometry of all
rigid bodies involved in the collision.
You have a choice of dynamics operators in XSI: physX or ODE. physX is
the default operator, offering you stable and accurate collisions with many
rigid bodies in a scene, even when using the rigid body’s actual shape as
the collision geometry. ODE is a free, open source library for simulating
rigid body dynamics.
You can see the current
simulation environment by
using the Curr. Envir. scope
in the explorer. Or use the
Environments scope to see
all simulation environments
in the scene.
All elements involved in the
rigid body simulation are
contained within this
environment.
After you apply the force, you can adjust its weight individually on the
rigid bodies. For example, you may want to have only 50% of a gravity
force’s weight applied to a specific rigid body, while you want 100% of
the gravity’s weight used on all the other rigid bodies in the simulation.
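Per-body weighting amounts to scaling the force before it is applied to each rigid body. A minimal sketch (the helper name and values are illustrative, not XSI internals):

```python
# Conceptual sketch: the same gravity force applied at different weights
# to different rigid bodies.
GRAVITY = -9.8  # a downward force

def weighted_force(force, weight):
    """Scale a force's effect for one rigid body (weight in [0, 1])."""
    return force * weight

half_weight = weighted_force(GRAVITY, 0.5)   # this body feels 50% of gravity
full_weight = weighted_force(GRAVITY, 1.0)   # this body feels all of it
```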
Passive or Active?
Rigid bodies can be either active or passive:
• Active rigid bodies are affected by dynamics, meaning that they can
be moved by forces and collisions with other rigid bodies.
• Passive rigid bodies participate in the simulation but are not
affected by dynamics; that is, they do not move as a result of forces
or collisions with other rigid bodies. They can, however, be
animated. You often use passive objects as stationary obstacles or as
stationary objects in conjunction with rigid constraints (as an
anchor point).
You can easily change the state of a rigid body by toggling the Passive
option in the rigid body’s property editor.
The pool table is a passive
rigid body, while the white
ball is an active rigid body
with the gravity force
applied.
The ball rebounds off the
table but the table does not
move.
Animation or Simulation?
You can apply rigid body dynamics to objects that are animated or not:
• If the rigid bodies are animated, you can use their animation (position, rotation, and linear/angular velocity) for the initial state of the simulation. When you apply a force to an animated rigid body, the force takes over the object’s movement as soon as the simulation starts.
• If the rigid bodies are not animated, you need to apply a force to make them move.
You can easily animate the active/passive state of a rigid body to achieve various effects: you simply animate the activeness of the Passive option in the rigid body’s property editor. For example:
1 The billiard ball is a passive rigid body whose rotation and translation are animated to make it move to the table’s edge. A gravity force has been applied to the simulation environment.
2 When the ball reaches the edge of the table, the ball’s state is switched from passive to active, the simulation takes over, and gravity makes the ball fall down.

Creating Collisions with Rigid Bodies
Rigid bodies are all collision objects as soon as they come in contact with one another—you don’t need to specifically set an object as an obstacle with rigid bodies. For example, to animate billiard balls colliding with each other, you simply make the balls into rigid bodies. Then when they come in contact with each other, they all react to the collision.
At least one rigid body must be active to create a collision. When you have collisions between two or more active objects, they all move because they are all affected by the dynamics. For example, when all billiard balls are assigned as active rigid bodies and the white ball hits them, they all react to the collision.
You can put rigid bodies into different collision layers, which lets you create exclusive groups of rigid bodies that can collide only with each other. By putting rigid bodies that don’t need to collide with each other into different collision layers, you can reduce the collision processing time.
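The payoff of collision layers is fewer pairwise tests. A conceptual sketch (the helper function and layer ids are invented for illustration, not the XSI API):

```python
# Conceptual sketch: only rigid bodies sharing a collision layer are
# candidates for pairwise collision tests, shrinking the work the solver
# must do each frame.
from itertools import combinations

def collision_pairs(bodies):
    """bodies: dict of body name -> layer id. Returns pairs sharing a layer."""
    return [(a, b) for (a, la), (b, lb) in combinations(bodies.items(), 2)
            if la == lb]

bodies = {"ballA": 1, "ballB": 1, "debris1": 2, "debris2": 2}
pairs = collision_pairs(bodies)   # cross-layer pairs are skipped entirely
```

Here only two of the six possible pairs are ever tested, because the balls and the debris live in separate layers.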
All rigid bodies use a set of collision properties to calculate their reactions to each other during a collision. These properties include elasticity, friction, and the collision geometry type.
• Elasticity is the amount of kinetic energy lost from an object when it collides with another object. For example, when a billiard ball hits the table, elasticity influences how much the ball rebounds.
• Friction is the resisting force that determines how much energy is lost by an object as it moves along the surface of another. For example, a billiard ball rolling along a table has a lower friction value than a rubber ball along a table. Likewise, a billiard ball rolling on a carpet would have more friction than if it was rolling on a marble floor.
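Elasticity behaves like a coefficient of restitution. A rough sketch (illustrative numbers and names, not XSI’s solver):

```python
# Conceptual sketch: elasticity scales how much speed a body keeps when
# it rebounds off an obstacle.
def rebound_speed(impact_speed, elasticity):
    """Speed after bouncing off a static obstacle (elasticity in [0, 1])."""
    return impact_speed * elasticity

billiard_ball = rebound_speed(4.0, 0.8)   # lively rebound
lump_of_clay  = rebound_speed(4.0, 0.1)   # barely rebounds at all
```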
Collision Geometry Types
The collision type is the geometry used for the collision. It can be a bounding box/capsule/sphere, a convex hull, or the actual shape of the rigid body’s geometry.
• Bounding shapes (boxes, spheres, and capsules) provide a quick solution for collisions when shape accuracy is not an issue or the bounding shape’s geometry is close enough to the shape of the rigid body.
• Convex hulls give a quick approximation of a rigid body’s shape, with results similar to a box being shrinkwrapped around the rigid body. Convex hulls don’t calculate any dips or holes in the rigid body geometry, such as the inside of a bowl, but are otherwise the same as the rigid body’s original shape. Convex hulls resemble the actual shape of the rigid body well enough for most cases, and have the advantage of being very fast.
• Actual Shape provides an accurate collision using the rigid body’s original shape, but takes longer to calculate than bounding shapes or convex hulls. You may need to use the actual shape, however, for rigid body geometry that is irregular in shape or has holes, dips, or peaks that you want to consider for the collision, such as a bowl with cherries falling inside of it.
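Why a convex hull ignores dips can be seen in two dimensions with a standard hull algorithm (Andrew’s monotone chain). This sketch illustrates the geometry only; it is not XSI’s collision code:

```python
# Conceptual sketch: computing a 2D convex hull shows why hull collision
# shapes ignore dips--concave points fall strictly inside the hull.
def cross(o, a, b):
    """Cross product sign: > 0 means o->a->b turns counter-clockwise."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# A "bowl" profile: the interior dip at (1, 1) vanishes from the hull.
bowl = [(0, 2), (1, 1), (2, 2), (0, 0), (2, 0)]
hull = convex_hull(bowl)
```

The dip point is dropped, so a cherry falling toward it would collide with the flat top of the hull instead of settling into the bowl, which is exactly why Actual Shape is needed for that case.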
Constraints between Rigid Bodies
You can set constraints between rigid bodies to limit a rigid body to a specific type of movement. For example, you could create a trap door that has a hinge at one of its ends. Then when some crates fall on the trap door, the collision causes the trap door to open up and the crates fall through it.
Rigid body constraints are actual objects that you can transform (translate, rotate, and scale), select, and delete like any other 3D object in XSI.
You can constrain two rigid bodies together, a single rigid body to a point in global space, or several active rigid bodies together as a chain. The types of rigid body constraints are Slider, Ball and socket, Hinge, Spring, and Fixed.
How to constrain rigid bodies
1 Choose a constraint from the Create > Rigid Body > Rigid Constraint menu, then left-click to pick the position for the constraint object.
2 Left-click to pick the first constrained rigid body (A). The constraint object connects to its center.
3 Left-click to pick the second constrained rigid body (B). The constraint connects to its center, joining the two rigid bodies together.
To constrain multiple rigid bodies to one, choose a command from the Create > Rigid Body > Multi Constraint Tool menu.
In this example, A is a passive rigid body and B is an active rigid body. The figures show rigid body B’s resulting movement with gravity applied; notice how the constraint object is attached to both rigid bodies’ centers.
Cloth Dynamics
The cloth simulator uses a spring-based model for animating cloth
dynamics. You can specify and control the mass of the fabric, the
friction, and the degree of stiffness, allowing you to simulate different
materials such as leather, silk, dough, or even paper.
Cloth deformation is controlled by a virtual “spring net” which is made
up from three different types of springs, each controlling a different
kind of deformation: shearing, stretching, and bending.
After you set up how the cloth is deformed according to its own
“internal” spring-based forces, you can then affect how it’s deformed
using external forces, such as gravity, wind, fans, and eddies.
• Shear controls the resistance to shearing (crosswise stretching), keeping the cloth as close to its original shape as possible. Try decreasing this value if the cloth’s wrinkling is too rigid.
• Bend controls the resistance to bending. With low values, the cloth moves very freely like silk; with high values, the cloth appears like rigid linen or even leather.
• Stretch controls the resistance to stretching; that is, the elasticity of the material. Low values allow the cloth to deform without resistance, while higher values prevent the cloth from having elasticity.
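The spring model behind these parameters can be sketched with Hooke’s law. This is a conceptual illustration; the stiffness numbers are invented:

```python
# Conceptual sketch: each cloth spring resists displacement from its rest
# length; the stiffness plays the role of the Shear/Bend/Stretch values.
def spring_force(length, rest_length, stiffness):
    """Restoring force pulling the spring back toward its rest length."""
    return stiffness * (length - rest_length)

# The same 50% stretch is resisted far more by a stiff, leather-like
# spring than by a loose, silk-like one:
leather_like = spring_force(1.5, 1.0, stiffness=100.0)
silk_like    = spring_force(1.5, 1.0, stiffness=5.0)
```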
As well, you can have the cloth collide with external objects or with itself. The obstacles can be animated or deformed, and they interact with the cloth model according to the cloth’s and obstacle’s friction.
Although you can apply cloth only to single objects, you can also create a garment made of multiple NURBS surface patches stitched together using any number of points. You must first assemble the different patches into a single surface mesh object, then apply cloth to that object, and set the Stitching parameters in the ClothOp property editor to create seams between the different NURBS surfaces of the same surface mesh model.
To give you a head start on creating cloth, there are a number of presets in the Cloth property editor that let you quickly simulate the look and behavior of different materials, such as leather, paper, silk, or pizza dough.
How to apply cloth to an object
1 Select Animation as the Construction Mode. This tells XSI that you want to use cloth as an animated deformation.
2 Select an object and choose Create > Cloth > From Selection from the Simulate toolbar.
3 Set the cloth’s physical properties, such as mass, friction, and resistance to shearing, bending, and stretching.
4 Apply forces to make the cloth move. Here, a little gravity and a large fan are applied to create the effect of a strong wind blowing on the flag.
5 Select objects as obstacles for collisions and choose Modify > Environment > Set Obstacle. You can also have the cloth collide with itself by activating Self Collision in the ClothOp property page.
6 Play the simulation. To calculate the whole simulation more quickly, go to the last frame of the simulation. You can cache the simulation to files to play it back faster, as well as to scrub the simulation and play it backwards.
You can also set clusters of points to define specific areas of a cloth that you want to be affected by the cloth simulation, then use the Nail parameter to nail down these clusters. For example, you can anchor down clusters at the sides or corners of a flag to keep it from blowing away in the wind. As well, you can animate the Nail parameter as being on or off, making it easy to create the effect of a cloth being grabbed and then let go.
Soft Body Dynamics
As the name indicates, soft bodies are objects that easily deform when they collide with obstacles. In fact, the main reason to create soft bodies is to have collisions with obstacles. You can, for example, use soft body to deform a beach ball that is blown across the sand and gets squashed when it collides with a pail.
How to apply soft body to an object or cluster
1 Select Animation as the Construction Mode. This tells XSI that you want to use soft body as an animated deformation.
2 Select an object or cluster and choose Create > Soft Body > From Selection from the Simulate toolbar. The object can also be animated.
3 Set the soft body physical properties such as mass, friction, stiffness, and plasticity. To give you a head start, click a button on the Presets page to quickly set properties to make the object behave like a rubber ball, an air bag, and more.
4 Apply a gravity and/or wind force. If the soft body is not already animated, you need to apply a force to make it move.
5 Select objects as obstacles for collisions and choose Modify > Environment > Set Obstacle. Then play the simulation and watch the ball bounce!
Soft body is a deform operator, meaning that it moves only an object’s vertices, never the object’s center. Soft body computes the movements and deformations of the object by means of a spring-based lattice whose resolution you can define using the Sampling parameter.
You can use soft body on clusters (such as points and polygons), allowing only that part of an object to be deformed by soft body. For example, you can have just the cluster of points that form a character’s belly be deformed by soft body for some jelly-like fun!
If the soft-body object is animated, you can either preserve its animation or recalculate it according to any forces you apply, such as wind and gravity. If you keep the object’s animation, soft body acts only as a deformer on the object and does not influence its movement.
If you want to convert the soft body simulation to animation, you can plot it as shape animation using the Tools > Plot > Shape command on the Animate toolbar.
Section 14
Shaders
A shader is a miniature computer program that
controls the behavior of the mental ray® rendering
software during, or immediately after, the rendering
process. Some shaders are invoked by mental ray to
compute the color values of pixels. Other shaders can
displace or create geometry on the fly.
Shaders are used to create materials and effects in
just about every part of a scene. An object’s surface
and shadows are controlled by shaders. So are scene
lighting and camera lens effects. Even shaders’
parameters are usually controlled by other shaders.
You can even apply shaders at the render pass level
to affect the entire scene.
What you’ll find in this section ...
• The Shader Library
• Connecting Shaders
• The Render Tree
• Building Shader Networks
• Editing Shader Properties
Basics • 229
Section 14 • Shaders
The Shader Library
XSI’s shaders are divided into several different categories, based on how
they are invoked and where they are used. Every shader can be opened
from an explorer or a render tree and edited through its property
editor.
Surface Shaders
Surface shaders are one of the most important types of shaders. All
geometric objects in a scene have an associated surface shader, even if it
is only the scene’s default shader. Surface shaders determine an object’s
basic color and illumination characteristics. Surface shaders are also
responsible for object transparency, refraction and reflectivity.
Texture Shaders
2D texture shaders apply a two-dimensional texture to an object’s
surface, while 3D texture shaders apply a three-dimensional texture
throughout an object’s volume. They are connected to the object’s
surface shader to define the object’s texture.
Light Shaders
Light shaders define the characteristics of
the scene’s light sources. For example, a
spotlight shader uses the illumination direction to attenuate the
amount of light emitted. A light shader is used whenever a surface
shader uses a built-in function to evaluate a light.
If shadows are used, light shaders normally cast shadow rays to detect
occluding objects between the light source and the illuminated point.
Lens Shaders
Lens shaders are used when a primary ray is cast by the camera. They
may modify the ray’s origin and direction to implement cameras other
than the standard pinhole camera, and they may modify the result of
the primary ray to implement effects such as lens flares, distortion, or
cartoon ink lines.
Volume Shaders
Volume shaders modify rays as they pass through an object (local
volume shader) or the scene as a whole (global volume shader). They
can simulate effects such as clouds, smoke, and fog.
Environment Shaders
Environment shaders are used instead of surface shaders when a
visible ray leaves the scene entirely without intersecting an object, or
when the maximum ray depth is reached. They are used to create
backgrounds for scenes, create quick-rendering reflections, light
scenes with High Dynamic Range Images, and so on.
BBC “Everyman”: Animation by Aldis Animation
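When an environment shader supplies reflections, the color is looked up along the incoming ray direction mirrored about the surface normal, R = D - 2(D·N)N. The sketch below is that formula as plain Python vector math for illustration; it is not XSI shader code.

```python
# Reflecting a ray direction about a unit surface normal: R = D - 2(D.N)N.
# This is the standard direction used to sample an environment for
# reflections; illustrative math only, not an XSI shader.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    """Reflect incoming direction d about the unit normal n."""
    k = 2.0 * dot(d, n)
    return [di - k * ni for di, ni in zip(d, n)]

# A ray travelling straight down onto an upward-facing surface bounces
# straight back up, so the environment is sampled directly overhead.
print(reflect([0.0, -1.0, 0.0], [0.0, 1.0, 0.0]))  # [0.0, 1.0, 0.0]
```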
Toon Shaders
Toon shaders apply non-photorealistic, cartoon-style effects to
objects. They control cel-animation properties like inking and
painting.
To get a full toon effect, it’s best to use the toon material shaders in
conjunction with the toon lens shaders.
Shadow Shaders
Shadow shaders determine how the light coming from a light source is
altered when it is obstructed by an object. They are used to define the
way an object’s shadow is cast, such as its opacity and color.
Lightmap Shaders
Lightmap shaders are used to sample object surfaces and store the
result in a file that can be used later. For example, you can use a
lightmap shader to bake a complex material into a single texture file.
Lightmaps are also used by the Fast Subsurface Scattering and Fast Skin
shaders to store information about scattered light.
Displacement Shaders
Displacement shaders alter an object’s surface by displacing its points.
The resulting bumps are visibly raised and can cast shadows.
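Conceptually, displacement offsets each point along its surface normal by a height value, P' = P + h·N, where the height is typically sampled from a texture. A minimal sketch of that core formula in Python, with a hypothetical helper name:

```python
# Displacement in its simplest form: move each vertex along its unit
# normal by a scalar height. Real displacement shaders sample the height
# from a texture or procedural function; this is only the core formula.

def displace(point, normal, height):
    """Offset a point along its unit normal by `height`."""
    return [p + height * n for p, n in zip(point, normal)]

print(displace([1.0, 0.0, 0.0], [1.0, 0.0, 0.0], 0.25))  # [1.25, 0.0, 0.0]
```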
Photon Shaders
Photon shaders are used for global illumination and caustics. They
process light to determine how it floods the scene. Photon rays are cast
from light sources rather than from a camera.
Material Phenomena
Material phenomena are predefined combinations of shaders, usually
designed to create complex rendering effects, that are packaged as
single shader nodes. Connecting a material phenomenon to an object’s
material prevents the material from accepting any other shaders
directly, though you can extend the phenomenon’s effect by driving its
parameters with other shaders. The Fast Subsurface Scattering and Fast
Skin shaders are examples of material phenomena.
Output Shaders
Output shaders operate on images after they are rendered but before
they are written to a file. They can add effects such as glows, blurs,
background colors, and so on.
Geometry Shaders
Geometry shaders are evaluated before rendering starts. This allows the
shader to introduce procedural geometry into the scene. For example, a
geometry shader might be used to create feathers on a bird or leaves on a
tree.
Now you see it... The cube is a geometry shader object that appears
fully rendered in the region.
Now you don’t! In Shaded and other OpenGL views, only a wireframe
placeholder is visible.
Realtime Shaders
Realtime shaders allow you to build and control the multipass realtime
rendering pipeline, using the render tree. You can connect these
shaders together to achieve a multitude of sophisticated rendering
effects, from basic surface shading to complex texture blending and
reflection.
Tool Shaders
Tool shaders let you create a shader from scratch or extend an existing
one. Although some tool shaders can be used on their own, many of
them must work in conjunction with another to achieve a highly
customized effect. Some examples of tool shaders include:
• Color Channels manipulate the red, green, blue, and alpha
components of a color.
• Store In Channel holds color, scalar, vector, Boolean, and
integer-type data to be output with a render pass.
• Conversion changes one value to another. It is especially useful for
changing a scalar-type node to a color one. Scalar, color, vector,
Boolean, and integer nodes can be converted to any other type of
output using these tools.
• Image Processing defines, manipulates, and tweaks color and
scalar values.
• Math performs math functions such as interpolation,
multiplication, addition, and exponential functions.
• Mixers use one or several equations to mix a few or several colors
or textures into a single color output.
• Share coordinates the sharing of a single value among several
others.
• State exposes raytracing state structures. These shaders are
intended for shader developers who require low-level information
during the raytracing process.
• Texture Generators are the basis for creating textures within a
shader. These shaders create basic 2D and 3D textures.
• Texture Space Controllers manipulate a texture space by
perturbing, rotating, scaling, or using a number of other functions.
• Texture Space Generators generate a specific texture space to be
mapped onto an object in a variety of ways.
Connecting Shaders
There are a number of ways to connect shaders in XSI. Some tools
allow you to connect shaders directly to an object’s material’s ports.
Others allow you to connect shaders to other shaders’ parameters. Still
others are used to apply shaders to render passes or cameras.
Toolbar Commands
You can connect shaders to objects’ materials, or to other shaders’
parameters, directly from the Render toolbar.
• The Get > Material menu lists commonly used surface shaders.
Choosing a shader from the menu creates a new material on the
selected object and connects the chosen surface shader to the
material’s Surface, Shadow, and Photon ports.
Choose Get > Material from the Render toolbar. The menu lists
surface shaders that, when chosen, are automatically connected to the
Surface, Shadow, and Photon ports of a new material. The new
material is assigned to the selected object.
• The Get > Shader menu has sub-menus for all of the object’s
Material node’s inputs, or ports. Each port sub-menu has a
sub-menu of its own listing shaders that you’re likely to connect to
that port. For example, the Surface sub-menu lists commonly used
surface shaders.
Choose Get > Shader from the Render toolbar. Each port sub-menu
connects shaders to one of the material node’s ports. A parameter
with a (=) symbol is already connected to a shader.
• The Get > Texture menu lists commonly used texture shaders and
allows you to connect them to any combination of a surface
shader’s Ambient, Diffuse, Transparency, and Reflection ports.
Choose Get > Texture from the Render toolbar. The menu lists
commonly used texture shaders that can be connected to any
combination of a surface shader’s Ambient, Diffuse, Reflection, and
Transparency ports.
Property Editors
When you apply a shader to an object,
the shader’s property editor opens. To
the right of each parameter, there is a
“plug” connection icon. Clicking the
icon opens a menu that lists shaders
that you can attach directly to the
parameter. Attaching a shader to a
parameter lets you control the
parameter with another shader instead
of a simple color or numeric value.
If a shader is connected to a parameter via one or
more conversion shaders (or the parameter is
simply connected to a conversion shader), a small
yellow “c” appears in the connection icon.
By holding the mouse pointer over the connection icon, you can
display a list of the conversion shaders between the current shader and
the next non-conversion shader.
Hold the mouse pointer over the
connection icon...
...to list the conversion shaders
between the parameter and the
next non-conversion shader.
Conversion Shaders
As you attach networks of shaders to your objects, you’ll notice that
some of them are designated as conversion shaders. Conversion
shaders are utility shaders that modify one shader’s output before it is
connected to another shader’s parameters. Generally speaking,
conversion shaders fall into three categories:
• Type conversion shaders allow you to convert a shader’s output
from one type of information to another. For example, the
Color2Scalar shader converts a shader’s color output into a scalar
value.
• Color conversion shaders allow you to modulate color information
output from a given shader. For example, the Color Correction
shader adjusts the hue, saturation, level, gamma, and contrast of a
shader’s color output.
• Simple math shaders allow you to perform basic mathematical
operations on the output of a given shader. For example, the Scalar
Share shader lets you share a single scalar value between several
other shaders.
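As an illustration of what a type-conversion shader such as Color2Scalar does, the sketch below collapses an RGB color to a single scalar using Rec. 601 luma weights. The weights are an assumption made for this example; the guide does not state which weighting XSI’s shader uses.

```python
# Sketch of a Color2Scalar-style type conversion. The Rec. 601 luma
# weights (0.299, 0.587, 0.114) are assumed here for illustration; the
# exact weighting used by XSI's shader is not documented in this guide.

def color_to_scalar(r, g, b):
    """Collapse an RGB color to a single scalar in the 0-1 range."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def scalar_to_color(s):
    """The reverse conversion: a scalar becomes a grey RGB color."""
    return (s, s, s)

print(color_to_scalar(1.0, 1.0, 1.0))   # white collapses to full intensity
print(scalar_to_color(0.5))             # a mid grey
```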
Shader Stacks
Some scene elements, like render passes and cameras, have shader
“stacks” in their property editors. Shader stacks are used to apply
shaders that affect the whole scene, rather than individual objects.
For example, a render pass has four shader stacks: one for environment
shaders, one for output shaders, one for volume shaders, and one for lens
shaders. Cameras have a single shader stack which is used to apply lens
shaders.
Applies a shader to the
stack.
Removes a shader from
the stack.
Opens the selected
shader’s property editor.
Lists every shader
applied to the camera.
When you open a shader’s property editor, you can tell which of its
parameters are connected to other shaders via conversion shaders
because their connection icons are red and marked with a small “c”.
The Render Tree
The render tree lets you connect shaders—or nodes—together to build
a visual effect. Each node exposes a set of properties that can be
dynamically linked by connecting the output of one shader to the input
of another. For example, you can use a 2D or 3D texture to control the
color, specularity, or reflectivity of a material.
Every shader’s node exposes inputs (often called ports) for most or all
of the shader’s parameters. You connect shaders together by simply
dragging a connection from one shader’s output to a parameter’s input.
This makes the render tree the most versatile tool for connecting
shaders to objects, and to one another, to build a material or effect.
To open the render tree, choose View > Rendering/Texturing > Render Tree
from the main menu or press 7.
Nodes Menu
Allows you to choose
shaders to insert into the
render tree workspace.
Shader Input
Shader parameters in render tree nodes
can be connected to other shaders.
Parameters are color-coded to indicate
what type of data they can accept.
Parameter Group
Parameters of complex
shaders are grouped to
save space. Groups can be
expanded or collapsed.
Shader Output
Every shader outputs information that can be
used to control material attributes, or parameters
of other shaders. The color of a shader’s output
divot indicates the type of data that the shader
outputs (color values, scalar values, and so on).
Material Node
This node is where you
connect the shaders
that define an object’s
look to the object’s
material.
Image Clip Thumbnail
The render tree can display
thumbnails of all of the 2D image
textures used in the effect.
Connection Arrow
To connect a shader to another shader’s ports, you drag connection
arrows from the source shader’s output divot to the target
parameter’s input divot.
Shader Node
Every shader is represented by a node whose color indicates the
shader’s type.
Texture Layer
You can add texture layers to most shaders. This allows you to blend
textures to control one or more of the shader’s parameters.
The Material Node
Every object has a material node: without it, an object
wouldn’t render. This node acts like a placeholder for
every shader that can be applied to an object. Shaders
can alter an object by invoking one or several of its
shader input types: Surface, Volume, Environment,
Contour, Displacement, Shadow, Photon, Photon
Volume, Bump Map and so on.
Nodes and Codes
Every shader node that appears in the render tree is color coded, as are
each of its parameters. This coding system helps you visualize which
shaders are doing what within their respective render tree structures.
Node colors indicate the shader type: Material node, Surface shader,
Texture shader, Light shader, Lens/camera shader, Lightmap shader,
Realtime shader, Material phenomenon, Volume shader, Output
shader, and Environment shader each have their own color.
Click the arrow to expand or collapse a node. Click the dot to create a
connection arrow. Selected nodes are highlighted in white.
Shader node inputs and outputs are also color coded. A node’s output
is indicated by a connection point (colored dot) in the top right of the
node, while each parameter’s input is indicated with a connection
point to the left of the parameter’s name. The color of a connection
point identifies what type of input value the parameter will accept, and
what type of value it will output.
The following list describes what type of value each input/output
color corresponds to:
• Color: Returns or outputs a color (RGB) value. These inputs/outputs
are usually used in conjunction with the surface of an object or
when defining a light or camera.
• Scalar: Represents a scalar input/output with any value between
0 and 1.
• Vector: Represents an input/output that corresponds to vector
positions or coordinates. As an output, it returns a specific vector
position. As an input, it is required to map a texture, for example,
to a specific location.
• Boolean: Represents an input/output that corresponds to a 0 or 1,
or On/Off.
• Integer: Consists of a single integer (such as 2, 73, or 300).
• Texture/Image Clip: Accepts or returns an image file.
• RealTime: Accepts connections from other realtime shaders and
outputs to other realtime shaders or to the material node’s
realtime port.
• Lightmap: Outputs the result of a lightmap shader to the Material
node’s Lightmap port.
• Material Phenomenon: Outputs the result of a material phenomenon
shader to the Material node’s Material port.
Building Shader Networks
The process of building shader networks in the render tree is best
explained visually. Essentially, you create an effect by connecting
shaders to an object’s material, using other shaders to control those
shaders’ parameters, and so on.
There are no hard and fast rules for how shaders should be connected,
and experimenting with different connections is usually rewarding.
What follows is a simple example of how to connect shaders in the
render tree to build an object’s material.
1. To begin with, the mug has a Phong shader connected to its material
node’s Surface port to create basic surface shading: ambient and
diffuse colors, specular highlights, and, in this case, some reflectivity
as well.
2. Since there are no other objects in the scene, the mug’s reflectivity is not
apparent. Connecting an Environment map shader to the material node’s
Environment port makes the reflectivity visible and creates some reflections
on the mug’s surface.
3. Now it’s time to add some color and detail. Connecting two textures to a
Mix2Colors shader blends the textures together. The combined result is
then connected to the Phong shader’s Ambient and Diffuse ports, coloring
the mug’s surface.
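In its simplest mode, a blend like the one Mix2Colors performs is a per-channel weighted average of two colors. The sketch below assumes a plain linear mix; XSI’s shader also offers other mix modes that are not shown here.

```python
# A Mix2Colors-style blend reduced to its simplest case: a linear
# per-channel mix. Weight 0.0 returns the first color, 1.0 the second.
# XSI's shader supports additional mix modes beyond this sketch.

def mix_colors(color1, color2, weight):
    """Linearly blend two RGB colors by `weight`."""
    return tuple((1.0 - weight) * a + weight * b
                 for a, b in zip(color1, color2))

red, blue = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)
print(mix_colors(red, blue, 0.5))   # (0.5, 0.0, 0.5)
```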
4. Connecting a Bump Map generator shader to the material node’s Bump
Map port adds some bumpiness to the mug’s surface. Note how this
affects the reflections from the environment map. The mug now looks
more like stoneware than porcelain.
5. Finally, connecting an Ambient Occlusion shader between the Phong
shader and the material node’s Surface port darkens the mug where it
occludes itself. The Phong shader’s branch, which includes the textures,
is connected to the Ambient Occlusion shader’s Bright Color port, while
the Dark Color is set to black. The Ambient Occlusion effect is most
visible on the inside of the mug and the inner surface of the handle.
Editing Shader Properties
An important part of the process of fine-tuning a scene is editing its
shader properties. You can edit every shader using its property editor.
Property editors contain the various parameters that define the
properties of individual objects, whether they be geometric objects,
lights, or cameras. You can display and use multiple property editors
simultaneously.
The easiest way to open a shader’s property editor is to double-click the
shader’s node in the render tree.
Keyframe controls
Color Box
Load and Save presets
Connection icons for controlling a parameter by connecting it to a shader.
A question mark means that another shader is controlling this
parameter’s value.
Information about parameters listed in the property editor is available
from the XSI Reference help for that property editor.
Shader Presets
Presets are useful if you want to select a particular shader and modify
some of its attributes, save the settings, and load them as a shader
preset for more than one object. This saves you from having to select
each shader and set the same parameters each time for other objects.
Presets are files that contain values for all settings in a property editor.
You can load a preset, or you can save the values in a property editor
and name them as a preset. You can then load and use them again later.
Presets have a .Preset file name extension.
• To save a shader preset, open the shader’s property editor and click
the Save icon in the upper-right area of the property editor.
• To load a shader preset, open the shader’s property editor and click
the Load icon in the upper-right area of the property editor. A
browser will open, from which you can choose the preset to load.
You can find a number of shader presets in the Shader Presets toolbar
(choose View > Toolbars > Shader Presets from the main menu). You
can apply these presets by simply dragging and dropping them onto
scene objects.
Section 15
Materials and Surface
Shaders
In XSI, an object’s look and feel is defined by one or
more shaders that are plugged into the object’s
material. The material itself provides access to the
object’s attributes while the shaders control how
those attributes appear when rendered. This section
introduces ways of creating and working with
materials themselves.
This section also introduces surface shaders —
commonly used shaders that control basic object
attributes, such as the way the object reacts to direct
and indirect illumination, the object’s transparency
and reflectivity, and so on.
What you’ll find in this section ...
• About Materials
• Material Libraries
• Creating and Assigning Materials
• The Material Manager
• Surface Shaders
• Basic Surface Color Attributes
• Reflectivity, Transparency, and Refraction
About Materials
Every object needs a material. In XSI, the term “material” is used to
refer to the cumulative effect of all of the shaders that you use to alter
an object’s look and feel. Strictly speaking, though, materials in XSI are
really just containers for, or connection points to, an object’s various
attributes. If an object’s material has no shaders attached to it, nothing
defines the object’s look, and the object won’t render.
The easiest way to understand a material is to look at it in the render
tree, where it is represented by a Material node. The Material node
(shown on the
right) lists all of the inputs to a given material. These
inputs are sometimes referred to as “ports.” Each port
controls a subset of object attributes. When the
material is assigned to an object, the shaders that you
connect to these ports alter the corresponding
attributes.
For example, the Surface port controls object surface
characteristics. By connecting a shader or a network
of shaders to it, you can change an object’s color, transparency,
reflectivity, and so on. The important thing to understand is that nearly
every change you make to an object’s appearance involves connecting
shaders to the object’s material.
The Default Scene Material
Every new scene has a default material, called Scene_Material, which is
assigned to the scene’s root in branch mode. An object (in a hierarchy
or not) that does not inherit a material from a parent, and does not
have a locally-defined material, inherits the scene’s default material. In
the explorer, you can view the default material in the material library’s
hierarchy, or as a sub-node of the scene root, which you can display by
choosing Local Properties from the Show menu.
When you assign a local material to an
object, it replaces the default scene
material for that object only. If you
remove or delete the object’s local
material, the object inherits the default
scene material again. You can modify
the default scene material as you would
any other material and the changes are
applied to any objects that inherit it.
Default Scene Material
If you delete the default scene material, the least recently created
(oldest) material in the scene becomes the new default material, and is
assigned to all objects to which the previous default material was
assigned (whether explicitly or through propagation).
Materials and Surface Shaders
Surface shaders are some of the most commonly used shaders in XSI.
Each one defines an object’s basic surface characteristics, like color,
transparency, reflectivity, specularity, and so on, according to a specific
shading model. Choosing the correct surface shader can go a long way
towards helping you get the look you want.
That being said, it’s worth noting that all new materials that you create
in XSI start out with some kind of surface shader attached to them. For
example, if you create a material from within a material library, it has a
Phong shader attached to its Surface, Shadow, and Photon ports. If you
create a material using a command from the Render toolbar’s Get >
Material menu, you can choose a surface shader to attach to the
material. This provides basic surface shading so that the material is
renderable from the beginning.
By default, new materials
have a surface shader, like
the Phong shader shown
here, attached to them.
Material Libraries
Most properties in XSI are owned by the scene elements to which
they’re applied. Materials, on the other hand, belong to material
libraries. Material libraries are common containers for all of the
materials in a scene. Each time you create a material, it’s added to a
material library. Although all of the materials in a scene belong to a
library, they are used only by the objects to which they are assigned.
You can view a scene’s material libraries by opening an explorer and
setting its scope to Materials.
Set the explorer scope to
Materials to view the
material library.
The current library’s node
appears directly under the
Materials node.
Expand the List node to list
all of the material libraries in
the scene.
Storing materials this way makes it easy to share a single material
between several objects. It also allows you to access and edit all of the
materials in a scene from a single place. Furthermore, because
materials belong to libraries and not to individual objects, you can
delete an object from the scene but keep its material for later use. If
you no longer want to use a material, you can simply delete it once,
regardless of the number of objects to which it’s assigned. You can
create as many material libraries as you need in a scene.
The Default Material Library
By default, every new scene has a material library called DefaultLib.
Initially, the library contains only the default scene material, but all new
materials that you create in the scene are added to the default library
until you create or import a new library and set it as the current library.
The Current Material Library
Unless you explicitly create a new material in another library, all newly
created materials are added to the current library. For example, if you
create a material using any of the commands in the Render toolbar’s
Get > Material menu, it is added to the current library.
The current library appears under the Materials folder along with the
List folder. The List folder lists all of the scene’s material libraries.
Expand a library’s node to list all of its materials.
The current library’s name appears in italics. To make a library the
current library, right-click it and choose Set as Current Library from
the menu.
Storing Libraries: Internal or External
By default, material libraries are stored internally as part of the scene.
However, you can store them externally, as binary or text dotXSI files,
which allows you to share them between multiple scenes.
• Internal Libraries: by default, every new material library is stored
internally. Storing a material library internally means that it’s part
of the scene and has full access to all scene data. This allows you to
do things like write expressions that reference particular materials
or link a material’s parameters to an object’s parameters.
• External Libraries: storing material libraries externally allows you
to share them between scenes within a project or between projects.
By default, external libraries are stored in the MatLib folder, which
is part of the project structure; however, you can store external
material libraries anywhere. External material libraries can be
imported into any scene, either directly, in which case they become
part of the scene, or by reference, in which case they are read-only.
Creating and Assigning Materials
Giving an object a material is the first step in defining its look. There
are a couple of different tools you can use to create new materials and
assign them to scene objects. Once you create a material, it belongs to a
material library, and you can assign it to as many objects as you’d like.
Using the Render Toolbar
The Render toolbar’s Get > Material menu provides several ways to
create new materials and assign them to objects.
Choose Get > Material
from the Render toolbar.
The menu lists surface shaders
that are automatically connected
to the Surface, Shadow and
Photon ports of a new material.
The new material is assigned to
the selected object.
• Choosing any surface shader from this menu creates a new material
with the chosen surface shader connected to its Surface, Shadow,
and Photon ports. The new material is assigned to the selected
object.
By default, new materials
have a surface shader, like
the Phong shader, attached
to them.
• Choosing Assign Material starts a pick session in which you can
choose to assign materials to various targets.
The target depends on what was selected when you chose the
command: one or more objects, a material, one material plus one
or more objects, components of an object (polygons, clusters of
polygons, subsurfaces, etc.), or nothing at all.
Using the Explorer
You can create materials and assign them to objects using the explorer.
• To create a material, right-click a material library and choose
Create Material from the menu. A new material, consisting of a
material node with a Phong shader connected to its Surface,
Shadow, and Photon ports, is added to the library, but is not
assigned to any objects.
• To assign a material, drag and drop the material onto any target
(object, group, cluster, and so on). This is useful when you want to
quickly assign a material to a single target.
You can drag and drop a material onto an object in the explorer or in
any 3D view to assign the material to that object.
Assigning Materials to Polygons/Polygon Clusters
Using the same tools and techniques that you use to assign materials to
objects, you can assign materials locally to selections of polygons and/
or polygon clusters on a polygon mesh object. If you choose the
former, a cluster is created from the selection. The cluster’s local
material always overrides the one assigned to the entire object.
Polygon mesh object with global material assigned. Object with
specific polygons selected. Local material assigned to selected polygons.
In the explorer, a cluster’s material appears under the cluster’s node,
rather than directly under the object’s node. To access it, expand the
object’s Polygon Mesh > Clusters > name of cluster node.
The cluster’s material is here. The object’s material is here.
If you remove a material from a cluster, the cluster inherits the
material either assigned to or inherited by the object.
The Material Manager
The material manager is a tool for conveniently managing and editing
all your materials and libraries. It allows you to temporarily assign your
materials to special material-manager-only objects and preview them
on shader balls.
To open the material manager, press Ctrl+7. The material manager lets
you apply, edit, and manage materials as you like. Its different areas
are outlined here.
The command bar provides tools for applying materials, such as
creating, duplicating, or deleting materials, and tools for managing
material libraries.
The left panel displays
either the scene or clip
explorer.
The shelf displays
shaderballs for the materials
in your scene. Multiple
libraries appear on separate
tabs.
Click a material to select it, or
drag a material onto an
object or cluster in the scene
to apply it.
In the Scene explorer, you can switch between local materials (applied
locally on the object or cluster itself) and applied materials.
Render tree is the default
view displayed in the
bottom panel.
Selecting a material in the
explorer highlights it in the
shelf and displays it in the
bottom panel.
These tabs can display one of several views in the bottom panel:
• The selected material in the render tree, which is shown here.
• The selected material in the texture layer editor.
• A list of image clips used by the selected material. Right-click on a
clip’s thumbnail for a context menu that allows you to edit a clip’s
properties and other options.
• A list of objects and clusters that use the selected material.
Surface Shaders
Surface shaders are some of the most commonly used shaders in XSI.
Each one defines an object’s basic surface characteristics, like color,
transparency, reflectivity, specularity, and so on, according to a specific
shading model.
Shading models determine how an object’s surface reacts to scene
lighting. Several different shading models are available. Choosing the
appropriate one can go a long way toward getting your objects looking
the way you want. Each shading model processes the relation of surface
normals to the light source to create a particular shading effect.
Each of the following shading models is available from any toolbar’s
Get > Material menu.
Phong
Uses ambient, diffuse, and
specular colors. This shading
model reads the surface
normals’ orientation and
interpolates between them to
create an appearance of
smooth shading. It also
processes the relation between
normals, the light, and the camera’s point of view to create a specular
highlight.
The result is a smoothly shaded object with diffuse and ambient areas
of illumination on its surface and a specular highlight so that the object
appears shiny, like a billiard ball or plastic.
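The shading models described here all start from the same normals-and-lights arithmetic. As an illustrative sketch only (not XSI's internal implementation), Lambert keeps just the ambient and diffuse terms, while Phong adds a specular term based on the reflection vector and a decay exponent:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return tuple(x / m for x in v)

def phong(normal, to_light, to_camera, ambient, diffuse, specular, decay):
    """Illustrative Phong: ambient + diffuse (N.L) + specular ((R.V)^decay)."""
    n, l, v = normalize(normal), normalize(to_light), normalize(to_camera)
    n_dot_l = max(dot(n, l), 0.0)
    # Reflection of the light direction about the surface normal.
    r = tuple(2.0 * n_dot_l * ni - li for ni, li in zip(n, l))
    spec = max(dot(r, v), 0.0) ** decay if n_dot_l > 0.0 else 0.0
    return tuple(a + d * n_dot_l + s * spec
                 for a, d, s in zip(ambient, diffuse, specular))

def lambert(normal, to_light, ambient, diffuse):
    """Illustrative Lambert: same idea with no specular highlight."""
    n_dot_l = max(dot(normalize(normal), normalize(to_light)), 0.0)
    return tuple(a + d * n_dot_l for a, d in zip(ambient, diffuse))
```

Increasing the decay exponent tightens the specular highlight, which is why Phong surfaces read as plastic-shiny while Lambert surfaces stay matte.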
Lambert
Uses the ambient and diffuse
colors to create a matte
surface with no specular
highlights. It interpolates
between normals of adjacent
surface triangles so that the
shading changes progressively,
creating a matte surface. The
result is a smoothly shaded object, like an egg or ping-pong ball.
Blinn
Uses diffuse, ambient, and
specular color, as well as a
refractive index for
calculating the specular
highlight. Blinn produces
results that are virtually
identical to Phong except that
the shape of the specular
highlight reflects the actual lighting more accurately when there is a
high angle of incidence between the camera and the light.
Blinn is useful for rough or sharp edges and for simulating a metal surface.
The specular highlight also appears brighter than in the Phong model.
Cook-Torrance
Uses diffuse, ambient, and
specular color, as well as a
refractive index used to
calculate the specular
highlight. It reads the surface
normals’ orientation and
interpolates between them to
create an appearance of smooth shading. It also processes the relation
between normals, the light, and the camera’s point of view to create a
specular highlight.
Cook-Torrance produces results that are somewhere between Blinn
and Lambert and is useful for simulating smooth and reflective objects,
such as leather. Because this shading model is more complex to
calculate, it takes longer to render than the other shading models.
Strauss
Uses only the diffuse color to
simulate a metal surface. The
surface’s specularity is defined
by Smoothness and
“metalness” parameters that
control the diffuse-to-specular
ratio as well as reflectivity and
highlights.
Anisotropic
Sometimes called Ward, this
shading model simulates a
glossy surface using an
ambient, diffuse, and a glossy
color. To create a “brushed”
effect, such as brushed
aluminum, it is possible to
define the specular color’s
orientation based on the object’s surface orientation. The specular is
calculated using UV coordinates.
Constant
Uses only the diffuse color. It ignores the orientation of surface
normals. All the object’s surface triangles are considered to have the
same orientation and be the same distance from the light.
It yields an object whose
surface appears to have no
shading at all, like a paper
cutout. This can be useful
when you want to add static
blur to an object so that there
is no specular or ambient
light.
Toon
This model begins with a
constant-shading-like base
color. Ambient lighting, as
well as highlights and rim
lights, is composited over the
base color to produce the
final result.
The result is a cel-animation type of shading that can vary enormously
depending on how you configure the highlights and rim lights. The
toon shading model is typically used in conjunction with the Toon Ink
Lens shader (applied to the render pass camera), which creates the
cartoon-style ink lines.
Basic Surface Color Attributes
You can create a very specific color for an object by defining its
ambient, diffuse, and specular colors separately on the Illumination
page of its surface shader property editor. To open an object’s surface
shader property editor, select the object and choose Modify > Shader
from the Render toolbar.
Diffuse
This is the color that the light
scatters equally in all
directions so that the surface
appears to have the same
brightness from all viewing
angles. It usually contributes
the most to an object’s overall
appearance and can be considered the “main” color of the surface.
Ambient
This color simulates a uniform, non-directional lighting that pervades
the entire scene. It is multiplied by the scene ambience value and
blended with the diffuse color. Often, the ambient color is set to the
same value as the diffuse color, allowing the scene ambience to provide
the ambient color.
The combined result of the ambient, diffuse, and
specular colors/lighting contributions.
Not all shading models support all of these basic characteristics. For
example, only the Phong, Blinn, Cook-Torrance, and Anisotropic
shading models support specular highlights (although the Strauss
shader’s Smoothness and Metalness parameters affect specularity).
Similarly, the Strauss shader does not support an ambient color, while
most other models do.
It’s also worth noting that because different shading models compute
these basic characteristics differently, the parameters that control the
attributes vary from one property editor to another. For example, the
Anisotropic shader has much more elaborate specular highlight
controls than the Phong shader.
Specular
This is the color of shiny
highlights on the surface. It is
usually set to white or to a
brighter shade of the diffuse
color. The size of the
highlight depends on the
defined Specular Decay
value. Specular highlights are
not visible in all shading models.
Reflectivity, Transparency, and Refraction
In addition to controlling an object’s basic surface shading
characteristics, surface shaders also control reflectivity, transparency,
and refraction. Parameters for controlling these attributes are on the
Transparency/Reflection tab of the surface shader’s property editor.
To open an object’s surface shader property editor, select the object and
choose Modify > Shader from the Render toolbar.
Reflectivity
A surface shader’s Reflection parameters control an object’s reflectivity.
The more reflective an object is, the more other objects in the scene
appear reflected in the object’s surface.
As an object becomes more reflective, its other surface parameters,
such as those related to diffuse, ambient, and specular areas of
illumination, become less visible. If an object’s material is fully
reflective, its other material attributes are not visible at all.
Reflectivity values are defined using color sliders. Setting the color to
black makes the object completely non-reflective, while setting the
color to white makes it completely reflective. If necessary, you can even
control reflectivity in individual color channels.
No reflectivity in gray ball’s material; 35% reflectivity.
Controlling Reflectivity with Textures
You can also control reflectivity using a texture by connecting the
texture to the surface shader’s reflectivity input.
In this example, the object’s
surface shader’s reflectivity
parameter is connected to a
simple black and white stripe
texture.
The white areas are
reflective, while the black
areas are not.
Normally, grayscale images are used since black, white and shades of
gray adjust reflectivity uniformly in all color channels. Black areas of the
image make the corresponding portions of the object non-reflective,
white areas make the corresponding portions of the object completely
reflective, and gray areas make the corresponding portions of the object
partially reflective.
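Per-channel reflectivity reduces to a linear blend between the surface's own shading and the reflected color, driven by the reflectivity color or texture sample. A minimal illustrative sketch (not XSI's shader code):

```python
def mix(a, b, t):
    """Linear blend: t = 0 gives a, t = 1 gives b."""
    return a + (b - a) * t

def apply_reflectivity(surface_rgb, reflected_rgb, reflectivity_rgb):
    """Blend the surface's own shading with the reflected color,
    channel by channel, using a reflectivity color (or texture sample)."""
    return tuple(mix(s, r, k)
                 for s, r, k in zip(surface_rgb, reflected_rgb, reflectivity_rgb))

base = (0.8, 0.2, 0.2)   # the object's own shaded color
refl = (0.1, 0.3, 0.9)   # color arriving along the reflected ray
print(apply_reflectivity(base, refl, (0.0, 0.0, 0.0)))  # black sample: surface only
print(apply_reflectivity(base, refl, (1.0, 1.0, 1.0)))  # white sample: pure mirror
```

A gray sample gives a partial mirror, which is why grayscale images adjust reflectivity uniformly across channels.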
Transparency
A surface shader’s Transparency parameters control an object’s
transparency. The more transparent an object is, the more you can see
through it.
Controlling Transparency with Textures
As with reflectivity, you can also control transparency using a texture
by connecting the texture to the surface shader’s transparency input.
75% transparency
In this example, the object’s
surface shader’s
transparency parameter is
connected to a simple black
and white stripe texture.
The white areas are
transparent, while the black
areas are opaque.
70% transparency with
30% reflection.
As with reflectivity, transparency affects the visibility of an object’s
other surface attributes. You can compensate for this by increasing the
attributes’ values, such as changing specular color values that were 1 on
an opaque object to 10 or higher on a transparent object.
Normally, grayscale images are used since black, white and shades of
gray adjust transparency uniformly in all color channels. Black areas of
the image make the corresponding portions of the object opaque,
white areas make the corresponding portions of the object completely
transparent, and gray areas make the corresponding portions of the
object partially transparent — or translucent.
Transparency values are also defined using color sliders. Setting the
color to black makes the object completely opaque, while setting the
color to white makes it completely transparent. If necessary, you can
even control transparency in individual color channels.
Refraction
When transparency is incorporated into an object’s surface definition,
you can also define the refraction value. Refraction is the bending of
light rays as they pass from one transparent medium to another, such
as from air to glass or water.
Refraction value of 0.9
Refraction value of 1.1
You can set the index of refraction from a surface shader’s property
editor. The default value is 1, which represents the density of air. This
value allows light rays to pass straight through a transparent surface
without bending. Higher values make the light rays bend, while values
less than 1 make light rays bend in the opposite direction, simulating
light passing from air into an even less dense material (such as a
vacuum).
Refractive index values usually vary between 0 and 2, but you can type
in higher values as needed.
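This bending follows Snell's law. A minimal vector sketch of it (illustrative only, not XSI's shader code); at an index ratio of 1 the ray passes straight through, and when the ratio is too steep the ray reflects back internally:

```python
import math

def refract(incident, normal, ior_from, ior_to):
    """Bend a unit incident direction as it crosses into a medium with a
    different index of refraction (Snell's law, vector form).
    Returns None on total internal reflection."""
    eta = ior_from / ior_to
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection: no transmitted ray
    return tuple(eta * i + (eta * cos_i - math.sqrt(k)) * n
                 for i, n in zip(incident, normal))

# Matching indices (e.g. air to air) pass the ray straight through:
straight = refract((0.0, -1.0, 0.0), (0.0, 1.0, 0.0), 1.0, 1.0)
```

Entering a denser medium (say, air at 1.0 into glass at 1.5) bends the transmitted ray toward the surface normal; leaving it bends the ray away.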
Section 16
Texturing
Texturing is the process of adding color and texture
to an object. You can use textures to define
everything from basic surface color to more tactile
characteristics like bumps or dirt. Textures can also
be used to drive a wide variety of shader parameters,
allowing you to create maps that define an object’s
transparency, reflectivity, bumpiness, and so on.
What you’ll find in this section ...
• How Surface and Texture Shaders
Work Together
• Types of Textures
• Applying Textures
• Texture Projections and Supports
• Editing Texture Projections
• UV Coordinates
• Editing UV Coordinates in the Texture Editor
• Texture Layers
• Bump Maps and Displacement Maps
• Baking Textures with RenderMap
• Painting Color at Vertices
How Surface and Texture Shaders Work Together
Surface shaders and texture shaders work together to create an object’s
look. A surface shader defines how an object responds to lighting, and
defines other basic characteristics such as transparency and reflectivity.
A texture shader applies either an image or a procedural texture onto
the object. The texture doesn’t “cover” the surface shader; rather, it is
combined with the surface shader such that the object is textured and
responds correctly to scene lighting.
A Blinn shader connected to the Surface port
of the cow’s body’s material node. The hoofs,
horns, and so on are textured separately.
In most cases, a surface shader is connected to the material node’s
Surface port, and then a texture shader is connected to the Ambient
and Diffuse parameters of the surface shader. The following example
illustrates how combining texture shaders and surface shaders affects
the final result.
A texture shader connected to the
Surface port of the cow’s body’s material.
Note that without a surface shader, the
lighting appears constant.
Using the texture shader to drive the surface
shader’s Ambient and Diffuse colors produces a
textured cow that responds properly to lighting.
Types of Textures
XSI allows you to use two different types of textures: image textures,
which are separate image files applied to an object’s surface, and
procedural textures, which are calculated mathematically.
Image Textures
Image textures are images that can be wrapped around an object’s
surface, much like a piece of paper wrapped around an object. To use a
2D texture, you start with any type of picture file (PIC, TIFF, PSD, and
so on). These can be scanned photos or any other files containing RGB
or RGBA data that describes all the pixels in an image.
2D textures are wrapped around objects.
Procedural Textures
Procedural textures are generated mathematically, each according to a
particular algorithm. Typically, they are used to simulate natural
materials and patterns such as wood, marble, rock, veins, and so on.
XSI’s shader library contains both 2D and 3D procedural textures. 2D
procedurals are calculated on the object’s surface — according to their
texture projections — while 3D procedurals are calculated through the
object’s volume. In other words, unlike 2D textures, 3D textures are
projected “into” objects rather than onto them. This means they can be
used to represent substances having internal structure, like the rings
and knots of wood.
3D textures are defined throughout an object.
Image Sources and Clips
Every time you select an image to use as a texture or for rotoscopy, an
image clip and an image source of the selected image are created.
• An image source is not really a usable scene element. It is merely a
pointer to the original image stored on disk. It is defined as
read-only and is listed in your scene in the Sources folder of the
Scene Root. It does not have to be reloaded when you re-open
your scene. Image sources can be stored within your project or
outside of it.
• An image clip is a copy, or instance, of an image source file. Each
time you use an image source, an image clip of it is created. You can
have as many clips of the same source as you wish. You can then
modify the image clip without affecting the original source image.
Clips are useful because they allow you to create different
representations of the same texture image (source), such as five
different blur levels of the same source image. Also, clips are
memory-efficient because the source is only loaded once,
regardless of the number of clips created from it.
Applying Textures
There are a number of ways to connect textures to objects in XSI.
These include:
• The Get > Texture menu, which lists commonly used texture
shaders that can be connected to any combination of a surface
shader’s ambient, diffuse, transparency, and reflection ports.
• The parameter connection icon menu in a shader’s property
editor, which lists textures that you can attach directly to the
parameter. Attaching a texture to a parameter lets you control the
parameter with a texture instead of a simple color or numeric
value. This is a convenient way to connect a texture to a surface
shader’s Ambient and Diffuse ports immediately after applying
the surface shader to the object.
• The render tree, where you can choose a texture from the
Nodes > Texture menu. Once you choose a texture, it is added to
the render tree workspace and you can connect it to the material’s
or other shaders’ ports.
Choosing a texture from the Nodes > Texture
menu adds it to the render tree workspace.
Adding More Textures
To add a texture in addition to one that is already applied, choose
Modify > Texture > Add from the Render toolbar.
This adds a new texture layer to the object’s surface shader. The
parameters that you add the new texture to are added to the layer, and
the layer’s texture is blended with them.
Choose Modify > Texture > Add from
the Render toolbar.
The menu lists texture shaders
that can be blended with the
surface shader via a new texture
layer.
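Blending a layer with the parameters beneath it can be pictured as a weighted mix applied in stack order. This is a rough illustrative sketch (a simple linear blend, not the texture layer editor's full set of blend modes):

```python
def blend_layer(base_rgb, layer_rgb, weight):
    """Blend one texture layer over a base color: weight 0 leaves the
    base untouched, weight 1 replaces it with the layer's color."""
    return tuple(b + (l - b) * weight for b, l in zip(base_rgb, layer_rgb))

def apply_layers(base_rgb, layers):
    """Apply a stack of (color, weight) layers in order, bottom to top."""
    color = base_rgb
    for layer_rgb, weight in layers:
        color = blend_layer(color, layer_rgb, weight)
    return color
```

Because layers apply in order, reordering the stack changes the result whenever weights are less than 1.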
Texture Projections and Supports
Whenever you apply a texture to an object, a texture projection and
texture support are created.
The texture support is a graphical representation of how the texture is
projected on the object. It defines the type of projection and applies
textures to your 3D objects using that definition.
By default, an object’s texture support is constrained to the object;
otherwise, animated objects would move through space without their
projection. Transforming the texture support is a useful way of
animating or repositioning a texture on an object.
Texture projections
Texture support
Texture projections exist on the support and record the
correspondence between pixels in the texture and points on the object’s
surface—in other words they define where the texture is projected on
the object.
You can transform a texture projection on a given support to define the
part of the object to which the texture is applied. You can then add any
number of projections, adjacent or overlapping, to the support.
The sphere shown below has three texture projections connected to its
support. The wireframe view on the left shows how the projections are
positioned, and the textured view on the right shows the rendered
result.
Rendered result of how the textures are
projected onto this sphere.
Types of Texture Projections
Choosing the right type of texture projection is an important part of
the texturing process. The more closely the projection conforms to the
original shape of the object, the less you’ll have to adjust the texture to
get the object looking just right. This section describes the types of
texture projections that are available to you.
All of the projections described can be applied to objects from the
Render toolbar’s Get > Property > Texture Projection menu.
You can also create and apply texture projections from any texture
shader’s property editor. Every texture shader needs a projection to
define where the texture should appear on the object.
Planar Projections
Planar projections are used for mapping textures onto an object’s XY, XZ, and YZ
planes. By default, the projection plane is one pixel smaller than the surface plane,
therefore no “streaking” or distortion occurs on the object’s other planes.
Cylindrical Projections
If you map the picture file cylindrically, it is
projected as if wrapped around a cylinder.
Lollipop Projections
A lollipop projection is a spherical-type projection that stretches the
texture over the top of the object so its corners meet on the bottom,
like the wrapper of a lollipop. A single pinch point occurs at the
-Y pole.
Spherical Projections
A standard spherical projection stretches the texture over the front of
the object so that its edges meet at the back. Distortion occurs towards
the pinch points at the object’s +Y and -Y poles.
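These projection types reduce to simple coordinate math. A minimal sketch of planar and spherical mappings, illustrative only (XSI computes these internally, and the plane size here is an assumed parameter):

```python
import math

def planar_xy_uv(point, size=2.0):
    """Project onto the XY plane: ignore Z, remap X and Y from
    [-size/2, size/2] to [0, 1] texture coordinates."""
    x, y, _ = point
    return (x / size + 0.5, y / size + 0.5)

def spherical_uv(point):
    """Wrap the texture around a sphere centered at the origin.
    U follows longitude, V follows latitude; distortion concentrates
    at the +Y and -Y pinch points."""
    x, y, z = point
    r = math.sqrt(x * x + y * y + z * z)
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)
    v = 0.5 + math.asin(y / r) / math.pi
    return (u, v)
```

Near the poles of the spherical mapping, many surface points share almost the same V but very different U values, which is exactly where the visible pinching comes from.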
Cubic Projections
A cubic projection assigns an object’s polygons to a specific face of the cube
based either on the orientation of their normals, or on their positions relative to
the cubic texture support. The texture is then projected onto each face using a
planar or spherical projection method.
By default, the entire texture is projected onto each face. However, you can
choose from a number of different cubic projection presets. You can also
transform each face of the cube individually and save the transformations as
presets of your own.
UV Projections
UV projections are useful for texturing NURBS surface objects. They behave
like a rubber skin stretched over the object’s surface. The points of the object
correspond exactly to particular coordinates in the texture, allowing you to
accurately map a texture to the object’s geometry. Even when you deform an
object, its texture follows the object’s geometry.
A NURBS surface (left) with a wood
texture applied using a planar XZ map
(below, left) and a UV map (below,
right). With the UV map applied, the
pattern accurately follows the contours
of the object.
A cubic projection is applied to a cube so that the entire
texture image is projected onto each of the six faces
(+X, -X, +Y, -Y, +Z, -Z).
Spatial Projections
A spatial projection is a three-dimensional UVW texture projection that has
either the object’s origin or the scene’s origin as its center. Spatial projections
are used to apply procedural textures that are computed mathematically, rather
than being somehow wrapped around the object.
A cubic projection is
applied to a head so that a
different part of the
texture image is projected
onto each face.
By default, a spatial
projection’s texture
support appears in the
center of the textured
object’s volume.
Polygon sphere with a vein texture applied using a spatial projection.
Camera Projections
A simple and convenient way to texture objects is to project a texture
from the camera onto the object’s surface, much like a slide projector
does. This is useful for projecting live-action backgrounds into your
scene so you can model and animate your 3D elements against them.
Changing the camera’s position changes the projection’s position. Once
you have positioned the texture on the surface to your liking, you can
freeze the projection.
In this example, the corner of a room was textured using the original
texture (top left). The texture was projected from a scene camera (top
right). The rendered result shows the modeled teddy bear against the
projected background.
Contour Stretch UVs Projection (Polygons Only)
Contour stretch UVs projections allow you to project a texture image
onto a selection of an object’s polygons. Rather than projecting
according to a specific form, however, a contour stretch projection
analyzes a four-cornered selection to determine how best to stretch the
polygons’ UV coordinates over the image.
Contour stretch projections are useful for a number of different
texturing tasks, particularly for applying textures to tracks and
irregular, terrain-like meshes. They are also useful for fitting
regular-shaped textures onto curved meshes. For example, they would
be useful to place a label texture on a beer bottle, right at the junction
of the bottle’s neck and body.
The contour stretch projection is ideal for
texturing a curvy path like this road.
Contour stretch projections do not have the same alignment and
positioning options as other projections. Instead, you select a
stretching method that is appropriate to the selection’s topology and
complexity. Also, contour stretch projections do not have a texture
support. You can adjust them only from the texture editor.
Unique UVs Projection (Polygons Only)
Unique UVs mapping applies a texture to polygon objects using one of
two possible methods:
• Individual polygon packing assigns each polygon’s UV coordinates
to its own distinct piece of the texture so that no one polygon’s
coordinates overlap another’s.
This is useful for rendermapping polygon objects. You can apply
textures to an object using a projection type appropriate to its
geometry, then rendermap the object using a new Unique UVs
projection to output a texture image that you can reapply to the
object. The output texture maps each polygon properly without
you having to worry about “unfolding” the object to fit.
A Unique UVs projection was
applied to this sphere.
• Angle Grouping, after deciding on a projection direction, groups
neighboring polygons whose normal directions fall within a
specified angle tolerance. This process is repeated until all of the
object’s polygons are in a group. The groups—or islands—are then
assigned to distinct pieces of the texture so that no two islands’
coordinates overlap each other.
This method is useful for unfolding an object’s geometry. Typically,
the object is broken up along the planned seams in the projection.
Then the angle-grouping style unique UVs projection is applied to
the object, creating UV islands in accordance with the predefined
seams.
Using the Individual Polygon Packing method
produces UV coordinates that look like this: each
polygon’s UV coordinates separated from the rest
of the coordinate set so it can be assigned to its
own patch of texture.
Using the Angle Grouping method produces “islands” of
polygons that can easily be healed back together in the
texture editor, producing properly “unfolded” UV coordinates
for the sphere.
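The angle-grouping pass described above can be sketched as a flood fill over face adjacency. This is an illustrative reconstruction of the idea, not XSI's actual algorithm:

```python
import math

def angle_between(n1, n2):
    """Angle in degrees between two unit normals."""
    d = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(d))

def group_by_normal(normals, neighbors, tolerance_deg):
    """Greedy flood fill: neighboring faces whose normals fall within the
    angle tolerance of the island's seed join the same island. The process
    repeats until every face belongs to a group, as in angle grouping."""
    island = [None] * len(normals)
    count = 0
    for seed in range(len(normals)):
        if island[seed] is not None:
            continue
        island[seed] = count
        stack = [seed]
        while stack:
            face = stack.pop()
            for nb in neighbors[face]:
                if island[nb] is None and \
                        angle_between(normals[seed], normals[nb]) <= tolerance_deg:
                    island[nb] = count
                    stack.append(nb)
        count += 1
    return island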
Editing Texture Projections
A texture projection’s property editor contains options for modifying,
transforming, and renaming the projection. You can open a texture
projection’s property editor by selecting an object and choosing one of
the following from the Render toolbar:
• Modify > Projection > Inspect Current UV opens the property
editor for the object’s current texture projection. This is the
projection used when the object is viewed in a textured display
mode (textured, textured decal, and so on).
• Modify > Projection > Inspect All UVs opens a multi-selection
property editor for all of the object’s texture projections.
• Modify > Texture > name of the texture opens the texture’s
property editor. Then click the Edit button on the Texture tab
(beside the Texture Projection list) to open the Texture Projection
property editor.
Making Projections Implicit
You can make most texture projections into implicit projections.
Implicit projections are slightly slower to render because an implicit
projection performs its own projection computation (based on a
predefined projection model; that is, spherical, planar, and so on) at
each pixel, as opposed to using predefined interpolated UV data like
explicit projections.
You would make a texture projection implicit to obtain a better overall
result for spherical and cylindrical projections on a model with few
polygons. For example, when mapping a texture onto a sphere (using
either a spherical or cylindrical projection), implicit texturing produces
more accurate results at the sphere’s poles than explicit projection does.
Both spheres have a texture applied to their diffuse parameter.
The sphere on the left uses an explicit projection and the sphere
on the right uses an implicit projection.
Wrapping Texture Projections
The texture projection’s wrapping options control whether the texture
extends past the projection’s boundaries to wrap around the object.
The examples below show a sphere whose texture projection has been
adjusted such that the texture covers only a portion of the object’s
surface. You can see the effect of wrapping in different directions: no
wrapping, wrap in U, wrap in V, and wrap in U and V.
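A minimal sketch of these wrapping modes, assuming textures addressed by normalized UV coordinates (illustrative only, not XSI's sampler):

```python
def sample_uv(u, v, wrap_u=False, wrap_v=False):
    """Resolve a UV coordinate that falls outside the 0..1 range.
    Wrapped directions repeat the texture; unwrapped directions
    clamp to the border so the texture does not extend past it."""
    def resolve(t, wrap):
        return t % 1.0 if wrap else min(max(t, 0.0), 1.0)
    return (resolve(u, wrap_u), resolve(v, wrap_v))
```

Wrapping in only one direction tiles the texture along that axis while clamping the other, which matches the one-axis wrap examples above.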
Transforming Texture Projections
By default, a texture projection fills the entire texture support. For
example, if you apply a simple XZ Planar projection to a grid, the
texture coordinates span the entire projection from one grid corner to
the other. You can transform the texture projection to reposition the
texture, or to make room on the support for other projections in
different locations.
There are two ways to transform texture projections—using the
projection manipulator in a 3D view, or by editing the scaling,
rotation, and translation values in the Texture Projection property
editor.
To activate the projection manipulator, press j, or choose
Modify > Projection > Edit Projection Tool from the Render toolbar.
The texture projection manipulator allows you to reposition a texture
projection on an object by changing the projection’s position on the
texture support. In edit mode, the mouse cursor changes to the
manipulator icon.
• Drag the red arrow to scale the projection horizontally, or the
green arrow to scale it vertically.
• Drag one of the corner handles or borders to scale the projection.
• Drag the red line to translate the projection horizontally, or the
green line to translate it vertically.
• Drag the intersection of the red and green arrows to translate the
projection freely.
• Middle-click + drag to rotate the projection about its center.
• Right-click to switch to another projection, if one exists.
Alternatively, you can use the texture projection definition parameters
(the UVW transformation controls in the property editor) to transform
a texture on the surface of an object.
Muting and Freezing Texture Projections
Once you have scaled, rotated, or translated a texture projection to
your liking, you can freeze it permanently or mute it temporarily.
Freezing a texture projection is the equivalent of freezing the texturing
operator stack. This is useful if you want to avoid accidentally editing
or moving your texture support, especially when the object is
animated.
UV Coordinates
Applying a texture projection to an object creates a set of texture
coordinates — often called UV coordinates or simply UVs — that control
where the texture corresponds to the surface of the object.
• On a polygon object, each vertex can hold multiple UV coordinates
— one for each polygon corner that shares the vertex. The portion
of the texture enclosed by a polygon’s UVs is mapped to the
polygon.
• On NURBS objects, UV coordinates are not stored at the vertices;
instead, they are generated based on a regular sampling of the
object’s surface. However, as with polygon objects, the portion of
the texture enclosed by, say, four UVs is mapped to the
corresponding portion of the object.
In this example, the image shown
left was used to texture a 2 x 2
polygon grid such that each
polygon’s UV coordinates were
mapped to the texture differently.
You can view and adjust UV coordinates using the texture editor, where
they are represented by sample points. When you select sample points,
you are actually selecting the UV coordinates held at the corresponding
position on the object.
For example, as you can see in the images below, the center point of a
2 x 2 polygon grid holds four UV coordinates. When you select the
corresponding sample point in the texture editor, you are selecting all
four coordinates (although it is possible to select a single polygon
corner’s UV coordinate).
This exploded view of the textured grid
shows how each polygon’s UVs correspond
to the texture image.
The grid’s middle vertex holds four overlapping UVs. Each UV
belongs to a specific polygon and holds a coordinate which,
along with the polygon’s other UV coordinates, defines the
portion of the texture mapped onto that polygon.
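The per-corner storage described above can be sketched in a few lines. The layout here (each polygon of a 2 x 2 grid mapped to its own quarter of the texture) is a hypothetical example:

```python
# Each polygon stores one UV per corner, so a vertex shared by four
# polygons holds four (possibly different) UV coordinates.
# A 2 x 2 grid has 9 vertices; vertex 4 is the shared center.
polygons = [
    (0, 1, 4, 3),  # bottom-left quad
    (1, 2, 5, 4),  # bottom-right quad
    (3, 4, 7, 6),  # top-left quad
    (4, 5, 8, 7),  # top-right quad
]
# Map each polygon's corners to its own quarter of the texture.
corner_uvs = {}   # (polygon index, vertex index) -> (u, v)
quarters = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
local = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]
for p, poly in enumerate(polygons):
    ou, ov = quarters[p]
    for corner, vertex in enumerate(poly):
        du, dv = local[corner]
        corner_uvs[(p, vertex)] = (ou + du, ov + dv)

# The center vertex (4) appears in all four polygons, holding four UVs.
# In this layout they coincide at (0.5, 0.5) — "overlapping" UVs:
center_uvs = [corner_uvs[(p, 4)] for p in range(4)]
```

Moving one polygon's corner UV independently splits the overlap, which is exactly what selecting a single polygon corner's UV in the texture editor lets you do.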
Editing UV Coordinates in the Texture Editor
When you apply an image to an object, it’s unlikely that it will fit
perfectly. The next step after applying the texture is to adjust what parts
of the image correspond to the various parts of your object.
You can do this using the texture editor, which displays an object’s UV
coordinates. These are a two-dimensional representation of the object’s
geometry, which, when superimposed on a texture image, shows what
portion of the texture appears on any part of the object’s surface.
Texture editor
workspace is where you
manipulate the selected
object’s UV coordinates.
By selecting the object’s UV coordinates and moving them to a new
location, you can control which portions of the texture correspond to
different parts of the object. The texture editor has a wide variety of
tools to help you select and move UV coordinates.
To open the texture editor, press 7, or choose View > Rendering/
Texturing > Texture Editor from the main menu.
Texture editor menu bar contains
all of the texture editor commands,
including those accessible from the
command bar.
UV position boxes allow
you to move selected
sample points to precise U
and V locations.
Texture editor command bars
provide quick access to commonly
used texture editor commands.
Non-active UVs are
displayed in gray. These
are UV coordinates
belonging to other
projections applied to
the object or to other
objects.
Texture image
The image clip
currently applied
to the object.
This character and his head are separate
objects, each with its own projection. Both
sets of UVs are shown in the texture editor.
You can use a non-active UV set only as a
snapping target for
UVs in the active set.
Status bar displays the UV
coordinates, pixel coordinates,
and RGBA values of the current
mouse pointer position
Active UVs are displayed in yellow with their points and
bisectors visible. Only one set of UV coordinates can be
active at a time, though multiple sets can be displayed.
To change the active UV set, click one of the non-active
sets, or make it active from the UVs menu.
Connectivity Tabs
help you make sense of
the object’s UVs by
highlighting boundaries
shared between UV
“islands”.
Texture Layers
Texture layering is the process of mixing several textures together, one after
the other, such that each texture is blended with the cumulative result of
the preceding textures. In XSI, you can use this technique to build complex
effects by adding texture layers to an object’s material or its shaders.
The parameters of the grid’s Lambert surface
shader are represented in the base layers. In this
case, nothing is connected to the Lambert shader’s
ports, so only the base colors are shown.
When you add a texture layer to a shader, one or more of that shader’s
parameters, or ports, is added to the layer. The layer is mixed on the
selected ports, in accordance with its assigned strength, or weight, using
one of several different mixing methods.
For texture layering purposes, the shader’s ports are collectively treated as
the base layer with which the texture layers are blended. If some of the
shader’s ports are connected to other shaders, those shaders are considered
part of the base layer as well. For example, if you’ve connected a Cell
texture to a Phong shader’s Ambient and Diffuse ports, the Cell texture is
treated as part of the Phong’s base layer.
What makes texture layers so powerful is that at any time in the texturing
process, you can add, modify, and remove any layer, giving you complete
control over the resulting effect. You can also quickly and easily change the
order in which layers are blended together, something that’s quite difficult
to do when you mix textures using mixer shaders in the render tree.
Because texture layers only affect designated ports, you can blend a
number of layers with each of a shader’s attributes and create a complex
effect for each.
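The blending order described above can be sketched in Python. The `blend_layers` helper and its simplified "over" and "add" modes are hypothetical stand-ins for XSI's mixing methods; the point is that each layer blends with the cumulative result of everything before it:

```python
# Minimal sketch of weighted texture layering: each layer is blended with
# the cumulative result of the base layer and all preceding layers.
# "over" and "add" are simplified stand-ins for real mixing methods.

def blend_layers(base, layers):
    """base: (r, g, b); layers: list of (color, weight, mode) tuples."""
    result = list(base)
    for color, weight, mode in layers:
        for i in range(3):
            if mode == "over":      # weighted mix toward the layer color
                result[i] = result[i] * (1.0 - weight) + color[i] * weight
            elif mode == "add":     # weighted additive blend, clamped
                result[i] = min(result[i] + color[i] * weight, 1.0)
    return tuple(round(c, 3) for c in result)

# A gray base, a full-strength red "over" layer, then a faint blue tint.
print(blend_layers((0.5, 0.5, 0.5),
                   [((1.0, 0.0, 0.0), 1.0, "over"),
                    ((0.0, 0.0, 1.0), 0.2, "add")]))  # -> (1.0, 0.0, 0.2)
```

Reordering the entries in the `layers` list changes the result, which is why layer order matters in the texture layer editor.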
The weatherbeaten road sign
shown here was created by
adding three texture layers to a
basic Lambert-shaded grid. The
images on the right show the
cumulative effect of the layers.
The first layer adds the basic sign texture to the
Ambient and Diffuse ports. The texture’s alpha
channel is used to control transparency, cutting
out the shape of the sign.
The second layer adds some rust. The rust
texture is blended with the Ambient and Diffuse
ports according to its alpha channel, and a
separate mask—in this case, a weight map.
The final layer, blended with Ambient, Diffuse,
and Transparency adds the bullet holes. Bump
mapping is activated in the layer’s shader,
creating the depression around each bullet hole.
The Texture Layer Editor
The texture layer editor is a grid-style editor
from which you can view and edit all of a
shader or material’s texture layers.
The advantage of using the texture layer editor
is that it packs a tremendous amount of
information into a relatively compact
interface. At a glance, you can see which
shaders are directly connected to a shader’s
port, how many texture layers have been
added to the shader, how many ports those
layers affect, and how and in which order the
layers are blended together. Add to this the
ability to modify the majority of each layer’s
properties, and the texture layer editor makes
for quite a powerful tool.
To open the texture layer editor, choose
View > Rendering/Texturing > Texture Layer
Editor from the main menu.
The shader list displays all of
the shaders connected to the
current selection's material.
Select a shader to update the
editor with its layers.
The Selected
shader’s ports
can be added to
texture layers and
base layers.
The texture controls allow
you to control the texture
projections assigned to
selected layers’ inputs.
The Base Colors layer
displays color boxes for
unconnected ports
Layer/port
controls indicate
that the port
has been added
to the layer.
Base layers represent shaders
that are directly connected to
the current shader’s ports.
Texture layers are blended
with the base layer and
with each other.
Layer controls and layer/port controls allow you to
set texture layer properties.
An empty cell
indicates that
the port is not
affected by
the layer.
Texture Layers in the Render Tree
Shader ports that have been
added to layers are marked
with a small blue “L”.
Layers section
Collapsed layer parameter group
When a shader has one or more texture layers, a new section called
Layers is added to its node in the render tree. The Layers section
contains a parameter group for each of the shader’s layers.
Expanding the Layers section reveals all of the individual layer
parameter groups. Expanding an individual texture layer’s parameter
group reveals the ports for its Color and Mask parameters.
Layers behave exactly like any other parameter group in the render tree, meaning that you can connect shaders to texture layer parameters as you would to any other shader parameter. This allows you to control each texture layer with its own branch of the render tree.
Expanded layer parameter group.
Layer Color and Mask ports.
Bump Maps and Displacement Maps
Although real surfaces can be perfectly smooth, you are more likely to encounter surfaces with flaws, bumps, and ridges. You can add this kind of
“noise” to object surfaces using bump maps and displacement maps.
Bump Maps
Bump maps use textures to perturb an object's shading normals to create the illusion of relief on the object's surface. Because they do not actually change the object's geometry, they are best suited to creating fine detail that does not come too far off the surface.
The sphere shown here was bump-mapped using the texture shown below. A negative bump factor was used to make the white areas bump outward.
Creating a Bump Map
To give you the most control over surface bumping, the best way to create a bump map is to connect a Bumpmap shader to the Bump Map port of an object's material node.
However, every texture shader has bump map parameters, so you can create a bump map using textures that you've connected to, for example, a surface shader's Ambient and Diffuse ports.
When Not to Use Bump Maps
Because bump maps do not actually alter object geometry, their limitations can become apparent when too much relief is required.
Consider the sphere shown here: even with a very high bump step, the bumping is not convincing on the silhouette, where there is no indication that the surface is raised.
In these cases, it's better to either model the necessary geometry or to use a displacement map.
Displacement Maps
A displacement map is a scalar map that, for each point on an object's surface, displaces the geometry in the direction of the object's normal. Unlike regular bump mapping, which "fakes" the look of relief, displacement mapping creates actual self-shadowing geometry.
The sphere shown here was displacement-mapped using the texture shown below.
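The difference between the two techniques can be sketched with a 1D heightfield in Python. Both helpers (`bump_normal`, `displace_point`) are illustrative assumptions: bump mapping only tilts the shading normal, while displacement actually moves the point:

```python
# Contrast sketch: bump mapping perturbs only the shading normal, while
# displacement moves the point itself along the normal. 1D heightfield,
# illustrative only.

def bump_normal(height, x, eps=1e-3):
    """Perturbed 2D normal for a heightfield y = height(x).

    The true surface stays flat; only the normal tilts with the slope.
    """
    slope = (height(x + eps) - height(x - eps)) / (2 * eps)
    # The normal of y = h(x) is (-slope, 1), normalized.
    length = (slope * slope + 1.0) ** 0.5
    return (-slope / length, 1.0 / length)

def displace_point(point, normal, height_value):
    """Move a point along its normal by the sampled height (displacement)."""
    return (point[0] + normal[0] * height_value,
            point[1] + normal[1] * height_value)

ramp = lambda x: 0.5 * x            # a simple linear "height texture"
print(bump_normal(ramp, 0.0))       # tilted normal, geometry unchanged
print(displace_point((1.0, 0.0), (0.0, 1.0), ramp(1.0)))  # -> (1.0, 0.5)
```

This is also why bumping fails on the silhouette: `bump_normal` never moves the surface, so the profile stays flat, whereas `displace_point` changes the profile itself.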
Creating a Displacement Map
Using Displacement Maps and Bump Maps Together
You create a displacement map by connecting a texture, preferably
grayscale, to the Displacement port of an object’s material node. It is
often helpful to add an intensity node between the map and the
material node to help control the displacement.
You can use bump maps and displacement maps together to create
extremely detailed surfaces. Typically, the best approach is to use a
displacement map to create the coarser surface detail — major features
that need to be visible at the object's edges and can benefit from self-shadowing. You can then use the bump map to create a top layer of fine
detail. The bump-mapping is applied to the displaced geometry.
Setting Displacement Map Parameters
In addition to any shaders that you add to the render tree to modulate
displacement, the main displacement controls are on the Displacement
tab of the object’s Geometry Approximation property editor. From
there, you can choose the type of displacement appropriate to your
object and refine the displacement effect.
This sphere uses the texture on the left as a
displacement map to create coarse surface
detail, and the texture on the right as a
bump map to create fine surface detail.
When Not to Use a Displacement Map
Because they actually modify object geometry, displacement maps can
take considerably longer to render than bump maps. Generally
speaking, you should not use a displacement map if you can achieve a
satisfactory effect using a bump map.
The sphere on the left uses a bump map, while the one on the right
uses a displacement map. In this case, the difference is slight enough
that the bump map’s shorter render time makes it the better choice.
Reflection Maps
Reflection maps, also called environment maps, can be used to simulate
an image reflected on an object’s surface, without using actual
raytraced reflections. They can also be used to add an extra reflection
to an object’s reflective, raytraced surface.
When objects are reflective, you can define whether the reflections on
its surface are Raytracing Enabled or Environment Only. Reflection
settings are found on the Transparency/Reflection tab of the object’s
surface shader’s property editor (choose Modify > Shader from the
Render toolbar to open the property editor).
Raytraced reflection only
Note how reflective objects reflect other objects in
the scene. For example, you can see the flask and
the floor reflected in the retort.
Raytraced Reflections are slower to render because they actually
compute reflections for everything around them.
Non-Raytraced Reflection Maps are much faster to compute because
they simulate the reflection of a specified texture or image, defined by
an environment map, on the object’s surface.
When reflection mapping is used without raytracing, only the
reflection map appears on the object’s surface; when used with
raytracing, the map is combined with raytraced reflections.
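The core of a non-raytraced reflection map lookup can be sketched in Python: reflect the view direction about the surface normal, then use the reflected direction to index the environment image. The latitude/longitude parameterization and both helpers are illustrative assumptions:

```python
import math

# Sketch of a non-raytraced reflection map lookup. The lat-long
# parameterization is one common environment mapping, assumed here
# for illustration.

def reflect(incident, normal):
    """R = I - 2(N.I)N for unit vectors, as used for mirror reflection."""
    dot = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * dot * n for i, n in zip(incident, normal))

def latlong_uv(direction):
    """Map a unit direction to (u, v) in a lat-long environment map."""
    x, y, z = direction
    u = (math.atan2(z, x) / (2.0 * math.pi)) + 0.5
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi
    return (round(u, 3), round(v, 3))

# Looking straight down (-Y) at a floor facing up (+Y) reflects to +Y.
r = reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0))
print(r)               # -> (0.0, 1.0, 0.0)
print(latlong_uv(r))   # top of the environment sphere
```

Because the lookup never queries scene geometry, it is far cheaper than raytracing, which is exactly the trade-off described above.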
Reflection map only
Using only a reflection map, no scene objects are
reflected in reflective surfaces. Instead, the only
reflection is that simulated by the reflection map.
You can apply a reflection
map to an object by
connecting an environment
map shader to the
Environment port of the
object’s material node.
Raytraced reflection and reflection map
With both types of reflection activated, you get
the real reflections of scene objects and simulated
reflections from the map, producing highly
detailed reflections.
You can apply a reflection
map to the entire scene by
adding an environment map
shader to a render pass's
shader stack.
Baking Textures with RenderMap
RenderMap allows you to capture a wide variety of surface information
from scene objects, and bake that information into image files that can
be reapplied to the rendermapped object and/or used for a myriad of
other purposes.
To rendermap an object, you need to apply a RenderMap property.
Choose Get > Property > RenderMap from the Render Toolbar. This
opens the RenderMap property editor, from which you can configure
all of the maps that you wish to output.
RenderMap captures surface information by casting rays from a virtual
camera in order to sample each point on an object’s surface. The results
are rendered as one or more 2D images that you can apply to the object
as you would any other 2D texture.
The following example shows how you can use RenderMap to create a
single texture (which includes lighting information) out of a complex
render tree.
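The baking idea itself is simple enough to sketch in Python: visit each texel of the output image, find the corresponding UV position on the surface, and record the evaluated color. The `shade` callback is a hypothetical stand-in for the full render tree and lighting:

```python
# Toy sketch of baking: for each texel of the output map, evaluate the
# surface color at that UV and store it. `shade` stands in for the full
# render tree plus lighting; this is not the RenderMap implementation.

def bake(shade, width, height):
    """Evaluate shade(u, v) at texel centers and store the results."""
    image = []
    for row in range(height):
        v = (row + 0.5) / height
        image.append([shade((col + 0.5) / width, v) for col in range(width)])
    return image

# A hypothetical "shader": a horizontal gradient baked into a 4x1 strip.
baked = bake(lambda u, v: round(u, 3), 4, 1)
print(baked)  # -> [[0.125, 0.375, 0.625, 0.875]]
```

Once baked, the image replaces the whole shading network (and, if lighting was evaluated too, the lights), which is the saving demonstrated below.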
Color map
Alpha map
Before RenderMap
The disembodied hand shown here was
textured using a combination of several
images mixed together in a complex
render tree, and lit using two infinite
lights. The result is a highly detailed surface
that incorporates color, bump, displacement, and lighting
information, and takes a fair amount of time to render.
Displacement map
Specular map
Bump map
After RenderMap
To bake the hand’s surface attributes into a single texture file, a RenderMap property was
applied to the hand, and a Surface Color map was generated. The resulting texture image
was then applied directly to the Surface input of the hand’s material node. Finally, the scene
lights were deleted, producing the result shown at right—a good approximation of the
hand’s original appearance.
Because the hand’s
illumination is baked into the
rendermap image, you can get
this result without using lights
or an illumination shader.
Painting Color at Vertices
Another way to apply color to polygon objects is to paint their vertices.
Vertex colors aren’t considered to be material or texture shaders: they
are actually a constant color stored directly in the vertices of a polygon
at the geometry level. Each vertex of a polygon has polynodes (a type of
“subvertex”) that hold its UV coordinates and vertex colors.
The Color at Vertices (CAV) property allows you to color an entire
polygon or just its edge rather than the actual vertex (the information
is stored at the vertex level, hence the name). For example, you can
paint each edge of a square polygon a different color. As a result, the
center of the polygon would display a blend of each of the four colors.
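Why the center shows a blend can be sketched with bilinear interpolation in Python; the `bilinear` helper is an illustrative stand-in for the renderer's actual interpolation:

```python
# Sketch of why the center of a four-sided polygon shows a blend: a point
# inside the quad interpolates the colors stored at its four corners.
# Bilinear interpolation is used here as a plausible stand-in.

def bilinear(c00, c10, c01, c11, s, t):
    """Blend four corner colors at parametric position (s, t) in [0, 1]."""
    def lerp(a, b, w):
        return tuple(x * (1.0 - w) + y * w for x, y in zip(a, b))
    return lerp(lerp(c00, c10, s), lerp(c01, c11, s), t)

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
blue, white = (0.0, 0.0, 1.0), (1.0, 1.0, 1.0)

# At the exact center, every corner contributes equally (weight 0.25).
print(bilinear(red, green, blue, white, 0.5, 0.5))  # -> (0.5, 0.5, 0.5)
```

The denser the mesh, the closer together the interpolated corners sit, which is why paint quality depends on polygon density.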
1. Choose Get > Property > Color at Vertices Map to add a CAV property to the selected object.
2. Press Ctrl+w to open the Brush Properties property editor. On the Vertex Colors tab, you can choose a paint mode and color, set the brush size, set falloff and bleeding options, and so on. Basically, you're defining how the brush strokes look.
3. Press Shift+w to activate the brush tool and paint the color (or other attribute) onto the object in any 3D view. When you move the brush into any 3D view, the view's display mode automatically changes to Constant.
4. If you'd like, you can render the result of the color at vertices property using a Vertex RGBA shader in the render tree.
If necessary, you can store several color at vertices properties on the same object; an object can have as many CAV properties as you need.
This feature is often used by game developers because it is an efficient method of coloring models. Because it applies a constant color and lets you simulate lighting (by painting luminance), game developers can remove lights from their scenes, thereby saving a great deal of memory and improving performance, especially when rendering.
Of course, the quality of the results you can obtain by painting colors on vertices depends entirely on the density of the polygon mesh.
Section 17
Lighting
Without lights, it doesn’t really matter what your
scene looks like—you won’t be able to see it, plain
and simple!
Each light in a scene contributes to the scene’s
illumination and affects the way all objects’ surfaces
appear in the rendered image. You can dramatically
change the nature and mood of your images by
modifying lights and adding light effects.
What you’ll find in this section ...
• Types of Lights
• Placing Lights
• Setting Light Properties
• Selective Lights
• Creating Shadows
• Global Illumination
• Caustics
• Final Gathering
• Light Effects
• Image-Based Lighting
Types of Lights
There’s a right type of light for every occasion. You can add
lights to a scene by choosing them from the Render
toolbar’s Get > Primitive > Light menu.
Every light type has its own special characteristics and is
represented by its own icon in 3D views.
Point
Point lights cast rays in
all directions from the
position of the light.
They are similar to light
bulbs, whose light rays emanate from
the bulb in all directions.
Light Box
Light box lights simulate a light diffused through white fabric.
The light and
shadows created by
this light are very
soft. Specularity is
still visible, but
noticeably weaker.
Manipulating the box
shapes the projected light.
Infinite (Default)
Infinite lights simulate
light sources that are
infinitely far from
objects in the
scene. There is no
position associated with an infinite
light, only a direction. All objects are
lit by parallel light rays. The scene’s
default light is infinite.
Spot
Spot lights cast rays
in a cone-shape,
simulating real
spotlights. This is
useful for lighting a specific object
or area. The manipulators can be
used to edit the light cone’s length,
width, and falloff points.
Neon
Neon lights simulate real-world neon lights. They are
essentially point lights whose settings
and shapes are altered to resemble
fluorescent tubes. The manipulators
can be used to change the tube into
any rectangular or square shape.
Placing Lights
You can translate, rotate, and scale lights as you would any other object.
However, scaling a light only affects the size of the icon and does not
change any of the light properties.
Placing Spotlights Using the Spot Light View
The Spot Light view in a 3D view lets you select from a list of spotlights
available in the scene. A spotlight view is useful to see what objects a
spotlight is lighting and from what angle.
Rotating an infinite light. This is the only useful
transformation for infinite lights since their
scale and position do not affect the lighting.
Rotating the light, on the other hand, changes
its direction.
Translating a point light. Rotating and scaling
point lights does not affect the lighting.
Translating a point light changes its position,
which does change the scene lighting.
1. Select a spotlight from the view menu to see the scene from the light's point of view.
2. Navigate in the spotlight viewport to change the position of the light.
3. The rendered result shows the scene lit from the spotlight.
The inner and outer circles correspond to the light's spread angle and cone angle, respectively.
Translating a spotlight. When you translate the spotlight, it rotates automatically to point toward its interest.
Scaling a spotlight has no effect on the lighting. Since the spotlight is normally constrained to its interest, you cannot rotate it either (unless you delete the interest).
Spotlights have a third set of manipulators that let you control their start
and end falloff, as well as their spread and cone angles. Area lights also
have a third set of manipulators that let you scale the geometric area
from which the light rays emanate. These manipulators are discussed
later in this section.
Note that the light falls
off exactly where the
cone and spread circles
indicate that it should.
Setting Light Properties
Once you create a light, you can edit its properties from its property
editor. Some of the most commonly edited light properties are
described below. To open a light’s property editor, select the light and
choose Modify > Shader from the Render toolbar.
When you define the color of an object’s material, you should work
with a white light because colored light sources affect the material’s
appearance. You can color your light source afterward to achieve the
final look of the scene.
Setting Light Color
Setting Light Intensity
The color of a light controls the color of the rays emitted by the light.
The final result depends on both the color of the light and the color of
objects.
You can control a light’s intensity by adjusting the Intensity slider in
the light’s property editor. By default, values range from 0 to 1, but you
can set much higher values if needed.
Alternatively, you can control light intensity indirectly using its color
channels. Setting RGB values greater than 1 creates more intense light.
Pale Yellow Light
White Light
Intensity: 0.5
Intensity: 0.25
Pale Blue Light
Intensity: 0.75
Setting a Spotlight
A spotlight casts its rays in a cone aimed at its interest. Spotlights have special parameters, called Spread and Cone Angle, that control the size and shape of the cone. You can set these options using the spotlight's property editor or its 3D manipulators. You can also use the 3D manipulators to set the light's falloff.
To activate a spotlight's manipulators, select the light and press B. You can then adjust the light by dragging any of the manipulators labeled in the image below.
The white line indicates the cone angle.
The yellow line indicates the light's spread angle.
The wireframe outline is the spotlight's Cone Angle.
The inner, solid cone is the spotlight's Spread Angle.
The upper circle is the Start Falloff point.
The lower circle is the End Falloff point.
Setting Light Falloff
Falloff refers to the diminishing of a light's intensity over distance, also called attenuation. This mimics the way light behaves naturally. The falloff options are available only for point lights and spotlights.
You can set the distance at which the light begins to diminish, as well as the distance at which the falloff is complete (darkness). This means you can set the values so the falloff affects only those features you want. In addition, you can control how quickly or slowly the light diminishes.
Start falloff = 0, End falloff = 4
Start falloff = 0, End falloff = 8
Start falloff = 6, End falloff = 8
Falloff: Start and End Falloff values. Using a point light, umbra = 0; the bottom corner of the chess board is 0; the top-left corner is 10.
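The start/end falloff behavior can be sketched in Python. A linear ramp between the two distances is assumed here purely for illustration:

```python
# Sketch of start/end falloff: full intensity up to the start distance,
# zero beyond the end distance, and a ramp in between. A linear ramp is
# an assumption for simplicity.

def attenuate(intensity, distance, start, end):
    """Scale a light's intensity by distance-based falloff."""
    if distance <= start:
        return intensity
    if distance >= end:
        return 0.0
    # Linear ramp from full intensity at `start` to darkness at `end`.
    return intensity * (end - distance) / (end - start)

# Start falloff = 0, end falloff = 8: half intensity at distance 4.
print(attenuate(1.0, 4.0, 0.0, 8.0))  # -> 0.5
print(attenuate(1.0, 9.0, 0.0, 8.0))  # -> 0.0
```

Moving `start` up toward `end` (as in the "Start falloff = 6, End falloff = 8" example) keeps the light at full strength longer and compresses the fade into a narrow band.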
Selective Lights
When you create a light, it affects all visible objects in the scene by default. However, every light has a selective property that you can use to make it affect, or not affect, a designated group of objects called Associated Models. This can help reduce rendering time by limiting the number of calculations per light.
You can set a light's selective property to be Inclusive or Exclusive, depending on how you want the light to affect its associated models.
• Exclusive illuminates every object except for those in the light's Associated Models group.
• Inclusive illuminates every object defined in the light's Associated Models group.
A simple scene illuminated by a point light. None of the geometric objects are included in the light's Associated Models list, so they are not affected by the light's selective property.
The King piece (center) has been added to the light's Associated Models list, so it is affected by the light's selective property. The light has been defined as Exclusive, thereby not illuminating the objects on the light's Associated Models list.
The light is set to Inclusive. Now the light source affects only the objects listed in the Associated Models list (only the King piece) and ignores the rest.
Creating Shadows
If you want your scene to have a more realistic look, you can create shadows that appear to be cast by the objects in your scene. Shadows can make all the difference in a scene: a lack of them can create a sterile environment, whereas the right amount can make the same scene delightfully moody. Shadows are controlled independently for each light source. This means that a scene can have some lights casting shadows and others not.
To create a shadow using the mental ray renderer for a scene or a
render pass, you must set up two things:
• A light that generates shadows.
• Rendering options that render shadows.
There are three basic kinds of shadows you can create using mental ray:
raytraced, shadow-mapped, and soft.
Raytraced Shadows
Raytraced shadows use the renderer’s
raytracing algorithm to calculate how
light rays are reflected, refracted, and
obstructed. The shadows are very
realistic but take longer to render than
other types of shadows.
To create raytraced shadows, you need
to activate shadows in the light’s property editor.
Activates shadows for the light.
You also need to make sure that the primary rays Type is set to
Raytracing on the Renderer > Rendering tab of the Render Manager.
Shadow-Mapped Shadows
Shadow-mapped shadows, also known as depth-mapped shadows, use the renderer's scanline algorithm. They are quick to render, but not as accurate as raytraced shadows.
The shadow map algorithm calculates color and depth (z-channel) information for each pixel, based on its surface and distance from the camera. Before rendering starts, a shadow map is generated for the light. This map contains information about the scene from the perspective of the light's origin: the distance from the light to objects in the scene and the color of the shadow on each object. During the rendering process, the map is used to determine whether an object is in shadow.
Shadow mapping works only with spotlights whose cone angle is less than 90 degrees.
To create shadow-mapped shadows, you need to activate shadows in the light's property editor. You also need to activate and configure the Shadow Map in the light's property editor. Then, you need to activate shadow-mapped shadows on the Renderer > Shadows tab of the Render Manager.
Soft Shadows
Soft shadows are created by defining area lights. Area lights are a special kind of point light or spotlight: the rays emanate from a geometric area instead of a single point. This is useful for creating soft shadows with both an umbra (the full shadow where an object blocks all rays from the light) and a penumbra (the partial shadow where an object blocks some of the rays).
The shadow's relative softness (the relation between the umbra and penumbra) is affected by the shape and size of the light's geometry. You can choose from four shapes and set the size as you wish.
To determine the amount of illumination on a surface, a sample of points is distributed evenly over the area light geometry. Rays are cast from each sample point; all, some, or none of the rays may be blocked by an object. This creates a smoothly graded penumbra.
A rectangular area light emits light from a rectangular object like this one.
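How an area light yields a penumbra can be sketched with Monte Carlo sampling in Python. The unit-square light and the `is_blocked` callback are illustrative assumptions standing in for real ray/geometry intersection tests:

```python
import random

# Sketch of soft shadows from an area light: cast a shadow ray from the
# shaded point to each of several sample points on the light's area, and
# use the fraction of unblocked rays as the illumination. `is_blocked`
# stands in for real ray/geometry intersection.

def soft_shadow(samples, is_blocked, seed=0):
    """Return the unblocked fraction of `samples` random light samples."""
    rng = random.Random(seed)
    unblocked = 0
    for _ in range(samples):
        # Pick a point on a unit-square area light.
        point = (rng.random(), rng.random())
        if not is_blocked(point):
            unblocked += 1
    return unblocked / samples

# An occluder that covers the left half of the light: roughly half the
# rays are blocked, giving a partial (penumbra) value near 0.5.
print(soft_shadow(10000, lambda p: p[0] < 0.5))
```

Fully blocked points return 0.0 (umbra), fully visible points return 1.0, and partially blocked points fall smoothly in between, which is the graded penumbra described above.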
To create raytraced shadows,
you need to activate shadows
in the light’s property editor.
You also need to activate and
configure the Area Light in
the light’s property editor.
Finally, you need to make
sure that the primary rays
Type is set to Raytracing on
the Renderer > Rendering
tab of the Render Manager.
Rendering Methods for Shadows
You can render all of the types of shadows listed previously using different rendering methods. Choose the desired rendering method on the Renderer > Shadows tab of the Render Manager:
• Enabled shadows perform a basic, simple rendering of the shadows. The amount of light from a light source that passes through a shadow-casting object is determined. The shadow shaders are applied in random order.
• Sorted shadows are similar to enabled shadows, but the shadow-casting objects are sorted so that the shadow shader of the object closest to the illuminated point is processed first and the object closest to the light is processed last.
• Segmented shadows are computed by tracing the segments (between the illumination point, the occluding objects, and the light source) and then applying volume shaders to these segments (shadow segments). This process slows down rendering, but is required if volume effects are to cast shadows.
• Disabled does not allow the light to compute shadows. This option is usually used to speed up rendering.
Global Illumination
Global illumination simulates the way bright light bounces off objects and bleeds their color into surrounding surfaces. When global illumination is activated, photons emitted from a designated light travel through the scene, bounce off photon-casting objects, and are stored by photon-receiving objects.
Photon casting and reception are not mutually exclusive properties: an object can do both, but only a light can emit photons. Global illumination is often used with caustics, which is also a photon effect.
The following is an overview of how to set up global illumination for the mental ray renderer.
1. Define objects as casters and receivers.
An object's visibility property allows you to set options that control how the object responds to global illumination photons emitted from a light.
• Caster controls whether photons bounce off the object and continue to travel through the scene. When this is off, the object simply absorbs photons.
• Receiver controls whether the object receives and stores photons. When this is off, the photon effect is not visible on the object's surface.
• Visible controls whether the object is visible to photons at all. When this is off, photons simply pass through the object.
2. Set the light to emit global illumination photons.
Activate Global Illumination on the Photon tab of the light's property editor.
You can then set the Intensity of the photon energy, which determines the intensity of the color that bleeds onto photon-receiving objects. You can also set the Number of Emitted Photons.
Typically, both of these values need to be set in the tens or hundreds of thousands for the final global illumination effect.
3. Adjust the global illumination effect.
Once you've defined the casters, receivers, and emitting lights, you need to adjust the rendering options that control the photon effect on the Renderer > GI and Caustics tab in the Render Manager.
Activate Global Illumination on this tab, then set these two important parameters:
• GI Accuracy specifies the number of photons that are considered when any point is rendered.
• Photon Search Radius specifies the distance from the rendered point within which photons are considered.
You'll also need to fine-tune the photon intensity and the number of emitted photons in each of the emitting lights' property editors.
4. Increase the radiance of the receiver objects.
To further fine-tune the global illumination effect, adjust the Radiance of the global illumination receiver objects.
Radiance controls the strength of the photon effect on the object's surface. This is useful for brightening or darkening photon lighting in specific areas of a scene. The Radiance parameter is in each object's surface shader property editor.
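The two rendering parameters above can be sketched in Python: gather the photons within the search radius of the shaded point, keep at most `accuracy` of the nearest ones, and average their energy. The flat photon list and linear scan are illustrative assumptions (real photon maps use spatial search structures):

```python
# Sketch of GI Accuracy and Photon Search Radius: when shading a point,
# gather at most `accuracy` photons within `search_radius` and average
# their energy. A linear scan over a flat list is enough to illustrate.

def gather(photons, point, accuracy, search_radius):
    """photons: list of (position, energy) pairs on a 2D surface."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    nearby = sorted(
        (p for p in photons if dist(p[0], point) <= search_radius),
        key=lambda p: dist(p[0], point))[:accuracy]
    if not nearby:
        return 0.0
    return sum(energy for _, energy in nearby) / len(nearby)

photons = [((0.0, 0.0), 1.0), ((0.1, 0.0), 0.5), ((5.0, 5.0), 2.0)]
print(gather(photons, (0.0, 0.0), accuracy=10, search_radius=1.0))  # -> 0.75
```

Raising the accuracy or radius smooths the result by averaging more photons, at the cost of render time; this is the trade-off the two parameters control.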
Caustics
Caustic effects recreate the way that light is distorted when it bounces
off a specular surface or passes through refractive objects/volumes. The
classic example is light sparkling in the middle of a wine glass or on the
floor of a swimming pool. In either case, light passes through refractive
surfaces and is distorted, creating complex light patterns on surfaces
that it affects.
As with global illumination, caustics compute how photons emitted
from a light travel across the scene and bounce over and through caster
and receiver objects.
Here is an overview of setting up caustic lighting for the mental ray
renderer, which is almost identical to setting up global illumination:
1. Define objects as casters and receivers.
An object’s visibility property allows you to set options that control how the object responds to caustic photons emitted from a light.
2. Set the light to emit caustic photons.
To make a light into a caustics photon emitter, activate Caustics on the Photon tab of the light’s property editor. You can then set the Intensity of the photon energy and the Number of Emitted Photons.
3. Adjust the caustic effect.
Adjust the rendering options that control the photon effect on the Renderer > GI and Caustics tab in the Render Manager.
Activate Caustics on this tab, then set these two important parameters:
• Caustic Accuracy specifies the number of photons that are considered when any point is rendered.
• Photon Search Radius specifies the distance from the rendered point within which photons are considered.
You’ll also need to go back to the property editors of all emitting lights and fine-tune the photon intensity and the number of emitted photons.
282 • SOFTIMAGE|XSI
4. Increase the radiance of the receiver objects.
To fine-tune the caustics effect, adjust
the Radiance of the caustics receiver
objects.
Radiance controls the strength of the
photon effect on the object’s surface.
This is useful for brightening or
darkening photon lighting in specific
areas of a scene. The Radiance
parameter is in each object’s surface
shader property editor.
Final Gathering
Final gathering is a way of calculating indirect illumination without
using photon energy, as with global illumination and caustics. Instead of
using rays cast from a light to calculate illumination, final gathering uses
rays cast from each illuminated point on an object’s surface. The rays are
used to sample a hemisphere of a specified radius above each point and
calculate direct and indirect illumination based on what the rays hit. The
overall effect is that every object in the scene becomes a “light source”
and influences the color and illumination of the objects and
environment surrounding it.
Creating a Final Gathering Effect
Creating final gathering in a scene is more straightforward than
applying caustics or global illumination. Most of the options that
control the final gathering effect for the mental ray renderer are on the
Renderer > Final Gathering tab in the Render Manager.
You can use scene objects’ visibility properties to precisely control how
each object participates in final gathering calculations. Every object has
the following three final gathering visibility parameters:
• Caster: specifies whether or not an object casts final gathering rays
into the scene.
• Visible to Sampling: specifies whether the object is visible to final
gathering rays cast by other objects. Turning this option off causes
final gathering rays to pass through the object.
Turning this option off makes the Sampled option unavailable,
since an object that is not visible to sampling rays cannot be
sampled.
• Sampled: specifies whether an object’s surface is actually sampled
by final gathering rays cast by other objects. Turning this option off
causes the object to absorb final gathering rays.
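The hemisphere-sampling process described above can be sketched as follows. This is a conceptual illustration only; the `scene_radiance` callback stands in for the renderer's ray evaluation, and all names are invented:

```python
import random

def final_gather(point, normal, scene_radiance, num_rays, radius):
    """Sketch of final gathering: sample a hemisphere of directions
    above a shaded point and average what the rays 'see'.
    scene_radiance(origin, direction, max_dist) is a hypothetical
    stand-in for tracing one ray into the scene."""
    total = 0.0
    for _ in range(num_rays):
        # Pick a random direction inside the unit ball (rejection sampling).
        while True:
            d = [random.uniform(-1, 1) for _ in range(3)]
            if 0 < sum(c * c for c in d) <= 1:
                break
        # Flip the direction into the hemisphere above the surface.
        if sum(a * b for a, b in zip(d, normal)) < 0:
            d = [-c for c in d]
        total += scene_radiance(point, d, radius)
    return total / num_rays
```

More rays give a smoother estimate at the cost of render time, which is why Number of Rays and Radius are tuned together.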
Accuracy and Number of Rays are the two most important parameters to consider when defining the final gathering effect.
The Radius parameters define how detailed the final gathering effect will be. Note that values less than 1 may result in extended render times. The Number of Rays parameter works in conjunction with the Radius parameter to smooth out the final gathering effect.
[Figure: a camera ray is shot at the surface point to be evaluated, and accuracy rays are sent towards a textured surface acting as a light source.]
This scene was rendered using final gathering, which “collects” the indirect and direct light around illuminated points on an object’s surface to simulate real-world lighting.
Light Effects
XSI includes a variety of lighting effects that you can use to enhance the
realism of your scenes. Glows, flares, and volumic effects are all ways
to alter the look and mood of your rendered scenes. Effects like
ambient occlusion and subsurface scattering can help you create more
realistic surfaces.
Different effects are applied differently. Some are applied as properties
of scenes, lights or objects, while others are defined by shaders in the
render tree. Either way, all of these effects can go a long way toward
making your scenes look just the way you want.
The point light inside of this street lamp
uses a flare effect. Flares are created as
properties of scene lights.
In the background of the scene, you can see
the effect of depth-fading. Even though it
affects the entire scene, the depth fading is
defined by a light’s volumic property.
The neon sign uses a glow effect.
Glows are properties of scene objects
that are generated using output
shaders.
The volumic light shining out from the
window in the stairwell is created using
a volumic property applied to a light.
This scene uses a variety of light effects to capture the feeling of a dimly lit alley on a foggy evening.
Ambient Occlusion
Ambient occlusion is a fast and computationally inexpensive way to
simulate indirect illumination. It works by firing sample rays into a
predefined hemispherical region above a given point on an object's
surface in order to determine the extent to which the point is blocked, or occluded, by other geometry.
Once the amount of occlusion has been determined, a bright color is returned for unoccluded points and a dark color for occluded points. Where the object is partially occluded, the bright and dark colors are mixed in accordance with the amount of occlusion.
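The mixing step amounts to a linear blend between the two colors, weighted by the fraction of blocked sample rays. A minimal sketch (conceptual only, not the shader's actual code):

```python
def ambient_occlusion(occluded_hits, total_samples, bright, dark):
    """Mix a bright and a dark color according to the fraction of
    sample rays that hit occluding geometry."""
    occlusion = occluded_hits / total_samples  # 0 = open, 1 = fully blocked
    # Linear blend per channel between the bright and dark colors.
    return tuple(b * (1 - occlusion) + d * occlusion
                 for b, d in zip(bright, dark))
```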
In XSI, you can create an ambient occlusion effect by connecting the Ambient Occlusion shader in the render tree. This is most commonly done at the pass level to create an occlusion pass (like that seen in the image above) that can be added in and adjusted during compositing. You can also use the shader on individual objects to limit the occlusion calculation.
The image above shows a scene rendered using only the Ambient Occlusion shader. The bright color is set to white and the dark color to black. This type of rendering can be composited with other passes to add the occlusion effect to the scene’s color and illumination.
Fast Subsurface Scattering
In the real world, many materials are translucent to some extent and do not immediately reflect light at their surface. Instead, light penetrates the surface and is scattered inside the material before it is either absorbed or transmitted. This effect, called subsurface scattering, can also be used to enhance the realism of a wide variety of rendered materials, even when used sparingly.
You can use the Fast Subsurface Scattering shader to create subtle scattering effects like the surface of this alabaster-ish cow, or more extreme, brightly lit translucency effects like the surface of this crystal cow. (Cow model © Digimation, Inc.)
The Fast Subsurface Scattering shader simulates the appearance of
subsurface scattering by creating a lightmap that stores the shaded
object’s front and back surfaces, their depths, and light intensity.
During rendering, the lightmap is sampled to create several light layers,
which incorporate color and depth information. These layers are then
added together to produce the final subsurface scattering effect that is
applied to the object’s surface.
Image-Based Lighting
You can light your scenes with images using the Environment shader.
Like other environment shaders, this one surrounds the scene with an
image. However, this shader has a set of parameters that allow you to
control the image’s contribution to final gathering and reflections.
Although you can use any image to light the scene this way, you will get
the best results using a High Dynamic Range (HDR) image. That’s
because HDR images contain a greater range of illumination than
regular images, making them better able to simulate real-world
lighting.
Section 18
Cameras
Virtual cameras in SOFTIMAGE|XSI are similar to
physical cameras in the real world. They define the
views that you can render. You can add as many
cameras as you want in a scene.
What you’ll find in this section ...
• Types of Cameras
• The Camera Rig
• Working with Cameras
• Setting Camera Properties
• Lens Shaders
• Motion Blur
Section 18 • Cameras
Types of Cameras
Each of the images below was taken from the same position, but using
a different camera each time. The image on the right shows a
wireframe view of the original scene, including the position of the
camera.
All of the camera types listed here are available from the Render
Toolbar’s Get > Primitive > Camera menu.
Perspective (Default)
Uses a perspective projection, which simulates depth. Perspective cameras are useful for simulating a physical camera. The default camera in any new scene is a perspective camera.
Wide Angle
Creates a wide-angle view by using a perspective projection and a large angle of view (100°). Wide-angle cameras have a very large field of view and can often distort the perspective.
Telephoto
Uses a perspective projection and a small angle of view (5°) to simulate a telephoto lens view where objects are “zoomed.”
Orthographic
Makes all of the camera rays parallel. Objects stay the same size regardless of their distance from the camera. These projections are useful for architectural and engineering renderings.
The Camera Rig
Each camera that you create is made up of three separate parts: the
camera root, the camera interest, and the camera itself. If you look at a
camera in the explorer, you’ll see that the camera root is the parent of
both the camera and its interest. Each of these elements is displayed in
the 3D views as well.
Camera Direction
The camera icon displays a blue and a green arrow. The blue
arrow shows where the camera is “looking”; that is, the direction
the lens is facing. The green arrow shows the camera’s up
direction, which you can change by rolling the camera (press L).
The Camera
The camera itself is the viewing element of the rig. In the 3D views, it
is represented by a wireframe control object that you can
manipulate in 3D space. The camera has a directional
constraint to the camera interest.
The Camera Interest
The camera’s interest—what the camera is always looking
at—is represented by a null. You can translate and
animate the null to change the camera’s interest.
The Camera Root
The camera root is represented by a null. By default, it appears
in the middle of the wireframe camera, but you can translate
and animate it as you would any other object. The null is
useful as an extra level of control over the camera rig, allowing
you to translate and animate the entire rig the same way that
you animate its individual components.
Working with Cameras
Once you’ve created your cameras, you’ll probably want to move them
around to capture just the right angles. You may also need to switch
back and forth between different cameras to compare points of view.
Choose a camera from the list to
switch the viewport to that
camera’s view.
Selecting Cameras and Camera Interests
Cameras or their interests can be tricky to select. Luckily, there are
several ways to select either or both. You can:
• Locate the camera or interest in a 3D view and click it to select.
• From any viewport, click the camera icon on its menu bar, then
choose Select Camera or Select Interest. This selects the camera
used in that viewport.
• From the Select panel, choose Explore > Cameras. This opens a
floating explorer that shows every camera in your scene and its
interest. Select a camera or interest from the list. Of course, you can
also do the same thing from a regular explorer once you locate the
cameras.
Selecting Camera Views
Camera views let you display your scene in a 3D view from the point of
view of a particular camera. If you have created more than one camera
in your scene, you can display a different camera view in each 3D view.
Choosing a camera from a viewport’s Cameras menu switches the
viewpoint to that of a “real” camera in your scene. All other views such
as User, Top, Front, and Right are orthogonal viewpoints and are not
associated to an actual camera.
You can select a predefined
orthographic viewpoint, but it’s
not an actual camera view.
Choose Render Pass to switch to
the camera view defined for your
render pass. This camera is defined
in the Render Manager.
Positioning Cameras
Once you select a camera, you can translate, rotate, and scale it as you
would any other object. However, scaling a camera only affects the size
of the icon and does not change any of the camera properties.
Generally, the most intuitive way of positioning cameras is to set a 3D
view to a camera view and then use the 3D view navigation tools to
change the camera’s position. As you navigate in the 3D view, the
camera is subject to any transformations that are necessary to keep its
interest in the center of its focal view.
Since positioning cameras is often a process of trial and error, you’ll
probably find yourself wanting to undo and redo camera moves.
• Press Alt+Z to undo the last camera move.
• Press Alt+Y to redo the last undone camera move.
If you’ve zoomed in and out too much and the perspective on your
camera needs a reset, press R. This resets the camera in the 3D view
under the mouse pointer.
Setting Camera Properties
The Camera property editor contains every parameter needed to define how a camera “sees” your scene. To open the camera property editor, select a camera whose properties you want to edit and choose Modify > Shader from the Render toolbar.
Field of View
The field of view is the angular measurement of how much the camera can see at any one time. By changing the field of view, you can distort the perspective to give a narrow, peephole effect or a wide, fish-eye effect.
In the first example image, the camera’s Vertical field of view was made large enough to accommodate the entire building; the Horizontal field of view was automatically calculated based on the aspect ratio. Using the same camera in the same location with a much smaller Vertical field of view, only a small part of the building is visible.
Camera Format
The camera’s “format” refers to the picture standard that the camera is using and the corresponding picture ratio. You can also specify a custom picture standard with a picture ratio that you define. The default camera format is NTSC D1 4/3 720x486, with a picture ratio of 1.333, but several standard NTSC, PAL, HDTV, Cine, and Slide formats are also available.
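The automatic calculation of a horizontal field of view from a vertical one and the picture ratio follows from standard trigonometry. A minimal sketch (the function name is invented; this is not XSI code):

```python
import math

def horizontal_fov(vertical_fov_deg, aspect_ratio):
    """Derive a horizontal field of view (degrees) from a vertical one
    and the picture (aspect) ratio."""
    half_v = math.radians(vertical_fov_deg) / 2
    # On the image plane, the horizontal half-extent is aspect_ratio
    # times the vertical half-extent.
    half_h = math.atan(aspect_ratio * math.tan(half_v))
    return math.degrees(2 * half_h)
```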
Setting Clipping Planes
You can use clipping planes to set the minimum and maximum viewable distances from the camera. Objects outside these planes are not visible.
By default, the near plane is very close to the camera and the far plane is very far away, so most objects are usually visible. You can set clipping planes to display or hide specific objects.
This is a camera with no clipping planes set, which means the resulting image (right) shows every object in the scene.
This is a camera with near and far clipping planes set. The near plane is between the first two buildings and the far clipping plane is between the last two buildings. Everything before the near plane is invisible and everything beyond the far clipping plane is also invisible, as seen in the resulting image (right).
Lens Shaders
Lens shaders are used to apply a variety of different effects to everything that a camera sees. Some lens shaders create generalized effects, such as depth of field, cartoon ink lines, or lens distortion. Others create more localized effects such as lens flares. Still others are more utility oriented, and do things like emulate real-world camera lenses or render depth information.
Lens shaders can be used alone, or in conjunction with other lens shaders. For example, you might want to render a bulge distortion and depth of field simultaneously. You can apply lens shaders to cameras as well as passes.
Lens shaders are applied via the shader stack on the Lens Shaders tab of the camera’s property editor. The stack lists every shader applied to the camera; buttons let you apply a shader to the camera, remove a shader from the stack, and open the selected shader’s property editor.
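The near/far test described above amounts to a simple depth comparison. A minimal sketch, using a hypothetical mapping of object names to camera-space depths (not an XSI data structure):

```python
def clip_objects(objects, near, far):
    """Keep only objects whose depth along the camera's view axis lies
    between the near and far clipping planes."""
    return {name: depth for name, depth in objects.items()
            if near <= depth <= far}
```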
The images below and beside show this scene
rendered using three different lens shaders.
Toon Ink Lens shader
Lens Effects Shader (Fisheye distortion setting)
Depth of Field shader
Motion Blur
Motion blur adds realism to a scene’s moving objects by simulating the
blur that results from objects passing in front of a camera lens over a
specified period of exposure. In XSI, you can easily achieve a
photorealistic motion blur effect for every object and/or camera in
your scene.
You can apply motion blur properties to groups if you wish to toggle
motion blur for several objects at once. You can also apply them to
cameras. This is useful when both the camera and scene objects are
moving, but you only want the blur caused by the object’s movement.
Rendering Motion Blur
Motion blur is active for the scene by default. To view the motion blur
of objects in a scene, activate the motion blur settings in the render
region options and/or the render pass options. As long as these options
are on and you have a moving object in your scene, the motion blur is
visible.
In the Render Manager, set the motion blur Speed for the scene. This
setting specifies the time interval (usually between 0 and 1) during
which the geometry and any motion transformations and motion
vectors are evaluated for the frame. The motion data is then pushed to
the renderer (by default mental ray).
Setting the Speed value to 0 turns motion blur off. Longer (slower)
shutter speeds (an interval greater than 0.6) create a wider and/or
longer motion blur effect, simulating a faster speed. Shorter (quicker)
shutter speeds (an interval less than 0.3) create subtler motion
blurs.
Creating Motion Blur
To control motion blur for a specific object in a scene, you must assign
it a motion blur property. This is primarily useful when you want to
force motion blur off for a given object, or when you have a few objects
that need deformation motion blur.
To create the motion blur
property, select one or
more objects and choose
Get > Property > Motion
Blur from the Render
toolbar. This creates a
motion blur property for
the selected objects.
In the first image (left), a quick shutter speed (< 0.1) is used, then a slower
shutter speed (middle), and finally (right) a very slow shutter speed (> 0.6).
You can also specify an Offset for the shutter’s time interval, which
allows you to shift the motion blur trails, even extending them into later
frames. Additionally, you can define where within the frame the blur is
evaluated and rendered.
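The relationship between Speed, Offset, and the shutter interval can be sketched as follows. This is an illustrative formula based on the description above, not XSI's documented internals:

```python
def shutter_interval(frame, speed, offset=0.0):
    """Map a Speed value and an Offset to a shutter open/close interval
    around a frame. Speed is the interval length; Offset shifts where
    in the frame the blur is evaluated."""
    open_time = frame + offset
    close_time = open_time + speed  # speed 0 collapses the interval: no blur
    return open_time, close_time
```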
Section 19
Rendering
After adjusting all of your lights and objects and
defining your camera settings, you’re ready to render
out your scene. Whether you’re rendering a single
frame or hours of animation, rendering a scene is like
developing a photograph. The process is often done
more than once and you will most likely have to
tweak and adjust your options to achieve the look
you set out to create.
What you’ll find in this section ...
• Rendering Overview
• Render Passes
• Setting Render Options
• Selecting a Rendering Method
• Different Ways to Render
Section 19 • Rendering
Rendering Overview
The process of rendering out your scenes can vary considerably from
project to project. However, there are certain general steps that you’ll
need to follow whenever you want to render a scene. Here is a typical
sequence of tasks you might follow when rendering:
1. Set up render passes and define their options.
Render passes let you render different aspects of your scene
separately, such as a matte pass, a shadow pass, a highlight pass, or
a complete beauty pass. You can define as many render passes as
you want: within each pass, you can create partitions of lights and
objects, then apply shaders and control their settings together.
2. Set up render channels and define their options. These allow you to
output different information about the pass to separate files.
3. Set rendering options.
All objects, including lights and cameras, are defined by their
rendering properties. For example, you can determine whether a
geometric object is visible, whether its reflection is visible, and
whether it casts shadows. Rendering properties can be set per pass
as well.
4. Preview the results of any modifications.
The viewports can display your scene in different display modes,
including wireframe, hidden-line removal, shaded, and textured. In
addition, you can view any portion of your scene in a viewport,
rendered with mental ray, by defining a render region. Or preview a
full frame using Render Preview.
5. Render the passes and their render channels.
After previewing a few rendered frames, you can render on your
local computer or distribute the render across a network of
computers. You can render interactively using the options available
in the XSI interface, or from the command-line using xsi -render
(batch), xsi -script (batch with scripts), and the ray3.exe options.
6. Composite and apply effects to passes using XSI Illusion, a fully integrated compositing and effects toolset. You can also use a post-production tool such as Avid®|DS.
XSI and mental ray®
SOFTIMAGE|XSI uses mental ray as its core rendering engine. mental
ray is fully integrated in XSI, meaning that most mental ray features are
exposed in XSI’s user interface, and are easy to adjust — both while
creating a scene and during the final renderings. Full integration with
mental ray also allows artists to generate final-quality preview renders
interactively in 3D views, using the render region.
Distributed Rendering
Distributed rendering is a way of sharing rendering tasks among several
networked machines. It uses a tile-based rendering method where each
frame is broken up into segments, called tiles, which are distributed to
participating machines. Each machine renders one tile at a time, until
all of the frame’s tiles are rendered and the frame is reassembled. By
spreading the workload this way, you can decrease overall rendering
time considerably.
Once you’ve set up a distributed rendering network, rendering tasks are
distributed automatically once a render is initiated on a computer. The
initiating computer is referred to as the master and the other
computers on the network are referred to as slaves. The master and
slaves communicate via a mental ray service that listens on a designated
TCP port and passes information to the mental ray renderer.
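The tile-splitting idea can be sketched as follows. This is an illustration of the concept only; mental ray's actual scheduler hands out new tiles as machines finish, so the round-robin policy shown here is a simplification:

```python
def make_tiles(width, height, tile_size):
    """Break a frame into rectangular (x, y, w, h) tiles; edge tiles
    are clipped to the frame boundary."""
    tiles = []
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            tiles.append((x, y,
                          min(tile_size, width - x),
                          min(tile_size, height - y)))
    return tiles

def assign_round_robin(tiles, machines):
    """Distribute tiles across machines one at a time (a simplified
    stand-in for the master/slave scheduling described above)."""
    jobs = {m: [] for m in machines}
    for i, tile in enumerate(tiles):
        jobs[machines[i % len(machines)]].append(tile)
    return jobs
```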
Render Passes
A render pass creates a layer of a scene that can be composited with any
other passes to create a complete image. Passes also allow you to
quickly re-render a single layer without re-rendering the entire scene.
Later, you can composite the rendered passes back together, making
adjustments to each layer as needed.
This photograph (background pass) is the
background scene over which the dinosaur
will be composited.
Each scene can contain as many render passes as you need. When you
first create a scene in XSI, it has a single pass named Default_pass. This
is a “beauty pass” that is set to render every element of the scene. You
can create additional passes to render specific elements and attributes as
needed.
This image is the composite of all these passes.
Rendering in passes allows you to tweak each isolated
element separately without having to re-render your scene.
This pass is a rendered image of the dinosaur.
Compositing it over the background would
make the scene rather flat and unrealistic.
The matte pass “cuts out” a section of the
rendered image so another image can be
composited over or beneath it.
The specular pass is used to capture
an object’s highlights.
The shadow pass isolates the scene’s shadows
so you can composite them in later. This
allows you to edit a shadow’s blur, intensity,
and color without any additional rendering.
Render Pass Workflow
If you want to render or edit only certain aspects or areas of your scene, the following steps provide an overview of how to use render passes:
1. Create and name a new render pass. You can also save and re-use your passes.
2. Select the pass to be edited, using either the explorer or the Pass > Edit > Current Pass command from the Render toolbar.
3. Define partitions and use them to organize and edit the objects and lights in your render passes. How you divide elements into partitions depends on what effect you want to achieve with the pass.
4. Specify the active camera for the pass.
5. Apply shaders (including special effects such as glows, environments, and volumic effects) to the pass and its partitions.
6. If necessary, define an override for a partition. An override lets you control an object’s parameters using shaders without replacing any of an object’s material properties.
7. Set rendering options for the objects in each pass partition.
8. Set rendering options for each pass.
9. After you have set up render passes, you can render them.
10. You can then composite and apply effects to the passes using XSI Illusion.
Creating Passes
You will most likely want to create several passes as your scene grows in size and complexity. You can create a variety of pass types from the Render toolbar’s Pass > Edit > New Pass menu.
Setting the Current Pass
The current pass is the pass to which all pass and partition properties are applied. The current pass is also the pass displayed in 3D views when the Render Pass Camera view is chosen from the view’s View menu.
To set the current pass, click the arrow beside the Pass selection menu on the Render toolbar. Then, from the pass list, choose the render pass you want to set as current.
Setting the Pass Camera
You can specify the camera you want to use for each render pass. The active camera provides the viewpoint from which the pass is rendered.
To set the current pass’ camera, choose Render > Render Manager to open the Render Manager for the current pass. On the Output > Output page, choose a camera from the Pass Camera list, which lists all of a scene’s cameras.
Viewing Passes and Partitions in the Explorer
In the explorer, you can set the scope to Passes (press P) to see a hierarchical list of all of the render passes in your scene, their contents, and their properties.
A Pass Camera node appears as a sub-node of each render pass. This doesn’t signify that a new camera is created with each pass: the Camera node represents the camera for that pass only.
Creating Partitions
A partition is a division of a pass that behaves like a group. There are two types of partitions: object and light. Light partitions can only contain lights, and object partitions can only contain geometric objects.
Placing objects in partitions allows you to control their attributes by modifying them at the partition level rather than at the individual object level. The modifications affect only the objects in the partition for the specific render pass to which the partition belongs. This allows you to change object attributes on a per-pass basis.
Each pass has at least two default partitions: a background objects partition that contains most or all of the scene’s objects, and a background lights partition that contains most or all of the scene’s lights. Background partitions usually contain objects that aren’t modified in the pass. You can add as many additional partitions as you need for a pass, but an object can only be in one partition per pass.
You can create an empty partition by using the Pass > Partition > New command on the Render toolbar and then add elements to it. Or you can select some objects and choose the same command to create a partition that automatically includes these objects. Either way, you can add objects to a partition, or remove objects from one.
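The rule that an object can only be in one partition per pass can be illustrated with a small sketch. This is a hypothetical data model, not XSI's internal representation:

```python
def add_to_partition(pass_partitions, partition, obj):
    """Add an object to a partition within one pass, removing it from
    any other partition in the same pass first (one partition per
    object per pass)."""
    for members in pass_partitions.values():
        members.discard(obj)
    pass_partitions.setdefault(partition, set()).add(obj)
```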
Applying Shaders to Passes and Partitions
You can apply environment, volume, and output shaders to an entire pass using the shader stacks in the pass’ property editor. The applied shaders are listed in the stack; buttons let you apply a shader, remove a shader from the stack, and open the selected shader’s property editor.
When you apply shaders to partitions, they override the shaders applied directly to objects in the scene, but only for the pass. This means you can change the properties of objects in a particular pass without losing the properties of objects as defined for the whole scene. Once you have applied a shader to a partition, you can open its property editor and change values as necessary.
Using Overrides
An override lets you redefine certain parameters in a pass partition, group, or hierarchy. For example, if a scene contains several hundred objects and you want to edit each object’s transparency value without reapplying a new material, you would create a pass partition that contains all the objects you want to modify, and define an override property linked to all of their materials’ transparency parameters. The override affects only the desired parameter(s) and leaves the other values untouched.
Overrides are often used with render passes to get control over a specific parameter. When using the pass presets (such as highlight, caustic, and RGB Matte), overrides are used to isolate specific attributes or areas of your scene. Shader overrides are extremely useful if you want to add properties to an existing material. Using the example below (texturing a specular value), if you were to apply the texture directly to the partition (no override), it would replace any textures or materials assigned to your objects.
Objects with a texture and diffuse, ambient, and specular values: notice how the dinosaur has a texture with a bump map applied. If your client suddenly asks you to make all of the dinosaurs in your scene black, there is no need to start from scratch.
Override: You can modify specific parameters in an object’s material and/or texture with an override. In this case, an override was applied to remove the ambient and diffuse values, leaving only the specular and the bump map.
No override: The same objects but without an override. Instead, a Constant Black surface shader was applied, replacing the diffuse and ambient values. Notice how the texture has been overridden and the bump value lost.
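Conceptually, an override behaves like a per-pass merge of parameter values: overridden parameters win, and everything else comes from the object's own material, which is left untouched. A minimal sketch (invented names, not the XSI API):

```python
def effective_parameters(material, override):
    """Compute the parameter values used for one pass: parameters named
    in the override replace the material's values; all others are kept.
    The original material dictionary is not modified."""
    result = dict(material)   # start from the object's own material
    result.update(override)   # only the overridden parameters change
    return result
```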
Setting Render Options
Before you do any kind of rendering, you’ll need to set some render
options. Render options can define many aspects of how a scene will be
rendered, from which camera is used to render to how antialiasing and
motion blur are applied.
Setting up a scene for rendering requires that you set render options at
three different levels: the scene level, the pass level, and the renderer
level. You can manage each of these levels of render options using the
Render Manager, as shown below.
The Render Manager
A pass’ render options are defined in the Render Manager (choose
Render > Render Manager from the Render toolbar). The render
manager provides a convenient view of all the important rendering
options that will help you fine-tune and output your scene.
The render manager also gives you quick access to your rendering
preferences, and a summary of the render options for your passes.
Key elements of the Render Manager:
• Sorts passes alphabetically or by creation order.
• Buttons to render the current pass, selected passes, or all passes in the scene.
• Sets the current pass.
• Refreshes the render manager.
• Displays render options for the current pass.
• Displays the global options (scene and renderers), as well as the preferences used for new scenes.
• Displays render options for the renderer of the current pass.
• Displays a summary of render options for each pass.
Basics • 301
Section 19 • Rendering
Render Channels
Render channels are a mechanism for outputting multiple images, each
containing different information, from a single pass. When you render
the pass, you can specify which channels should be output in addition
to the full pass. By default a Main render channel is declared for every
pass (you can think of it as the “beauty” channel rendered for each
pass). You can use these images at the compositing stage, the same way
you would use any render pass.
This scene defines six preset
render channels, each
extracting specific attributes
of the objects’ surface
materials.
Any combination of these
channels can be rendered
with the pass.
The advantage of using render channels is that they are easy to define
and quick to add to any pass. Preset render channels allow you to
isolate scene attributes that are commonly rendered in separate passes.
You do not need to create complex systems of partitions and overrides
to extract a particular scene attribute. All you need is your default pass
and you can quickly output the preset diffuse, specular, reflection,
refraction, and irradiance render channels.
In effect, render channels allow you to turn a single pass into multiple
passes without any complicated setup work.
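The idea can be sketched in a few lines of Python. The per-pixel values and the additive split into components below are made up for illustration; they are not mental ray’s actual shading math:

```python
# Sketch of the render-channel idea: a single shading computation can
# emit several images at once, each isolating one component of the
# result, plus a "main" (beauty) image. Values are illustrative.

def shade(pixel):
    # Hypothetical per-pixel component values.
    return {"diffuse": 0.4, "specular": 0.2, "reflection": 0.1}

channels = {"diffuse": [], "specular": [], "reflection": [], "main": []}

for pixel in range(4):  # a tiny 4-pixel "image"
    parts = shade(pixel)
    for name, value in parts.items():
        channels[name].append(value)
    channels["main"].append(sum(parts.values()))  # the "beauty" channel

print(channels["main"])  # each main pixel combines all the components
```
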
Controlling Aliasing
Aliasing refers to artifacts that occur in a rendered 3D image when the
scene has not been sampled enough to accurately represent it in pixels.
Because it depends more on contrast and image detail than on image
size, aliasing can occur at any resolution. One of the most common
aliasing artifacts is “jaggies”, or jagged edges, on rendered objects,
especially where the edges are curved or diagonal.
Another common aliasing artifact is “popping,” which occurs when an
object—or part of one—is small enough to intermittently fall between
sampling rays. As a result, it “pops” in and out of view as it moves
across the scene. Moiré patterns, which create a shimmering effect in
distant regions of high-detail textures or surfaces, are another common
artifact.
The preset render channels include Refraction, Reflection, Irradiance,
Ambient, Diffuse, and Specular.
Fortunately, aliasing can be visually curbed by antialiasing. Antialiasing
is a method of smoothing out rough or jagged edges of images, and
other aliasing artifacts, to produce a more polished look. It uses a
mathematical process that subsamples each pixel, then averages the
values of neighboring samples to get the final pixel color. Further
sampling occurs when the difference between samples exceeds a
defined threshold.
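The adaptive subsampling described above can be sketched as follows. The one-dimensional scene function and the threshold value are stand-ins for a real renderer’s per-sample shading and contrast test:

```python
# A minimal sketch of adaptive supersampling: sample at sub-pixel
# positions, and if neighboring samples differ by more than a threshold,
# sample more finely before averaging the results into the pixel value.

def scene(x):
    # A hard edge at x = 0.5: black on one side, white on the other.
    return 1.0 if x >= 0.5 else 0.0

def pixel_value(left, right, threshold=0.1, depth=0, max_depth=4):
    a, b = scene(left), scene(right)
    if abs(a - b) <= threshold or depth >= max_depth:
        return (a + b) / 2.0
    # Samples disagree: subdivide and average the two halves.
    mid = (left + right) / 2.0
    return (pixel_value(left, mid, threshold, depth + 1, max_depth) +
            pixel_value(mid, right, threshold, depth + 1, max_depth)) / 2.0

# Averaging across the edge yields an intermediate gray instead of an
# all-or-nothing value -- the essence of antialiasing.
print(pixel_value(0.0, 1.0))
```
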
No antialiasing: This image was rendered with no antialiasing. Notice
the jagged edges (aliasing) along the sphere’s surface.
Antialiasing: This image uses antialiasing to achieve a smoother-looking
curve without changing the sphere’s geometry.
Selecting a Rendering Method
You usually render a scene using the mental ray rendering software,
which is built into XSI. You can also use the hardware renderer, which
renders whatever is displayed in a 3D view (such as a viewport).
The mental ray rendering software uses two rendering methods:
scanline and raytracing. Normally, these methods are used together.
mental ray uses the scanline method until an eye ray changes direction
(due to reflection, refraction, and so on), at which point it switches to
the raytracing method. Once it switches, it does not go back to scanline
until the next eye ray is fired.
Without scanline rendering, the render is usually slower. Without
raytracing, transparency rays are rendered, but reflection rays cannot
be cast and refraction rays are not computed.
On the Renderer > Rendering page in the Render Manager, you can set
options related to each of these rendering methods, and even turn
them on and off if necessary.
Scanline
Scanline rendering is a rendering method used to determine primary
visible surfaces. Scene objects are projected onto a 2D viewing plane,
and sorted according to their X and Y coordinates. The image is then
rendered point-by-point and scanline-by-scanline, rather than
object-by-object. Scanline rendering is faster than raytracing but does not
produce as accurate results for reflections and refractions.
This scene was rendered using
scanline rendering only. Notice
how the transparency has little
depth, and there is no reflection or
refraction.
BSP Raytracing Acceleration
The binary space partitioning (BSP) tree method divides the scene into
cubes to reduce the number of computations. It builds a hierarchical
spatial data structure by recursively subdividing a bounding volume
surrounding the entire scene.
The resulting tree consists of branch nodes that correspond to a
subdivision of a bounding volume into two smaller volumes and leaf
nodes that contain the geometric primitives (triangles).
Raytracing
Raytracing calculates the light rays that are reflected, refracted, and
obstructed by surfaces, producing more realistic results. Each refraction
or reflection of a light ray creates a new branch of that ray when it
bounces off an object and is cast in another direction. The various
branches of a ray constitute a ray tree. Each new branch can be thought
of as a layer: if you add together the total number of a ray’s layers, it
represents the depth of that ray.
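As a rough illustration, the growth of a ray tree with depth can be sketched like this. The surface description and the depth limit are hypothetical, not XSI parameters:

```python
# Sketch of a ray tree: each reflection or refraction spawns a new
# branch, and the number of bounces along a branch is that ray's depth.
# Real renderers stop at a configurable maximum depth.

def trace(surface, depth, max_depth=3):
    """Count the rays cast, branching at reflective/refractive surfaces."""
    if depth >= max_depth:
        return 1  # this ray terminates at the depth limit
    rays = 1
    if surface.get("reflective"):
        rays += trace(surface["hit"], depth + 1, max_depth)  # reflection branch
    if surface.get("refractive"):
        rays += trace(surface["hit"], depth + 1, max_depth)  # refraction branch
    return rays

glass = {"reflective": True, "refractive": True}
glass["hit"] = glass  # rays keep hitting glass, so both branches recurse

print(trace(glass, depth=0))  # rays grow quickly with depth
```
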
This scene was rendered using
the raytracing render method.
Notice how the glass’ reflections,
transparency, and refraction are
more realistic than with scanline
rendering.
This image shows how the BSP
algorithm divides the scene’s bounding
box into unevenly-sized sections so that
each leaf node contains roughly the
same number of triangles.
Tuning the BSP tree is an important part of optimizing a raytraced
rendering. Typically, you want to strike a balance that prevents having
too large a tree (too many branches) or overly large leaf sizes—both of
which can slow down rendering.
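The recursive subdivision can be sketched as follows. This toy version splits a one-dimensional list of triangle centroids at the median, which is one way to keep leaf sizes roughly even; a real BSP tree subdivides 3D bounding volumes, and the numbers here are made up:

```python
# Illustrative BSP-style subdivision: recursively split a set of
# primitives until each leaf holds at most `leaf_size` of them or
# `max_depth` is reached -- the two quantities you balance when tuning.

def build_bsp(tris, depth=0, leaf_size=2, max_depth=8):
    if len(tris) <= leaf_size or depth >= max_depth:
        return {"leaf": sorted(tris)}
    tris = sorted(tris)
    mid = len(tris) // 2
    return {
        "split": tris[mid],  # splitting position chosen at the median
        "left": build_bsp(tris[:mid], depth + 1, leaf_size, max_depth),
        "right": build_bsp(tris[mid:], depth + 1, leaf_size, max_depth),
    }

tree = build_bsp([0.1, 0.9, 0.4, 0.7, 0.2, 0.6])
print(tree["split"])  # the splitting position chosen at the root
```
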
Hardware Rendering
The XSI hardware renderer allows you to output a scene as it appears
when displayed in any 3D view whose viewpoint is that of the pass
camera. Most of the hardware rendering modes correspond to the 3D
views’ display modes (Wireframe, Shaded, Textured, and so on).
Hardware rendering is useful for generating previews of your scene
using all of the display options available in 3D views. It is also useful for
outputting realtime shader effects to file.
Using the Render Manager, you can activate the Hardware Renderer for
the entire scene (on the Globals > Scene tab) or for a pass (on the
Output > Output tab).
Different Ways to Render
There are several ways to render a scene, from single frame previews to
large sequences rendered to file. Some rendering methods are launched
from XSI’s interface, others from command-line interfaces.
Previewing Interactively with the Render Region
You can view a rendering of any section or object in your scene quickly
and easily using a render region. Rather than setting up and launching
a preview, you can simply draw a render region over any 3D view and
see how your scene will appear in the final render.
To draw a render region, press Q to activate the render region tool and
drag in any 3D view to define the region’s rectangle.
You can resize and move a render region, select objects and elements
within the region, as well as modify its properties to optimize your
preview. Whatever is displayed inside that region is continuously
updated as you make changes to the rendering properties of the
objects. Only this area is refreshed when changing object, camera, and
light properties, when adjusting rendering options, or when applying
textures and shaders.
Comparing Render Regions
The render region has memo regions that allow you to store, compare,
and recall settings. They look similar to the viewports’ memo cams, but
are not saved with the scene.
Middle-click to store, and click to display. The
currently displayed cache is highlighted in
white. Right-click for other options.
The left side shows
the stored region.
The right side
shows the current
settings.
Drag the swiper to show more or
less of one image or the other.
Because the render region uses the same renderer as the final render
(mental ray), you can set the region to render your previews at final
output quality. This gives you an accurate preview of what your final
rendered scene will look like.
Be careful when comparing render regions. You should do
this only when you are tweaking material and rendering
parameters, and not making other changes to the scene. If
you revert to previous settings, either accidentally or on
purpose, you will lose any modeling, animation, or other
changes you have made in the meantime.
Previewing a Single Frame
The Render > Preview command in the Render toolbar lets you
preview the current frame at fully rendered quality in a floating
window. The frame is rendered using the render options for the
current render pass or using the render region options defined in any
of the four viewports.
Rendering to File from the XSI Interface
Once you’ve set the render options for the render passes in your scene,
you can render those passes directly from the XSI interface. Since the
options are set, all you need to do is start the render. You have several
options, among them:
• To render all of your scene’s passes, click the Render Pass > All
button in the Render Manager, or choose Render > Render > All
Passes from the Render toolbar.
• To render the current pass, click the Render Pass > Current button
in the Render Manager, or choose Render > Render > Current Pass
from the Render toolbar.
• To render a selection of passes, select the passes in the explorer and
click the Render Pass > Selected button in the Render Manager, or
choose Render > Render > Selected Passes from the Render
toolbar. The passes are rendered one after the other.
Batch Rendering (xsi -render)
You can use xsi -render command-line options to render scenes
without opening the XSI user interface. In addition, you can export
render archives from the command line. The most common rendering
options are available directly from the command line, while other
options can be changed by specifying a script using the
-script option.
ray3.exe Rendering
You can also render scenes using the mental ray standalone, ray3.exe,
from a command line. Although many of the ray3.exe commands are
available in the XSI interface, you may want to use the ray3.exe
command-line tool to manually override options in exported MI files.
Ray3.exe lets you read and render MI2 files that you can export from
XSI. You can edit the MI2 files to define extra shaders, create objects,
swap textures, or perform other special effects.
To render with the ray3 executable, you need to export a scene to
the MI2 file format, then run the ray3 executable from the command
line to render the scene from the MI2 files.
Section 20
Compositing and
2D Paint
XSI Illusion is a fully integrated compositing,
effects, and 2D paint toolset that is resolution
independent and supports 8-, 16-, and 32-bit floating-point compositing.
You can use XSI Illusion operators to perform
compositing and effects tasks ranging from
tweaking the results of a multi-pass render to
creating complex special effects sequences.
The effects that you create are part of your scene:
they are accessible from the explorer and from
XSI’s scripting and animation features, and they
support clips and sources, as well as render passes.
What you’ll find in this section ...
• XSI Illusion
• Adding Images and Render Passes
• Adding and Connecting Operators
• Editing and Previewing Operators
• Rendering Effects
• 2D Paint
• Vector Paint vs. Raster Paint
• Painting Strokes and Shapes
• Merging and Cloning
XSI Illusion
The XSI Illusion toolset consists of three core views: the FxTree, where
you build networks of effects operators; the Fx Viewer, where you
preview the results; and the Fx Operator Selector, a powerful tool that
allows you to insert pre-connected operators into the FxTree.
Each of these views can be opened in a viewport or as a floating view
(choose View > Compositing > name of view from the main menu).
• Fx Tree: where you create networks of linked operators to composite
images and create effects. You can create multiple instances of the
FxTree workspace, called trees, to organize effects more efficiently.
• Fx Viewer: a 2D viewer in which you can preview each operator to see
how it contributes to the overall effect.
• Fx Operator Selector: lists all of the available compositing and effects
operators. Once you select an operator here, you can pre-set its
connections to existing operators in the Fx Tree and then
simultaneously insert and connect it in the Fx Tree.
• Fx Operators: operators are represented by nodes that you can link
together manually or connect beforehand using the Fx Operator
Selector.
There is also a compositing layout available from the View > Layout
menu. It contains the three core Fx tools arranged in a way that makes
it easy to build and preview effects.
Using this layout for compositing and effects work is usually more
efficient than simply opening the required views in viewports because
the non-compositing tools and views are mostly hidden.
Adding Images and Render Passes
Before you can composite anything, or create any effects, you need to
import images into the Fx Tree. There are several ways of doing this.
Getting File Input Operators
Importing images into the FxTree creates a File Input operator for each
imported image. The operator points directly to the image on disk
without creating an image source or clip.
When you import an image, the File Input operator’s properties are
automatically updated according to the image’s properties.
To import image files, click the Import Images button in the FxTree
menu bar, or choose File > Import Images from the FxTree menu bar.
A browser opens from which you can select an image to import.
Getting Render Passes
In the FxTree, you can import any rendered pass (or all of them at
once) from the Passes menu. A File Input node is created for each pass
that you import, and the file name, start frame, and end frame are all
based on the pass render options. The file extension is based on the
pass’ image format and output channels.
Select this option to add
all rendered passes to the
Fx Tree.
Select a pass to add it to
the Fx Tree.
Getting Image Clips
The Fx Tree has direct access to all of the image clips in your project.
Inserting image clips into the FxTree creates a pair of Image Clip
operators for each imported clip.
Image Clip operator pairs consist of two operators:
• Clip In (or From): reads from the image clip.
• Clip Out (or To): writes back to it.
You can modify the image clip itself by adding effects operators
between the Clip In and Clip Out operators. This updates the clip
wherever it is used in the scene.
The Clip In and Clip Out operators are primarily used to modify
images that are used outside of the Fx Tree. For an actual composite or
effect that you intend to render to file, it’s better to use File Input
operators.
To import image clips, select an image clip from the FxTree’s Clips
menu.
Setting Image Defaults
Before you begin building effects, you may want to adjust the Fx Tree’s
image defaults to conform to your chosen picture format. The image
defaults affect all operators that create an image (the Pattern Generator
operator, for example), and are applied when you opt to output an
operator with the default size, and/or bit-depth. Each tree that you
create has its own set of image defaults that specifies the width/height,
bit depth, and pixel ratio.
To set the image defaults, choose File > Tree Properties from the Fx Tree
menu.
Adding and Connecting Operators
The FxTree is where you create networks of linked operators to
composite images and create effects. Operators are represented by
nodes that you can link together manually, by dragging connection
lines, or connect beforehand using the Fx Operator Selector.
To build an effect in the FxTree:
1. Start by adding images and/or sequences to the Fx Tree. These are
the images that you want to composite together and/or build
effects on.
2. Next, add and connect the operators required to build your effect.
You can get any operator from the Ops menu and connect it by
dragging connection lines from other operators’ outputs to its
inputs. You can also use the operator selector to pre-define operator
connections before you insert the operators into the Fx Tree. Once
you define all of the needed connections, middle-click an empty
area of the Fx Tree workspace to add the operator.
3. Once you’ve built your composite/effect, you can render it out
using a File Output operator.
Operator connection icons:
• Green icons accept image inputs. You can connect almost any
operator to green inputs.
• Blue icons accept matte (A) inputs, which are generally used to
control transparency.
• Red icons are outputs, plain and simple.
Fx Tree menu: provides access to operators, render passes, image clips,
and Fx Tree tools and preferences.
Navigation control: allows you to navigate in the Fx Tree workspace
when a network of operators becomes too large to display all at once.
Dragging in the rectangle pans in the Fx Tree workspace; dragging the
zoom slider up and down zooms in and out.
Fx Operator Selector: a tool for inserting operators into the Fx Tree.
Select an operator from the list, then consecutively middle-click the
existing operators you wish to connect to its inputs and output.
Middle-click in an empty area of the Fx Tree workspace to add the
operator.
Operator information: positioning the mouse pointer over an operator
displays information at the bottom of the Fx Tree.
If you need to build several different networks, you can create multiple
instances of the FxTree workspace, called trees, to organize them
more efficiently. Each tree is a separate operator in the scene with its
own node in the explorer.
Fx Operator Types
Whether you’re compositing a simple foreground image over a background, or applying a complex series of effects to an image, every step of the
process is accomplished by an operator in the FxTree. By connecting these operators together, you can create composites and special effects.
• Image: Image operators act as the in and out points for each effect in
the FxTree.
  - File Input operators are placeholders for images in the tree.
  - Paint Clip operators are used to import images into the FxTree for
raster painting.
  - Vector Paint operators are used to create vector paint layers in the
FxTree.
  - PSD Layer Extract operators extract a single layer from a .psd image.
  - File Output operators let you set the output and rendering options
for your composites and effects.
• Color Curves: Use the Color Curves operators to graphically adjust
color components of images in the FxTree, and to extract mattes for
foreground images so that you can composite them over background
images.
• Grain: Grain operators alter the appearance of film grain in your
image sequences. You can add and remove grain, as well as adding and
removing noise.
• Optics: Optics operators create optical effects in images in the FxTree.
These include depth-of-field, lens flares, and flare rings.
• Filter: Filter operators let you control the appearance of images in the
FxTree. Among other things, they can reproduce the effects of different
lens filters, apply blurs, and add or remove noise.
• Distort: Distort operators simulate 3D changes to images in the
FxTree. Use these operators to apply distortions and transformations.
• Transform: Transform operators adjust the dimensions and/or
position of images in the FxTree. Besides cropping and resizing images,
you can also use the 3D Transform operator to transform an image in a
simulated 3D space, as well as warp and morph images.
• Plugins: The Plugins operators offer a variety of patterns and special
effects that you can use in your FxTrees. All of the Plugins operators are
custom operators, called UFOs, that were created using the UFO SDK.
• Painterly Effects: Painterly Effects operators allow you to apply a
variety of classic artistic effects to images in the FxTree. The XSI
compositor’s three sets of Painterly Effects operators let you apply
effects like Chalk & Charcoal, Watercolor, Bas Relief, Palette Knife,
Stained Glass, and many more, to images in the FxTree.
• Composite: Composite operators offer you several ways to combine
foreground images with a background image to produce a composited
result. Most compositing operators require a foreground image, a
background image, and an internal or external matte.
• Retiming: Retiming operators allow you to change the timing of
image sequences. You can, for example, convert from 24 to 30 frames
per second and vice versa, interlace and de-interlace clips, and change
the duration of clips by dropping frames or combining them together
in different ways.
• Transition: Transition operators create animated changes from one
image clip to another. You can use transition operators to apply
dissolves, fades, wipes, pushes, and peels.
• Color Adjust: Color Adjust operators let you color correct clips in the
FxTree. You can modify and animate hue, saturation, lightness,
brightness, contrast, gamma, and RGB values. You can also perform
various operations like inverting images, premultiplying images, and
so on.
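The foreground-over-background combination that most compositing operators perform can be sketched as the classic matte-weighted blend. The single-channel pixel values below are purely for illustration:

```python
# Sketch of the "over" composite: the matte (alpha) controls, per pixel,
# how much foreground covers background.

def over(fg, bg, matte):
    """Composite foreground over background using a matte."""
    return [f * a + b * (1.0 - a) for f, b, a in zip(fg, bg, matte)]

foreground = [1.0, 1.0, 1.0]   # a white element
background = [0.0, 0.5, 1.0]
matte      = [1.0, 0.5, 0.0]   # opaque, semi-transparent, transparent

print(over(foreground, background, matte))  # [1.0, 0.75, 1.0]
```

Where the matte is 1.0 only the foreground shows, where it is 0.0 only the background shows, and intermediate values blend the two.
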
Editing and Previewing Operators
A big part of building an effect is previewing operators and editing
their properties. As you adjust an operator’s parameters, you can see
the effect of your changes reflected in the Fx Viewer. There are several
ways to edit and preview operators, but the easiest is to use the View
and Edit hotspots that appear when you position the mouse pointer
over an operator.
The Edit hotspot opens the operator’s property editor, while the View
hotspot previews the operator in the Fx Viewer. This allows you to
open one operator’s property editor while you’re previewing another
operator. For example, you might want to see how color correcting one
image affects the composited result of that image and another one.
Key elements of the Fx Viewer:
• Operator Info: displays info about the operators being viewed and
edited.
• Navigation Tool: drag in the rectangle to pan; drag on the slider to
zoom.
• Edit hotspot: click it to open the operator’s property editor.
• View hotspot: click it to preview the operator in the Fx Viewer.
• Display Area: displays the operator that you’re previewing.
• Compare Area: displays a portion of one image while you’re editing
another image. This is useful for seeing one operator’s effect on
another. Buttons let you toggle the Compare Area and update it with
the current image.
• Other controls let you split and switch viewers A and B, display the
image’s alpha channel as a red overlay, isolate one of the image’s color
channels, display the current image at full size, force the current image
to fit in the viewer, and mix the view with the Merge Source.
Image courtesy of Ouch! Animation
Rendering Effects
Once you have your effect looking the way you want it, you can render
it and output it to a variety of different image formats using a File
Output operator.
The File Output property editor is where you set all of the effect’s
output options, including the picture standard, file format, and range
of frames.
In the File Output property editor:
• Enter a valid filename, path, and format.
• Specify the range of frames to render.
• When the sequence is rendered, you can open a flipbook to view it.
Once you’ve set the output options, all you need to do is click the
Render button to start the rendering process. The Softimage|FX
Rendering window gives you information regarding the rendering of
the sequence.
Rendering Effects From the Command Line
You can also render effects non-interactively using XSIBatch. Make
sure that your script contains the following line:
RenderFxOp "OutputOperator", False
where OutputOperator is the name of the FileOutput operator that
you want to render. The second argument, False, specifies that the Fx
Rendering dialog box should not be displayed during rendering.
2D Paint
XSI’s compositing and effects toolset includes a 2D paint module
that offers 8- and 16-bit raster and vector painting. To paint on
images, you set up paint operators in the FxTree and then paint on
them in the Fx Viewer, where a Paint menu gives you access to a variety
of paint tools.
You work with paint operators the same way you work with other Fx
operators, making it easy to touch up images, fine-tune effects, edit
image clips, paint custom mattes, create write-on effects, and so on.
You can also use blank paint operators to paint images from scratch.
• Paint Menu: when you edit a paint operator, the Paint menu is added
to the Fx Viewer, giving you access to all of the paint-related commands
and tools.
• Fx Paint Brush List: lists all of the paint brushes available for painting
strokes. All of the brushes are presets based on the same core set of
properties. The Fx Paint Brush List is an optional view in the
compositing layout.
• Paint Operators: behave exactly like other operators in the Fx Tree,
and can be connected manually or using the operator selector.
• Fx Viewer: when you edit and preview a paint operator, the Fx Viewer
is where you actually paint strokes and shapes.
• Fx Color Selector: allows you to choose foreground and background
paint colors using a variety of different color models. To open it,
position the mouse pointer in the Fx Viewer and press 1, or choose
View > Compositing > Fx Color Selector from the main menu.
Vector Paint vs. Raster Paint
XSI’s paint tools allow for both vector- and raster-based painting. Each
has its advantage, as well as its own operator to use in the Fx Tree.
Vector Paint
Vector painting is a non-destructive, shape-based
process where every brush stroke is editable even
after you’ve painted it. Rather than painting
directly on an image, you paint on a vector
shapes layer that is composited over an input image or other operator.
In the Fx Tree, you add a vector shapes layer on top of an image by
connecting the image’s operator to a Vector Paint operator’s input. You
can then paint on the vector shapes layer in the Fx Viewer.
A Vector Paint operator has a small paint brush/shape icon in its
upper-left corner. This differentiates it from non-paint operators,
which you cannot paint on, and from raster paint operators, which use
a different icon.
One convenience of painting in vector paint operators is that you don’t
have to manage changes to each frame the way you do with raster paint
clips. Every shape in a vector paint operator is stored as part of the
operator’s data, and is animatable. This allows you to paint shapes and
strokes that stay in the image for as many frames as you need.
Vector paint operators are blank by default and do not have source
images. Instead, they are more like other Fx operators in that they have
both an input and an output and use other operators’ outputs as their
sources. However, there’s nothing preventing you from keeping them
blank and painting their contents from scratch.
Mask Shapes
The Mask Shapes operator is an alpha-only
version of the Vector Paint operator. You can use
the vector paint tools in a Mask Shapes operator
to paint a matte that you can use in any other Fx operator.
Raster Paint
Raster painting is the process of painting directly
on an image. It is destructive, meaning that each
time you paint a stroke, you’re directly altering
the image’s pixels. Once you’ve painted on the
image, the stroke or shape cannot be moved or altered (unless, of
course, you paint a new stroke over it).
In the Fx Tree, you can paint on images or sequences (but not on
movie files such as .avi or QuickTime) loaded in a Paint Clip operator,
which is available from the Ops menu. You can also insert a blank paint
clip and configure it later.
A Paint Clip operator has a small paint brush icon in its upper-left
corner.
Managing Frames in Raster Paint Sequences
When you paint on a sequence, you can manage changes to frames
using the tools on the Modified Frames tab of a Paint Clip’s property
editor. You can revert painted frames back to their last saved state, and
save changes when you’re ready to commit them.
The Modified Frames tab is where
you manage painted frames.
The Save Frames controls allow
you to save changes to frames.
The Revert Changes controls
allow you to revert frames to
their original state.
The Modified Frames list lists, in
numerical order, every unsaved
frame that you’ve changed.
Painting Strokes and Shapes
At its most basic, painting on an image is a simple matter of inserting a
paint operator in the FxTree, choosing a paint color, brush and tool,
and using the mouse pointer to paint in the Fx viewer.
The following is a general overview of the paint process, intended to
give you an idea of the workflow, as well as a sense of where to set the
options necessary for defining strokes and shapes.
1. Add a paint operator to the Fx Tree workspace and edit its
properties. This activates the Fx Viewer’s Paint menu, giving you
access to paint tools and options.
2. Set the active paint brush from the Fx Paint Brush List: choose a
brush category from the brush-type list, then select a brush from
the list. The active paint brush is used by any paint tool that can
paint a stroke (the paint brush tool, the line tool, the shape tools,
and so on). If necessary, edit the brush properties. To open the
brush property editor, position the mouse pointer in the Fx Viewer
and press 2.
3. Choose the foreground and (if needed) the background color from
the Fx Color Selector. The five most recently used colors are stored
in the selector for easy access.
4. Choose a paint tool from the Fx Viewer’s Draw menu. If necessary,
edit the tool properties. To open the tool property editor, position
the mouse pointer in the Fx Viewer and press 3.
5. Paint on the operator in the Fx Viewer.
The Flood Fill tool (not shown) fills pixels that
you click, and neighboring pixels of similar
color, with the specified foreground color.
The Draw Rectangle and Draw Ellipse
tools are unique in that they are the only
shape tools that work in both raster paint
clips and vector paint operators (all other
shape tools are vector-paint only). In either
mode, the shapes are drawn using the
current colors and paint brush settings.
The Brush tool is the most
basic tool for painting
brush strokes. You use it to
paint on images as if you
were using a real paint
brush, or one of the
myriad tools simulated by
the brush presets in the Fx
paint brush list. Painting is
a simple matter of clicking
and dragging on a paint
operator’s image.
The Mark Out Shape tool allows you to
create an editable vector shape by clicking to
define the locations of the shape's points. As
you add points, each new point is connected
to the previous point by a line segment. The
line segment’s curve, or lack thereof, depends
on the type of shape you’re drawing: Bézier,
B-Spline, or Polyline.
The Mark Out Shape tool is only available in vector paint operators.
The Line tool, as you might imagine,
allows you to draw straight lines. This
is especially useful for painting wires
out of an image or sequence.
In vector paint operators, drawing a
line creates a two-point color shape
drawn using the outline (stroke) only.
The Freehand Shape tool allows you to
draw editable vector shapes as if you were
using a pen and paper. You need only drag
the paint cursor around the outline of the
shape that you wish to draw.
The Freehand Shape tool is only available in
vector paint operators.
6. If you are using vector paint operators, you can edit any
vector shapes that you’ve painted. The two images
below show the manipulators used to transform a
vector shape and to edit a vector shape’s points.
Merging and Cloning
Merging and cloning are both ways of painting using an image’s pixels
as the paint color. In the Brushes category of the Fx paint brush list,
you’ll find the Merge brush and the Clone brush, which you can use to
paint strokes and lines, or draw shapes that use a source image as the
paint color.
Merging

Merging is the process of painting pixels from a source image — called the merge source — onto the corresponding portion, or a different portion, of a destination image. This is useful for painting unwanted elements, like wires, out of images. It is also useful for painting new elements into images, like the clouds in the example below.

Cloning

Cloning is the process of painting pixels from one region of an image to a different region of the same image. This can be useful for duplicating elements in an image, as in the example below. It is also often used to paint out unwanted elements. For example, you can remove wires from a clear blue sky by painting over them with adjacent pixels.
In the merging example, the image of the clouds is set as the merge source and is painted into the image of the field. In the cloning example, the trumpet player and his shadow are cloned into the left side of the frame, using an offset between the source and destination cursors.

You can set any operator in the Fx Tree as the merge source by right-clicking it and choosing Set as Paint Merge Source from the menu. This adds a small paint-bucket icon (the Merge Source icon) to the operator to help you identify it as the merge source.
When you paint using the Clone brush, you’ll only see a result if you
use a brush offset. The offset is the distance between the area from
which you’re painting and the area to which you’re painting. You can
offset the brush in any direction and use any offset distance, as long as
both the source and destination cursors can be placed somewhere on
the target image simultaneously.
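The offset rule can be sketched in a few lines. This is an illustration of the idea, not XSI code: the image is a grid of values, and the function name and `radius` parameter are invented for the example. Pixels are copied onto the destination from the position displaced by the offset, and positions where either cursor falls off the image are skipped:

```python
def clone_stamp(image, dest_x, dest_y, offset_x, offset_y, radius=1):
    """Copy a square of pixels centered on (dest_x, dest_y) from the
    area displaced by (-offset_x, -offset_y) in the same image.

    Pixels whose source or destination falls outside the image are
    skipped, mirroring the rule that both the source and destination
    cursors must fit on the target image simultaneously.
    """
    h, w = len(image), len(image[0])
    src_x, src_y = dest_x - offset_x, dest_y - offset_y
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sx, sy = src_x + dx, src_y + dy
            tx, ty = dest_x + dx, dest_y + dy
            if 0 <= sx < w and 0 <= sy < h and 0 <= tx < w and 0 <= ty < h:
                image[ty][tx] = image[sy][sx]
    return image
```

With an offset of zero the stamp copies a pixel onto itself, which is why painting with the Clone brush shows a result only when an offset is used.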
Section 21
Customizing XSI
You can extend XSI in a variety of ways by
customizing it. Many customizations are too
involved to cover here, but you can get more details
in the XSI Guides and SDK Guides.
What you’ll find in this section ...
• Plug-ins and Add-ons
• Toolbars and Shelves
• Custom and Proxy Parameters
• Displaying Custom and Proxy Parameters in
3D Views
• Scripts
• Key Maps
• Other Customizations
Plug-ins and Add-ons
You can extend the functionality of XSI using plug-ins and add-ons:

• A plug-in is a customization, for example, a command or operator, implemented in a single file (possibly with a separate help file).

• An add-on is a set of related customization files stored together in an Add-on directory. It may consist of a toolbar and its associated commands, operators, properties, and so on. Add-ons can be packaged into a single .xsiaddon file to distribute to others.

Plug-ins and add-ons can be managed using the Plug-in Manager.

Plug-in Manager

The Plug-in Manager is the central location for managing your customizations. You can display the Plug-in Manager using File > Plug-in Manager or in the Tool Development Environment (View > Layouts > Tool Development Environment).

Installing a simple plug-in is as easy as copying the script or library file to the Plugins directory of your user or workgroup location.

Installing Add-on Packages

The easiest way to install (and uninstall) a packaged add-on is to use the Plug-in Manager.

To install an .xsiaddon

1. In the Plug-in Tree, right-click User Root or the first workgroup in the tree and choose Install .xsiaddon. If you want to install the add-on in a different workgroup, go to the Workgroup tab and move that workgroup to the top of the list. You can install add-ons only in the first workgroup.

2. In the Select Add-on File dialog box, locate the .xsiaddon file you want to install, and click OK.

• You can also install an add-on by dragging an .xsiaddon file to an XSI viewport. This installs the add-on in the User location or the first workgroup, depending on the value in the DefaultDestination tag of the .xsiaddon.

• The SDK Guides contain additional information about other methods of installing add-ons.

To uninstall an .xsiaddon

• In the Plug-in Tree, right-click an add-on and choose Uninstall Add-on.
Toolbars and Shelves
XSI lets you create and edit your own custom toolbars and shelves. This
gives you convenient access to commands, presets, and other files.
• Toolbars are floating windows that contain buttons for running commands or applying presets.

• Shelves contain tabs. Each tab can be a toolbar, display the contents of a file directory, or hold other items.

To create a new toolbar, choose View > New Custom Toolbar from the main menu.

• To add commands and tools, choose View > Customize Toolbar, select a command category, and drag items onto the toolbar. Use the Toolbar Widgets category to organize your toolbar.

• To add presets to the toolbar, drag them from a file browser (View > General > Browser).
Toolbars and shelves are stored as XML-based files with the .xsitb
extension in the Application\toolbars subdirectory of the user,
workgroup, or factory path.
At startup, XSI gathers the files it finds in these locations and adds them to the View > Toolbars menu. Toolbars and shelves found in your user location are marked with [user] in the menu, and those found in a workgroup location are marked with [workgroup].
You can remove toolbars and shelves stored in the user location.
Choose View > Manage, check any items you want to remove, and click
Delete. The items are not physically deleted but they are marked for
removal. When you exit XSI, the file extensions are changed to .bak so
they won’t be detected and loaded when you restart.
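The gathering step can be imitated with a short sketch. The directory layout (an Application\toolbars subdirectory, the .xsitb extension) comes from the text above; everything else — the function name, the scan order, and the label formatting — is an assumption for illustration:

```python
import os

def gather_toolbars(user_path, workgroup_paths, factory_path):
    """Collect .xsitb toolbar files from user, workgroup, and factory
    locations, labeling each entry the way the View > Toolbars menu
    does. Files renamed to .bak are ignored, so items marked for
    removal are not loaded. Returns (label, full_path) pairs.
    """
    found = []
    locations = ([(user_path, "[user]")]
                 + [(p, "[workgroup]") for p in workgroup_paths]
                 + [(factory_path, "")])
    for root, label in locations:
        toolbar_dir = os.path.join(root, "Application", "toolbars")
        if not os.path.isdir(toolbar_dir):
            continue
        for name in sorted(os.listdir(toolbar_dir)):
            if name.lower().endswith(".xsitb"):
                title = os.path.splitext(name)[0]
                found.append((f"{title} {label}".strip(),
                              os.path.join(toolbar_dir, name)))
    return found
```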
Custom Toolbars
You can create your own toolbar and use it to hold
commonly-used tools and presets. Tools and presets
are represented as buttons on the toolbar.
XSI also includes a couple of blank toolbars that are
ready for you to customize by adding your own
scripts, commands, and presets:
• The lower area of the palette and script toolbar.
• The Custom tab of the main shelf (View > Optional Panels > Main
Shelf).
• To add a script, drag lines from the script editor or a script file from
a browser and choose Script Button.
• To remove an item from a custom toolbar, right-click on a toolbar
button and choose Remove Button.
To save the toolbar, right-click on an empty area of the toolbar and
choose Save or Save As.
Shelves
To create a custom shelf, choose
View > New Custom Shelf. To add
a tab, right-click on an empty part
of the tab area and choose an item
from the Add Tab menu. If no tabs
have been defined yet, you can
right-click anywhere in the shelf.
• Folder tabs display files in a specific directory. You can drag files
like presets from a folder tab onto objects and views in XSI.
• Toolbar tabs hold buttons for commands and presets.
• Driven tabs can be filled with scene elements such as clips by using
the object model of the SDK.
To save a custom shelf, right-click on an empty portion of the tab area
and choose Save or Save As.
Custom and Proxy Parameters
Custom parameters are parameters that you create for your own
purpose. Proxy parameters are linked copies of other parameters that
you can add to your own custom parameter sets. Both custom
parameters and proxy parameters are contained in custom properties,
also known as custom parameter sets.
Custom Parameters

Custom parameters are parameters that you create for any specific animation purpose you want. You typically create a custom parameter and then connect it to other parameters using expressions or linked parameters. You can then use the sliders in the custom parameter set’s property editor to drive the connected parameters in your scene. For example, you can use a set of sliders in a property editor to drive the pose of a character instead of creating a virtual control panel using 3D objects.

First, create a custom parameter set by selecting an element and using Create > Parameter > New Custom Parameter Set on the Animate toolbar, and then giving it a meaningful name.

Next, create new parameters using Create > Parameter > New Custom Parameter. If your object has only one custom parameter set, the custom parameters are placed in it. If there are multiple sets, you should select the desired one beforehand. If there aren’t any custom parameter sets on the selected object, one is created using a default name.

At this point, the custom parameter set exists only in the scene in which it was created. It is not installed at the application level. You can copy it to other objects in the same scene, or save a preset to apply it to objects in other scenes.

If you want, you can convert the custom parameter set into a self-installing custom property by right-clicking in the light gray header bar of the property editor and choosing Migrate to Self-installed. This lets you distribute the property as a script plug-in. You can also edit the script file to control the layout and logic of the property.

Proxy Parameters
Proxy parameters are similar to custom parameters, but with a
fundamental difference. Custom parameters drive target parameters,
but they are still separate and different parameters. This means that
when you set keyframes, you key the custom parameter and not the
driven parameter. So what do you do when you want to drive the actual
parameter, or create a single parameter set that holds only those
existing parameters you are interested in? You can use
proxy parameters.
Unlike custom parameters, proxy parameters are cloned parameters:
they reflect the data of another parameter in the scene. Any operation
done on a proxy parameter has the same result as if it had been done on
the real parameter itself (change a value, save a key, etc.).
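The pass-through behavior can be sketched with a pair of small classes. These are illustrative stand-ins, not the XSI object model:

```python
class Parameter:
    """A stand-in for a real scene parameter."""
    def __init__(self, name, value=0.0):
        self.name = name
        self.value = value

class ProxyParameter:
    """Reflects another parameter: reads and writes pass straight
    through, so any operation on the proxy has the same result as
    operating on the real parameter itself."""
    def __init__(self, target):
        self._target = target

    @property
    def value(self):
        return self._target.value

    @value.setter
    def value(self, v):
        self._target.value = v

# Setting the proxy changes the target parameter itself:
posx = Parameter("local.posx", 1.0)
proxy = ProxyParameter(posx)
proxy.value = 5.0
```

This is exactly the difference from a custom parameter, which drives a target through an expression but remains a separate parameter with its own keys.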
While you can create proxy parameters for any purpose, it’s most likely
that you will use them to create custom property pages. You can create
your own property pages for just about anything you like: for example,
locate all animatable parameters for an object on a single property
page, making it much quicker and easier to add keys because all the
animated parameters are in one place. Or as a technical director, you
can expose only the necessary parameters for your animation team to
use, thereby streamlining their workflow and reducing potential errors.
First, create a custom parameter set, then open an explorer and drag and drop parameters into the custom property editor or onto the custom parameter set node in an explorer. Alternatively, use Create > Parameter > New Proxy Parameter to specify parameters with a picking session.

Displaying Custom and Proxy Parameters in 3D Views

You can display and edit parameter values directly in a 3D view. This is sometimes called a heads-up display or HUD. You do this by creating a custom parameter set whose name starts with the text DisplayInfo. You can simply display information, for example, about your company or a particular scene shot, or you can mark parameters and change their values.

Viewing the Information in a 3D View

Select one or more objects with a DisplayInfo custom parameter set. If nothing is selected, the DisplayInfo set of the scene root is displayed (if it has one). To view the DisplayInfo information in a 3D View, click the eye icon in a 3D view and choose Visibility Options. On the Stats page in the Camera Visibility property editor, select Show Custom “DisplayInfo” Parameters.
Changing Parameter Values in a 3D View
You can easily modify the parameters displayed in the 3D views. There
is a preference that controls the interaction:
• If Enable On-screen Editing of “DisplayInfo” Parameters is on in
your Display preferences, you can modify the values as well as
animate them directly in the display.
• If on-screen editing is disabled, you can still mark the parameters
and modify them using the virtual slider.
If on-screen editing is enabled, the parameters appear in a transparent
box in the view. The title of the parameter set is shown at the top
(without the “DisplayInfo_” prefix). Each parameter has animation
controls that allow you to set keys.
You can do any of the following:

• Click and drag on a parameter name to modify the value. You don’t need to explicitly activate the virtual slider tool.
  - Drag to the left to decrease the value, and drag to the right to increase it.
  - Press Ctrl for coarse control.
  - Press Shift for fine control.
  - Press Ctrl+Shift for ultra-fine control.
  - Press Alt to extend beyond the range of the parameter’s slider in its property editor (if the slider range is smaller than its total range).
  If the parameter that you click on is not marked, it becomes marked. If it is already marked, then all marked parameters are modified as you drag.

• Double-click on a numeric value to edit it using the keyboard. The current value is highlighted, so you can type in a new value. Only the parameter you click on is affected even if multiple parameters are marked.

• Double-click on a Boolean value to toggle it. Only the parameter you click on is affected even if multiple parameters are marked.

• Click on an animation icon to set or remove a key for the corresponding parameter.

• Right-click on an animation icon to open the animation context menu for the corresponding parameter.

• Click the triangle in the top right corner to expand or collapse the parameter set.

The color of the animation icon indicates the following information:

• Gray: The parameter is not animated.

• Red: There is a key for the current value at the current frame.

• Yellow: The parameter is animated by an fcurve, and the current value has been modified but not keyed.

• Green: The parameter is animated by an fcurve, and the current value is the interpolated result between keys.

• Blue: The parameter is animated by something other than an fcurve (expression, constraint, mixer, etc.).
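The coarse/fine modifier behavior amounts to scaling the drag step. The multipliers below are invented for illustration — the text above names coarse, fine, and ultra-fine control but not the exact factors XSI uses:

```python
def drag_step(base_step, ctrl=False, shift=False):
    """Scale a virtual-slider drag step by modifier keys:
    Ctrl = coarse, Shift = fine, Ctrl+Shift = ultra-fine.
    The multipliers are illustrative assumptions, not XSI's
    documented values.
    """
    if ctrl and shift:
        return base_step * 0.01   # ultra-fine
    if shift:
        return base_step * 0.1    # fine
    if ctrl:
        return base_step * 10.0   # coarse
    return base_step
```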
If there is a DisplayInfo property on the scene root, you
cannot edit its parameters on-screen unless the scene root is
selected.
Scripts
Scripts are text files containing instructions for modifying data in XSI.
They provide a powerful way to automate many tasks and simplify
your workflow.
• The command box displays the most recent command. Modify the contents or type a new command, then press Enter to execute it. You can also reselect any of the last 25 commands.

• The script editor icon opens the script editor.

• The Run button runs the lines selected in the editing pane. If no lines are selected, the entire script is run.

• The Help button gets help on the command selected in the editing pane.

• The history pane contains the most recently used commands in your current session. Drag and drop lines into the editing pane to get a head start on your own scripts. The history pane also contains messages related to importing and exporting, debugging information, and so on.

• The editing pane is a text editor in which you can create scripts by typing or pasting. Right-click for a context menu.
Key Maps
Key maps determine the keyboard
combinations that are used to run
commands, open windows, and activate
tools. You can create your own key
maps to create new key bindings or
change the default ones.
Keyboard shortcuts are grouped by
interface component.
Click an interface component in
the Group list to display its
commands and their keyboard
shortcuts in the Command list.
Click a command in the
Command list to display its
keyboard shortcut in red.
To see which command is mapped to a key,
click the appropriate modifiers (Alt, Ctrl,
Shift) from the check boxes or the keyboard
diagram, then rest your mouse pointer over a
key on the keyboard diagram.
Key maps are stored as XML-based
.xsikm files in the
\Application\keymaps subdirectory of
the user, workgroup, or factory path. At
startup, XSI gathers the files it finds at
these locations and makes them
available for selection in the Keyboard
Mapping editor.
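Because key maps are plain XML, external tools can read them. The element and attribute names in the sketch below are invented stand-ins — the actual .xsikm schema is not documented here:

```python
import xml.etree.ElementTree as ET

# A made-up key-map document; the real .xsikm schema may differ.
SAMPLE = """<keymap name="MyKeys">
  <binding group="3D Views" command="Frame Selection" key="F" modifiers="Shift"/>
  <binding group="Script Editor" command="Run" key="F5" modifiers=""/>
</keymap>"""

def load_bindings(xml_text):
    """Parse a key-map document into {(group, command): shortcut} pairs,
    joining modifiers and key into a readable shortcut string."""
    root = ET.fromstring(xml_text)
    bindings = {}
    for b in root.iter("binding"):
        mods = b.get("modifiers", "")
        shortcut = f"{mods}+{b.get('key')}" if mods else b.get("key")
        bindings[(b.get("group"), b.get("command"))] = shortcut
    return bindings
```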
When you change a key mapping, the
new key automatically appears next to
the command in menus and context
menus. For some menus, you must
restart XSI to see the new label.
Open the keyboard mapping editor by
choosing File > Keyboard Mapping
from the main menu. Select an existing
Key Map, or click New to create a new
one.
Create or modify a shortcut by dragging
a command label to a shortcut key.
Hold down the Shift, Ctrl, or Alt key while dragging
to add a modifier to the new shortcut command.
Remove a shortcut key by
selecting a command from the
Command box and pressing Clear.
The keyboard keys are color-coded to indicate the following:
• White: no keyboard shortcut has been assigned to this key.
• Beige: a keyboard shortcut from another interface component has
been assigned to this key.
• Light Brown: a keyboard shortcut from the currently selected
interface component has been assigned to this key.
• Red: this keyboard shortcut corresponds to the currently selected
item in the Command box.
To see key conflicts with other windows, select View and choose a
window from the adjacent list. Keys that are used by the selected
window are highlighted in dark brown. For combinations involving
modifiers, select the appropriate Ctrl, Shift, and Alt boxes or press and
hold those keys on your keyboard.
Other Customizations
In addition to the customizations briefly mentioned so far, there are
many other ways you can extend XSI:
• Custom commands can automate repetitive or difficult tasks.
Commands can be scripted or compiled.
• Custom operators can automatically update data in the operator
stack. Operators can be scripted or compiled.
• Layouts define the main window of XSI. You can create layouts
based on your preferences or common tasks.
• Views can be floating or embedded in a layout. You can create
views for specialized tasks.
• Events run automatically when certain situations occur in XSI.
• Synoptic views allow you to create custom control panels for a rig.
• Net View allows you to create an HTML interface for sharing
scripts, models, and other data.
• Shaders give you complete control over the final look of your work.
For more information about customizing XSI, see the SDK Guides, as
well as Customization in the XSI Guides.