Department of Precision and Microsystems Engineering

Topology Optimization for Localizing Design Problems: An Explorative Review

Christopher Reichard

Report no: EM 2012.025
Coach: Matthijs Langelaar, Shinji Nishiwaki
Professor: Fred van Keulen
Specialisation: Optimization
Type of report: Thesis Report
Date: 17 September 2012

TOPOLOGY OPTIMIZATION FOR LOCALIZING DESIGN PROBLEMS: AN EXPLORATIVE REVIEW

Chris Reichard

MSc Thesis report
Department of PME
Faculty of Mechanical, Maritime and Materials Engineering (3mE)
Delft University of Technology
2012
ABSTRACT

Topology optimization is a valuable tool that can assist engineers in the design of many complex problems. However, optimization techniques often have difficulty dealing with problems that have a strong tendency to localize in the global design domain, forming a sparse design. Typically an extremely fine mesh is needed to capture these results, at high computational cost. Examples are seen in the spanning of large bridges, reinforcements of thin plates, and the branching effect of a heat conduction problem.

The aim of this explorative review is to identify different possibilities to improve the computational efficiency. Next to reviewing current techniques, two novel ideas were developed and evaluated. The first idea is inspired by techniques present in fields such as computer graphics, where 3D objects used in animation are represented by skeleton models to reduce the complexity of the problem. Similar techniques are investigated to determine if such an approach can be used within topology optimization to develop an optimum structure. The second idea employs an adaptive substructuring technique in a less traditional manner: the computational domain is split into a part that changes and a part that remains static during the optimization process. The overall goal of both proposed approaches is to improve the solution process by reducing the number of variables that must be solved to obtain the response of the design.

From these investigations many ideas are presented which benefit the sparse problem in optimization. The idea of skeleton modeling is a potential method but has several issues in developing the model and obtaining the temperature response. The sub-structuring approach is a promising development: time savings of up to approximately 67% in the finite element solution were shown for a design problem using 1% of the original volume.
CONTENTS

abstract
1 introduction
2 background and problem definition
   2.1 Preliminaries / Background
      2.1.1 Topology Optimization
      2.1.2 SIMP Method
      2.1.3 Heat Conduction: Optimization Problem
   2.2 Localization Problem
   2.3 Motivation & Project Goal
      2.3.1 Motivation
      2.3.2 Project Goal & Approach
   2.4 88 Line Optimization Code
3 observations / analysis of simp problem for developing local structure
   3.1 Test Problem Definition
   3.2 Observations
      3.2.1 Development & Characteristics of Structure
      3.2.2 Effect of Conductivity Ratio
   3.3 Efficiency Analysis
      3.3.1 Test: Efficiency
      3.3.2 Test: Design Changes
   3.4 Summary of Observations and Analysis
   3.5 Areas of Improvement
4 methods to exploit local structure
   4.1 Finite Element Solution
      4.1.1 Local Mesh Refinement / Adaptivity
      4.1.2 Global Mesh Refinement
      4.1.3 Mesh Superposition Methods (S-FEM)
      4.1.4 Adaptive Sub-Structuring of Optimization Domain
      4.1.5 Structural Re-Analysis
   4.2 Finite Element Assembly
   4.3 Optimization Problem Formulation
      4.3.1 Level Set Method Through Skeletonization
      4.3.2 SIMP with Gray Scale Suppression
   4.4 Update / Convergence
      4.4.1 Sequential Approximation Optimization Update Algorithm
   4.5 Summary of Methods
5 investigation
   5.1 Level Set Method Through Skeletonization
      5.1.1 Development of Skeleton Curve from the Level Set Function
      5.1.2 Alternative Formulation
      5.1.3 Current Issues of Skeleton Approach
   5.2 SIMP with Gray Scale Suppression
      5.2.1 Effects of Gray Scale Suppression Parameter
      5.2.2 Comparison of Traditional Solid Isotropic Material with Penalization (SIMP) to SIMP-Gray Scale Suppression (GSS)
      5.2.3 Effects on Design Variables
   5.3 Adaptive Sub-Structuring of Optimization Domain
      5.3.1 Comparison of Optimal Solutions
      5.3.2 Analysis of Solution Process
   5.4 Methods for Structuring the Optimization Domain
      5.4.1 Radial Buffer
      5.4.2 Sensitivity Buffer
      5.4.3 Issues with Sensitivity and Radial Buffer Method
      5.4.4 Combined Buffer
      5.4.5 Comparison of Buffer Zone Methods
      5.4.6 Performance of Combined Buffer for Different Volume Fractions
      5.4.7 Buffer Zone Improvements
      5.4.8 Performance of Domain Selection and Buffer Zone Development
      5.4.9 Adaptive Substructuring Method Summary
6 conclusion
7 recommendations for future work
appendix a
   1 Efficiency Analysis for Design 2
   2 Effects of Design Change on Design 2
appendix b
   3 Effects of Sparsity
   4 Buffer Method with Traditional SIMP
appendix c
   5 Method of Moving Asymptotes
References
LIST OF FIGURES

Figure 1.1   Skeleton modeling as presented in a paper by Cornea et al. (8)
Figure 2.1   Optimization Process
Figure 2.2   Generalized Design Problem
Figure 2.3   Heat conduction optimization problem
Figure 2.4   Localization of optimal structure
Figure 3.1   Design domains for test problems. Left: Design 1, Right: Design 2
Figure 3.2   Development of heat conducting structure
Figure 3.3   Effect of different conductivity ratio
Figure 3.4   Break down of computational cost in terms of increase in complexity, time differences, and percentage of total optimization process for design 1
Figure 3.5   Computational cost breakdown
Figure 3.6   Percentage of design with change less than 10⁻⁴ for different volume fractions
Figure 4.1   Topology optimization on an adaptive mesh with Cartesian grid. Reproduced from the paper by Costa Jr. and Alves (9)
Figure 4.2   Topology optimization with adaptive unstructured mesh. Reproduced from the paper by Maute et al. (23)
Figure 4.3   Example of mesh superposition (Picture developed by Yue (36))
Figure 4.4   Multi-level superposition (Picture developed by Yue (36))
Figure 4.5   Substructuring of Domain
Figure 4.6   Processes involved with adaptive structuring of the optimization domain
Figure 4.7   Discretization of level-set function with skeleton structure
Figure 5.1   Extracting skeleton from level set structure with ridge detection method
Figure 5.2   Determining principal curvature with the use of the first and second fundamental form
Figure 5.3   RBFs to determine skeleton and its corresponding width
Figure 5.4   Determining width of skeleton structure from Φ function
Figure 5.5   Example of placing skeleton and determining width of a 2D structure
Figure 5.6   Issues with Radial Basis Function (RBF) approach
Figure 5.7   Connectivity issue of skeleton structure
Figure 5.8   Optimized structure for SIMP-GSS with varying gray scale suppression parameter (at iteration 60)
Figure 5.9   Comparison of optimized structures between SIMP and SIMP-GSS with continuation method
Figure 5.10  Comparison of optimized structures between SIMP and SIMP-GSS
Figure 5.11  Percentage of design domain with changing design elements
Figure 5.12  Static structure for testing optimization domain
Figure 5.13  Comparison of optimized structures computed from FI and ASSM (Top: Design 1, Bottom: Design 2)
Figure 5.14  Efficiency of static condensation method for number of static iterations
Figure 5.15  Establishing static and changing domains
Figure 5.16  Establishing static and changing domains with use of sensitivity information
Figure 5.17  Comparison of the buffer implementations at some instance in the optimization process (Iteration: 145)
Figure 5.18  Using sensitivity information and blur radius to form buffer zone
Figure 5.19  Efficiency plots of radial buffer for different number of filter passes
Figure 5.20  Efficiency plots of sensitivity buffer for different threshold levels
Figure 5.21  Efficiency plots of combined buffer for different threshold levels and filter passes
Figure 5.22  Efficiency plots of sensitivity buffer for different threshold levels
Figure 5.23  Percentage of computational cost for development of the buffered changing domain
Figure 1     Break down of computational cost in terms of increase in complexity, time differences, and percentage of total optimization process for design 2
Figure 2     Computational cost breakdown, Design 2
Figure 3     Percentage of design with change less than 10⁻⁴ for different volume fractions, Design 2
Figure 4     Comparison of full stiffness matrix with stiffness matrices of the substructured domain
Figure 5     Behavior of traditional SIMP with use of combined buffer at iteration 50
LIST OF TABLES

Table 3.1   Design domain criteria
Table 3.2   Criteria for example of the developing structure
Table 3.3   Effect of sparsity on convergence
Table 3.4   Efficiency and design variable test settings
Table 4.1   Summary of methods to improve the solution process of localizing topologies
Table 5.1   Comparison of performance values at 60 iterations
Table 5.2   Comparison settings for comparing SIMP to SIMP-GSS
Table 5.3   Comparison of performance values for full optimization process between SIMP and SIMP-GSS with continuation method
Table 5.4   Comparison of performance values for full optimization process between SIMP and SIMP-GSS
Table 5.5   Comparison of Adaptive Sub-Structure Method (ASSM) to Full Implementation (FI) settings
Table 5.6   Performance data of optimized structures computed from FI and ASSM
Table 5.7   Comparison of solution process between FI and ASSM
Table 5.8   Comparison of buffering methods
Table 5.9   Effects of volume fraction on combined buffer method
Table 5.10  Percentage of time used in radial buffer of the total buffering and domain selection method
Table 1     Settings for combined buffer with traditional SIMP method and sensitivity filter
Table 2     Results for combined buffer with traditional SIMP method and sensitivity filter
NOMENCLATURE

∂Ω   Boundary contour.
C   Global local transformation matrix.
D   Design domain.
∂T/∂(·)   Spatial temperature derivative.
E^o_ijkl   Material property of isotropic material.
e   Change threshold.
η   Dampening parameter.
f   Internal body forces.
f_app   Force applied.
f_i   Constraint function.
f_o   Objective function.
Γ_q   Heat flux boundary condition.
Γ_t   Temperature boundary condition.
I   Identity matrix.
II   Second fundamental form.
K   General stiffness matrix.
κ_i   Principal curvature.
∆K   Change in stiffness matrix.
K_CC   Changing domain stiffness matrix.
K_CS   Coupling stiffness matrix.
K_GG   Global stiffness matrix.
K_GL   Global local transformation matrix.
k_H   High conductivity.
k_L   Low conductivity.
K_LL   Local stiffness matrix.
K_O   Stiffness matrix at previous analysis.
k   Conductivity.
K_SS   Static domain stiffness matrix.
E   Stiffness.
N_e   Set of elements.
Ω_mat   Material domain.
Ω   Reference domain.
p   Penalty parameter.
φ   Level set function.
q   Nodal heat flux.
q_c   Nodal heat flux, changing domain.
q_g   Nodal global heat flux.
q_l   Nodal local heat flux.
q_s   Nodal heat flux, static domain.
R_B   Matrix of basis vectors.
r_min   Filter radius.
S(u, v)   Parametric surface.
T   Temperature.
t   Nodal temperature response.
τ   Sensitivity threshold.
t_c   Nodal temperature of changing domain.
t_g   Nodal global temperature.
t_i   Principal directions.
t_l   Nodal local temperature.
t   Surface tractions.
t_s   Nodal temperature of static domain.
U   Upper triangular matrix.
u   Displacement.
Vol(Ω_mat)   Volume of design variables.
V_max   Maximum volume.
x_c   Change in design variables.
x_i   Elemental design variables.
y   Vector of unknown coefficients.
ACRONYMS

ASSM   Adaptive Sub-Structure Method
CONLIN   Convex Linearization
DOF   Degree of Freedom
FE   Finite Element
FI   Full Implementation
GSS   Gray Scale Suppression
LSF   Level Set Function
LSM   Level Set Method
MMA   Method of Moving Asymptotes
OC   Optimality Criteria
RBF   Radial Basis Function
SAO   Sequential Approximation Optimization
S-FEM   s-refinement (Mesh superposition technique)
SIMP   Solid Isotropic Material with Penalization
VF   Volume Fraction
1 INTRODUCTION
Topology optimization has become a powerful tool that can assist designers in the development of many complex structures. Through a numerical solution method and the use of optimization algorithms, the distribution of material inside a given domain can be determined for a given objective and constraints. For approximately the past two decades, topology optimization has received a vast amount of attention and has been developed to solve many different forms of problems. These range from structural problems, as seen in the MBB-beam example by Sigmund et al. (3), to heat conduction optimization as presented by Ha and Cho (15), and fluid domain optimization as presented by Kreissl, Pingen, and Maute (19). Essentially, the possibilities of what can be optimized are endless if the problem can be formulated numerically.
The methods to achieve an optimal structure have developed through many different approaches. A microstructure / homogenization approach was developed by Bendsøe and Kikuchi (5) in 1988; here the general idea was to describe the material properties in the design space through the use of composite materials. Later, Bendsøe proposed an alternative approach called Solid Isotropic Material with Penalization (SIMP) (4). This approach assumes the material properties to be constant within each element, scaled through densities which are the design variables of the problem. The densities are penalized to force the design towards a discrete 0-1 design. More recent is the development of the Level Set Method (LSM) in topology optimization, with Osher and Sethian developing the concepts of level sets in 1988 (25). The idea uses the contours of a level set function to implicitly define the boundary of the structure, giving a clear distinction between what is structure and what is void.
A specific class of problems that has received little research is the area of designs that develop into sparse, local structures through the optimization process. In general these structures develop at low volume fractions, meaning they occupy a small fraction of the original design domain. The present issue with this type of problem is that an extremely fine mesh is often needed to resolve the structure and its responses in order to obtain any meaningful design once the optimization process has finished. The development of this type of structure can be seen in several areas within topology optimization, including large spanning bridges, slender reinforcements of thin plates, compliant mechanisms, and heat conduction optimization problems.
Research in this area is important in order to develop an efficient implementation to deal with such problems. Therefore, the goal of my research is to identify and test approaches to improve the efficiency of the optimization process when dealing with sparse designs. From the literature study, methods using mesh refinement have been used to resolve some of the computational issues for this class of problem. Swan and Rahmatalla (30) presented a method using a global mesh refinement and a reduction technique: the mesh was globally refined and void elements were removed from the problem. However, removing void elements is not always a possibility, such as in the heat conduction problem. Other approaches use adaptive mesh refinement to remove the need for highly refined meshes in the void region, as presented by Wang et al. (34) and Costa Jr. and Alves (9).
Through my research, two new ideas were also developed to take advantage of the sparse, local structure. One idea looks into the possibility of sub-structuring the optimization domain based on the status of the design variables, i.e., whether the design variables are changing or remain static in the optimization process. Based on this criterion, the design domain is sub-structured into two separate domains: a static and a changing domain. The computational advantage of this matrix decomposition comes from a reduction of the Degree of Freedom (DOF) count in the changing domain; with a pre-computed matrix of the static domain, these DOFs can be cheaply solved, as sketched below. The other method investigates the use of skeleton curves to model the structure. Such methods are often used in many other fields, including computer graphics, medical imaging, and scientific visualization. An illustration of the method is presented in Figure 1.1. The idea is to represent the structure through bar elements forming the skeleton, with a width specified for each bar element corresponding to the width of the structure.
Figure 1.1: Skeleton modeling as presented in a paper by Cornea et al. (8)
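To make the expected gain of the sub-structuring idea concrete, the solve can be written as a static condensation step. The following is a generic sketch using the matrix blocks defined in the nomenclature; it is not the exact partitioning developed later in this thesis. Splitting the temperature DOFs into a changing part $t_c$ and a static part $t_s$:

$$
\begin{bmatrix} K_{CC} & K_{CS} \\ K_{CS}^{T} & K_{SS} \end{bmatrix}
\begin{bmatrix} t_c \\ t_s \end{bmatrix}
=
\begin{bmatrix} q_c \\ q_s \end{bmatrix}
\quad \Rightarrow \quad
\left( K_{CC} - K_{CS} K_{SS}^{-1} K_{CS}^{T} \right) t_c = q_c - K_{CS} K_{SS}^{-1} q_s
$$

Because $K_{SS}$ belongs to the static domain, its factorization can be computed once and reused over many iterations; only the much smaller changing-domain system has to be re-solved.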
Through my thesis I examine current methods that are available and introduce two novel ideas for exploiting the characteristics of local problems. Hence, the structure of this thesis is in the form of an explorative review. The feasibility of these novel ideas will be assessed and, where applicable, the performance of the methods will be determined. The objective is therefore not to find a specific solution to the problem and implement it fully. By reviewing, exploring, and evaluating, recommendations can be made at the end of this thesis on how to deal with sparse designs and where future research in this topic can be directed.
The rest of my thesis is structured as follows. In the second chapter, a brief introduction to topology optimization is given; here the research problem is explained along with the motivation for this research, and an overview of the test case is given. In Chapter 3, observations of the formation of local structure in the optimization domain are discussed, followed by different tests analyzing the performance and the behavior of the design variables through the optimization process. Chapter 4 reviews current methods and describes the two novel ideas developed from this research; a short summary of the methods with their advantages and disadvantages concludes the chapter. Chapter 5 presents a more detailed investigation of the two novel ideas. Chapters 6 and 7 summarize the final conclusions of the thesis and present some recommendations for future work.
2 BACKGROUND AND PROBLEM DEFINITION
2.1 preliminaries / background
2.1.1 Topology Optimization
Topology optimization is a mathematical method used to optimize the material layout of a structure within a given design domain. This is achieved based on specified boundary conditions and loads while trying to meet specific performance criteria. In general, topology optimization is implemented through the use of finite elements to analyze the design and optimization algorithms to update the design from one iteration to the next. These algorithms are often based on the method of moving asymptotes, optimality criteria, or level sets. The general formulation of the optimization problem is given below:
$$
\begin{aligned}
\min_{x} \quad & f_0(x) \\
\text{s.t.} \quad & f_i(x) \le 0, && i = 1, \dots, m \\
& x_j^{\min} \le x_j \le x_j^{\max}, && j = 1, \dots, n
\end{aligned}
\tag{2.1.1}
$$
where $f_0$ is the objective function sent to the optimizer, $f_i$ are the constraints that limit the design (stress, volume, displacements; for example, a volume constraint can be written as $f_1(x) = \sum_j v_j x_j - V_{\max} \le 0$), and $x$ are the design variables, which are manipulated / controlled by the optimizer to formulate a new design based on sensitivities or other gradient information. The design variables are often constrained in a way that limits the values they can take.
Topology optimization is essentially a material distribution problem, requiring placement of material where it is needed and removal where it is not, based on the fundamental equations of the design problem. The purpose of topology optimization is to determine this placement of material. Generally known before formulating the problem are the following:
• Applied loads
• Support conditions
• Volume of final structure
• Design restrictions
Unknown before formulating the problem and to be determined through optimization
are the following:
• Physical Size
• Shape
• Connectivity
Optimization is an iterative process that can be summarized in Figure 2.1, following the method presented by Bendsøe and Sigmund (6, p. 2).
Figure 2.1: Optimization Process
A typical type of problem in topology optimization is the minimum compliance problem. This is a natural starting point for understanding optimization processes, as it is a simple design problem with simple resource constraints and the objective of minimizing the compliance of the structure. The problem is formulated by considering the structure, represented as the design domain Ωmat, occupying a subset of the reference domain Ω in either R2 or R3.

Figure 2.2: Generalized Design Problem

The choice of the reference domain allows for the definition of boundary conditions and applied loads. Using such a reference domain also allows the optimization design problem to be formulated as the optimal choice of the stiffness tensor Eijkl(x), which varies within the domain (Figure 2.2). By taking advantage of the variational form of the problem and discretizing the solution over the domain, a finite element approach can be used. Two parameters of interest are the displacement u and the stiffness E. If both are discretized over the same finite element mesh and the stiffness E is kept constant within each element, Bendsøe and Sigmund (6) showed that the design problem can be formulated as:
$$
\begin{aligned}
\min_{u, E_e} \quad & f_{app}^{T} u \\
\text{s.t.} \quad & K(E_e)\, u = f_{app}, \\
& E_e \in E, \quad e = 1, \dots, N
\end{aligned}
\tag{2.1.2}
$$

where $f_{app}$ is the force applied due to the surface tractions $t$ and internal body forces $f$. $K$ is represented as:

$$
K = \sum_{e=1}^{N} K_e(E_e)
\tag{2.1.3}
$$
In the design problem we are interested in the optimal placement of the material Ωmat as a subset of the given design space Ω. This can be visualized as a black and white rendering with the pixels given by the Finite Element (FE) discretization: black pixels correspond to areas with material and white pixels correspond to void. Mathematically this is represented as a scaling of the stiffness tensor by the following equation:

$$
E_{ijkl} = 1_{\Omega^{mat}}\, E^{o}_{ijkl}, \qquad
1_{\Omega^{mat}} =
\begin{cases}
1 & \text{if } \Omega \in \Omega^{mat} \\
0 & \text{if } \Omega \in \Omega \setminus \Omega^{mat}
\end{cases}
\tag{2.1.4}
$$
To restrict the design to a 0-1 design (black and white design), a constraint must be imposed. This constraint limits the amount of volume available in the reference domain Ω and therefore the amount of material used to develop the structure:

$$
\int_{\Omega} 1_{\Omega^{mat}}\, d\Omega = \mathrm{Vol}(\Omega^{mat}) \le V_{\max}
\tag{2.1.5}
$$

where Vol(Ωmat) is the volume from the contribution of all the design variables and Vmax is the maximum volume allowed, as set by the user. The volume constraint is often expressed as a Volume Fraction (VF): the percentage of the initial overall volume remaining in the domain.
2.1.2 SIMP Method

A common approach to solving the minimum compliance problem presented in Section 2.1.1 is the Solid Isotropic Material with Penalization (SIMP) method as developed by Bendsøe and Sigmund (6). This method formulates the design problem with continuous variables instead of integer values and uses a penalty function to steer the solution to discrete values of 0 and 1. The density is the design variable, and the optimization process tries to optimize the distribution of density such that each element ends up with either zero density or full density. The SIMP method can be modeled as:

$$
\begin{aligned}
& E_{ijkl}(x) = \rho(x)^{p}\, E^{o}_{ijkl}, \qquad p \ge 1 \\
& \int_{\Omega} \rho(x)\, d\Omega \le V, \qquad 0 \le \rho(x) \le 1
\end{aligned}
\tag{2.1.6}
$$

with $\rho(x)$ the design function and $x \in \Omega$.

where E^o_ijkl denotes the material property of the isotropic material. The interpolation of the density gives a stiffness between 0 and E^o_ijkl, meaning the final structure will have either zero stiffness at a design point or the full material property.

With this penalization method, one can choose the penalty value in order to penalize intermediate densities. Usually a penalty value p greater than one is chosen, meaning intermediate densities (stiffnesses) become too costly for the given amount of material volume present. The optimization process will therefore force the intermediate-value material densities either down to 0 or up to 1.
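A minimal MATLAB sketch of this interpolation, with illustrative values, shows why penalization removes gray material:

```matlab
% Minimal sketch of the SIMP interpolation (values are illustrative):
% element stiffness scales with density^p, steering densities to 0 or 1.
rho = linspace(0, 1, 11);   % candidate element densities
p   = 3;                    % penalty parameter, p >= 1
E0  = 1;                    % material property of the solid material
E   = rho.^p * E0;          % penalized stiffness per density value
% At p = 3 a density of 0.5 buys only 0.125*E0 stiffness for half the
% volume cost, so intermediate (gray) material becomes uneconomical.
disp([rho; E]);
```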
2.1.3 Heat Conduction: Optimization Problem

For conducting research on localization effects, the heat conduction problem was chosen as the design problem to investigate in topology optimization, due to its tendency to develop localized structure. This problem is also somewhat more challenging to implement efficiently than a structural problem, as the field outside of the structure must also be modeled for heat conduction.
The development of the heat conduction structure as controlled by the optimizer can be
visualized by imagining the main branches of a tree. The heat conduction optimization
problem is essentially a two material design problem consisting of the placement of high
and low conductive material on a square or rectangular plate as shown in Figure 2.3.
Uniformly applied over the plate is a constant heat source transferring energy into the
domain which must be removed through the heat sink. The goal of the optimizer is to
develop the optimum path to move the heat from the domain to the heat sink, which can
be defined anywhere on the boundary or the interior of the domain.
Figure 2.3: Heat conduction optimization problem
As a practical example, think of the plate as a means to remove the heat generated by a CPU. Two types of material are available to conduct the heat away from the CPU. For the formulation of the optimization problem, the high conductive material is considered expensive and the low conductive material is considered cheap. Through optimization, an optimal placement of the high conductivity material is sought that minimizes the average temperature in the domain for the given amount of high conductive material specified.
The formulation of the heat conduction optimization problem is developed from the governing differential equation for heat transfer. For isotropic material and steady state conditions, the heat transfer equation simplifies to the following:

$$
k \left( \frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} + \frac{\partial^2 T}{\partial z^2} \right) + q = 0 \quad \text{in } \Omega
\tag{2.1.7}
$$
with the following boundary conditions:

$$
q_n = q^{T} n = \bar{q} \quad \text{on } \Gamma_q, \qquad
T = \bar{T} \quad \text{on } \Gamma_T
\tag{2.1.8}
$$
where k is the conductivity, T is the temperature, ∂T/∂(·) is the spatial temperature derivative in the corresponding direction, q is the internal heat load, Γq is the heat flux boundary condition, and ΓT is the temperature boundary condition.
Through the Galerkin method, a finite element formulation can be developed, which is used in the optimization problem to develop the objective function. Here the objective of the problem can be interpreted as minimizing the thermal resistance of the problem in order to reduce the average temperature (only if a uniform heat load is applied) of the design domain. The optimization formulation of the heat conduction problem is shown below:
$$
\begin{aligned}
\min_{\rho} \quad & q_{app}^{T} t \\
\text{s.t.} \quad & K(\rho)\, t = q_{app}, \quad e = 1, \dots, N \\
& \sum_i \rho_i V_i \le V_{\max} \\
& 0 < \rho_{\min} \le \rho_i \le 1
\end{aligned}
\tag{2.1.9}
$$

with $q_{app}$ the uniformly applied heat source and $K$ built from the element conductivity interpolation

$$
k(\rho) = k_L + (k_H - k_L)\, \rho^{p}
\tag{2.1.10}
$$

where $k_L$ and $k_H$ are the low and high conductivity, respectively.
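A short sketch of how Eq. (2.1.10) and its derivative enter the optimization loop; the variable names and the single-element value are illustrative:

```matlab
% Sketch of the element conductivity interpolation of Eq. (2.1.10) and
% the derivative that feeds the sensitivities (names are illustrative):
kL  = 0.001;  kH = 1;  p = 3;            % conductivities and penalty
rho = 0.4;                               % one element density
k_e  = kL + (kH - kL) * rho^p;           % interpolated conductivity
dk_e = p * (kH - kL) * rho^(p - 1);      % d(k_e)/d(rho) for sensitivities
% With t solving K(rho) t = q_app, the thermal compliance q_app' * t has
% the elementwise sensitivity -t_e' * (dk_e * KE) * t_e (adjoint result).
```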
2.2 localization problem

Optimization of certain design problems often tends towards a localized design structure, which can be defined as the development of fine detail in a fraction of the global design domain, leaving the majority of the domain void of material. Examples of such phenomena can be seen in the reinforcement of thin plate structures, heat conduction through two different conductive materials with a uniform heat source, placement of conductive elements in solar panels, the span of large bridge structures, and compliant mechanisms. This tendency towards localized effects is directly influenced by the volume constraint of the optimization process. As the volume constraint approaches zero, the material is forced to concentrate in specific parts of the domain due to high strains, thermal resistances, or other criteria for which the objective function of the optimization process is developed.
With such local features, it is important to use extremely refined models incorporating high numbers of degrees of freedom in the finite element mesh. If such detail is not achieved, it will be difficult to capture the structure and resolve the necessary response of the system to accurately estimate the system performance. Such refinement obviously presents serious issues in performing an optimization, as efficiency will be compromised. As this research is focused on the heat conduction problem, an example of this localizing effect is shown in Figure 2.4 for different volume constraints.
2.3 motivation & project goal

2.3.1 Motivation
Capturing such localized features in an optimization problem in an efficient manner has received little research in the topology optimization field, and this is limiting the practical applicability of topology optimization for sparse problems. For a minimum compliance problem, a solution such as removing elements from the model that do not directly contribute to the structure may be used. A drawback of this method is that the structure cannot re-enter the area where elements have been deleted. Moreover, such an approach is not always available, such as in a heat conduction problem where the temperature response is needed in the rest of the domain. Adaptive mesh refinement could be employed, but this method is not always the most efficient.

Figure 2.4: Localization of optimal structure
Missing in the topology optimization field is a method that exploits the structure and sparsity of localized problems and takes such information into account during the optimization process. Instead of optimizing and then further refining the solution to capture local details, a method is needed that explicitly develops the structure and determines the response locally from the initial design point. Once developed, the behavior of localized phenomena can be investigated further in an efficient manner.
2.3.2 Project Goal & Approach

The goal of this thesis is to increase the understanding of topology optimization for structures that localize and to investigate ways to improve topology optimization efficiency for sparse problems. This will be achieved by:

• examining current topology optimization methods (Chapter 3),
• researching techniques dealing with local refinement (Chapter 4),
• developing and testing ideas to take advantage of local structures (Chapter 5).
An explorative overview of the problem will be developed, which first examines the behavior of the heat conduction problem through a SIMP implementation. This offers a simple starting point and introduction to the problem, giving a basis for initial analysis and observations. With knowledge of the problem behavior, further research on localization and other optimization techniques will be conducted. Some typical solutions will be explored, and novel ideas will be investigated along with tests of their implementations. From this review, the reader should be more aware of the present issues around topology optimization for localizing structures and have an idea of how to proceed with such problems.
2.4 88 line optimization code
As the 88 line topology optimization code by Sigmund (3) was adapted in the earlier stages of this research to investigate the localization problem, a brief description of the code and of how it was used will be given. This optimization code is a simple yet efficient MATLAB implementation of the SIMP method developed for research use. The code takes advantage of MATLAB's ability to vectorize equations to avoid costly construction of the finite element matrices. The following is implemented within the code (a sketch of the vectorized assembly follows this list):

• a basic finite element package to develop the conductivity matrix for a rectangular grid,
• the MATLAB backslash operator to determine the temperature response,
• filtering methods (both density and sensitivity filtering) to provide unique solutions and avoid checkerboard patterns in the design,
• the optimality criteria method for updating the design variables.
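The vectorized assembly referred to above follows the pattern of the published 88 line code; the sketch below adapts it to one temperature DOF per node for the heat conduction case. The adaptation and all names are illustrative, not the exact code used in this thesis.

```matlab
% Hedged sketch of the vectorized assembly pattern of the 88 line code,
% adapted to one temperature DOF per node (assumption for this thesis's
% heat conduction variant); a tiny 3x3-element grid is used for clarity.
nelx = 3; nely = 3; p = 3; kL = 0.001; kH = 1;
x  = 0.5 * ones(nely, nelx);                 % element densities
KE = [ 2/3 -1/6 -1/3 -1/6;                   % 4-node conduction element
      -1/6  2/3 -1/6 -1/3;                   % matrix for a unit square
      -1/3 -1/6  2/3 -1/6;
      -1/6 -1/3 -1/6  2/3];
nodenrs = reshape(1:(1+nelx)*(1+nely), 1+nely, 1+nelx);   % node numbers
edofVec = reshape(nodenrs(1:end-1, 1:end-1), nelx*nely, 1);
edofMat = repmat(edofVec, 1, 4) + repmat([0 nely+1 nely+2 1], nelx*nely, 1);
iK = reshape(kron(edofMat, ones(4,1))', 16*nelx*nely, 1);
jK = reshape(kron(edofMat, ones(1,4))', 16*nelx*nely, 1);
sK = reshape(KE(:) * (kL + (kH - kL) * x(:)'.^p), 16*nelx*nely, 1);
K  = sparse(iK, jK, sK);      % one sparse() call assembles the matrix
```

The design choice worth noting is that all element contributions are generated as flat index/value triplets and handed to a single `sparse` call, avoiding a MATLAB loop over elements.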
Due to its simplicity, some modifications to the original code were needed to account for changing grid sizes and the use of heat conduction elements. Since little information on efficient methods for dealing with localization problems was found in the initial research, it was decided that the 88 line code would be the basis for the initial investigation. Since the implementation is very efficient, it also provides a basis of comparison for any ideas or implementations developed through the research performed during this thesis.
3 OBSERVATIONS / ANALYSIS OF SIMP PROBLEM FOR DEVELOPING LOCAL STRUCTURE
Through this chapter, the behavior of the SIMP method for a localizing heat conduction design problem at low volume fractions will be observed and analyzed. These observations and analyses develop knowledge of localization that is beneficial for focusing the research, along with insight into new ideas that could benefit the modeling of such structures during optimization routines. This information will be used in Chapter 4, which focuses on methods to handle such problems.
3.1 test problem definition

The design domain for the heat conduction problem is developed under the criteria provided in Table 3.1 with a uniform mesh (unless otherwise specified), where the units are relative and uniform:
Domain Size        10x10
Heat Flux          0.1
k_L                0.001
k_H                1
Element Type       Rectangular Element
DOF per Element    4
Min. Density       0.001

Table 3.1: Design domain criteria
From here, two different designs are used for observations and testing, as shown in Figure 3.1. The first design has only a small heat sink (T=0) on the left boundary, centered in the middle, with a width of 1/10th of the overall side length. The second design has heat sinks (T=0) placed on all of the boundaries surrounding the domain.
Figure 3.1: Design domains for test problems. Left: Design 1, Right: Design 2
3.2 observations
Observation of the optimal heat conduction structure, as well as of how this structure develops, can provide useful information for improving the optimization process or insight into how to develop a new one. These observations focus on how the structure develops, the characteristics of this structure, and the influence of different conductivity values specified in the design domain.
3.2.1 Development & Characteristics of Structure

Development

This development would be best shown through a movie of the process iterations, which is hard to convey in report format; thus a description of the process is given. These observations focus on how the design changes within the structure and on the optimal configuration for both designs 1 and 2. They are performed under the criteria provided in Table 3.2.
Convergence                  Temp. Change < 0.05%
Parameters                   p: 3
VF                           0.1
Sensitivity Filter Radius    1.2
Mesh                         200x200

Table 3.2: Criteria for example of the developing structure
The structure initially develops by concentrating all of the material near the center of the domain, a result of the high penalty factor suppressing the design sensitivities in the outer regions. This is illustrated in Figures 3.2a and 3.2c. From here the material is arranged into branch-like structures which eventually fill the rest of the domain, as shown in the optimal designs of Figures 3.2b and 3.2d. However, this redistribution of material after it has clustered is a slow process, as material must be moved into a fine local structure in areas with no initial density. This already shows the inefficiency of starting with a penalty of 3 and a reason for using a penalty continuation method, which also helps avoid locally optimal solutions. With a penalty continuation method the structure develops similarly, except the material does not initially cluster at the center of the domain but remains dispersed.

After the majority of the material has been redistributed into the branches, the remaining design changes are local changes in the structure, mostly concentrated around the boundaries, as shown in Figures 3.2e and 3.2f. In these figures, gray areas correspond to no change in the design, white areas to the addition of material, and black areas to the removal of material.
As the optimization process continues, most of these design variables keep oscillating or only decrease in a slow manner until convergence is eventually reached. However, design changes always remain on the boundaries due to the use of a sensitivity filter, which makes the sensitivities inconsistent. This occurrence is less prevalent with the use of a density filter, with a similar formation of the structure (if penalty continuation methods are used).
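The inconsistency mentioned here comes from the heuristic sensitivity filter used in the 88 line code, which replaces each element sensitivity by a distance-weighted average over its neighborhood. A sketch of the standard formulation, with $\gamma$ a small positive constant to avoid division by zero:

$$
\widehat{\frac{\partial f_0}{\partial x_e}} = \frac{1}{\max(\gamma, x_e) \sum_{i \in N_e} H_{ei}} \sum_{i \in N_e} H_{ei}\, x_i \frac{\partial f_0}{\partial x_i},
\qquad
H_{ei} = \max\bigl(0,\, r_{\min} - \operatorname{dist}(e, i)\bigr)
$$

Since the filtered values are no longer consistent gradients of the objective, small design oscillations along the structural boundary never fully die out.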
The effect of the volume fraction, or the sparsity of the problem, on the convergence rate is in general rather significant, as shown in Table 3.3. As the volume fraction decreases, the number of iterations to converge to a solution greatly increases. This is partially due to the strict penalty of 3 as all the material concentrates, which is more pronounced for local structures. Thus differences may develop based on the settings used.
Figure 3.2: Development of heat conducting structure. (a) Developing structure (Design 1: 10 iterations); (b) Optimal structure (Design 1: 146 iterations); (c) Developing structure (Design 2: 10 iterations); (d) Optimal structure (Design 2: 84 iterations); (e) Areas of design change (Design 1: 146 iterations); (f) Areas of design change (Design 2: 84 iterations)
Volume Fraction    Mesh Size    Iter. for Convergence
0.01               100x100      128
                   200x200      221
0.1                100x100      114
                   200x200      146
0.3                100x100      55
                   200x200      62

Table 3.3: Effect of sparsity on convergence
Characteristics of Structure

The general characteristic of a sparse, localized structure (especially at low volume fraction; see Figure 2.4 in Section 2.2 for an example) is a form of discretized beam-like structure with varying cross sections for each of the individual beam members. This is seen in many sparse structural designs, such as the optimal configuration of a spanning bridge, and is also shown by the heat conduction problem. This distinction from other topology optimization problems could give modeling of the structure a distinct advantage, which presently has not been exploited in continuum topology optimization methods.
3.2.2 Effect of Conductivity Ratio

For the purpose of this section the conductivity ratio is defined as:

$$
k_{ratio} = \frac{k_H}{k_L}
\tag{3.2.1}
$$

i.e., the ratio of the high conductivity to the low conductivity.
The selection of the conductivity ratio plays an interesting and important role in the development of the structure: it directly influences the number of branches forming in the optimization problem. As the high and low conductivities become closer together, there is less incentive for the structure to branch, because the low conductivity material is able to conduct the heat efficiently enough on its own. As the difference in conductivities increases, the low conductive material is less able to transfer heat to the high conductive material, and high temperatures will develop wherever large pockets of low conductive material exist. In order to minimize these pockets of low conductive material and the temperature in these regions, the structure begins to develop more branches. The effects of different conductivity ratios are presented in Figure 3.3.
3.3 efficiency analysis
In this analysis two main tests were performed to help understand the behavior of the optimization process for localizing design problems. The first test investigates the effect of increasing the number of design variables, which directly increases the number of degrees of freedom in the system. The number of design variables controls the resolution of the structure that can be obtained, and the degrees of freedom influence the resolution of the response. Typically in SIMP implementations, as in the 88 line code, the numbers of elements and design variables are equal, but this does not necessarily have to be true. The second test looks at the design variables and how they change during the optimization process. Through these two tests, a better understanding of the SIMP method and of how the heat conduction problem localizes through optimization is used to determine areas where improvements or changes in the implementation of algorithms can be made; this is discussed further in Chapter 4. For both of these tests the settings are presented in Table 3.4.
Figure 3.3: Effect of different conductivity ratio

Convergence                  100 iterations
VF                           0.1
Sensitivity Filter Radius    1.2
Mesh                         Elements doubled in each direction, reference mesh of 50x50

Table 3.4: Efficiency and design variable test settings

3.3.1 Test: Efficiency

The development of update algorithms for optimization has produced a variety of efficient processes, even for large numbers of design variables. It is therefore expected that the optimization formulation is not a big issue in terms of efficiency. Finite element solvers, however, still consume considerable amounts of time for large problems. Many advancements have been achieved with the use of different factorizations, parallelization, and iterative methods, but there is still a large dependence on the size of the problem. One of the most efficient direct methods is Cholesky factorization with forward and backward substitutions. Even with such an implementation, the cost of computing the solution of Ax = b is (1/3)n³ flops for the factorization plus 2n² flops for the forward and backward substitution. This is the worst case; depending on the sparsity of the problem, efficiency may be improved (7). In the MATLAB implementation the backslash operator is used, where the method of solution was determined to be Cholesky factorization.
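The factor-then-substitute pattern behind these flop counts can be sketched directly in MATLAB; the test matrix below is only an illustration, not the problem matrix of this thesis.

```matlab
% Sketch of the direct solve pattern discussed above: factor once
% (~(1/3)n^3 flops for a dense matrix), then a pair of triangular
% solves (~2n^2 flops). The SPD test matrix is illustrative only.
A = gallery('poisson', 22);       % sparse SPD matrix (order 484)
b = ones(size(A, 1), 1);
U = chol(A);                      % A = U' * U, U upper triangular
y = U' \ b;                       % forward substitution
t = U  \ y;                       % backward substitution
norm(A * t - b)                   % residual check
```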
The first test examines the time increase when doubling the number of elements in each direction (subsequently increasing the number of degrees of freedom (DOF) and design variables) for each of the components in the optimization process: FE (assembly and solution), objective function, update of design variables, and filtering, along with the total time for the optimization process. The overall optimization process can be seen in the flowchart presented in Figure 2.1 in Section 2.1.1. This test does not include the start-up time before the optimization loop is entered. The results from this efficiency test are shown in Figure 3.4.

Figure 3.4: Break down of computational cost for design 1 in terms of (a) increase in computational complexity, (b) time differences per iteration, and (c) percentage of total optimization process
Figure 3.4 breaks down the time per iteration of the components involved in the optimization process for design 1. The results for design 2 are presented in Appendix A, Section 1; their behavior is the same as for design 1. Figure 3.4a shows how each of the components scales with increasing mesh size on a log scale, which better depicts the increase in computational complexity of each method. Overall, all the components increase at approximately similar rates. Figure 3.4b presents the time differences as the mesh size increases and shows how much the finite element part is the controlling time factor in the optimization process, especially at large mesh sizes; the time to compute the finite elements is significantly longer than for all the other processes. Figure 3.4c presents a numerical comparison of the percentage each process takes of the overall optimization process, which once again shows the domination of the finite element process in optimization.

Figure 3.4 is further elaborated on by splitting the finite element contribution into its individual components: the assembly of the stiffness matrix and the solution of the finite element system to obtain the temperature response in the design domain. Some interesting occurrences in the behavior of the optimization process are better seen when plotting the data in a 2D line plot. This breakdown is shown in Figure 3.5. The plot shows the percentage of each component for each of the different mesh sizes, where it is noted that the FE assembly and FE solution are a subsequent breakdown of the FE part and reflect the percentage of time relative to the total finite element contribution (blue solid line). Initially the finite element share increases in percentage for the smaller mesh sizes and then begins to decrease. This is either due to inherent changes within the internal workings of the methods, such as possibly needing more search iterations for the bisection algorithm in the optimality criteria method, or perhaps due to the computer architecture changing the storage of the matrices in memory. After examining the number of iterations needed for the bisection method, no irregularities were noticed.
Figure 3.5: Computational cost breakdown
From these results the following conclusions can be made about increasing mesh size and its cost on performance:

• Roughly all the components increase in the same manner, with the FE method increasing the fastest and the objective increasing the slowest.
• Within the finite elements, the solution method is the dominant cost and increases faster than the assembly cost with increasing mesh size.
3.3.2 Test: Design Changes

This final investigation looks at the design variables and examines how they change within the design domain: during which part of the optimization process is there a global change in design variables, and when is the change dominated by local fluctuations? It is expected that the degree to which the change is global or local will depend on the volume fraction specified, the design implementation (i.e., the boundary conditions, conductivity ratios, filter radius, etc.), and the mesh size. In this test a design element is considered not to be changing if the change in the element is less than 10⁻⁴. The results from this test are shown in Figure 3.6 for varying volume fraction for design 1.
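The bookkeeping behind this test is simple; a minimal sketch with synthetic data (all names and values illustrative):

```matlab
% Sketch of the change-tracking test described above (data is synthetic):
% an element counts as static when its density update is below 1e-4.
xold = rand(200);                    % densities at the previous iteration
xnew = xold + 1e-3 * randn(200);     % densities after the update step
changing  = abs(xnew - xold) >= 1e-4;
pctStatic = 100 * (1 - nnz(changing) / numel(changing));
fprintf('%.1f%% of the design domain is static this iteration\n', pctStatic);
```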
The number of design variables changing within the domain is significantly dependent on the VF specified in the optimization process, as can be seen in Figure 3.6d. The higher the VF, the longer global fluctuations remain in the optimization process; likewise, the lower volume fractions experience significant local fluctuations early on in the process. The black dotted lines in Figures 3.6a, 3.6b and 3.6c mark the percentage of the design domain not changing after 20 iterations: for a VF of 30%, 10%, and 1%, about 50%, 20%, and 2% of the design domain remains changing, respectively. This is due to the greater number of design variables available for change at higher VF and the greater length of the perimeter of the structure that can change (see Figure 2.4 for an example).

Figure 3.6: Percentage of design with change less than 10⁻⁴ for different volume fractions. (a) Percentage no change for VF: 0.3; (b) Percentage no change for VF: 0.1; (c) Percentage no change for VF: 0.01; (d) Comparison between volume fractions with 1600x1600 mesh
Also interesting are the differences between mesh resolutions and the fact that the design changes never reach zero during the optimization process, especially for lower mesh resolutions. This is due to the inconsistencies of the sensitivity filter, resulting in small fluctuations in the design variables throughout the process. Optimization runs at smaller mesh resolutions are more affected by the inconsistencies of the sensitivities, meaning a smaller percentage of design variables is static; this is due to the use of a fixed filter radius of 1.2 for all of the mesh resolutions.

The percentage of the design domain with no change was also determined for design 2, with the results shown in Appendix A, Section 2. The main findings for design 1 were also seen for design 2; however, the percentages throughout the iterations seem to be less stable, as some fluctuations occur.
3.4 summary of observations and analysis

A short summary of the principal findings from the previous observations and analysis:
• Overall, the structure of a sparse design problem develops in a manner that resembles a series of connected beams, each with varying width. This is a potential area where a better representation of the model could be used.
• In general, the number of iterations for convergence increases for sparse problems.
• Roughly all the processes within optimization scale in the same manner, and the finite element part is the dominant cost within topology optimization.
• With sparse problems a great percentage of design variables is non-changing early on in the optimization process, which is another potential area of exploitation.
3.5 areas of improvement

Since the 88 line code is not optimized for a design problem with local structure, potentially some significant improvements can be made. As can be seen from the observations and analysis in Sections 3.2 and 3.3, the following areas need attention:

1. Increasing the efficiency per iteration of the finite element solution
2. Increasing the efficiency of the assembly of the finite element stiffness matrix
3. Reducing gray solutions and making the boundaries of the structure well defined
4. Reducing the number of iterations for convergence
These improvements will focus on improving the overall efficiency through a reduction of variables; in a sense, a smarter approach to modeling the structure in the physics of the problem is needed. Thus the main areas of focus of this research investigation are:

• smarter assembly of the stiffness matrix to reduce the redundancy of updating non-changing design variables,
• smarter modeling of the structure in the physical domain (reduced DOF) to obtain the response and sensitivities accurately in order to update the design domain (design variables).

This task will be assisted by looking into different optimization problem formulations which can better describe the boundaries of the structure, bringing greater potential to methods that represent the sparse structure. Lastly, it was noticed that, compared to a less sparse design, a sparse design in general often required more iterations for convergence; examination of mathematical programming methods may therefore bring some benefits to the methods looking to exploit the sparse structure.
4 METHODS TO EXPLOIT LOCAL STRUCTURE
This chapter focuses on presenting a diverse set of methods that can bring benefits to sparse, localized design problems in topology optimization. The areas of improvement identified are general areas that affect the computational efficiency of the optimization process; thus the areas presented in Section 3.5 will be used to guide the research and discussion. Some of these methods are ideas that are already familiar in the field of finite elements and have been more or less adopted into the realm of optimization, such as mesh refinement / adaptivity and matrix re-analysis. The topics presented on current methods were chosen because they best target the issue of sparse structures; the list is not meant to be comprehensive of all available techniques. Besides the current methods, two novel ideas will be shown: one incorporating concepts from substructuring in optimization, and one developed based on the observations of the previous chapter. These are an adaptive domain substructuring and a skeleton curve representation of the structure.
The discussion in this chapter is broken down into the main corresponding areas of improvement: Finite Element (FE) solution, FE assembly, formulation of the optimization problem, and convergence. In each of these sections the method is described while keeping in mind the objective of this research on local structures. The current methods are covered through literature research only, as implementing them all would require a considerable amount of time and resources. For the novel ideas, background research was performed and some implementations / tests were developed to show the functionality of such methods and how they can benefit the localized representation of a structure. These investigations are presented in Chapter 5. At the end of this chapter, a summary of the methods is presented in Section 4.5, giving an overview of the complexity of implementation, expected improvement, advantages, and disadvantages as determined from the research.
4.1 finite element solution
As determined in Section 3.3.1, the solution of the finite element procedure is a considerable cost in optimization, whether for a local structure or not. Several existing techniques are used within topology optimization to help improve this efficiency. Some existing methods are reviewed below, along with the introduction of a novel method. Moreover, some of these methods could be combined to achieve an even better overall efficiency.
4.1.1 Local Mesh Refinement / Adaptivity
The concept of adaptive mesh refinement within topology optimization is still rather new and under development, but the main idea is to obtain the same design as a uniform mesh at a lower cost. It is thus a method of balancing resources: not wasting time computing overly fine meshes, and not obtaining inaccurate results from too coarse a representation. Such a method can bring considerable computational gains by not resolving the response in void areas, which are significant in this type of problem. However, since the structure is constantly changing, it must be ensured that such adaptations allow for accurate responses and computations of sensitivities to drive the changes in the material distribution accurately. Through a dynamic adaptive strategy, refinement and coarsening of the mesh can occur to adapt to the position of the optimized structure. Advantages of such a scheme are:
• Increased computational savings
• Reduced storage through fewer degrees of freedom
• Control of grid resolution, providing resolution where it is needed.
and the disadvantages are:
• Extra cost due to increased complexity in mesh handling
• Deciding when and where to refine or coarsen the mesh.
Two main methods have evolved through research on the topic. Either the mesh remains in a fixed format, usually a Cartesian / structured grid, and individual elements are further refined, or the mesh adapts itself to the form of the structure. Each method offers its own benefits. Since these methods often involve numerous steps to implement, only a description of the methods is presented here.
The simplest mesh adaptivity methods employ regular Cartesian grids that are further refined in each of the individual grid cells. The general procedure of such a method begins by covering the computational domain with a coarse mesh. As the optimization process progresses, cells are tagged for refinement based on a supplied criterion, such as the density in the element. The tagged cells are then refined at certain intervals, and a procedure is performed to ensure conserved quantities are balanced across the cell boundaries.
One of the early approaches of this kind was developed by Costa and Alves (9), presenting a method that refines the mesh through a sequence of converged solutions at fixed intervals in the optimization process. The method aims at refining the coarse mesh design and never derefines any previous details; refinements occur in all elements once they reach a specific density level. A downside of this method, as reported by the authors, is the dependency on the initial mesh. The results obtained were often not the same optimal design found with a uniform mesh, because the designs were confined to earlier results of coarse meshes.
Another work, presented by Wang, Sturler and Paulino (34), improved upon Costa and Alves' adaptive mesh refinement by offering a method that dynamically adapts to the optimization process while keeping the simplicity of the Cartesian grid. The resulting grid for an optimal structure based on the authors' method is shown in Figure 4.1.
The method allows the mesh to change continually, in both refinement and de-refinement. The authors reported better gains in efficiency (approximately 30% reduction in time) through the use of de-refinements in void regions, as well as optimal solutions similar (2.58% difference in designs) to those of a uniform mesh. This greater accuracy can be attributed to the dynamic nature of the refinement: the results after refinement are used to update the design, which is no longer confined to earlier coarse meshes. Such a method makes it possible to readapt the mesh if the structural region moves towards a boundary between fine and coarse mesh.
Figure 4.1: Topology optimization on an adaptive mesh with Cartesian grid. Reproduced from the paper by Costa Jr. and Alves (9)
A few papers by Maute, Schwarz, and Ramm (22)(23) present a different version of adaptive mesh refinement: an approach in which the orientation of the mesh adapts to the structural boundaries of the optimization problem, as shown in Figure 4.2, though this greatly adds to the complexity of the implementation. Such an approach better represents the material boundaries and the stresses along the boundary. Adaptive refinement is first used to develop the structure and transform the mesh into a configuration along the boundary through the use of isolines. Once void areas are determined, they are eliminated from the problem. In the examples presented, the authors report a 16% reduction in computational effort compared to a conventional procedure while producing similar results.
Overall this approach adds a large amount of complexity to the problem, especially if the mesh deviates from a Cartesian mesh. Methods need to be devised to select elements for refinement, and the error in the solutions needs to be monitored. Thought has to be given to how the sensitivities develop and to the effect mesh refinement has on the final optimal design. The results achieved are usually similar to those of a conventional uniform mesh, but discrepancies are still present.
4.1.2 Global Mesh Refinement
A conceptually simpler mesh refinement strategy is global mesh refinement. This eliminates many of the complexities and complications of a dynamic mesh adaptivity method while still achieving fine mesh detail. However, this type of refinement needs to be coupled with another method to bring out its true benefits. Global mesh refinement develops a fine mesh where the structure is located
Figure 4.2: Topology optimization with adaptive unstructured mesh. Reproduced from the paper by Maute et al. (23)
but also contributes to a fine mesh in void areas. The void areas need to be dealt with
separately.
An advantage of such a mesh refinement technique is that it reduces the number of iterations that need to be performed on a fine mesh. By initially developing the structure on a coarse mesh, fewer iterations are feasibly needed for the fine mesh calculations; the improvement comes from this reduction of fine-mesh iterations. A concern of such a method is how the coarse meshes affect the final optimization results. The solutions before refinement are based on a coarse mesh, which often differs considerably from a fine mesh solution. Since the structure is already developing towards an optimal solution based on the coarse mesh, it may not follow the same path that would be taken if an initially finer mesh had been used.
A paper by Swan and Rahmatalla (30) presents an implementation of a global mesh refinement technique. They proposed a method where the preliminary problem is solved on a coarse mesh using a moderate volume fraction. Through a sequential process they map the results of the previous mesh onto a finer mesh and reduce the volume fraction in the domain, repeating the process until an optimal shape and performance is achieved. They also apply a size reduction strategy in which void structural elements can be removed from the problem and returned if needed. The authors reported similar designs in their examples and expressed considerable computational gains, but specify no objective value for the improvement.
A strategy of removing void elements from the problem is computationally beneficial
if the type of problem is able to take advantage of it. For a heat conduction problem
and many others, this is not the case. The void areas in the domain need to be retained
in order to calculate the overall temperature response in the design. A different method
would be needed in order to reduce computational effort in these regions.
4.1.3 Mesh Superposition Methods (S-FEM)
A FE technique developed by Yue (36) defines a global coarse mesh followed by a sequence of finer meshes superimposed over the original coarse mesh. This method is similar to adaptive h-refinement and is called s-refinement, or the mesh superposition technique (S-FEM). Global and local refinement can be achieved in an efficient manner, where the local refinement is placed in the areas where it is needed. The mesh can be constructed in a simple fashion with a structured mesh superposition, as shown in the illustration below (an unstructured mesh introduces greater complications, but perhaps a better representation or better efficiency).
Figure 4.3: Example of mesh superposition (picture developed by Yue (36))
A simple implementation and the main equations structuring the problem are presented here, in order to indicate the computational complexity of this approach. The assembled system of equations for the problem, as presented by Yue (36), becomes:
\[
\begin{bmatrix} K_{GG} & K_{GL} \\ K_{LG} & K_{LL} \end{bmatrix}
\begin{Bmatrix} t_G \\ t_L \end{Bmatrix}
=
\begin{Bmatrix} q_G \\ q_L \end{Bmatrix}
\tag{4.1.1}
\]
with K_GG and K_LL corresponding to the standard stiffness matrices of the global and local mesh respectively, and K_GL and K_LG representing the coupling matrices between the different meshes.
Multi-level mesh superposition is possible, which helps eliminate the influence of the global discretization on the composite mesh. This also improves the condition of the matrix. The assembled equations are:
Figure 4.4: Multi-level superposition (picture developed by Yue (36))

\[
\begin{bmatrix}
K_{00} & K_{01} & \cdots & K_{0n} \\
K_{10} & K_{11} & \cdots & K_{1n} \\
\vdots & \vdots & \ddots & \vdots \\
K_{n0} & K_{n1} & \cdots & K_{nn}
\end{bmatrix}
\begin{Bmatrix} t_0 \\ t_1 \\ \vdots \\ t_n \end{Bmatrix}
=
\begin{Bmatrix} q_0 \\ q_1 \\ \vdots \\ q_n \end{Bmatrix}
\tag{4.1.2}
\]
with layer 00 corresponding to the global layer, layers ii corresponding to the subsequent superimposed meshes, and indices ij representing the coupling matrices between the layers. The coupling matrices are obtained through the local matrix K_LL and a global-local transformation matrix C, which is constructed from the interpolation functions of the global and local meshes. The coupling matrix is described by:
\[
K^e_{GL}(i, j) = \sum_{k=1}^{4} C_{ik}\, K^e_{LL}(k, j)
\tag{4.1.3}
\]
A method could be envisioned for assembling multiple meshes in either a structured or an unstructured manner. In the structured case, the assembly is relatively simple, as the discretization follows the main structural elements and uses them as a guide to construct the finer mesh. Overall this method of assembly is relatively simple and efficient; it can be used to develop a local mesh around the local structure while maintaining a sparse global surrounding mesh. In the unstructured case, one could think of overlaying bar elements where the structural members are needed in the heat conduction problem. Based on the amount of conduction needed through the element, a certain size can be assigned by some sizing algorithm. This method is likely to benefit from the use of a level-set method, which gives a more accurate placement of the bar elements due to its smooth boundaries.
A part that was not clear from the author's description of the approach is the effect of changing values in the global and local stiffness matrix: if the density of an element changes in the local mesh, would the matrix for the global mesh also need to be recalculated? Also, this method was presented for a single material, so it is not clear how the formulation would be affected by a varying density field in the case of continuum topology optimization.
4.1.4 Adaptive Sub-Structuring of Optimization Domain
The idea of sub-structuring (static reduction) is not a new topic in the field of finite element analysis; it is used in many large scale problems to reduce the size of the problem, to efficiently obtain a solution, or to perform different forms of analysis on different components. The technique first arose in the aerospace field in the late 1960s, in order to break down the complex structure of an airplane into components to be analyzed, as discussed in a paper by Przemieniecki (26).
Recently, such methods have been explored in topology optimization as a technique to reduce the computational cost. Vemaganti and Lawrence (31) developed a method using a decomposition into non-overlapping domains to take advantage of parallel methods to solve the optimization problem. This consisted of developing a parallel algorithm for SIMP as well as testing three different parallel linear solvers for the equilibrium problem. Mahdavi et al. (21) developed a similar technique using sub-structuring to build a parallel optimization method; they avoid developing the global stiffness matrix and directly decompose the problem, solving the equilibrium equations through a preconditioned conjugate gradient algorithm. Ma et al. (20) examined a method of substructuring that gives the designer specific control over the subdomains in order to develop more realistic design problems: the user is allowed to use multiple materials in various subdomains of the structure, to control the material distribution, and to impose a desired pattern or tendency on the material distribution. What these traditional substructuring techniques have in common is the use of parallel processes to achieve a gain in efficiency: each sub-domain is solved on a different processor.
Here, through the method of static condensation, a different approach is proposed, unlike the traditional ones. It might be possible to improve efficiency by reducing the size of the stiffness matrix that needs to be inverted every iteration. Since a large part of the domain has unchanging design variables, it is possible to take the inverse of the static part only once every several iterations. When the number of design variables in the domain greatly increases, this method could prove especially important, as the affected set of design variables that needs to be inverted every iteration is reduced.
An approximate example illustrates the potential improvement of such a method. For the Cholesky decomposition, the worst case computational cost is O(n^3). Take the cost of calculating the full domain, represented by 1, as the reference. For calculating 10% of the domain, represented as 0.1, which was shown in Section 3.3.2 to arise quickly in the optimization process, and assuming the same O(n^3) behavior, the computational cost is 10^-3: a 99.9% improvement. Obviously this comparison is just a crude approximation to illustrate the idea; in reality the improvements will be considerably smaller, as the example omits the extra costs of constructing the separate domains and building the matrices.
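The crude estimate above can be expressed in two lines of code; the cubic scaling is the only assumption:

```python
# Relative cost of factorizing a fraction f of the domain under O(n^3) scaling.
def relative_cost(f):
    return f ** 3

print(relative_cost(0.1))  # 0.001, i.e. the 99.9% reduction quoted above
```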
A simple illustration of how the domain would be structured is shown in Figure 4.5.
The domain is essentially structured from a single domain to a dual domain in a way
that separates the design variables that are changing from those that are not changing.
Thus the finite element equation can be substructured in the following manner:
\[
\begin{bmatrix} K_{CC} & K_{CS} \\ K_{SC} & K_{SS} \end{bmatrix}
\begin{Bmatrix} t_C \\ t_S \end{Bmatrix}
=
\begin{Bmatrix} q_C \\ q_S \end{Bmatrix}
\tag{4.1.4}
\]
where K_SS, t_S, and q_S are the stiffness matrix, temperature response, and heat flux of the static domain respectively, and K_CC, t_C, and q_C are those of the changing domain. K_CS and K_SC are the coupling matrices.

Figure 4.5: Substructuring of Domain

The stiffness matrix in the previous equation is developed by the following subdivision of the nodes:

\[
\begin{bmatrix}
K_{cc} & K_{ci} & 0 \\
K_{ic} & K_{ii} & K_{is} \\
0 & K_{si} & K_{ss}
\end{bmatrix}
=
\begin{bmatrix} K_{CC} & K_{CS} \\ K_{SC} & K_{SS} \end{bmatrix}
\tag{4.1.5}
\]
with 0 being a matrix of zeros.
Arranging the matrix equation into its individual components yields:
\[
t_S = K_{SS}^{-1} \left( q_S - K_{SC}\, t_C \right)
\tag{4.1.6}
\]
\[
t_C = K_{CC}^{-1} \left( q_C - K_{CS}\, t_S \right)
\tag{4.1.7}
\]
The temperature of the static domain can be substituted into the matrix equations to
determine the temperature of the domain that is changing.
\[
t_C = \left( K_{CC} - K_{CS} K_{SS}^{-1} K_{SC} \right)^{-1} \left( q_C - K_{CS} K_{SS}^{-1} q_S \right)
\tag{4.1.8}
\]
The temperature in the part of the domain that is static can be recovered after the
temperature in the domain that is changing is known.
\[
t_S = K_{SS}^{-1} \left( q_S - K_{SC}\, t_C \right)
\tag{4.1.9}
\]
Thus, for several design iterations, the size of the problem to be solved each iteration is reduced to the number of changing design variables in the design domain.
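A minimal sketch of the condensed solve of Equations 4.1.6–4.1.9 is given below, assuming the system has already been partitioned into changing (C) and static (S) blocks; in an actual implementation K_SS would be sparse and its factorization would be cached and reused over the iterations in which the static domain is frozen:

```python
import numpy as np

# Minimal dense sketch of the condensed solve of equations 4.1.6-4.1.9.
def solve_condensed(K_CC, K_CS, K_SC, K_SS, q_C, q_S):
    # Quantities depending only on the static block (reusable between iterations)
    K_SS_inv_K_SC = np.linalg.solve(K_SS, K_SC)
    K_SS_inv_q_S = np.linalg.solve(K_SS, q_S)
    # Schur complement system for the changing DOFs (eq. 4.1.8)
    K_red = K_CC - K_CS @ K_SS_inv_K_SC
    t_C = np.linalg.solve(K_red, q_C - K_CS @ K_SS_inv_q_S)
    # Recover the static-domain temperatures (eq. 4.1.9)
    t_S = K_SS_inv_q_S - K_SS_inv_K_SC @ t_C
    return t_C, t_S
```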
A concern with this process is the effect of the interface nodes shown in Equation 4.1.5, as the size of K_CC is [c + i] × [c + i], where c and i are the numbers of changing and interface DOFs respectively. This depends on the length of the boundary of the changing variables: if the boundary is large, a significant number of degrees of freedom is added to the changing domain through the terms K_ii, K_ic and K_ci, which reduces the overall benefit of the method.
An illustration of how this process is performed is presented in Figure 4.6. The overall process is similar to the original optimization process presented in Figure 2.1; the major differences are the development of the separate domains and the separate solution of the temperature for the static and changing design variables.
[Flowchart: Initialize Optimization → Determine Changing and Static Domains → Develop Buffer Zone → Generate Static Stiffness Matrix → Solve Static Inverse → Generate Changing Stiffness Matrix → Compute Solution of Changing Domain → Compute Solution of Static Domain → Solve Optimization Problem → Update Densities → Check Domains (Update Domains / Domains Fixed) → Convergence? No: repeat; Yes: End Optimization]
Figure 4.6: Processes involved with adaptive structuring of the optimization domain
This approach will be further explored and tested in Section 5.3.
4.1.5 Structural Re-Analysis
Since a great deal of computation time within an optimization procedure is due to the solution of the equilibrium equation, a significant reduction in computational cost can be gained by performing an approximate solution of the analysis problem. Often in the optimization of these localized structures a high percentage of design variables within the optimization domain does not change, leaving few to be updated, especially as the design approaches convergence. The remaining few design changes can be easily and accurately approximated through a method of structural re-analysis, yielding an approximate temperature response in the domain.
Within the current formulation of the optimization routine these rough approximations are acceptable, as the errors developed in the optimization process are directly reflected in the analysis of the sensitivities in order to maintain consistency. Hence, any errors developed within the approximation process are also accounted for in the sensitivity analysis, which means large inaccuracies in the solution of the response can be tolerated. This can be seen through the development of the sensitivity equations as presented by Amir (2), which will be shown further on in this discussion.
Before getting into the implementation of re-analysis techniques within topology optimization, an overview of static re-analysis incorporating the combined approximations approach proposed by Kirsch (18) will be discussed. In general, the idea behind re-analysis is to reduce the number of DOFs needed to solve a system of equations: the solution is approximated with a significantly smaller group of variables to determine the response of a larger system. This is the basic idea of the reduced basis method, where the smaller group of variables is a linear combination of a few preselected basis vectors. Through a discretization of the computational domain in the optimization problem, a linear system of equilibrium equations is developed at each iteration step of the optimization routine:
\[
K t = q
\tag{4.1.10}
\]
with K the stiffness matrix, given in its factorized form K = U^T U with U an upper triangular matrix, t the unknown temperature vector, and q the applied heat flux on the domain. Throughout the optimization process it is assumed that the heat flux q does not change. To increase efficiency, an approximate solution t ≈ t̃ can be developed for the temperature response through a formulation of the change in the stiffness matrix due to a change in structure:
\[
K = K_o + \Delta K
\tag{4.1.11}
\]
where K_o is the stiffness matrix (symmetric and positive definite) from a previous full analysis and ΔK is the resulting change in the stiffness matrix due to a design change. The approximate solution is obtained from:
\[
\left( K_o + \Delta K \right) t = q
\tag{4.1.12}
\]
A recurrence relation is then formulated in order to develop the binomial series expansion that approximates the solution. Rearranging equation 4.1.12 yields:
\[
K_o t = q - \Delta K\, t
\tag{4.1.13}
\]
The recurrence relation is then formulated as:
\[
t^{k} = K_o^{-1} q - K_o^{-1} \Delta K\, t^{k-1}
\tag{4.1.14}
\]
Renaming variables forms:
\[
B = K_o^{-1} \Delta K
\tag{4.1.15}
\]
\[
t_1 = K_o^{-1} q
\tag{4.1.16}
\]
\[
t^{k+1} = \left( I - B \right) t^{k}
\tag{4.1.17}
\]
Thus a binomial series can be developed as
\[
\left( I - B + B^2 - B^3 + \ldots \right) t_1
\tag{4.1.18}
\]
where I is the identity matrix; it is important to note that t_1 is obtained from a previous full analysis of the optimization problem. Additional terms of the series are given by:
\[
t_i = -B\, t_{i-1}, \quad i = 2, \ldots, s
\tag{4.1.19}
\]
The local approximations are formed from the binomial series expansion, and the remaining basis vectors used within the global approximations are constructed through forward and backward substitution. The reduced basis approximation of the temperature response is a linear combination of s linearly independent basis vectors t_1, t_2, ..., t_s. The approximate solution can be expressed as:
\[
\tilde{t} = y_1 t_1 + y_2 t_2 + \ldots + y_s t_s = R_B\, y
\tag{4.1.20}
\]
with R_B an n × s matrix containing the basis vectors and y the vector of unknown coefficients:
\[
R_B = \left[ t_1, t_2, \ldots, t_s \right], \qquad y^T = \left[ y_1, y_2, \ldots, y_s \right]
\tag{4.1.21}
\]
Substituting equation 4.1.20 into equation 4.1.10 and pre-multiplying by R_B^T gives the approximate system of size s × s:
\[
R_B^T K R_B\, y = R_B^T q
\tag{4.1.22}
\]
Simplifying the notation:
\[
K_R = R_B^T K R_B, \qquad q_R = R_B^T q
\tag{4.1.23}
\]
The reduced system is thus described by:
\[
K_R\, y = q_R
\tag{4.1.24}
\]
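A sketch of this combined approximations procedure is shown below, assuming a dense system for brevity (a real implementation would work with sparse Cholesky factors); it builds the basis vectors of Equations 4.1.16 and 4.1.19 and solves the reduced system of Equation 4.1.24:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Sketch of the combined-approximations solve (eqs. 4.1.16, 4.1.19-4.1.24).
# K0_fact is the retained Cholesky factorization of the stiffness matrix
# from the last full analysis, dK the change in stiffness since then,
# K = K0 + dK the current stiffness matrix, s the number of basis vectors.
def reduced_basis_solve(K0_fact, dK, K, q, s=4):
    n = q.shape[0]
    R = np.empty((n, s))
    R[:, 0] = cho_solve(K0_fact, q)                      # t1 = K0^{-1} q
    for i in range(1, s):
        R[:, i] = -cho_solve(K0_fact, dK @ R[:, i - 1])  # t_i = -B t_{i-1}
    K_R = R.T @ K @ R                                    # reduced s-by-s system
    q_R = R.T @ q
    y = np.linalg.solve(K_R, q_R)
    return R @ y                                         # approximate response
```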
Based on the previous equations, an optimization cycle can be developed in which a full analysis of the optimization problem is performed once every few iterations, while in between an approximate optimization problem is formulated based on the approximate solutions of the equilibrium equation. The optimization problem as formulated by Amir (2) is shown below:
\[
\begin{aligned}
\min_{\rho}\quad & c(\rho) = y^T R_B^T K(\rho) R_B\, y \\
\text{s.t.:}\quad & \sum_{e=1}^{N} v_e \rho_e \leq V \\
& 0 \leq \rho_{\min} \leq \rho_e \leq 1, \quad e = 1, \ldots, N \\
\text{with:}\quad & R_B^T K(\rho) R_B\, y = R_B^T q \\
& K_0(\rho_0)\, t_1 = q \\
& K_0(\rho_0)\, t_i = -\Delta K(\rho, \rho_0)\, t_{i-1}, \quad i = 2, \ldots, s
\end{aligned}
\tag{4.1.25}
\]
The matrix K(ρ) of the approximate problem is split into parts:
\[
K(\rho) = K_0(\rho_0) + \Delta K(\rho, \rho_0)
\tag{4.1.26}
\]
with the first part corresponding to the previous factorization of the full problem and the second part to the changes in the stiffness matrix due to a change in design. For the approximate problem both parts are written as shown below:
\[
K_0(\rho_0) = \sum_{e=1}^{N} \rho_{e,[0]}^{p}\, K_e, \qquad
\Delta K(\rho, \rho_0) = \sum_{e=1}^{N} \left( \rho_e^{p} - \rho_{e,[0]}^{p} \right) K_e
\tag{4.1.27}
\]
In order to keep consistency in the optimization problem, the sensitivity analysis of the model also needs to be modified to accept the approximations. The modified expression is shown below; full details on the implementation can be found in the paper by Amir (2).
\[
\frac{\partial c}{\partial \rho_e} = -y^T R_B^T \frac{\partial K}{\partial \rho_e} R_B\, y - \sum_{i=2}^{s} \lambda_i^T \frac{\partial K}{\partial \rho_e} t_{i-1}
\tag{4.1.28}
\]
Amir tested his technique on the conventional minimum compliance topology optimization problem. The results were nearly identical (0.007% error) to the known MBB-beam example by Sigmund et al. (3). Perhaps the most remarkable part is the improvement in efficiency: the full problem required a total of 92 iterations, while with re-analysis 190 iterations were performed, of which only 19 were full computations of the equilibrium equations. Unfortunately the author did not present an exact time saving for this example, but mentioned that the larger the problem, the greater the savings of this approach. Another approach was developed in a similar manner, with the approximations constructed through a Krylov subspace iterative solver; with this method the author reported a 40% reduction in the time spent on the analysis. For more details on the implementation within topology optimization the reader is referred to Amir (2).
One of the main benefits of this approach is the ability to control the accuracy and efficiency of the method. Using more basis vectors in R_B yields greater accuracy in the solution, but at the cost of reduced efficiency, and vice versa. Therefore, in the beginning of the optimization process, when the structure is only roughly formed, cheaper approximations can be used; closer to the end, when greater accuracy is needed, more expensive approximations can develop the final structure. An approach used by Amir (2) was to look at the error in the solution; based on this error, more or fewer basis vectors were used, and the algorithm was told when to construct new approximations.
4.2 finite element assembly
Figure 3.6 in Section 3.3.2 showed that a significant number of design variables in the optimization domain are unchanging; thus a method can be envisioned where only the elements that are changing are updated in the stiffness matrix. As the assembly of the finite elements is still a large part of the computational cost, approximately 30% of the optimization process, large gains in efficiency could be achieved.
By examining the design elements, it can be determined whether there is a change in the design between successive iterations. If there is no change in the design of a particular element, then there is also no corresponding change in the stiffness matrix for that element. Therefore, the update of non-changing elements can be skipped, reducing the number of element matrices that need to be updated within the stiffness matrix.
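A sketch of this selective update is shown below, assuming a SIMP-style penalized element matrix k(ρ) = ρ^p Ke, so that the change in an element's contribution is (ρ_new^p − ρ_old^p) Ke; the names and dense-matrix storage are illustrative only:

```python
import numpy as np

# Sketch of selectively updating only changed element contributions.
# `edof` maps each element to its global DOF indices; all names are
# illustrative and the interpolation k(rho) = rho^p * Ke is an assumption.
def update_stiffness(K, rho_old, rho_new, Ke, edof, penal=3.0, tol=1e-9):
    changed = np.flatnonzero(np.abs(rho_new - rho_old) > tol)
    for e in changed:
        dofs = edof[e]
        dKe = (rho_new[e] ** penal - rho_old[e] ** penal) * Ke
        K[np.ix_(dofs, dofs)] += dKe  # replace old contribution by the new one
    return K
```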
4.3 optimization problem formulation
As determined from the observations in Section 3.2, there are different possibilities for improvement in the type of method used. These improvements can occur in the way the physical model of the structure is developed, as well as in a better description of the structure in terms of well-defined boundaries. Two methods are explored in this section: a level
set approach that changes the modeling of the structure from the traditional discretization of small elements to bar-like elements, and a SIMP method that offers better suppression of the gray scale structure in the design. This suppression will be beneficial to the method of adaptive substructuring presented in Section 4.1.4.
4.3.1 Level Set Method Through Skeletonization
The concept of using level sets for moving interfaces was originally developed by Osher and Sethian (25) in 1988 and has only recently been applied to topology optimization, with much of the focus on structural optimization [(1),(32),(27)]. Formulations for heat conduction problems have more recently been developed by Ha and Cho (15), and further papers have looked at design-dependent effects (35), multiple load cases (17), and nonlinearities (37).
Many researchers have used the level-set formulation as an alternative to traditional optimization techniques such as SIMP or homogenization methods, to overcome drawbacks such as checkerboard effects and large regions of intermediate density in the resulting designs. Unlike traditional methods, the level set method uses an implicit representation of the interface between void and solid through the contours of a level set function. Thus a clear distinction between the two regions can be achieved, which is advantageous for obtaining accurate responses and avoiding ambiguity. Many different formulations have been developed over the last several years, involving different ways to map the design to the physics of the system, update methods for the Hamilton-Jacobi equation, regularization techniques, and different methods for sensitivity analysis. An overview of these methods was presented by van Dijk (10).
First, a short and simple description of the formulation of the level set method will be presented, followed by a description of how this approach can be used for, and benefit, localizing designs in topology optimization. The idea is not to present a comprehensive review of level sets, but to introduce the main characteristics in order to determine how they can be used to reduce the computational effort of topology optimization. The formulation of the optimization problem for the level set method is similar to that of the traditional methods; the reader is referred to equation 2.1.1 in Section 2. Here a level set function φ is introduced in a fixed design domain D, representing the boundary contour ∂Ω between the material domain Ω_mat and the void domain D \ Ω. The discrimination between the two domains can be expressed by the following property:


\[
\begin{cases}
\varphi(X) > 0 & \text{for } \forall X \in \Omega \\
\varphi(X) = 0 & \text{for } \forall X \in \partial\Omega \\
\varphi(X) < 0 & \text{for } \forall X \in D \setminus \Omega
\end{cases}
\tag{4.3.1}
\]
The update of this contour is often controlled through the Hamilton-Jacobi equation, defining an initial value problem; however, other methods exist as well, and the type of update method used depends on the problem.
\[
\frac{\partial \varphi}{\partial t} - v_n \left\| \nabla \varphi \right\| = 0
\tag{4.3.2}
\]
For a given normal velocity, the level set function can be updated to produce a new design in the optimization domain, where the normal velocity is usually obtained through
sensitivities of the problem.
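A first-order sketch of one explicit update step of Equation 4.3.2 on a uniform grid is shown below; central differences are used for brevity, whereas a proper implementation would use upwind differencing and a CFL-limited time step:

```python
import numpy as np

# First-order sketch of one explicit Hamilton-Jacobi step for equation
# 4.3.2: phi_new = phi + dt * vn * |grad phi|. Central differences are
# used only for brevity; a robust scheme would use upwind differencing.
def hj_step(phi, vn, dt, h=1.0):
    gx, gy = np.gradient(phi, h)
    grad_norm = np.sqrt(gx ** 2 + gy ** 2)
    return phi + dt * vn * grad_norm
```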
Figure 4.7: Discretization of level-set function with skeleton structure
Since the localizing problem develops into a sparse branched network in a fraction of the optimization domain, a level set formulation can be developed to take advantage of this. The advantage of a level set approach is a smooth boundary structure with a crisp interface, potentially making it better suited for a substructuring approach. However, the drawbacks are the increased complexity of the optimization process and the slow update and convergence of the method, making the results less promising. Another idea is to parameterize the level set function separately from the physics of the problem. This could possibly be achieved by developing the structure through cheap bar elements along the skeleton of the structure, with the bars receiving a width from the level set function as shown in Figure 4.7. This bar element mesh is superimposed over a coarse global mesh, giving both the response of the local structure and the response of the entire optimization domain. Hence, an optimal layout of the structure could be obtained much more cheaply than with a traditional dense fixed mesh. This idea of bar elements will be further discussed in Section 5.1.
4.3.2 SIMP with Gray Scale Suppression
Through a simple modification of the Optimality Criteria (OC) statement within the 88 line code by Sigmund (3), a better suppression of gray scale structures can be achieved than with the SIMP method and a penalty value alone. In a paper about gray scale suppression, Groenwold and Etman (14) modified the OC update statement to the following form:
\[
x_i^{\text{new}} =
\begin{cases}
\Gamma\!\left( x_i \beta_i^{\eta} \right) & \text{if } \check{x}_i < \Gamma\!\left( x_i \beta_i^{\eta} \right) < \hat{x}_i \\
\check{x}_i & \text{if } \Gamma\!\left( x_i \beta_i^{\eta} \right) \leq \check{x}_i \\
\hat{x}_i & \text{if } \Gamma\!\left( x_i \beta_i^{\eta} \right) \geq \hat{x}_i
\end{cases}
\tag{4.3.3}
\]
where x_i are the elemental design variables, x_i^new the updated elemental design variables, and η the dampening parameter. The suppression operator Γ(•) biases the update of individual gray scale design variables towards a black and white design within the update method itself, without affecting the volume constraint. The only difference between Equation 4.3.3 and the traditional OC method is the addition of Γ(•).
Groenwold and Etman (14) mention in their paper two different gray scale suppression operators with parameter q, based either on a power law or on linear suppression. Both are shown below:
\[
\text{Power:}\quad \Gamma\!\left( x_i \beta_i^{\eta} \right) = \left( x_i \beta_i^{\eta} \right)^{q}, \qquad
\text{Linear:}\quad \Gamma\!\left( x_i \beta_i^{\eta} \right) = q\, x_i \beta_i^{\eta} - q + 1
\tag{4.3.4}
\]
These gray scale suppression operators further bias gray scale variables towards a black and white design by affecting the update parameter β_i, where β_i is defined as the ratio of the design sensitivity to the constraint sensitivity, with λ a scaling parameter:
\[
\beta_i = \frac{\partial f_o / \partial x_i}{\lambda\, \partial g / \partial x_i}
\tag{4.3.5}
\]
Using the definition of Lagrange optimality, a β_i value equal to 1 means the design variable satisfies the optimality conditions. Thus, if β_i is smaller than one, material needs to be removed, and if β_i is larger than one, material needs to be added. The further β_i is from one, the stronger the promotion to add or remove material, as β_i is raised to the power q in case the power operator is used. Essentially, q can be seen as a parameter that promotes greater movement of densities within the design update to bring the design variables to an optimal condition. This is done by underestimating the gray scale material developed in x_i β_i^η and further forcing greater movements of material through x̌_i and x̂_i.
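A compact sketch of the update of Equations 4.3.3–4.3.4 with the power-law operator is shown below; the move-limited bounds x̌ and x̂ are passed in as arrays, and the parameter values are illustrative:

```python
import numpy as np

# Sketch of the OC update of equation 4.3.3 with the power-law suppression
# operator of equation 4.3.4. `beta` holds the optimality ratios of eq.
# 4.3.5; x_low and x_upp are the move-limited bounds per design variable.
def oc_gss_update(x, beta, x_low, x_upp, eta=0.5, q=2.0):
    cand = (x * beta ** eta) ** q        # power-law suppression Gamma(x*beta^eta)
    return np.clip(cand, x_low, x_upp)   # the three branches of eq. 4.3.3
```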
4.4 update / convergence
Finally, this last section looks at convergence. It was noticed through observations of the structures developing at lower volume fractions that local structures often take more iterations to converge to a final solution. The OC method was used in the testing and observations, which raised the question whether a different update method might offer better convergence, with corresponding computational savings. Particularly desirable seemed a method that offers different forms of dampening and move limits for the individual design variables, allowing different design variables to change at different rates, for example allowing better movement of material where the design variables seem stuck with small local changes in design. Thus mathematical programming techniques were researched.
4.4.1 Sequential Approximation Optimization Update Algorithm
The use of Sequential Approximation Optimization (SAO) update algorithms can bring several benefits to topology optimization: they are intuitive and have a physical background that does not rely on heuristic reasoning, they provide more flexibility in the optimization process if exponential approximations are used, and they can provide an efficient update method. SAO methods use gradient based algorithms that depend strictly on convex separable approximation problems. Familiar SAO algorithms are the Convex Linearization (CONLIN) algorithm, as shown by Fleury (12), and a more general form called the Method of Moving Asymptotes (MMA), developed by Svanberg (28). Essential to the success of these methods is the formulation of analytical approximations of the real objective f_o(x) and constraint f_i(x) functions. These approximate functions f̃_0(x) and f̃_i(x) generate an equivalent problem that is easier to solve but is limited in the domain in which it applies. The approximations can be linear, reciprocal, or exponential. An iterative procedure is applied to update the design variables
until convergence, when the problem is no longer constrained by the temporary constraints. The approximations are sampled in the sub-problems, as they are less expensive to evaluate.
A design problem can be approximated in several ways, commonly through a truncated first-order Taylor series expansion with linear, reciprocal, or exponential intervening variables, although higher-order Taylor series can also be used. As improved efficiency and accuracy are obtained with the exponential intervening variables used by Groenwold and Etman (13), further discussion will focus on these, as they provide results and flexibility that could be beneficial to the considered problem. More information about the details of the SAO algorithm and the reciprocal approximation is given by Groenwold and Etman (13). The exponential approximation is obtained by substituting exponential intervening variables,
\[
y_i = x_i^{a}, \quad i = 1, 2, \ldots, n
\tag{4.4.1}
\]
into the truncated Taylor series, yielding the following approximation:
\[
\tilde{f}_{E\alpha}(\mathbf{x}) = f_\alpha\!\left( \mathbf{x}^{\{k\}} \right) + \sum_{i=1}^{n} \left( x_i^{\,a_{i\alpha}^{\{k\}}} - \left( x_i^{\{k\}} \right)^{a_{i\alpha}^{\{k\}}} \right) \frac{\left( x_i^{\{k\}} \right)^{1 - a_{i\alpha}^{\{k\}}}}{a_{i\alpha}^{\{k\}}} \left. \frac{\partial f_\alpha}{\partial x_i} \right|^{\{k\}}
\tag{4.4.2}
\]
with α = 0 denoting the objective function and α = 1 the constraint function. The variables a_{iα}^{{k}} can be estimated through a method by Fadel et al. (11), which uses first order gradient information:
\[
a_{i\alpha}^{\{k\}} = 1 + \frac{\ln\!\left( \left. \dfrac{\partial f_\alpha}{\partial x_i} \right|^{\{k\}} \middle/ \left. \dfrac{\partial f_\alpha}{\partial x_i} \right|^{\{k-1\}} \right)}{\ln\!\left( x_i^{\{k-1\}} / x_i^{\{k\}} \right)}, \quad i = 1, 2, \ldots, n
\tag{4.4.3}
\]
The default range for a_{iα}^{{k}} according to Fadel et al. is −1 ≤ a_{iα}^{{k}} ≤ 1, with −1 corresponding to a reciprocal approximation and 1 to a linear approximation. To start the algorithm it is suggested to set a_{iα}^{{k}} = 1.
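A sketch of the exponent estimate of Equation 4.4.3 from two successive iterates is shown below; note that the logarithms assume positive ratios, so a practical implementation needs safeguards (e.g. absolute values and bounds) for sensitivities that change sign:

```python
import numpy as np

# Sketch of the exponent estimate of equation 4.4.3 from two successive
# iterates (x_prev, x_curr) and their gradients (g_prev, g_curr), clipped
# to the default range [-1, 1] suggested by Fadel et al. The logarithms
# assume positive ratios; real sensitivities may need safeguarding.
def estimate_exponents(x_prev, x_curr, g_prev, g_curr):
    a = 1.0 + np.log(g_curr / g_prev) / np.log(x_prev / x_curr)
    return np.clip(a, -1.0, 1.0)
```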
The update method proposed by Groenwold and Etman (13) is the following (specific
details can be found in their paper):
\[
x_i(\lambda) =
\begin{cases}
\beta_i(\lambda)^{\,1 / \left( 1 - a_{i0}^{\{k\}} \right)} & \text{if } \check{x}_i^{\,1 - a_{i0}^{\{k\}}} < \beta_i(\lambda) < \hat{x}_i^{\,1 - a_{i0}^{\{k\}}} \\
\check{x}_i & \text{if } \beta_i(\lambda) \leq \check{x}_i^{\,1 - a_{i0}^{\{k\}}} \\
\hat{x}_i & \text{if } \beta_i(\lambda) \geq \hat{x}_i^{\,1 - a_{i0}^{\{k\}}}
\end{cases}
\tag{4.4.4}
\]
They also propose to restrict the range of a_{i0}^{{k}} to a_min ≤ a_{i0}^{{k}} ≤ ε < 0 to ensure convexity of the sub-problem, with a_min selectable arbitrarily. Through some manipulation, and setting a = −1, the update can be shown to correspond exactly to the OC method.
The authors do not provide specific details of the exact improvements, but mention that the use of exponential approximation functions contributes to better efficiency due to the different exponent computed for each design variable. Each design variable thus has a different dampening factor based on the history of the problem: if movement of material is slow, the problem can be relaxed to move more material; if the problem begins to oscillate, it can be tightened with more dampening to
suppress fluctuations. This may be beneficial for a sparse structure in optimization, where the movement of material in the optimization process varies greatly.
MMA, a method similar to SAO, was implemented and tested for the heat conduction optimization problem; however, it presented some issues. The MMA problem is briefly described along with these issues in Appendix C.
4.5 summary of methods
A quick reference summary of this chapter has been developed for the reader and is presented in Table 4.1. The summary presents each method considered, along with the complexity of implementation, expected performance, disadvantages, and advantages of the method. As most of the methods were analyzed through literature research, the assessment is based on what was presented in the literature; an actual implementation of these methods could result in different findings, and it is left to the reader to determine the final performance and benefits of each method for their particular implementation. In Chapter 5 the methods of sub-structuring and skeleton modeling with level sets are addressed further with a more in-depth and practical evaluation.
The information is expressed as an overview of the methods, not as a direct comparison between them. It considers how the methods would likely perform for the type of problem explored in this research: sparse structures in topology optimization.
Table 4.1: Summary of methods to improve the solution process of localizing topologies

Local Mesh Refinement (Section 4.1.1)
• Complexity of implementation: Cartesian mesh: Moderate; Distorted mesh: Complex
• Expected performance: Cartesian mesh: Low-Moderate; Distorted mesh: Low-Moderate
• Advantages: Local structure with minimum design variables
• Disadvantages: Cost and complexity of algorithm, less intuitive, need adaptivity information

Global Mesh Refinement (Section 4.1.2)
• Complexity of implementation: Simple
• Expected performance: Moderate if void elements removed
• Advantages: Conceptually simple, able to produce local design with reduced cost compared to uniform mesh
• Disadvantages: Need additional method for void elements, questionable effects on optimal solution

Mesh Superposition (Section 4.1.3)
• Complexity of implementation: Structured: Moderate; Unstructured: Complex
• Expected performance: Structured: Low-Moderate; Unstructured: Low-Moderate
• Advantages: Local refinement with better efficiency than mesh adaptivity, improves condition of matrices
• Disadvantages: Increases bandwidth of stiffness matrix, method of arranging layers difficult for moving structure

Adaptive Sub-Structuring (Section 4.1.4)
• Complexity of implementation: Moderate
• Expected performance: Moderate-High (depending on method to develop the matrices and volume fraction)
• Advantages: Reduction in the number of variables to be solved every iteration
• Disadvantages: Method becomes less intuitive and likely a costly method to form the matrices

Structural Re-Analysis (Section 4.1.5)
• Complexity of implementation: Moderate
• Expected performance: Moderate-High (depending on accuracy wanted)
• Advantages: Can control level of efficiency and accuracy, great savings with large problems
• Disadvantages: Solution of response is approximate, better approximations are costly

Stiffness Matrix (Section 4.2)
• Complexity of implementation: Simple
• Expected performance: Moderate
• Advantages: Only update stiffness matrix with changed design values
• Disadvantages: Need to determine elements that are changing

Level Set Method with Skeleton Curve (Section 4.3.1)
• Complexity of implementation: Complex
• Expected performance: Moderate (difficult to assess as there are many unknowns left in the idea)
• Advantages: Smooth boundary description, possibility of cheap bar elements to represent structure
• Disadvantages: Increased nonlinearity, relatively slow update, still need fine FEM mesh, expensive overlay of meshes

SIMP - GSS (Section 4.3.2)
• Complexity of implementation: Simple
• Expected performance: N/A
• Advantages: Reduction of gray scale and changing variables
• Disadvantages: Heuristic in two parameters

SAO (Section 4.4.1)
• Complexity of implementation: Simple
• Expected performance: Improved convergence (heat conduction problem: sensitive to the parameters used)
• Advantages: Variable dampening, mathematical reason for update
• Disadvantages: Need gradient information
5 INVESTIGATION
In this chapter a select few topics from Chapter 4 are explored further. These topics mainly focus on the idea of using a level set method to develop a form of skeleton structure, and on the use of sub-structuring methods to divide the optimization domain into changing and static design variables. Also presented here is the implementation of SIMP-GSS, as it was used in the development of the sub-structuring approach; it is thus fitting to describe how the implementation performs. These topics were selected for further investigation because they are new ideas that lack investigation in the present literature.
5.1 level set method through skeletonization
The level set method (LSM), introduced in Section 4.3.1, uses a function called the level set function (LSF) to define the structure developed in the optimization process. The contours of this function define the boundary of the structure, and inside this boundary are the elements used to represent the temperature of the structure. Instead of representing a sparse structure through expensive quadrilateral elements, the idea is to develop the structure with cheap bar elements. As shown in Section 3.2.1 on the characteristics of sparse structures, they closely resemble bars with varying cross section, so a discretization with bar elements would be a suitable representation for these types of structures. The bar elements form the skeleton of the structure, and through the definition of a width obtained from the LSF, the original description of the structure is maintained. Such a skeleton curve is advantageous in this situation, as a normal boundary curve developed from the LSF would struggle to represent the branch-like structure necessary to develop the skeleton structure properly. However, a method needs to be implemented to determine the placement of the skeleton from the LSF; this is the focus of investigation of this section. Through this research an alternative idea was also developed, which looks at developing the skeleton directly instead of extracting it from the LSF. This technique, as well as some current issues with the two approaches, is also discussed.
5.1.1 Development of Skeleton Curve from the Level Set Function
The idea of skeleton structures has already been used in a variety of different fields, specifically within many visualization tasks including computer graphics, medical imaging, and scientific visualization. The reason for the use of skeleton structures in these fields is the need for a compact representation of complex 3D models. Essentially it is a way to reduce the dimensionality of the object down to its simplest form while still maintaining the characteristics and topology of the structure.
A variety of methods have been developed to determine the underlying skeleton of an object. A summary of these techniques, as well as a more formal description of skeletonization, was presented by Cornea et al. (8). Among the methods presented are topological thinning, distance transforms (ridge detection), and Voronoi diagram based methods. The method that most closely relates to the current problem of determining a skeleton from the LSF is ridge detection. An example to illustrate this is shown in Figure 5.1.
Figure 5.1: Extracting skeleton from level set structure with ridge detection method
In the example, a level set function was defined to represent a branch of the heat conducting structure, and the structure is determined from the level contours of the function. Conceptually it is easy to see from this example that the ridge of the function corresponds to the skeleton within the structure itself. Hence, a method to determine the ridge of the level set function is a viable way to extract the skeleton.
A large body of literature exists on the subject, along with many different methods to determine a ridge. Ridges mark important intrinsic features of a surface and form a stable way of calculating a skeleton of a surface, as they are view independent. The most common way to define a ridge mathematically is through principal directions and curvatures. A paper by Musuvathy et al. (24) discusses the details of ridges and presents a method to determine a ridge on a given surface. Here a ridge is defined on a parametric surface S(u, v) ∈ R³ through the principal curvatures κ_i and their respective principal directions t_i. Each point on the surface has two corresponding principal curvatures (κ₁, κ₂) with κ₁ > κ₂, and corresponding principal directions t₁ and t₂. Ridges are the points where the principal curvature reaches a local maximum (or local minimum) in its respective principal direction. The mathematical notation used by Musuvathy et al. and several others to define a ridge is the following:
\[
\left\langle \nabla \kappa_i, t_i \right\rangle = 0, \quad i = 1, 2
\tag{5.1.1}
\]
where κ₁ provides the ridges and κ₂ determines the valleys.
Hence, given the level set function, the skeleton curve can be defined as the set of points where the gradient of κ_i in the direction of t_i equals zero. The principal curvatures and directions of the surface can be obtained through the surface normal, the tangent plane at the point in question, and the first and second fundamental forms, as described by Musuvathy et al. (24). The surface normal is specified as:
\[
n(u, v) = \frac{S_u \times S_v}{\left\| S_u \times S_v \right\|}
\tag{5.1.2}
\]
where the subscripts indicate partial derivatives with respect to the respective parameter.
Figure 5.2: Determining principal curvature with the use of the first and second fundamental form
An illustration of the idea is presented in Figure 5.2, showing the normal, the tangent plane, and the point on the surface in question. The first fundamental form is determined from the inner product on the tangent space of the surface in 3D Euclidean space; the second fundamental form at a given point is obtained from the second partial derivatives of the surface parameterization projected onto the normal at that point. The first, I, and second, II, fundamental forms are defined respectively as:
\[
I = \begin{bmatrix} E & F \\ F & G \end{bmatrix}
= \begin{bmatrix} \langle S_u, S_u \rangle & \langle S_u, S_v \rangle \\ \langle S_u, S_v \rangle & \langle S_v, S_v \rangle \end{bmatrix}
\tag{5.1.3}
\]
\[
II = \begin{bmatrix} L & M \\ M & N \end{bmatrix}
= \begin{bmatrix} \langle S_{uu}, n \rangle & \langle S_{uv}, n \rangle \\ \langle S_{uv}, n \rangle & \langle S_{vv}, n \rangle \end{bmatrix}
\tag{5.1.4}
\]
Defining A, B, and C as:
\[
A = EG - F^2
\tag{5.1.5}
\]
\[
B = 2FM - GL - EN
\tag{5.1.6}
\]
\[
C = LN - M^2
\tag{5.1.7}
\]
At a point on the surface the principal curvatures are defined as:
\[
\kappa_1 = \frac{-B + \sqrt{B^2 - 4AC}}{2A}
\tag{5.1.8}
\]
\[
\kappa_2 = \frac{-B - \sqrt{B^2 - 4AC}}{2A}
\tag{5.1.9}
\]
The corresponding principal directions are:
\[
t_1 = \begin{bmatrix} t_1^1 \\ t_1^2 \end{bmatrix}
= \begin{bmatrix} -(M - \kappa_1 F) \\ L - \kappa_1 E \end{bmatrix}
\;\text{or}\;
\begin{bmatrix} -(N - \kappa_1 G) \\ M - \kappa_1 F \end{bmatrix}
\tag{5.1.10}
\]
\[
t_2 = \begin{bmatrix} t_2^1 \\ t_2^2 \end{bmatrix}
= \begin{bmatrix} -(M - \kappa_2 F) \\ L - \kappa_2 E \end{bmatrix}
\;\text{or}\;
\begin{bmatrix} -(N - \kappa_2 G) \\ M - \kappa_2 F \end{bmatrix}
\tag{5.1.11}
\]
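To summarize Equations 5.1.2–5.1.9, the sketch below computes the principal curvatures at a single surface point from the first and second partial derivatives of the parameterization; inputs are assumed to be length-3 arrays:

```python
import numpy as np

# Sketch of equations 5.1.2-5.1.9: principal curvatures at one surface
# point from first derivatives (Su, Sv) and second derivatives
# (Suu, Suv, Svv) of the parameterization.
def principal_curvatures(Su, Sv, Suu, Suv, Svv):
    n = np.cross(Su, Sv)
    n /= np.linalg.norm(n)                  # surface normal (eq. 5.1.2)
    E, F, G = Su @ Su, Su @ Sv, Sv @ Sv     # first fundamental form
    L, M, N = Suu @ n, Suv @ n, Svv @ n     # second fundamental form
    A = E * G - F ** 2
    B = 2 * F * M - G * L - E * N
    C = L * N - M ** 2
    disc = np.sqrt(B ** 2 - 4 * A * C)
    k1 = (-B + disc) / (2 * A)              # eq. 5.1.8
    k2 = (-B - disc) / (2 * A)              # eq. 5.1.9
    return k1, k2
```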
Already this process is getting quite involved, and a method would also be needed to guarantee that the ridges obtained correspond to the actual skeleton curve as originally defined, since a ridge is not necessarily located at the maximum values of the function. To clarify, take again the example presented in Figure 5.1. In this function there is also a slight bend in the structure; this bend will also be detected as a ridge by the above solution process. Thus the ridges arising from such features would have to be sorted out from the ridges needed for the skeleton structure.
Moreover, this definition of the skeleton curve is an implicit relation, which makes it more complicated to develop sensitivities to update the structure, as there is no direct connection between the skeleton curve and the design variables. Due to these issues, methods were investigated to determine if the skeleton structure can be developed and controlled directly and explicitly from the start. This is the topic of the next section on an alternative approach.
In summary, the skeleton curve is captured as the peaks within the level set function itself, and many algorithms have been developed to obtain such a curve, but often in an implicit formulation. Such an implicit formulation would complicate other processes in the optimization procedure needed to represent the structure, such as obtaining sensitivity information, which is easily achieved if an explicit representation in terms of the design variables can be used.
5.1.2 Alternative Formulation
An alternative idea looks at directly developing a skeleton structure in an explicit manner instead of calculating it from a surface function. The approach is thus no longer directly a level set approach, but was inspired by a level set optimization paper by Kreissl et al. (19). In this paper the authors describe the development of an explicit level set approach through the use of radial basis functions (RBFs). The independent optimization variables used in the method are the heights of the radial basis functions, s_i, and the evolution of the level set function is explicitly controlled by the optimization algorithm. Hence the idea of the alternative formulation is to explicitly develop a skeleton curve through the design variables used within the radial basis functions. The width is then obtained from the overall radial basis function, Φ(x, s_i). Similarly to the method presented by Kreissl et al., the Φ function is discretized as follows:
\[
\varphi(x, s_i) = s_i\, e^{-\frac{(x - x_i)^2}{w_i b_i^2}}, \quad i = 1, \ldots, n
\tag{5.1.12}
\]
\[
\Phi(x, s) = \sum_i \varphi(x, s_i), \quad \forall i : \left\| x - x_i \right\| < R
\tag{5.1.13}
\]
where φ(x, s_i) is the Gaussian normal distribution used as basis function, w_i determines the width and s_i the height of the respective RBF, and R determines the support radius
for the RBFs. The heights of the RBFs (s_i) are thus used as the design variables in the optimization process. The term w_i allows a different width for each individual basis function, but for most of the design variables the width remains constant; more explanation is given later. A description of the design variables is presented below:
\[
s_{\min} \leq s_i \leq s_{\max}
\tag{5.1.14}
\]
\[
s_i = s_{\max} \;\rightarrow\; \text{point on skeleton curve}
\tag{5.1.15}
\]
\[
s_i = s_{\min} \;\rightarrow\; \text{control width of structure}
\tag{5.1.16}
\]
The concept is illustrated in Figure 5.3, which shows the development of the function Φ through RBFs in the x spatial direction, representing a cross-sectional view of a 2D structure. The idea of this method is to use the design variables themselves to specify the skeleton curve, which occurs when a design variable reaches its maximum value. The other surrounding design variables are then used to size the width of the structure by manipulating the function Φ. There are many different ways in which the width can be calculated from the Φ function.
Figure 5.3: RBFs to determine skeleton and its corresponding width
The way presented in Figure 5.4 is to find the location where the function Φ passes through zero. The distance from the point describing the skeleton curve to the point of intersection with the zero axis can easily be determined through a search method: starting at the known point of the skeleton, an algorithm simply searches along the discretized points of Φ until the point of the RBF discretization where Φ turns negative is found. Based on the discretization of the RBFs, the width from the center point is calculated.
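A one-dimensional sketch of this procedure is shown below: Φ from Equation 5.1.13 is sampled on a grid and the half-width is found by walking outward from a skeleton point to the first sign change; all numerical values are illustrative:

```python
import numpy as np

# 1D sketch: evaluate Phi (eq. 5.1.13) on a grid and find the half-width
# by walking right from a skeleton point until Phi turns negative.
def phi_1d(x, centers, s, w=1.0, b=1.0):
    # Sum of Gaussian RBFs with heights s at the given centers (eq. 5.1.12)
    return sum(si * np.exp(-(x - xi) ** 2 / (w * b ** 2))
               for xi, si in zip(centers, s))

def half_width(x_grid, phi_vals, skel_idx):
    for j in range(skel_idx, len(x_grid) - 1):
        if phi_vals[j + 1] < 0.0:          # zero crossing found
            return x_grid[j] - x_grid[skel_idx]
    return x_grid[-1] - x_grid[skel_idx]   # no crossing within the grid

x = np.linspace(0.0, 10.0, 201)
centers = np.array([4.0, 5.0, 6.0])
s = np.array([-0.2, 1.0, -0.2])            # middle RBF at s_max: skeleton point
phi = phi_1d(x, centers, s)
print(half_width(x, phi, skel_idx=100))    # grid point nearest x = 5
```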
An advantage of this approach is that the skeleton does not necessarily have to remain centered in the structure: a different width can be calculated for each of the sides. One issue with this approach is that the search method becomes more complicated when the search direction is not aligned with the direction of the discretization, so bilinear interpolation is needed to determine a width. Another issue arises when the structure branches: especially near the point of branching, the positive Φ values may span two skeletons, making it somewhat ambiguous how to calculate the width of the separate branches.
Figure 5.4: Determining width of skeleton structure from Φ function. (a) Search method: search along the Φ values corresponding to the RBF discretization (spacing d); (b) width determination (w: width).
A different method of determining the width is through the height of the $\Phi$ function. This method is simple and requires no extra calculation. The height of the function would be used to describe the width on both sides of the skeleton and is still controlled through the surrounding design variables.
Example Implementation
A simple example was created with the above approach and is illustrated in Figure 5.5. A simple structure was developed by selecting a few design variables and setting them to the maximum value. Different heights of the surrounding design variables were used to vary the width of the structure. Figure 5.5a shows the description of the $\Phi$ function due to the selected radial basis functions. In Figure 5.5b, the skeleton curve (in blue) was determined from the maximum design variables, and the width of the structure was determined with the search method.
Figure 5.5: Example of placing skeleton and determining width of a 2D structure. (a) Radial basis function showing skeleton and width; (b) 2D structure with skeleton and determined width.
Issues with RBF Approach
Explicitly developing a structure through radial basis functions and determining the width from them presents several issues which need to be sorted out. The following is a list of some of the issues with such an approach:
• Having a series of multiple maximum design variables
• Having no maximum while Φ is still positive
• The connectivity of the discretized points of the skeleton curve should not be ambiguous
• Differentiability
Figure 5.6: Issues with RBF approach
The first two issues, having multiple maxima and having no maximum, are shown in Figure 5.6. If there were multiple maxima next to each other, the point of reducing the number of elements would not be achieved, and the structure could become ill-defined in terms of the skeleton curve being developed. This is likely to happen as the optimizer tries to increase the width of the structure. To prevent this, the idea of changing the width of an RBF when it reaches its maximum height value $s_i$ is used: if $s_i$ equals $s_{max}$, the width of the RBF decreases, consequently causing a decrease in the width of the structure. Hence the optimizer has less incentive to increase the width of the structure by bringing surrounding RBFs to their maximum height. Having no maximum while $\Phi$ is positive could cause the previous point of the skeleton, which would be the tip of the branch, to have an overly defined width. A method would be needed to determine whether an RBF design variable corresponds to the tip of the structure, and to develop the structure in this area in an appropriate manner. Further investigation in terms of a simple optimization implementation would be needed to determine the actual occurrences.
As far as connectivity of the structure is concerned, the design variables corresponding to the skeleton curve are known once they reach the maximum. However, as shown in Figure 5.7, it is not known specifically how these design variables are connected together to form the structure. This can especially be an issue at points where branching occurs. As shown in Figure 5.7, there are multiple possibilities for connecting the 4 nodes together. Some other form of information from the function will be needed to determine the correct path of the structure. To accurately determine the path, a method similar to the one developed in Section 5.1.1 will most likely have to be used.
The differentiability of the problem presents significant issues, as the structure is non-continuous: within this method the structure is either defined or not defined. A different method is needed, as the traditional computation of sensitivities will not work and boundary curve evolution is no longer applicable to the problem. Therefore the structure cannot be updated and optimization cannot occur.
Figure 5.7: Connectivity issue of skeleton structure
5.1.3 Current Issues of Skeleton Approach
An issue with both of these methods is how to accurately and efficiently combine the bar element discretization of the skeleton curve with a coarse background mesh in order to obtain an accurate temperature response. Such a method is not very intuitive to develop. The element overlay method of S-FEM, as discussed in Section 4.1.3, might be a suitable approach, but it has so far received little attention within topology optimization itself. An approach was presented in a paper by Wang and Wang (33), where a relatively coarse Eulerian mesh of bilinear rectangular elements is used as the global mesh and flexible linear triangular elements enhance the elements along the dynamic implicit boundary of the level set function. To simplify the process, the global and local meshes coincide, alleviating the difficulties due to discontinuities. Such a simplification is unlikely to apply to the skeleton approach, as it would be difficult for the bar elements to coincide with the global mesh since the structure is placed arbitrarily throughout the global domain.
The main goal of their approach is to achieve a better representation of the boundary and accurate responses. This goal was met, with numerical results in good agreement with theoretical solutions and with results from the standard finite element method. However, the approach brought extra computational cost due to the greater complexity of the method when compared to a standard mesh method with the same number of elements. More research is needed on this approach and how it could be efficiently implemented for use with a skeleton curve; from there, a better understanding of whether computational savings can be achieved will be gained.
5.2 simp with gray scale suppression
The SIMP method with gray scale suppression, as initially introduced in Section 4.3.2, is a simple method to implement which can provide a significant reduction in the gray scale of the optimized design. This gives a better description of the structure and makes the results easier to interpret. The gray scale suppression is achieved by the gray scale suppression parameter q, which penalizes the intermediate densities in the iterations of the update process. Therefore two heuristic values are applied to the optimization process: the penalization component and the gray scale suppression parameter. The implementation of the method and how it compares to the traditional SIMP method will be explored.
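As a hedged illustration of how such a q parameter typically enters the heuristic update, the sketch below modifies the optimality-criteria update statement in the style of the 88 line code; the exact placement of q in the implementation used here may differ.

% Sketch: optimality-criteria update with gray scale suppression. The
% usual update term is raised to the power q, pushing intermediate
% densities toward 0 or 1 (q = 1 recovers the standard update). Names
% follow the 88 line code; the bisection loop on lmid is unchanged and
% omitted here.
move = 0.2;  q = 2.0;
xnew = max(0, max(x - move, ...
       min(1, min(x + move, (x.*sqrt(-dc./dv/lmid)).^q))));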
5.2.1 Effects of Gray Scale Suppression Parameter
This investigation looks at how the gray scale suppression parameter affects the optimized structure. This was examined by setting the penalty parameter to 1 and varying the gray scale suppression parameter over the values 1.5, 2.0, 2.5, and 3.0. This was performed on a mesh of 200x200 elements with a volume fraction of 0.1, using test problem 1. The results of this investigation are shown in Figure 5.8 and the performance values in Table 5.1.
Figure 5.8: Optimized structure for SIMP-GSS with varying gray scale suppression parameter (at iteration 60). (a) q = 1.5; (b) q = 2.0; (c) q = 2.5; (d) q = 3.0.
Suppression Parameter   Avg. Temp   Max. Temp   Percent Static (%)
q: 1.5                  1.75        3.28        86.52
q: 2.0                  2.72        5.65        98.10
q: 2.5                  5.14        12.58       99.65
q: 3.0                  5.96        12.37       99.82

Table 5.1: Comparison of performance values at 60 iterations
From the figures, the immediate effect of the suppression parameter on the design can be seen. This is especially true between Figure 5.8a with a suppression parameter of 1.5 and Figure 5.8b with a suppression parameter of 2, where all the intermediate-density branches of Figure 5.8a are removed from the design. From Table 5.1, a more accurate description of the temperature in the domain is achieved, as the intermediate material artificially lowered the temperature in the domain while contributing no feasible structure. Having a suppression parameter above 2 seems to overly suppress the intermediate values and prevents the structure from properly developing branches, as can be seen in Figures 5.8c and 5.8d. Overall there seems to be no benefit to a suppression parameter above 2.
Another observation is the effect on the changing design variables in the design area, which can be seen in Table 5.1. By suppressing the intermediate design variables, fewer changing elements occur in the optimization process: the higher the suppression parameter, the greater the percentage of static design variables.
The non-convexity induced by the penalty parameter of SIMP is not an issue in this case, but changing the gray scale suppression parameter still yields different optimal designs; the issue of multiple optimal solutions thus remains. This is due to how the suppression parameter modifies the update method: a higher suppression parameter means a stronger promotion of removing intermediate material, thus locking the design into removing as much intermediate material as possible early in the design process.
5.2.2 Comparison of Traditional SIMP to SIMP-GSS
A comparison study of the final structures achieved with the traditional SIMP method and with the SIMP method with gray scale suppression was performed, both with a continuation method and with static penalty parameters. Due to the issue of locally optimal solutions, the final optimal structures obtained are not easy to compare, but other interesting observations can be made. The comparison was performed under the conditions presented in Table 5.2:
Convergence            VF    Filter Radius   Mesh
Temp. Change < 0.1%    0.1   2.4             200x200

Table 5.2: Comparison settings for comparing SIMP to SIMP-GSS
The first comparison, shown in Figure 5.9, is between the two methods using a continuation approach with the following increment procedure:
• Penalty Stepping SIMP: penalty = 1 for the first 20 iterations, then increased by 0.2 every 10 iterations afterwards
• Penalty Stepping SIMP-GSS: q = 1 for the first 20 iterations, then increased by 0.2 every 10 iterations afterwards, with the penalty set to 1 for the entire process.
Description   End Parameter Value   Avg. Temp   Max. Temp   Percent Static (%)   Iterations
SIMP-GSS      q: 2.0                2.52        5.18        96.30                64
SIMP          p: 3.0                3.03        5.75        78.16                114

Table 5.3: Comparison of performance values for full optimization process between SIMP and SIMP-GSS with continuation method
Figure 5.9: Comparison of optimized structures between SIMP and SIMP-GSS with continuation method. (a) Gray scale suppression parameter: continuation; (b) SIMP penalty parameter: continuation.

Through the use of continuation methods, the issue of locally optimal configurations is lessened but still contributes to the final designs. Despite this, a striking resemblance between the final designs can be seen, especially in the main structural elements, as shown in Figure 5.9. The performance results are presented in Table 5.3 and show that the gray scale suppression method minimizes the temperature the most, with fewer iterations. The reason for the difference is the better suppression of intermediate design variables with the SIMP-GSS approach, as would be expected. Essentially, most of the gray material from the traditional SIMP method has been eliminated, providing a structure that is more physical and easier to interpret in an engineering sense. The main observations are summarized as follows:
• SIMP-GSS offers better suppression of intermediate design variables and increases the percentage of static design variables
• Fewer iterations were needed for convergence in the considered test case, due to better suppression of local changes in the design with SIMP-GSS. The amount of suppression can be controlled by the q parameter, which affects how the design variables are updated within the domain.
• A lower objective value for the average temperature in the domain is achieved with SIMP-GSS
The second comparison, shown in Figure 5.10, is an optimization problem using static penalty values in both methods. The following parameters were used:
• Traditional SIMP: penalty set to 3
• SIMP-GSS: penalty set to 1 and q parameter set to 2
Description   Parameter   Avg. Temp   Max. Temp   Percent Static (%)   Iterations
SIMP-GSS      q: 2.0      2.47        4.85        98.80                110
SIMP          p: 3.0      3.09        6.84        78.62                134

Table 5.4: Comparison of performance values for full optimization process between SIMP and SIMP-GSS
Figure 5.10: Comparison of optimized structures between SIMP and SIMP-GSS. (a) Gray scale suppression parameter: 3; (b) SIMP penalty parameter: 3.
Due to local optima in the optimization process, little comparison is possible between the two structures. However, a similar observation is made: the gray scale suppression method better suppresses intermediate design variables in the design domain. Again, better objective values, fewer iterations, and a larger percentage of static design variables were achieved with SIMP-GSS.
An interesting observation is the comparison of the results without penalty continuation in Table 5.4 with the results with penalty continuation in Table 5.3. With the traditional SIMP method, the penalty continuation approach reaches a better objective value and is often credited with achieving a better optimal design, as it reduces the tendency to get stuck in a local optimum. With SIMP-GSS, however, continuation of the suppression parameter did not reach a better optimal solution. The reasons for this are currently unknown, but the more aggressive gray scale suppression might also contribute to convergence to a local minimum to some degree, as also seen in Figure 5.8.
5.2.3 Effects on Design Variables
A similar analysis of the amount of changing design variables in the design domain was also performed for the SIMP method with gray scale suppression. The procedure used is the same as presented in Section 3.3.2. The results of this analysis are presented in Figure 5.11.
Overall, significant improvements over the traditional SIMP method were achieved, with better suppression of changing design variables. This can be seen by comparing the SIMP-GSS change-in-design-variable plots, Figures 5.11a and 5.11b, to the traditional SIMP plots in Figures 5.11c and 5.11d. Results approaching 100% suppression were achieved for almost all the mesh sizes tested and for both design problems. After 20 iterations with a volume fraction of 0.1, approximately 90% and 80% of the design area has no changing design elements for designs 1 and 2 respectively. This is about a 10% improvement over the finest mesh quality with the traditional SIMP method, as shown in Figure 5.11e. A similar trend was seen with the other volume fractions tested. These results show promise for combining the gray scale suppression technique with an adaptive substructuring formulation of the design problem, which is discussed in the next section.
Figure 5.11: Percentage of design domain with changing design elements. (a) SIMP-GSS Design 1; (b) SIMP-GSS Design 2; (c) SIMP Design 1; (d) SIMP Design 2; (e) comparison of SIMP to SIMP-GSS.
5.3 adaptive sub-structuring of optimization domain
The goal of this investigation is not to show the efficiency of substructuring algorithms in general or to develop a new algorithm for this situation. There is much literature and research on ways to efficiently implement a substructuring method in the analysis of a design (26)(16), and a few papers describe implementations within topology optimization (31),(21),(20). Efforts to efficiently implement a sub-structuring routine are thus out of the scope of this work. Instead, this investigation focuses on how to adapt such substructuring methods in a way that differs slightly from the traditional methods, and describes the benefits this can bring to the localizing design problem.
The concept of adaptive sub-structuring of the optimization process was established in Section 4.1.4; further analysis of how to implement such a technique, and the implications of the process, are discussed here. In developing a new optimization formulation, it is important that the optimal structure have features similar to those produced by the currently available methods, which ensures consistency of the approach across optimization problems. This is examined first, to justify continuing the investigation of this technique. Since the objective of this approach is to reduce cost, an analysis of the solution method follows. Finally, the separate domains corresponding to the static and changing design variables need to be selected; an idea of how to select the two domains for assembly of the stiffness matrix is explained.
5.3.1 Comparison of Optimal Solutions
For the comparison, the following descriptions of the two optimization forms will be used:
• Full Implementation (FI): calculating all DOFs within the FE problem (traditional SIMP optimization)
• Adaptive Substructuring Method (ASSM): calculating the changing DOFs within the FE problem every iteration and the static degrees of freedom every few iterations
In order to get an accurate comparison of ASSM to FI without fully implementing the sub-structuring problem, a simple test case was developed. Figure 5.12 illustrates how the design domain was set up for this test case. A portion of the optimization domain is kept fixed in both location and size throughout the optimization process, with the location centered on the right boundary for design 1 and in the center of the optimization domain for design 2. The number of design variables within each domain can be specified before the optimization process starts. It is assumed the equations behave the same for this simple case of fixed domains as for the case where the domain boundaries change throughout the optimization process.
Figure 5.12: Static structure for testing optimization domain. (a) Design 1; (b) Design 2.
The ASSM routine was directly implemented from Equations 4.1.4 - 4.1.9 in Section 4.1.4 within the MATLAB environment. Such an approach is not optimal, as will be explained in Section 5.3.2, but allows for a straightforward implementation. During the first iteration, the stiffness matrix for the static domain is computed, along with $K_{SS}^{-1}K_{SC}$ and $K_{SS}^{-1}f_S$. For the remaining iterations these terms are held constant to simulate the ASSM process. For the current method, FI, the MATLAB backslash operator using a Cholesky decomposition is applied every iteration to the entire domain.
The two implementations are then compared using the same number of design variables in each domain, with the following criteria for comparison:
• Development of similar structures (Figure 5.13)
• Average temperature in the domain
• Maximum temperature in the domain
• Percentage of design variables remaining static
• Iterations to convergence
Figure 5.13 and Table 5.6 present the results of the optimized structures for the two implementations. For these tests, SIMP-GSS was used, as it offers better suppression of changing design variables; for a description of this method the reader is referred to Section 4.3.2 and Section 5.2. The specifics of the test are presented in Table 5.5.
Convergence            Parameters    VF    Filter Radius   Mesh
Temp. Change < 0.1%    q: 2, p: 1    0.1   2.4             200x200

Table 5.5: Comparison of ASSM to FI settings
Comparison of the optimum structures presented in Figure 5.13 shows no visual differences between the structures developed by the two methods. The performance data shown in Table 5.6 is also consistent between the two methods, with no significant deviation in the values. Since this is a simple test simulating the ASSM process, differences could still arise during a complete implementation of ASSM, when the domains corresponding to the changing and static variables need to be updated.
In this direct substructuring approach, the main source of error is the matrix of static DOFs that is condensed from the global system, as its inverse is used on three separate occasions in the solution process (see Equations 4.1.4 - 4.1.9). If this matrix becomes ill-conditioned, significant numerical errors occur. During this test, the static stiffness matrix was significantly better conditioned than the global stiffness matrix, with condition numbers of approximately 1.5E3 and 5.0E17 respectively. This is because the static domain typically has elements with identical material properties, making that system better conditioned than the global system, whose properties change throughout the domain. However, the conditioning of the static matrix worsens as the number of static variables increases, and may become an issue later in the optimization process if the domains are updated from an ill-conditioned static matrix. Different implementations of the equations, whether through an implicit development of the equations via a modified factorization or through iterative methods, can result in errors depending on the structure of the matrix, the factorization used, or the degree of accuracy obtained from the iterative method.

Design     Description (Mesh Size)(Static Elements)   Avg. Temp   Max. Temp   Percent Static   Iterations
Design 1   FI: 100x100, 50x50                         3.72        7.98        97.38            38
Design 1   ASSM: 100x100, 50x50                       3.72        7.98        97.38            38
Design 1   FI: 200x200, 100x100                       3.31        7.98        98.94            77
Design 1   ASSM: 200x200, 100x100                     3.32        7.98        98.94            76
Design 2   FI: 100x100, 50x50                         0.79        3.87        88.88            19
Design 2   ASSM: 100x100, 50x50                       0.79        3.87        88.88            19
Design 2   FI: 200x200, 100x100                       0.70        3.88        96.86            50
Design 2   ASSM: 200x200, 100x100                     0.69        3.88        96.91            50

Table 5.6: Performance data of optimized structures computed from FI and ASSM

Figure 5.13: Comparison of optimized structures computed from FI and ASSM (Top: Design 1, Bottom: Design 2). (a) FI: 200x200 mesh, 100x100 static domain; (b) ASSM: 200x200 mesh, 100x100 static domain; (c) FI: 200x200 mesh, 100x100 static domain; (d) ASSM: 200x200 mesh, 100x100 static domain.
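As a small diagnostic aside, the conditioning of the blocks can be monitored cheaply in MATLAB with condest for sparse matrices; the refactoring threshold below is an illustrative assumption, not part of the implementation described above.

% Diagnostic sketch: estimate the 1-norm condition numbers of the static
% block and the full stiffness matrix (sparse matrices assumed).
cK   = condest(K);    % global matrix: ~5.0E17 in the test above
cKss = condest(Kss);  % static block:  ~1.5E3 in the test above
if cKss > 1e12        % illustrative threshold
    warning('Static block is becoming ill-conditioned; update the domains.');
end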
5.3.2 Analysis of Solution Process
This analysis of the solution process looks to identify the benefits of the approach by comparing ASSM to FI. Since the ASSM method cannot be implemented efficiently enough in MATLAB to give a fair comparison against the direct Cholesky solver, an approximate comparison was developed through flop counts. As some assumptions were made on the process due to sparsity, the implications of how sparsity affects the problem will be explained. Finally, since a direct implementation of the equations is not the most efficient way to solve for the temperatures, different methods of implementation will conclude the analysis of the substructuring method.
Full Implementation:

$$K t = f$$

1. Solution Process ($Ax = b$):
   a) Cholesky factorization: $A = L L^T$
   b) Forward substitution: solve $L y = b$
   c) Backward substitution: solve $L^T x = y$
2. Dominant terms:
   Step a): $\frac{1}{3}n^3$ flops
   Step b): $n^2$ flops
   Step c): $n^2$ flops
   Total: $\frac{1}{3}n^3 + 2n^2$

Sub-Structuring:

$$\begin{bmatrix} K_{SS} & K_{SC} \\ K_{CS} & K_{CC} \end{bmatrix} \begin{Bmatrix} t_S \\ t_C \end{Bmatrix} = \begin{Bmatrix} f_S \\ f_C \end{Bmatrix}$$

1. Solution Process:
   a) Form $K_{SS}^{-1} K_{SC}$ and $K_{SS}^{-1} f_S$
   b) Form $S = K_{CC} - K_{CS} K_{SS}^{-1} K_{SC}$ and $\tilde{f} = f_C - K_{CS} K_{SS}^{-1} f_S$
   c) Solve $S t_C = \tilde{f}$
   d) Solve $K_{SS} t_S = f_S - K_{SC} t_C$
2. Dominant terms:
   Step a): $f + n_C s$ flops ($f$: factoring $K_{SS}$, $s$: one solve)
   Step b): $2 n_C^2 n_S$ flops (cost of $K_{CS} \times K_{SS}^{-1} K_{SC}$)
   Step c): $\frac{2}{3} n_C^3$ flops
   Total: $f + n_C s + 2 n_C^2 n_S + \frac{2}{3} n_C^3$

Table 5.7: Comparison of solution process between FI and ASSM
Efficiency Improvement of Sub-Structuring
A relative idea of how the approaches compare can be developed by looking at the dominant mathematical operations needed. Such an analysis is often done using flop counts, giving an estimate of the computational effort based on the number of variables used in the calculation. Depending on how the algorithms are implemented and on the architecture of the computer system, actual results may differ from what the flop count analysis predicts. To create an accurate comparison, some assumptions on the behavior of the ASSM process were made. The assumptions used to develop the efficiency improvements for the substructuring approach are the following:
• The solution of the substructured equations is less efficient
• The sparsity patterns and density of the matrices are less favorable, particularly for $K_{CC}$
• A form of Cholesky decomposition is used in the solution process of ASSM
• Efficiency can be compared effectively through flop counts
An overview of the analysis is presented in Table 5.7, stating the main equations, the solution process, and the dominant terms of the solution process for each implementation.
As can be seen from this overview, some information is needed on how the factorization of $K_{SS}$ behaves, as well as how $K_{SS}^{-1}K_{SC}$ and $K_{SS}^{-1}f_S$ are computed. However, this depends strongly on the specifics of the implementation. Since the full implementation uses a Cholesky decomposition, it is assumed that the ASSM approach also uses a Cholesky decomposition with forward and backward substitution. The structure of the matrices and equations is often rather efficient for the full implementation, so an assumption has to be made on the computational performance for the substructured matrices: these matrices will often not develop the nicely banded structure obtained when constructing the global matrix. Without performing a full analysis of permutations, sparsity patterns, and fill-in of the substructuring equations to obtain the exact cost, it was assumed that the Cholesky decomposition for ASSM would be half as efficient. More information on the sparsity of the problem is presented in Appendix B, Section 3.
Therefore the cost for factoring and solving is given by $\frac{2}{3}n_s^3$ and $2n_s^2$ respectively. Incorporating these terms into the total cost equation, and taking into account that the terms in the static domain are computed only once every few iterations, the following cost equation is produced:

$$\mathrm{Cost}(iter, n_s, n_c) = \frac{2}{3}\frac{n_s^3}{iter} + 2\frac{n_s^2}{iter}\,n_c + 2 n_c^2\,\frac{n_s}{iter} + \frac{2}{3}n_c^3 \qquad (5.3.1)$$

Expanding the total cost for FI and substituting $n = n_c + n_s$, the cost equations can easily be compared:

$$\mathrm{Cost}(n_s, n_c) = \frac{1}{3}n_s^3 + n_s^2 n_c + n_c^2 n_s + \frac{1}{3}n_c^3 \qquad (5.3.2)$$

The two equations are plotted together in Figure 5.14 for a varying number of static variables. For the cost equation belonging to ASSM, several values for the number of iterations the domain is kept fixed are also plotted. The x-axis shows the percentage of the design variables that are static; the y-axis shows the percent time savings in Figure 5.14a and the speedup of the finite element solution process in Figure 5.14b. In Figure 5.14a, a value of +100 means the process takes 100% less time, thus practically a free solution process, and -100 means the process takes 100% more time, thus double the solution time. In Figure 5.14b, a value of 25 means the finite element solve for the temperatures is 25 times faster.
Figure 5.14: Efficiency of static condensation method for number of static iterations. (a) Percent time savings in the solution process of the finite elements; (b) speedup of the solution process of the finite elements.
From these figures the following can be interpreted:
• Holding the domain fixed for only one iteration makes the process twice as inefficient; at least 3 fixed iterations are needed to see improvement
• Typically, once more than 30% of the domain is static, significant improvements result
• At higher percentages of the domain being static, the improvements plateau. Beyond approximately 80% static, the focus should therefore be on holding the domains fixed for more iterations; the strategies in Section 5.4 look to better exploit this region.
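The cost model behind these curves is easy to reproduce; the sketch below evaluates Equations 5.3.1 and 5.3.2 and plots the predicted speedup of ASSM over FI, mirroring Figure 5.14b. The mesh size and holding periods used are illustrative assumptions.

% Sketch: evaluate the flop-count models (Eqs. 5.3.1 and 5.3.2) and plot
% the predicted speedup of ASSM over FI.
n     = 200*200;                       % total number of variables (assumed)
fs    = linspace(0.05, 0.95, 50);      % fraction of static variables
iters = [1 3 5 10 20];                 % iterations the domains stay fixed
figure; hold on;
for it = iters
    ns = fs*n;  nc = n - ns;
    costASSM = (2/3)*ns.^3/it + 2*ns.^2.*nc/it ...
             + 2*nc.^2.*ns/it + (2/3)*nc.^3;           % Eq. 5.3.1
    costFI   = (1/3)*ns.^3 + ns.^2.*nc ...
             + nc.^2.*ns + (1/3)*nc.^3;                % Eq. 5.3.2
    plot(100*fs, costFI./costASSM);                    % predicted speedup
end
xlabel('Static design variables (%)'); ylabel('Speedup of FE solve');
legend(compose('fixed for %d iterations', iters(:)));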
This information will be taken into account when developing the separate domains for the optimization process. The efficiency data in Figure 5.14 can also be used to compute an approximate overall efficiency improvement, in terms of reduction of computation time, for a whole optimization process; this is presented further in Section 5.4.
Sub-Structure Implementation
The implementation of ASSM can greatly impact the performance of the method as well as the storage space needed. One of the biggest problems of directly implementing Equations 4.1.4 - 4.1.9 in Section 4.1.4 is the intermediate products formed in the process, especially $K_{SS}^{-1}K_{SC}$: explicitly computed, this results in a full matrix, requiring extra storage and computation time. A better method is to obtain $K_{SS}^{-1}K_{SC}$ and $K_{SS}^{-1}f_S$ through decomposition, forward reduction, and backward substitution.
The sub-structured stiffness matrix in Equation 4.1.4 can be decomposed as:

$$\begin{bmatrix} K_{SS} & K_{SC} \\ K_{CS} & K_{CC} \end{bmatrix} = \begin{bmatrix} L_{SS} & 0 \\ L_{CS} & L_{CC} \end{bmatrix} \begin{bmatrix} L_{SS}^T & L_{CS}^T \\ 0 & L_{CC}^T \end{bmatrix} \qquad (5.3.3)$$

The main components of the stiffness matrix needed in the substructuring process then become, through the decomposition:

$$K_{SS} = L_{SS} L_{SS}^T \qquad (5.3.4)$$
$$K_{SC} = L_{SS} L_{CS}^T \qquad (5.3.5)$$
$$K^* = K_{CC} - L_{CS} L_{CS}^T \qquad (5.3.6)$$

Through forward reduction, the modified force vector for the changing domain becomes:

$$p^* = f_C - L_{CS} L_{SS}^{-1} f_S \qquad (5.3.7)$$

and through backward substitution, the temperatures in the changing domain, followed by the recovery of the static domain, are obtained from:

$$t_C = {K^*}^{-1} p^* \qquad (5.3.8)$$
$$t_S = L_{SS}^{-T} \left( L_{SS}^{-1} f_S - L_{CS}^T t_C \right) \qquad (5.3.9)$$
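A minimal MATLAB sketch of this solution process is given below. The partitioned blocks Kss, Ksc, Kcc and force vectors fs, fc are assumed to be extracted from the assembled system; in the ASSM setting, the first three statements would be recomputed only when the domains are updated.

% Minimal sketch of the substructured solve via a block Cholesky
% factorization (Eqs. 5.3.3 - 5.3.9). Kss is assumed symmetric positive
% definite.
Lss   = chol(Kss, 'lower');          % Kss = Lss*Lss'            (5.3.4)
Lcs   = (Lss \ Ksc)';                % Ksc = Lss*Lcs'            (5.3.5)
Kstar = Kcc - Lcs*Lcs';              % Schur complement          (5.3.6)

ys    = Lss \ fs;                    % forward reduction
pstar = fc - Lcs*ys;                 % modified force vector     (5.3.7)
tc    = Kstar \ pstar;               % changing-domain temps     (5.3.8)
ts    = Lss' \ (ys - Lcs'*tc);       % recovery of static DOFs   (5.3.9)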
5.4 methods for structuring the optimization domain
A very important aspect of this method is how to construct the domains in an efficient manner. Obviously the designs are changing, meaning the areas where design changes occur also change throughout the optimization process. Figure 5.15 illustrates the changing design area as well as the ideas that will be presented shortly. As was shown in Section 5.3.2 and Figure 5.14, it is important to keep the domains fixed for several iterations in order for ASSM to be beneficial. To achieve a greater number of iterations while the domains are fixed, an extra area around the changing domain can be developed: a form of buffer zone. This comes at a trade-off:
• Smaller buffer zone → greater number of updates needed
• Larger buffer zone → smaller percentage of static design variables
Both approaches ultimately reduce the overall benefit of the procedure, but selecting the proper one minimizes this reduction.

Figure 5.15: Establishing static and changing domains. (a) Domain change; (b) domain change with buffer zone.
Figure 5.15 is a simplification of the processes involved in SIMP, but conveys an understanding of the behavior. Figure 5.15a shows the development of the changing domain over iterations, and Figure 5.15b shows the concept of the buffer zone. The buffer zone is meant to encapsulate the changing domain for several iterations; great care in the construction of the buffer zone is therefore needed.
The changing domain can behave in many different ways during the optimization process. Changes can result from the addition of material to the structure as well as from the removal of structure. The areas where design changes occur are not necessarily aligned with the definition of the structure; for instance, there can be areas of no design change within the structure itself.
For the sake of completeness, a description of how the initial changing and static domains are selected will be given, followed by different implementations to formulate a buffer zone. Selecting the domains is a relatively simple process: the design variables of the current iteration are examined to see if they changed from the previous one. To avoid selecting changing design variables caused by roundoff or numerical error, a change threshold $\epsilon$ is specified. If the change is greater than $\epsilon$, the design element is marked with the discrete value 1 and is part of the changing domain. If the change is less than $\epsilon$, the design element is marked with the discrete value 0 and is part of the static domain. Mathematically, the element design change values $x_c$ are given by:

$$x_c = \begin{cases} 1 & \text{if } |x_i^k - x_i^{k-1}| \ge \epsilon \\ 0 & \text{if } |x_i^k - x_i^{k-1}| < \epsilon \end{cases} \qquad (5.4.1)$$
A buffer zone can be developed in many ways, but to keep the process simple, intuitive, and computationally inexpensive, existing information in the optimization process is exploited. Three ideas for the development of the buffer zone are the following: a radial buffer around the changing design variables, a buffer in areas of high sensitivity, and a combination of the two.
5.4.1 Radial Buffer
A radial buffer zone, as shown in Figure 5.15b, can be developed by using the existing filtering methods established in the optimization routine. Instead of filtering sensitivities or densities, as in the optimization process, the design change values in $x_c$ are filtered. Since $x_c$ is a matrix of zeros and ones, with the ones corresponding to changing design elements, a radial filter creates an artificial expansion of the changing domain into the static domain, representing the buffer zone. Once the matrix is filtered, its values are no longer the discrete values 0 or 1; the intermediate values are therefore set back to 1 to maintain the discrete structure. The filtering method is presented in Equation 5.4.2, using the filter presented by Sigmund (3):
$$\hat{x}_c = \frac{1}{\sum_{i \in N_e} H_{ei}} \sum_{i \in N_e} H_{ei}\, x_c \qquad (5.4.2)$$

where $N_e$ is the set of elements $i$ whose center-to-center distance $\Delta(e, i)$ to element $e$ is smaller than the filter radius $r_{min}$, and $H_{ei}$ is the weight factor defined as:

$$H_{ei} = \max\left(0,\, r_{min} - \Delta(e, i)\right) \qquad (5.4.3)$$
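A compact way to realize the domain selection of Equation 5.4.1 together with the radial expansion of Equations 5.4.2 and 5.4.3 is sketched below; x and xold are assumed to be the element-density matrices of the current and previous iteration, and conv2 with the radial stencil H stands in for the optimization filter. The threshold value is an illustrative assumption.

% Sketch: mark changing elements (Eq. 5.4.1) and expand them radially
% into a buffered changing domain (Eqs. 5.4.2 - 5.4.3).
eps_c = 1e-3;                              % change threshold (assumed value)
rmin  = 2.4;                               % filter radius from the optimization
xc = double(abs(x - xold) >= eps_c);       % 1 = changing, 0 = static

% radial weight stencil H_ei = max(0, rmin - dist(e, i))
r = ceil(rmin) - 1;
[dxg, dyg] = meshgrid(-r:r, -r:r);
H = max(0, rmin - sqrt(dxg.^2 + dyg.^2));

xb = conv2(xc, H, 'same');                 % filtered change field (Eq. 5.4.2)
xb = double(xb > 0);                       % intermediate values back to 1
% (the normalization in Eq. 5.4.2 is irrelevant once the result is
%  thresholded back to a discrete 0/1 mask)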
(5.4.3)
There are a few disadvantages to this method: each changing design variable receives the same filter radius, and the radius is tied to the size used within the optimization routine. Since different parts of the structure grow more rapidly than others, specifically at the tip of a branch compared to its sides, a bigger radius is needed in some places and not in others, which leads to excess buffer zone. This motivates the use of sensitivity information, presented in the next section, as the sensitivities indicate where the structure is about to change. As for the link to the optimization filter radius, a smaller radius cannot be achieved without rebuilding the filter, which is problematic if a large filter radius is needed for the optimization process but not for the buffer zone. Making the filter radius bigger, on the other hand, is rather simple: the filtering step described above can be performed multiple times in a loop.
5.4.2 Sensitivity Buffer
The use of sensitivity information is a more intuitive approach, as the sensitivities indicate where changes in the structure are likely to occur. If the sensitivity information shows that material will be needed in certain regions in future iterations, it makes sense to develop a buffer zone in those areas. Figure 5.16 provides a simple illustration of this idea, with Figure 5.16a showing regions of high sensitivity and Figure 5.16b showing the establishment of a buffer zone in this region. The idea behind sensitivities is to look at how the optimization objective is influenced by a change in the design variables. If a design variable is highly sensitive, adding material in that element will have a significant impact on reducing the temperature in the domain; if the sensitivity is low, material in that location does not provide a large impact and could possibly be removed.
Figure 5.16: Establishing static and changing domains with use of sensitivity information. (a) Domain change with regions of high sensitivity; (b) domain change with buffer zone based on sensitivity.
By selecting a sensitivity threshold level τ, the size of the buffer zone can be controlled, with τ taking values in the range

$$0 \le \tau \le 1 \qquad (5.4.4)$$

Setting τ to 0 means all sensitivity values are taken into account; setting τ to 1 means no sensitivity values are used to develop the buffer zone.
This method again uses the discrete values of 0 and 1 introduced at the beginning of this section. If the sensitivity value of a design element is above τ, the element design change value $x_c$ is set to 1, and to 0 otherwise. This is mathematically defined as:

$$x_c = \begin{cases} 1 & \text{if } \frac{\partial c}{\partial x_i} \ge \tau \\ 0 & \text{if } \frac{\partial c}{\partial x_i} < \tau \end{cases} \qquad (5.4.5)$$
This method thus offers a larger buffer zone in the regions where the greatest amount of design changes will occur in future iterations. It eliminates unnecessary buffer zone elements elsewhere, but has the disadvantage that there is often high sensitivity in areas where no design changes are occurring. An issue that arose during implementation is that there is often not a large enough buffer zone around changing design elements in regions of lower sensitivity, so the design had to be updated quite often in these areas. This led to the idea of combining the sensitivity buffer zone with the radial buffer zone and a small filter radius.
5.4.3 Issues with Sensitivity and Radial Buffer Method
The previous two methods have some issues which were identified during testing of the implementations. To illustrate them, the characteristics of each buffer domain are examined at one iteration in the optimization process. Figure 5.17 illustrates the differences between the methods. Figure 5.17c shows the domain of design changes, with the black areas representing design elements with changing density. Figures 5.17a and 5.17b show the resulting buffered changing domains (black elements) for the respective buffering methods.
Figure 5.17: Comparison of the buffer implementations at some instance in the optimization process (iteration 145). (a) Radial buffer (percent static: 77.11%); (b) sensitivity buffer (percent static: 90.8%); (c) areas of design change (percent static: 99%).
As can be seen from the design changes in Figure 5.17c, the areas where the design is changing are quite sparse in the overall design domain. It is therefore hoped that, even with the addition of the buffer zone, only a small fraction of the design variables lies in the changing domain. This is achieved to different degrees by the different methods and parameters used in the optimization, and each method develops a different buffer zone around the changing design variables. The tips of the branches, shown in Figure 5.17c, are the areas where the design grows fastest, while along the sides of the branches little growth or sometimes shrinking of the structure occurs. The major issues of the radial and sensitivity methods are summarized as follows:
• Radial buffer zone, Figure 5.17a: A large buffer radius is needed to provide a big enough buffer zone in areas of high growth, but this produces excess buffer in areas where the design is shrinking or growing slowly.
• Sensitivity buffer zone, Figure 5.17b: The buffer zone in areas of high growth is adequate, but overly minimal in areas where shrinking or slow growth occurs.
5.4.4 Combined Buffer
It is thought that a hybrid of the two methods could provide a better buffer zone. The idea is to first add buffer zone using the sensitivities, allowing for an expanded region where high growth is occurring, and to follow this with radial filtering to expand the buffer region, as illustrated in Figure 5.18a; a sketch is given after Figure 5.18. Instead of looping to create a large radial filter, which is needed if the radial filter is used alone, ideally a single pass of the filter provides enough additional material to the changing domain. The combination of the two buffering methods, and how all the buffering methods compare, is examined in the next section. The direct implementation of the combined buffer is shown in Figure 5.18b, where the enlarged region at the tips of the branches can be seen as well as the enhancement around the branches.
Figure 5.18: Using sensitivity information and blur radius to form buffer zone. (a) Sensitivity and blur radius buffer zone; (b) combined buffer (percent static: 76.51%).
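A sketch of the combined buffer follows, assuming the sensitivities dc have been normalized so that the threshold τ of Equation 5.4.5 applies, and reusing the change threshold eps_c and radial stencil H from the radial-buffer sketch above.

% Sketch of the combined buffer: sensitivity-based seeding (Eq. 5.4.5)
% followed by a single radial filter pass to pad the changing domain.
tau  = 0.5;
sens = abs(dc) ./ max(abs(dc(:)));        % normalized sensitivity field
xc   = double(abs(x - xold) >= eps_c);    % changing elements (Eq. 5.4.1)
xc   = max(xc, double(sens >= tau));      % add high-sensitivity regions
xb   = double(conv2(xc, H, 'same') > 0);  % one radial pass -> buffer zone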
5.4.5 Comparison of Buffer Zone Methods
As described in Sections 5.4.1 through 5.4.4, each buffer method has its advantages and disadvantages. To gain a better understanding of how they can improve efficiency through a reduction of the analysis time, each method is compared under a variety of settings. The standard setup of the problem used the following parameters with a SIMP-GSS implementation:

Convergence            Parameters    VF    Filter Radius   Mesh
Temp. Change < 0.1%    q: 2, p: 1    0.1   2.4             200x200
The comparison considers each of the buffering methods and estimates the computational savings based on the flop count estimates discussed in Section 5.3.2. The results are presented in Table 5.8. For the radial buffer, the number of passes with the set filter radius was varied; for the sensitivity buffer, the threshold level τ was varied; and for the combined buffer, a combination of the two parameters was varied. The observations focus on the number of updates after the domains were fixed for the first time, the total number of fixed iterations, the total iterations, and an estimated overall improvement in terms of time reduction. The estimated improvement was established using Equations 5.3.1 and 5.3.2 from Section 5.3.2 and only represents the improvement in solving the finite element problem.
Method              Description        Num. of    Iter.   Total   Est. Overall Time Reduction (%)
                                       Updates    Fixed   Iter.   By Elements   By Nodes
Radial Buffer       Pass: 1            25         112     145      7.81          3.67
                    Pass: 2            14         122     145     32.19         27.96
                    Pass: 3            10         125     145     38.87         36.11
Sensitivity Buffer  τ = 0.3            56          75     145     -36.57        -36.34
                    τ = 0.5            57          77     145     -36.73        -38.65
                    τ = 0.7            59          78     145     -38.06        -40.20
Combined            τ = 0.3, Pass: 1    3         121     145     43.30         38.77
                    τ = 0.3, Pass: 2    2         102     145     27.78         21.62
                    τ = 0.5, Pass: 1    8         125     145     47.78         45.58
                    τ = 0.5, Pass: 2    6         127     145     47.84         44.61
                    τ = 0.7, Pass: 1   14         122     145     32.68         30.84
                    τ = 0.7, Pass: 2   10         125     145     40.54         37.71

Table 5.8: Comparison of buffering methods
Consider first the radial buffer method in Table 5.8. From the standpoint of overall improvement, the results look fairly promising, especially as the number of filter passes increases and a significant number of iterations are held static. Looking at the number of updates needed to reach this efficiency improvement, however, makes such gains questionable. The exact cost of forming the matrices is unknown, but it is assumed to be rather high, as the domains are rather unstructured. To estimate the effect of rebuilding the matrices, it is simply assumed that each update reduces the efficiency by 2%; with the significant number of updates currently seen, this quickly reduces the improvement. Increasing the filter radius only helps to a degree, as a larger radius quickly reduces the percentage of static variables held fixed. Figure 5.19 shows how the efficiency develops throughout the optimization: the blue lines represent the total improvement through the process and the red lines the improvement for the specific fixed domain.
Throughout all the processes, the early iterations present the most trouble for the radial buffer, as many updates occur with few fixed iterations. The reason for this is the heavy growth occurring during this period. Later in the process the growth slows, and the radial buffer zone can effectively hold the domains fixed for longer periods of time. This is especially true for a greater number of filter passes (i.e. a more extended buffer).
Figure 5.19: Efficiency plots of radial buffer for different numbers of filter passes. (a) Passes: 1; (b) Passes: 2; (c) Passes: 3.

The sensitivity buffer proved to be a poor choice for developing a buffer zone, as can be seen in Table 5.8. No matter which sensitivity threshold was used, the process always became less efficient. The previously mentioned issues of a minimal buffer zone along the edge of the structure are the main cause of the difficulties of this method, as numerous updates were needed due to changes along the boundary. Figure 5.20 shows the domains were only fixed for a few iterations and were not able to increase the total efficiency.
The combined buffer method was the best performing method for developing a buffer zone, see Table 5.8. Significant improvements were seen both in overall efficiency, which increased by approximately 10% over the best improvement achieved with the radial buffer, and in the number of updates needed. The reduction in the number of updates is perhaps the most significant improvement, as only 3 updates were needed while a relatively large efficiency improvement was maintained; with the radial buffer and sensitivity buffer methods, the lowest numbers of updates achieved during testing were 10 and 56 respectively. This is particularly beneficial for minimizing the cost of re-assembling the matrices corresponding to new domains.
Interesting to note is the interaction between the threshold level and the number of passes used. Increasing the number of passes of the radial filter only seemed to help when the sensitivity threshold level was set high, and gave negative results when the threshold level was set low. This makes sense, as a higher threshold level means a smaller buffer zone developed by the sensitivities around the areas of high growth, and thus more updates were needed, as seen in Table 5.8. Radially increasing the buffer zone then helps reduce the number of updates needed. Looking at Figures 5.21e and 5.21f, the increasing number of fixed iterations in the early fixed domains can be seen when using more filter passes, with a direct result of reducing the number of updates.
Figure 5.20: Efficiency plots of sensitivity buffer for different threshold levels. (a) Threshold level: 0.3; (b) threshold level: 0.5; (c) threshold level: 0.7.

When the sensitivity threshold level was set low, a large buffer zone had already been developed from the sensitivities. Adequate area was present to reduce the number of updates, and the additional area in the buffer zone from an extra filter pass only reduced the percentage of the static area. This reduction in static area offset the benefits brought by reducing the number of updates, therefore reducing the efficiency improvement. This effect is seen in Figures 5.21a and 5.21b: the highest improvement seen within a specific fixed domain is about 15% less with 2 filtering passes compared to 1 filter pass.
The effect of the boundary length and the number of interface nodes is also visible in Table 5.8 when comparing the estimated overall time reduction computed from the elements with that computed from the nodes. First, a refresher on how the interface nodes affect the matrices in the substructuring process: the interface nodes affect $K_{ii}$, $K_{ic}$ and $K_{ci}$ in the substructured matrix $K_{CC}$, meaning the size of the matrix is $[i + c] \times [i + c]$. Hence, calculating the percentage reduction using the nodes connected to a changing design variable yields the true time savings of the process. Calculating the percentage reduction using the elements neglects the contribution of the interface nodes and assumes the size of $K_{CC}$ is $[c \times c]$. Comparing the percentage reduction between the two measures therefore indicates the effect of the boundary on the achieved savings. In general, for all the buffering methods presented in Table 5.8, the interface nodes reduce the achieved savings by approximately 4%.
Overall the combined buffer method brings the most benefit to the ASSM approach, both in overall time reduction and in the number of updates needed. A median value of the sensitivity threshold brings the greatest improvement in terms of time reduction, but if the updates prove costly, a lower sensitivity threshold with a smaller filter radius could be more beneficial. The effect of the interface nodes is also important to consider, as a longer boundary between the static and changing domains means fewer savings are achieved. A short investigation of how this method behaves with the traditional SIMP method is presented in Appendix B, Section 4.
Figure 5.21: Efficiency plots of combined buffer for different threshold levels and filter passes. (a) Threshold level: 0.3, passes: 1; (b) threshold level: 0.3, passes: 2; (c) threshold level: 0.5, passes: 1; (d) threshold level: 0.5, passes: 2; (e) threshold level: 0.7, passes: 1; (f) threshold level: 0.7, passes: 2.
5.4.6 Performance of Combined Buffer for Different Volume Fractions
As the structures develop differently depending on the volume fraction, specifically in the sparsity of the structure, it is interesting to investigate how this affects the performance of the buffered problem. From Figure 2.4 in Section 2.2, the differences in the structure are noticeable. One main difference with a potential effect on performance is the length of the boundaries present. Since many design changes concentrate along the edge of the structure later in the optimization process, this can greatly affect the improvement to be expected. Increasing the volume fraction, in other words increasing the amount of material available to create a structure, produces larger boundary lengths. It is therefore suspected that greater efficiency improvements occur with ASSM as the volume fraction decreases. A test was performed using the combined buffer method and several volume fractions. The settings of this test are the same as those used to compare the different buffering methods in the previous section; the only differences are that τ is set to 0.5 and the number of radial filter passes to 1, as this offered one of the best results in terms of performance.
Table 5.9 and Figure 5.22 present the results of this investigation. Volume fractions between 0.2 and 0.01 were tested, with the trend showing better performance at lower volume fractions, as predicted. Lower volume fractions also showed a reduction in the number of updates, particularly at a volume fraction of 0.01.

VF     Num. of Updates   Iter. Fixed   Total Iter.   Iter. Fixed (%)   Est. Overall Time Reduction (%)
                                                                       By Elements   By Nodes
0.2    7                 96            116           82.76             37.20         33.19
0.1    8                 125           145           86.21             47.78         45.58
0.05   6                 145           161           90.06             56.29         54.79
0.01   3                 128           136           94.12             66.81         65.06

Table 5.9: Effects of volume fraction on combined buffer method

Examining Figure 5.22, it is noticeable that the reduction in time for the current fixed domain at higher volume fractions is not as large as at lower volume fractions. As mentioned before, this indicates that the boundaries spread out the changing design elements and, when buffered, reduce the percentage of static variables considerably. The effect of the boundary length is also shown in Table 5.9 when comparing the percent reduction obtained by counting elements versus nodes, as discussed when comparing the buffer methods. At larger volume fractions the boundary has a more significant impact, reducing the savings by about 4%; for sparse structures, particularly at a volume fraction of 0.01, the boundary affects the savings by only about 1.5%. Another factor is how many iterations were fixed: as the volume fraction decreases, the percentage of the total iterations that are fixed increases significantly. This directly impacts the performance and increases the overall benefit of the method.
Figure 5.22: Efficiency plots of combined buffer for different volume fractions. (a) Volume fraction: 0.2; (b) volume fraction: 0.1; (c) volume fraction: 0.05; (d) volume fraction: 0.01.
5.4.7 Buffer Zone Improvements
As this is the first investigation into this method, and due to time restrictions, better buffer zones may yet be developed through further investigation. Figures 5.19, 5.20, and 5.21 already give some hints about where improvements in the buffer zone implementation can take place. Almost all the fixed domains at the beginning of the process are only fixed for a few iterations compared to the number of iterations they are fixed near the end of the optimization process. Some variation of the parameters used to set the buffer zone could therefore be beneficial in this area. At the very beginning, when the percentage of static elements is below the change threshold level, ASSM provides no benefits. To further increase savings there, the idea of global mesh refinement, as discussed in Section 4.1.2, can be employed. This becomes especially important if large numbers of elements are needed for the final mesh refinement level. The idea is thus to start with a coarse mesh to develop a better initial structure; the coarse mesh leads up to the fine mesh being employed once the change threshold level is reached.
As mentioned in Section 5.3.2 and shown in Figure 5.14, once 80% of the domain is static, no significant improvements can be achieved by further increasing the number of static design variables. The focus should then be on increasing the number of iterations the domains are held fixed. A crude attempt was made at such an implementation by increasing the number of filter passes, but this led to an oscillation in the buffer zone, as the radius increased the buffer zone too much. A better implementation could be achieved by determining a proper distribution of the buffer zone where it is most needed, while maintaining 80% of the domain as static.
Finally, looking at Figure 5.17, perhaps a better radial buffer implementation can be developed. One idea is to let the buffer radius vary per design element or design region, imposing a larger radius in regions where a great amount of growth is occurring. This would require a redevelopment of the filter method, which would no longer coincide with the filtering method used within the optimization process. Greater control would be gained, but at the cost of possibly re-developing the filter often.
5.4.8 Performance of Domain Selection and Buffer Zone Development
This section investigates the computational cost of selecting the respective static and changing domains, and of the buffer zone development, relative to the other processes in the optimization loop. These other processes include solving the finite elements, determining the objective and sensitivities of the design, filtering the design, and updating the design to its next configuration. Figure 5.23 presents the cost of the domain selection and buffer zone development as a percentage of the total optimization process. The combined buffer method was used with a threshold level of τ = 0.5, 1 pass of the radial filter, and a volume fraction of 0.1.
As seen from Figure 5.23, the total time needed for developing the buffer zone is small
compared to the overall cost of the finite elements, and fairly small compared to the
other processes as well: approximately 3%, 1.5%, and 1% of the optimization process for
mesh sizes of 50x50, 100x100, and 200x200 respectively. One of the main costs in this
method is the radial buffer used in developing the buffered region. Table 5.10 presents
the percentage of the total buffering cost that the radial buffer implementation consumes.
Figure 5.23: Percentage of computational cost for development of the buffered changing domain
Radial Buffer    50x50 Mesh    100x100 Mesh    200x200 Mesh
1 Pass           14.07%        20.42%          31.78%
2 Pass           19.52%        28.84%          40.58%

Table 5.10: Percentage of time used by the radial buffer within the total buffering and domain selection method
The percentage of time spent radially expanding the buffer region is significant,
especially if more than one pass is needed: on the 200x200 mesh it reaches 31.78% for one
pass and 40.58% for two passes. This share increases with mesh size because the radial
filter is applied to all design variables, irrespective of whether they are marked as
changing with a 1 or as static with a 0. The use of a radial buffer for large mesh sizes
may therefore prove problematic, as it must be performed every iteration. This is another
possible area of future investigation; a simple way to quantify it is sketched below.
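A stand-in timing experiment of this kind is sketched below; gallery('poisson') merely stands in for the assembled conductivity matrix of this work, so the printed ratio is only indicative.

% Time one radial buffer pass against one FE solve on an n-by-n mesh.
n = 200;
changing = rand(n) > 0.95;              % example map of changing elements
K = gallery('poisson', n);              % sparse SPD stand-in, n^2 unknowns
f = ones(n^2, 1);
tic; buffered = conv2(double(changing), ones(5), 'same') > 0; tBuffer = toc;
tic; T = K \ f;                                               tFE = toc;
fprintf('buffer pass: %.2f%% of one FE solve\n', 100 * tBuffer / tFE);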
5.4.9 Adaptive Substructuring Method Summary
In summary, the goal of the adaptive substructuring method (ASSM) is to reduce the
number of degrees of freedom solved for in every iteration, in order to reduce the time
needed to compute a solution. This is accomplished through static condensation, a
substructuring technique. Unlike traditional substructuring, the original optimization
design domain is split on the basis of whether the design variables are changing or
static. The computational advantage of this matrix decomposition comes from the reduction
of the DOFs in the changing domain: with a pre-computed matrix of the static domain, the
remaining DOFs can be solved cheaply (a sketch of this condensation step is given at the
end of this section). The method produced the same result as a full implementation (FI)
of the optimization process, which computes all degrees of freedom at once. A comparison
of the computational cost of ASSM to FI showed that significant savings can be achieved
if the number of static design variables is large and the domains for the static and
changing design variables can be held fixed for numerous iterations. To keep the domains
fixed even though the areas where the design variables are changing themselves change, a
buffer zone is developed around the changing design variables. Three different approaches
were presented, based on a radial buffer, a sensitivity buffer, and a combination of the
two. The analysis found that combining the two methods offered the best performance, with
approximately a 47% reduction in the time needed for the finite element problem. A
comparison of the effect of the volume fraction was then performed: the smaller the
volume fraction, the more benefit ASSM brings, with the best reduction of approximately
67% at a volume of 1%.
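As a concrete illustration, the MATLAB sketch below performs the static condensation step on a stand-in SPD system; the matrix K, the load f, and the index sets c and s (changing and static DOFs) are placeholders chosen for illustration, not the data structures of the actual implementation.

% Static condensation (Schur complement) on a stand-in system.
K = gallery('poisson', 20);           % stand-in conductivity matrix, 400 DOF
f = ones(size(K, 1), 1);              % stand-in load vector
c = 1:100;  s = 101:size(K, 1);       % assumed changing / static index sets
Kcc = K(c,c); Kcs = K(c,s); Ksc = K(s,c); Kss = K(s,s);
% Factor the static block once and reuse it while the domains stay fixed.
R = chol(Kss);
X = R \ (R' \ Ksc);                   % Kss^{-1} * Ksc, precomputed
Ktilde = Kcc - Kcs * X;               % condensed matrix (Schur complement)
ftilde = f(c) - Kcs * (R \ (R' \ f(s)));
Tc = Ktilde \ ftilde;                 % solve only the changing DOFs
Ts = R \ (R' \ (f(s) - Ksc * Tc));    % cheap recovery of the static DOFs

The savings arise because only the comparatively small condensed matrix must be re-factored each iteration, while the factorization of the large static block is reused.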
6
CONCLUSION
In this study, research was performed on a variety of topics to develop sparse structures
through optimization. Sparse designs develop naturally in several types of structures,
such as large spanning bridges, slender reinforcements of thin plates, compliant
mechanisms, and heat conduction optimization problems. The degree of sparsity is also
determined by the specified volume fraction; at low volume fractions of approximately 10%
and under, the sparse character of the problem becomes more prominent. To accurately
develop these structures in topology optimization, a fine mesh is needed to determine the
responses and give the design physical meaning. The issue with using such a fine mesh is
the significant computational cost required to solve for the DOFs. Thus, through my
research, different methods that simplify the problem by reducing the number of DOFs to
be solved were evaluated.
Research on currently available methods looked into different mesh refinement techniques,
structural re-analysis, assembly of the stiffness matrix, and methods to deal with
convergence of the design problem. Each of these methods brings some benefit to the
sparse problem, as all reduce the required computation time in some form. A local mesh
refinement technique provides mesh resolution where it is needed, but costly adaptive
mesh algorithms do not deliver the full benefit for sparse problems. Global mesh
refinement is much simpler, and can be coupled with the adaptive substructuring method
developed within this work to reduce the computational effort needed at the beginning of
the optimization process. Structural re-analysis offers a significant reduction in the
size of the problem by approximating the existing problem characteristics through the use
of basis functions: after an initial solve of the full system, later structures can be
approximated from the initial characteristics for a number of iterations. Sparse
structures often do not have many changing design variables, so the approximations in
this approach could be quite accurate while reducing the number of full-system
computations; more investigation on sparse problems is needed for this approach, however.
Updating only the stiffness matrix entries of the changing DOFs can also bring benefit,
as assembly represents approximately 30% of the total cost of the optimization process
and the vast majority of the DOFs are not changing.
The currently available methods offer some benefit to a sparse optimization problem;
however, two new approaches were proposed which take the characteristics of the sparse
design problem into account more directly. The two main characteristics identified
through the investigation are the appearance of the developed structure and the way the
material changes in continuum optimization methods. The structure for a sparse problem
develops into a form resembling a series of connected bars, each with a different width;
these bars can be used to formulate a skeleton. The other characteristic concerns whether
the design variables are changing: for sparse structures, approximately 90% of the design
variables are no longer changing early in the optimization process, after about 10 or 20
iterations depending on the volume fraction used.
A skeleton modeling approach was proposed to take advantage of the characteristic
structure. By representing the structure through bar elements overlaid on a coarse global
mesh, potentially significant savings in computational time could be achieved. Currently,
however, the feasibility of this method remains questionable. Two ideas were developed to
obtain the skeleton structure. One method extracts the skeleton from the principal
curvature of the level set function; however, the skeleton curve would then be an
implicit representation, making it difficult to determine sensitivities, as a unique and
well-defined differentiable mathematical relation with the design variables must exist.
The other method explicitly develops the skeleton through the use of radial basis
functions. This method presents several issues, such as how to connect the design
variables together to form the skeleton, and how to update the skeleton curve itself,
as the structure is non-continuous. A common issue for both proposals is how to combine
the bar element mesh with the global coarse mesh. A potential method is the concept
called S-FEM, but it could be costly; further investigation of S-FEM is needed to
determine whether it is an efficient approach.
The adaptive sub-structuring method is a promising approach that brings large
computational savings, especially for volume fractions less than 10%: approximately 65%
savings in the time to compute the finite element solution was achieved for a volume
fraction of 1%. The advantage of the approach comes from decomposing the domain based on
whether the design variables are static or changing. As the optimization progresses the
separate domains change, and to prevent frequent updates of the decomposition a buffer
method is introduced. Several ideas were presented, including a radial buffer, a
sensitivity buffer, and a combination of the two. The combination of the two methods
provided the best performance, as it required significantly fewer updates (28 fewer
iterations, depending on settings) and brought larger savings of approximately 10-20%
over the radial buffer method alone. Using only the sensitivity buffer proved infeasible,
as the separate domains needed to be updated many times, making the method
computationally more expensive. To obtain a more accurate estimate of the benefits, more
investigation is needed into constructing the respective stiffness matrices efficiently,
as the present estimates of improvement do not factor this in.
A direct comparison of the methods is difficult to formulate, as each method would have
to be investigated for this sparse case. Overall, each approach offers some improvement,
and the substructuring method is a promising alternative to the traditional methods.
7
R E C O M M E N D AT I O N S F O R F U T U R E W O R K
The recommendations for future work focus on the ideas of skeleton modeling and
substructuring. The skeleton modeling approach is potentially still feasible, but more
investigation is needed into combining the bar elements representing the skeleton with
the background mesh in order to obtain accurate temperatures in the domain. This
investigation would most likely benefit from an initial study of the S-FEM method, as the
approach seems to be a feasible solution to this problem but has received little research
in the context of topology optimization. Further investigation into obtaining the
skeleton structure is also needed, as the two methods presented each have issues that
prevent the updating of the structure; if the structure cannot be updated, the
optimization process cannot be performed.
The substructuring approach already proved promising, but the structuring of the matrices
still needs to be sorted out. If an efficient method for developing the matrices can be
determined, a cheaper method of determining an optimal sparse structure is achieved.
Further investigation of the static condensation implementation is needed, as an
efficient algorithm could not be developed within the MATLAB environment; this would
provide a better estimate of the time savings of the solution process. Finally, further
investigation can be performed on the optimal development of the buffer zone. Three
methods were investigated within this work, but improvements could be achieved by making
the settings adaptive to the problem itself. One idea is to improve the performance early
in the process, where the domains are held fixed for fewer iterations. Another is to
optimally place buffer material once the percentage of static design variables reaches
80%; after this point the focus needs to be on holding the domains static for as many
iterations as possible.
APPENDIX A
1
efficiency analysis for design 2
Figure 1: Breakdown of computational cost for design 2: (a) increase in computational complexity, (b) time differences per iteration, (c) percentage of total optimization process
Figure 2: Computational cost breakdown, Design 2
2
effects of design change on design 2
Figure 3: Percentage of design with change less than 10^-4 for volume fractions of (a) 0.3, (b) 0.1, and (c) 0.001, Design 2
APPENDIX B
3
effects of sparsity
To gain a better understanding of sparsity behavior in the substructuring problem, the
matrix structure must be examined. Through substructuring, the matrices are transformed
from a well-banded structure to a form that is often less efficient to factorize. Many
properties of a matrix can affect the factorization process and the solution of a system
of equations, with the exact details beyond this investigation; among them are the
sparsity of the matrix, the amount of fill-in generated during factorization, and the
structure of the non-zero values. All of these appear to be affected by sub-structuring.
Figure 4: Comparison of the full stiffness matrix with the stiffness matrices of the substructured domain: (a) full stiffness matrix, (b) KCC stiffness matrix, (c) KSC stiffness matrix, (d) KSS stiffness matrix
In this discussion only the sparsity and the appearance of the equations will be
considered, as analyzing how fill-in and the number of non-zero values affect the
structure and solution process is beyond the scope of this investigation. Figure 4
presents a comparison of the stiffness matrix from a full implementation (FI) and the
stiffness matrices from the adaptive substructuring method (ASSM). The structure of the
stiffness matrix for FI, shown in Figure 4a, maintains the simple banded structure that
is efficient to solve within the finite element problem. The difficulty arises within the
substructuring approach, where the stiffness matrices become unstructured. This is
particularly visible in Figure 4b with KCC and Figure 4c with KSC, whereas the static
domain stiffness matrix still maintains its banded structure, as shown in Figure 4d. In
Figure 4b the effect of the interface nodes can easily be seen; these interface nodes
greatly increase the complexity of the matrix structure and form a dense spot in the
matrix.
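A minimal way to reproduce this kind of inspection is sketched below; the index sets c and s are drawn at random here purely for illustration, whereas in ASSM they follow from the buffered domain selection (which keeps the static set contiguous enough for KSS to remain banded).

% Visualize the sparsity patterns compared in Figure 4 on a stand-in system.
K = gallery('poisson', 30);           % 900x900 sparse stand-in matrix
rng(0);                               % reproducible random selection
mask = rand(size(K, 1), 1) < 0.3;     % assumed "changing" DOFs
c = find(mask);  s = find(~mask);
subplot(2, 2, 1); spy(K);         title('Full stiffness matrix');
subplot(2, 2, 2); spy(K(c, c));   title('K_{CC}');
subplot(2, 2, 3); spy(K(s, c));   title('K_{SC}');
subplot(2, 2, 4); spy(K(s, s));   title('K_{SS}');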
4
buffer method with traditional simp
This section shows how the adaptive substructuring method (ASSM) behaves with the
traditional SIMP method instead of SIMP with gray scale suppression (GSS), as discussed
in Section 5.3. First, the traditional SIMP method with a sensitivity filter was
investigated with the settings in Table 1. A threshold level of τ = 0.5 and one pass of
the radial filter were used for this investigation.
Filter Method                 VF     Parameters    Mesh
Sensitivity w/ Rmin = 2.4     0.1    p: 3, q: 1    200x200

Table 1: Settings for combined buffer with traditional SIMP method and sensitivity filter
The results of this investigation are presented in Table 2 and Figure 5. The current ASSM
approach does not perform as well with the traditional SIMP method. The reason lies in
how the sensitivities develop: unlike the SIMP-GSS approach, traditional SIMP does not
develop high sensitivities in regions of high growth. The constant, unsuppressed changes
in the design variables along the edge of the structure, shown in Figure 5b, also
contribute to a reduction in the percentage of design variables held constant, as shown
in Figure 5c. Overall the estimated reduction in time is considerably worse, at
approximately 6.3%, compared to the estimated 45.58% reduction obtained with SIMP-GSS at
the same settings.
            Updates    Iter. Static    Total Iter.    Iter. Fixed (%)    Est. Overall Time Reduction
By Nodes    20         119             146            81.51%             6.29%

Table 2: Results for combined buffer with traditional SIMP method and sensitivity filter
Figure 5: Behavior of traditional SIMP with use of the combined buffer at iteration 50: (a) sensitivity, (b) areas of design change, (c) buffered domain, (d) efficiency
APPENDIX C
5
method of moving asymptotes
A method similar to the sequential approximate optimization (SAO) algorithm presented in
Section 4.4.1 was initially tested and had some issues dealing with the heat conduction
optimization process. Some insight into the method of moving asymptotes (MMA) is
therefore given here, along with the details of the issues it had with the heat
conduction problem.
MMA operates in a similar manner to the SAO algorithm in that it solves the problem
through an approximate explicit subproblem. These approximations are mainly based on
gradient information from the current design iteration as well as information from
previous iteration points. The subproblem is solved, and its unique optimal solution
becomes the start of the next iteration. The exact details of the method will not be
discussed here; the reader is referred to Svanberg (28) and Svanberg (29).
The problem solved by the MMA algorithm, as presented by Svanberg (29), is given in
Equation 5.1, with the optimization variables x = (x_1, ..., x_n)^T, y = (y_1, ..., y_m)^T,
and z.

\begin{aligned}
\text{minimize} \quad & f_0(x) + a_0 z + \sum_{i=1}^{m} \left( c_i y_i + \tfrac{1}{2} d_i y_i^2 \right) \\
\text{s.t.} \quad & f_i(x) - a_i z - y_i \leq 0, \quad i = 1, \dots, m \\
& x_j^{\min} \leq x_j \leq x_j^{\max}, \quad j = 1, \dots, n \\
& y_i \geq 0, \quad z \geq 0
\end{aligned} \tag{5.1}
where x_j are the true optimization variables, and y_i and z are the artificial
optimization variables. To put the above equation into proper form for the heat
conduction problem, the author suggested the following values: a_0 = 1, d_i = 0, and
c_i = "a large number" for all i. Setting c_i to a large number makes the corresponding
artificial variables y_i expensive; typically, at an optimal solution, y and z are equal
to zero.
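A minimal sketch of this coefficient choice is given below; the variable names mirror Svanberg's notation, and the magnitude of c_i is an assumed example rather than a recommended value.

% Hypothetical MMA coefficient choice following the suggestion above.
m  = 1;                   % a single constraint: the volume constraint
a0 = 1;
a  = zeros(m, 1);
d  = zeros(m, 1);         % d_i = 0
c  = 1000 * ones(m, 1);   % c_i = "a large number"; too high over-enforces
                          % the volume constraint, too low fails to hold it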
The issue with the MMA algorithm for the heat conduction optimization is the selection of
the coefficient values, particularly c_i; the problem proved very sensitive to the value
selected. In general the c_i coefficient is used to enforce the volume constraint of the
problem: too high a value forces the constraint too strongly, while too low a value fails
to maintain it. Typically, in the heat conduction optimization problem, the constraint
was not met at the beginning of the optimization process and was approached as the
process continued. However, just as the constraint was about to be satisfied, the problem
became unstable and began to oscillate, so that the volume constraint was no longer met
as an equality. Depending on the problem, particular care had to be taken to properly
select the coefficient c_i.
The same problem was tested on a structural optimization case with minimum compliance,
and the issues did not appear there. A possible explanation is that the approximate
subproblems do not represent the exact behavior of the heat conduction problem; this was
briefly investigated, but no conclusions on the cause were drawn.
BIBLIOGRAPHY
[1] Gregoire Allaire, Francois Jouve, and Anca-Maria Toader. Structural optimization using sensitivity analysis and a level-set method. Journal of Computational Physics, 2003.
[2] Oded Amir. Efficient reanalysis procedures in structural topology optimization, 2011.
[3] Erik Andreassen, Anders Clausen, Mattias Schevenels, Boyan S. Lazarov, and Ole Sigmund. Efficient topology optimization in MATLAB using 88 lines of code. Structural and Multidisciplinary Optimization, 2011.
[4] M.P. Bendsøe. Optimal shape design as a material distribution problem. Structural Optimization, 1989.
[5] M.P. Bendsøe and Noboru Kikuchi. Generating optimal topologies in structural design using a homogenization method. Computer Methods in Applied Mechanics and Engineering, 1988.
[6] M.P. Bendsøe and O. Sigmund. Topology Optimization: Theory, Methods and Applications. Springer, 2003.
[7] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[8] N.D. Cornea, D. Silver, and P. Min. Curve-skeleton properties, applications, and algorithms. IEEE Transactions on Visualization and Computer Graphics, 13(3):530-548, May-June 2007.
[9] J.C.A. Costa Jr. and M.K. Alves. Layout optimization with h-adaptivity of structures. International Journal for Numerical Methods in Engineering, 2003.
[10] N.P. van Dijk, K. Maute, M. Langelaar, and F. van Keulen. Level-set methods in structural topology and shape optimization: A review, 2012.
[11] G.M. Fadel. Two point exponential approximation method for structural optimization. Structural Optimization, 1990.
[12] C. Fleury. CONLIN: an efficient dual optimizer based on convex approximation concepts. Structural Optimization, 1989.
[13] Albert A. Groenwold and L.F.P. Etman. On the equivalence of optimality criterion and sequential approximate optimization methods in the classical topology layout problem. International Journal for Numerical Methods in Engineering, 2008.
[14] Albert A. Groenwold and L.F.P. Etman. A simple heuristic for gray-scale suppression in optimality criterion-based topology optimization. Structural and Multidisciplinary Optimization, 2009.
[15] Seung-Hyun Ha and Seonho Cho. Topological shape optimization of heat conduction problems using level set approach. Numerical Heat Transfer, Part B: Fundamentals, 2005.
[16] Tao-Yang Han and John F. Abel. Substructure condensation using modified decomposition. International Journal for Numerical Methods in Engineering, 1983.
[17] Min-Geun Kim, Seung-Hyun Ha, and Seonho Cho. Level set based topological shape optimization of nonlinear heat conduction problems using topological derivatives. Mechanics Based Design of Structures and Machines, 2009.
[18] Uri Kirsch. Reanalysis of Structures. Springer, 2008.
[19] Sebastian Kreissl, Georg Pingen, and Kurt Maute. An explicit level set approach for generalized shape optimization of fluids with the lattice Boltzmann method. International Journal for Numerical Methods in Fluids, 2009.
[20] Zheng-Dong Ma, Noboru Kikuchi, Christophe Pierre, and Basavaraju Raju. Multidomain topology optimization for structural and material designs. Journal of Applied Mechanics, 2006.
[21] Arash Mahdavi, Raghavan Balaji, Mary Frecker, and Eric M. Mockensturm. Parallel optimality criteria-based topology optimization for minimum compliance. Technical report, Pennsylvania State University, 2005.
[22] K. Maute, S. Schwarz, and E. Ramm. Adaptive topology and shape optimization. Computational Mechanics, 1998.
[23] K. Maute, S. Schwarz, and E. Ramm. Adaptive topology optimization of elastoplastic structures. Structural Optimization, 1998.
[24] Suraj Musuvathy, Elaine Cohen, James Damon, and Joon-Kyung Seong. Principal curvature ridges and geometrically salient regions of parametric B-spline surfaces. Computer-Aided Design, 43(7), July 2011.
[25] S. Osher and J.A. Sethian. Fronts propagating with curvature dependent speed: Algorithms based on Hamilton-Jacobi formulations. Journal of Computational Physics, 1988.
[26] J.S. Przemieniecki. Matrix structural analysis of substructures. AIAA Journal, 1963.
[27] J.A. Sethian and Andreas Wiegmann. Structural boundary design via level-set and immersed interface methods. Journal of Computational Physics, 2002.
[28] Krister Svanberg. The method of moving asymptotes. International Journal for Numerical Methods in Engineering, 1987.
[29] Krister Svanberg. Some modelling aspects for the MATLAB implementation of MMA. Optimization and Systems Theory, 2004.
[30] Colby Swan and Salam Rahmatalla. Strategies for computational efficiency in continuum structural topology optimization. Advances in Engineering Structures, Mechanics & Construction, 2006.
[31] Kumar Vemaganti and W. Eric Lawrence. Parallel methods for topology optimization. Technical report, University of Cincinnati, 2004.
[32] Michael Yu Wang, Xiaoming Wang, and Dongming Guo. A level-set method for structural topology optimization. Computer Methods in Applied Mechanics and Engineering, 2002.
[33] Shengyin Wang and Michael Y. Wang. A moving superimposed finite element method for structural topology optimization. International Journal for Numerical Methods in Engineering, 2006.
[34] Shun Wang, Eric de Sturler, and Glaucio H. Paulino. Dynamic adaptive mesh refinement for topology optimization. Technical report, University of Illinois at Urbana-Champaign, 2010.
[35] Takayuki Yamada, Kazuhiro Izui, and Shinji Nishiwaki. A level set based topology optimization method for maximizing thermal diffusivity in problems including design-dependent effects. Journal of Mechanical Design, 2011.
[36] Zhihua Yue. Adaptive Superposition of Finite Element Meshes in Linear and Nonlinear Dynamic Analysis. Thesis, University of Maryland, 2005.
[37] Chun Gang Zhuang, Zhen Hua Xiong, and Han Ding. A level set method for topology optimization of heat conduction problem under multiple load cases. Computer Methods in Applied Mechanics and Engineering, 2006.