Illustrative 3D visualization of
seismic data
Øyvind Sture
[email protected]
August 2013
Visualization Group
Department of Informatics
University of Bergen
Master's Degree Thesis
Illustrative 3D visualization of seismic data
by Øyvind Sture
August 2013
supervised by
Helwig Hauser
Daniel Patel
Visualization Group
Department of Informatics
University of Bergen
Abstract
Seismic datasets are usually large and noisy, with no apparent structures, at least to the untrained eye. Geo-illustrators and geologists use a set of 2D textures, lithological patterns, to show the interpreted information in seismic data. Lithological patterns are illustrations of the contents of a rock unit or a rock formation. For 2D slice viewing, and for 2D planar cuts in a 3D object, a 2D texture is sufficient.
But to fully take advantage of the information found in 3D seismic datasets, we want to explore the use of specially designed 3D textures to show the interpreted information. This opens up new possibilities, such as viewing the interior of the seismic volume by using traditional tools for handling occlusion. Unfortunately, no such 3D lithological textures exist.
This thesis deals with the process of creating the 3D textures. We analyse the 2D lithological patterns and break them down into symbols and their integration. Then each 2D symbol is manually extended to its plausible 3D counterpart, and finally the 3D objects are placed in the texture as defined by its integration. The control over this synthesis process is placed in an XML schema for flexibility. The lithological patterns cover a variety of different textures, from the stochastic, such as sand, to the structured, such as limestone, and thus follow the two main types of textures that we are able to create: structured textures and stochastic textures.
Contents

1 Introduction
  1.1 Visualization
  1.2 Illustrative visualization
  1.3 Seismic data acquisition / Seismic data
  1.4 Seismic visualization
  1.5 Illustrative visualization of seismic data
  1.6 Goals for this thesis

2 Related work
  2.1 Procedural textures
  2.2 Example based texture synthesis
    2.2.1 Memory saving texture synthesis
    2.2.2 Shape extraction and texture modeling
    2.2.3 Texture synthesis for volume illustration
  2.3 Perceptual issues of 3D visualization
    2.3.1 Contours
    2.3.2 Cutaways
    2.3.3 Transparency and ghosting
    2.3.4 Slice rendering
    2.3.5 Color depth cues
    2.3.6 Ambient occlusion

3 Creating 3D Textures
  3.1 Intro
    3.1.1 Lithologic patterns
  3.2 Design goals for the 3D textures
    3.2.1 Recognizable
    3.2.2 Sparseness
    3.2.3 Deformable
    3.2.4 Implications
  3.3 Overview of the Analysis - Synthesis process
    3.3.1 Analysis
    3.3.2 The texture model
    3.3.3 Object modelling
    3.3.4 Synthesis
  3.4 Analysis
    3.4.1 Integration
    3.4.2 Objects and nested objects
    3.4.3 Density, size and rotation
    3.4.4 Texture building structure
  3.5 Object modeling
    3.5.1 Breccia, Conglomerate and Sand
    3.5.2 Salt and Limestone
    3.5.3 Shale and Dolostone
    3.5.4 Chert and Fossils
    3.5.5 Nested objects

4 Implementation
  4.1 Texture synthesis plugin
    4.1.1 Texture XML file
    4.1.2 Object definition
    4.1.3 Object placement
    4.1.4 Random placement
    4.1.5 Structured placement
  4.2 Data importer plugins
  4.3 Render plugin

5 Results
  5.1 3D textures
  5.2 Synthetic datasets
  5.3 Deformation of 3D textures
  5.4 Additional results

6 Summary, Conclusions and Future Work
  6.1 Summary
  6.2 Conclusions
  6.3 Future work

Acknowledgements

Appendix
Chapter 1
Introduction
Every day we produce more and more data, some might even say that we are
facing a data explosion. As we get gradually closer to a world of ubiquitous
computing, where intelligent technology is integrated everywhere, we create
even more data. The sizes of datasets are also increasing, as there is always a
need for larger scale simulations and higher resolution images from acquisition
devices such as MRI-scanners. Visualization can be used to efficiently get insight
into these enormous amounts of data. Generally speaking, visualization is any
technique that creates images that convey information.
1.1 Visualization
Computer processing power tends to follow Moore’s law, which leads to a continuous increase in data that can be processed. Unfortunately, the human mind
does not follow with a similar increase in its capabilities. One way to overcome this mismatch is with the use of visualization, where data is presented
in a visual form to effectively communicate information to the users. Visualization is usually divided into two communities, scientific visualization (SciViz)
and information visualization (InfoVis). SciViz deals with the visualization of data that have a spatial representation, such as large scale simulations or 3D
datasets acquired from medical scanners or seismic data. The data are often
time-dependent, or multi-variate, i.e. multiple attributes are recorded for each
spatial position. InfoVis is the more recent community, and typically deals with
abstract data, i.e. data that lack a natural spatial representation, such as bank
account data or stock market data. InfoVis thus needs an additional mapping
step to create a visual form.
1.2 Illustrative visualization
Traditional illustrations have been around for quite some time. Some of the earliest examples of technical illustrations can be found in the works of Leonardo
da Vinci (1452-1519). Illustrators have over time created numerous techniques
to effectively convey information. Illustrative visualization draws inspiration from traditional technical and medical illustrations to create non-photorealistic visualizations.
1.3 Seismic data acquisition / Seismic data
The world's main energy sources are oil and natural gas. According to the United States Energy Information Administration, oil and natural gas amount to 60% of the world's energy consumption [1]. When including coal, this adds up to 85% of the world's energy consumption, so in a short time perspective, nothing can replace coal, oil and natural gas as energy sources.
The search for oil and natural gas is done by sending sound waves down
through the earth and then recording the returning reflection. Typical sound
sources can be air guns for recordings in water, and vibrator trucks for recordings
on land. A ship tows an array of airguns and streamers containing hydrophones a few meters below the surface of the water. The airguns produce sound waves at regular intervals, such as every 25 m, which travel down into the earth. When the sound waves encounter an interface between two different layers in the earth, parts of the sound waves are reflected back to the surface. The hydrophones
then record the reflected sound waves. The recorded data is then processed
through stages of deconvolution, stacking and migration. The post-processed
seismic data is then ready for exploration and further analysis.
1.4 Seismic visualization
Seismic data can be visualized using 2D slice rendering, where a plane intersects
the 3D volume. Multiple planes orthogonally aligned with the x- y- and z-axis
are also used. Direct volume rendering (DVR) is another technique used to
visualize seismic data. A transfer function that maps interesting values to high
opacity and other occluding parts to low opacity helps to deal with occlusion. A combination of 2D slice rendering and DVR is often used. Virtual reality installations like the HydroVR project [27] are also used for visualization of seismic datasets, especially for well planning.
1.5 Illustrative visualization of seismic data
Lithological patterns are a language of 2D textures used by geo-illustrators and
geologists to describe and classify rock types found in the lithosphere, the outer
layer of the earth. The Federal Geographic Data Committee has described over 100 lithologic texture patterns in Digital Cartographic Standard for Geological Map Symbolization [16]. Patel et al. use similar patterns from the oil industry in their work [32]. Inspiration from illustrations found in geology books is used to render cutouts of seismic data.
1.6 Goals for this thesis
The goal of this thesis is to create solid 3D texture representations based on 2D lithological patterns, as a way of representing information found in interpreted 3D seismic datasets. By creating specially designed 3D textures that follow our texture goals (the textures must be recognizable, have a sparse representation and be deformable), we can use traditional visualization tools for handling occlusion and depth perception in the seismic 3D datasets.
Chapter 2
Related work
Since the introduction by Catmull in 1974 [6], textures and texture mapping
have been one of the most frequently used methods to add detail to a 3D object. Instead of using a complex 3D model, a simple 3D model can be used, and
detail is added from images mapped onto the surface of the 3D model. The mapping of 2D images onto general 3D surfaces introduces a couple of well-known
problems, such as distortion due to stretching, discontinuity along the edges of
the underlying 3D mesh, or unwanted seams when texture patches are used to
reduce the stretching of the texture. Two common examples illustrate this: the
mapping of a 2D image onto two hemispheres to create a planet representation,
and the mapping of one or more textures onto the sides of a cuboid shape to
represent a wooden plank. One solution is to have textures made specifically
for each 3D surface. This is time consuming. Another solution is the use of 3D
textures.
A 3D texture can be seen as a number of 2D images stacked on top of
each other, to form a solid block. The texture mapping is then replaced by a
process which can be thought of as carving the object out of the 3D texture block. Two problems quickly arise when dealing with 3D textures. The first problem is that 3D textures require memory. Whereas a 2D texture image of 1024² pixels requires 1 MB of memory (for an 8-bit gray-scale image), the corresponding 3D texture of 1024³ voxels (volumetric pixels) requires 1 GB of memory.
The second problem with 3D textures is to actually acquire them. For a
2D texture this can be done using a digital camera or an artist can create the
texture. In theory a similar approach is possible for 3D textures, but to be able
to use a camera requires that the sample object is cut into thin slices, which destroys the sample object. At least one well-known dataset was created this way
[2]. Another less destructive alternative is to use a 3D scanner such as a CT
scanner. But the most common source of 3D textures is 3D texture synthesis.
2.1 Procedural textures
The use of procedural textures has been one of the main sources of 3D textures since their independent introduction by Perlin [35] and Peachey [34] in 1985. A procedural texture is an algorithm that defines a mapping from a point (X, Y, Z) in R³ to a texture value. In the seminal work An Image Synthesizer, Perlin
combines noise of different frequencies as a texture basis function to create a
large number of different realistic looking textures, like clouds, fire, water, stars,
marble, wood and rock. Procedural techniques have since then evolved to a large
field, which spans different areas such as procedural textures, procedural plant
modeling, procedural city synthesis and procedural planets. An overview of
procedural texturing and modeling can be found in the book by Ebert et al.
[15].
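As a rough illustration of this idea of summing noise octaves, a minimal C++ sketch is given below; noise3 stands for an assumed gradient-noise primitive returning values in [-1, 1], and the octave count and weights are illustrative:

    float noise3(float x, float y, float z); // assumed noise primitive, e.g. Perlin gradient noise

    // Fractal sum of noise: each octave doubles the frequency and halves
    // the amplitude, giving the kind of basis used for marble, clouds etc.
    float fbm(float x, float y, float z, int octaves = 5)
    {
        float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f;
        for (int i = 0; i < octaves; ++i) {
            sum += amplitude * noise3(x * frequency, y * frequency, z * frequency);
            frequency *= 2.0f;
            amplitude *= 0.5f;
        }
        return sum; // typically mapped through a color ramp
    }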
Procedural techniques can be powerful; one example of this can be found in the demoscene. ".kkrieger" [18] is a playable 96 KB first-person-shooter executable, released in 2004. According to the developers, 200-300 MB would be necessary to store the game in a conventional way. The screenshot shown in figure 2.1 uses 174 KB when stored as a JPEG image, which is more than the entire game.

Figure 2.1: 1024×512 pixel screenshot from .kkrieger, the 96 KB executable [18].
Procedural techniques can in principle be used to create any 2D and 3D shape
and structure. But there are also limitations. Procedural techniques can be hard
to control, as the algorithms often require several input parameters which are
not intuitive. Therefore a good texture can be generated by surprise while tweaking
the parameters, as serendipity follows from the sometimes unpredictable nature
of procedural techniques.
Lefebvre and Poulin [25] use procedural techniques to synthesize two frequently used texture types: wood textures, and rectangular tilings of bricks and
tiles. Input images of the texture types they want to synthesize are used to
extract parameters which control their procedural texture. To create the tile
texture they first use the Fourier transform to find the height of the tiles, which
corresponds to the major frequency, i.e. the brightest point in the frequency
domain. The orientation of the texture is found from the same point. Horizontal scanlines are then used to find the width of the tiles from an axis aligned
texture. Then the width of the mortar is found by vertical scanlines. Finally
the offset of the tiles is found by horizontal scanlines. A similar approach is
used to find the parameters for the wood texture.
Cutler et al. [9] use a procedural approach to define the interior of solid
models. A scripting language is defined, which allows procedural creation of
material layers and nested objects. Each object’s surface mesh is used to create a
signed distance field. This allows simple set operators such as union, intersection
and subtraction between objects. Figure 2.3 (left, middle) shows an example of
a layered texture, and a nested object with intersecting surface mesh (right).

Figure 2.2: Brick and wood texture created by Lefebvre and Poulin [25].

Figure 2.3: Examples of layered textures and nested objects created by Cutler et al. [9]
Lagae and Dutre [24] present a new texture basis function in the paper
A procedural object distribution function. Instead of a texture basis function
that returns texture values for each pixel in the image, procedurally generated
objects such as squares, stars and circles are uniformly placed onto a procedural
background texture. Their texture basis function controls rotation, scale and
size of the objects. A Poisson disc distribution is created, and a stochastic tiling algorithm is used to ensure a uniform placement of the objects. Examples of
procedural textures are found in figure 2.4.
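Lagae and Dutre construct the distribution with an efficient tile-based method; purely to illustrate the Poisson disc property itself (no two points closer than a minimum distance), a naive dart-throwing sketch follows (names and parameters are ours):

    #include <cstdlib>
    #include <vector>

    struct Point2 { float x, y; };

    // Naive dart throwing in the unit square: accept a random candidate only
    // if it keeps at least minDist to every point accepted so far.
    std::vector<Point2> poissonDisc(int wanted, float minDist, int maxTries)
    {
        std::vector<Point2> pts;
        for (int i = 0; i < maxTries && (int)pts.size() < wanted; ++i) {
            Point2 c = { std::rand() / (float)RAND_MAX,
                         std::rand() / (float)RAND_MAX };
            bool ok = true;
            for (const Point2& p : pts) {
                float dx = p.x - c.x, dy = p.y - c.y;
                if (dx * dx + dy * dy < minDist * minDist) { ok = false; break; }
            }
            if (ok) pts.push_back(c);
        }
        return pts; // object centers for the distribution
    }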
2.2 Example based texture synthesis
In example based texture synthesis one or more images are given as input to
the synthesis method. In methods that deal with 2D texture synthesis, the goal
is often to create a larger texture from a small sample input, or to synthesize a
texture that seamlessly wraps around a 3D object. An overview of 2D example
based texture synthesis can be found in [43]. Example based 3D texture synthesis is closely related to 2D texture synthesis, as many of the methods for 2D have been extended to 3D versions. In 3D texture synthesis every slice of the volume should look similar to the texture input. A survey of 3D solid texture synthesis can be found in [36].

Figure 2.4: Examples of procedural textures created by Lagae and Dutre [24]

Figure 2.5: Examples of textures created by Kopf et al. [23].
Heeger and Bergen use histogram matching and image pyramids in [17]. An
input image is given to the synthesis method, and the output is initialized to a
3D block of noise. The noise texture is first matched to the input image with
histogram matching. Then they create image pyramids, i.e. multi-level reduced
size approximations of the input image and 3D noise block. Histogram matching
is then performed at each level in the image pyramids, before they collapse the
image pyramids and perform histogram matching of the complete texture block.
The method then performs several iterations of histogram matching at pyramid
levels and matching of the complete texture to create the final texture. This
method has successfully synthesized homogeneous stochastic textures, but their
approach fails for textures with inhomogeneous structures.
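A minimal sketch of the core histogram matching step for 8-bit gray-scale data follows (our own simplified formulation; the method of [17] applies this across pyramid levels):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Match the histogram of 'values' (e.g. the noise volume) to that of
    // 'reference' (the input exemplar), both given as flat 8-bit arrays.
    void matchHistogram(std::vector<uint8_t>& values,
                        const std::vector<uint8_t>& reference)
    {
        // Build the cumulative distribution function (CDF) of a signal.
        auto cdf = [](const std::vector<uint8_t>& v) {
            std::vector<double> c(256, 0.0);
            for (uint8_t x : v) c[x] += 1.0;
            for (int i = 1; i < 256; ++i) c[i] += c[i - 1];
            for (double& x : c) x /= (double)v.size();
            return c;
        };
        std::vector<double> cv = cdf(values), cr = cdf(reference);

        // For each gray level, find the reference level with the closest CDF.
        uint8_t lut[256];
        for (int i = 0; i < 256; ++i) {
            int j = (int)(std::lower_bound(cr.begin(), cr.end(), cv[i]) - cr.begin());
            lut[i] = (uint8_t)std::min(j, 255);
        }
        for (uint8_t& x : values) x = lut[x];
    }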
Qin and Yang [38] also use a statistics-based method in their solid texture synthesis, called Aura 3D textures. The method builds on the idea that two textures look similar if their so-called aura matrix distance is within a threshold. Their BGLAM (Basic Gray-Level Aura Matrices) framework is
used to define distance between images. One or multiple images are given as
input, and output is initialized to a 3D block of noise. A set of aura matrices is
created from the input sample(s). The method iterates over the output 3D noise
block, randomly visiting each voxel once and changing it such that the BGLAM
distance decreases. The iteration continues until a threshold is reached, or no
changes between iterations occur.
Kopf et al. use an optimization-based approach for solid texture synthesis in [23]. Voxels in the 3D texture are randomly initialized from the example image.
A global texture energy function is used that measures similarity between the
input 2D example and the synthesized 3D texture. The synthesis is performed
as a two step iteration, where each voxel is first updated based on best matching
neighborhoods of neighbor voxels, and then each neighborhood is updated for
all voxels. Histogram matching is used to ensure similarity over the entire
solid texture. Multiple exemplars can be used for more control over the texture synthesis process; this can be seen in figure 2.5. The left and right images were synthesized from a single input image, whereas the middle image was additionally synthesized with the dot pattern as exemplar for the top view.
Figure 2.6: Textures created by Dong et al. [14]. The left texture is generated from two input exemplars; the middle and right textures use one input exemplar for all 3 axes.
2.2.1 Memory saving texture synthesis
The synthesis of a global solid texture has some disadvantages. One is that a
3D texture requires a lot of space. A second disadvantage is that synthesizing a
volume is time consuming. Dong et al. [14] introduced a new texture synthesis
algorithm called ”lazy texture synthesis”. Instead of synthesizing a full volume,
only a small subset of voxels surrounding the surface of the object to be textured needs to be synthesized. This is realized by pre-computing a number of
3D candidates, i.e. small 3D volumes generated from well chosen 2D neighborhoods. These 3D slabs are then used for texture synthesis. Their method is fast
enough to allow real time synthesis on surface cuts. Figure 2.6 shows examples
of structures generated by this method.
When modeling the interior of a 3D object, sometimes global texture synthesis gives insufficient control over the synthesis process. To resolve this the
interior can be synthesized specifically to match the border and interior of the
object to be textured, by using the border and interior information as input to
the synthesis process. This can be done procedurally, as in [9] (introduced in section 2.1).
Owada et al. [30] present an interactive system for texture synthesis on
cuts in a 3D model. They use a browsing interface, where the user can freely
cut the 3D object, and a modelling interface, where the user selects a texture
from a number of reference images and provides the synthesis process with
directional information when necessary. Three types of texture are supported: isotropic texture, layered texture and oriented texture. Isotropic textures are
textures which are uniform in all directions, i.e. they look the same regardless of
orientation, and they are automatically synthesized using 2D synthesis onto the
cut surface. In layered textures the variations in the texture are found primarily
along the axial or radial direction. When this type of texture is used, a 2D distance field needs to be created on the cut surface, which defines the start and
the end of the texture, or the centre and boundary for circular cut surfaces.
The third texture type is the oriented texture, which is characterized by
bundled fibres along a flow direction. For this texture type, the user needs to
sketch the flow direction. A reference image perpendicular to the flow direction
is used to synthesize a small reference volume, by sweeping the image along
the z-axis. The cut surface is then textured with this volume along the flow direction.

Figure 2.7: On the left, the three supported texture types: isotropic, layered and oriented. On the right, an example of a textured cut surface by Owada et al. [30]
A different approach is presented by Pietroni et al. [37], where the texture
exemplars are a few cut-plane images of a real object. Based on this the internal
structure of a similar 3D model is synthesized. The input images must be placed
in the 3D framework of the 3D model as a preprocessing step. These images
form a BSP (binary space partitioning) tree. Each voxel to be synthesized is
then projected onto the exemplars in the BSP-tree to find a set of source points.
Interpolation and morphing are then used to synthesize the voxel. This process
allows real time synthesis of texture, thus avoiding the need for creating it in a
preprocessing stage and then storing it in memory.
Takayama et al. [41] repeatedly paste irregularly shaped solid texture exemplars to create a solid texture with their method Lapped solid textures. A
tetrahedral mesh of the object is generated, and solid texture exemplars are
pasted over the surface of the object. As in [30], they require user interaction to
place the texture: the user needs to draw the direction of the tensor field. The
pasted texture exemplar is warped to align with the tensor field of the object.
Translucent rendering of a solid volume is possible, but this requires multiple
slices of the model. Their system also allows manual pasting of secondary textures, as seen in the watermelon model.

Figure 2.8: Watermelon with seeds as secondary texture on the left, and rendering of a fibrous tube on the right using a translucent texture exemplar, created by Takayama et al. [41]

2.2.2 Shape extraction and texture modeling

For textures consisting of small oriented particles, it can be more suitable to use a different approach than treating the texture sample as a global collection of colors. Instead a more analytical approach can be used, where the shape of the particles is used as a basis for creating 3D particles, and texture modeling is then used for particle placement. Dischler and Ghazanfarpour use such an approach
in Interactive Image-based Modeling of Macrostructured Textures [12], where a
"macrostructure texture" is a texture consisting of small oriented particles.
2D texture sample images are manually segmented, and two orthogonal particle
outlines are represented as spline polar curves. When no orthogonal views are
available, the particle outlines can be drawn by hand. The particles are then
extended to 3D by using the principle of a generalized cylinder. The 3D particles
are then applied onto the surface of 3D objects, and are used for texture synthesis
on surfaces.
Jagnow et al. [20] use techniques from stereology to synthesize a specific class of textures, consisting of solid particles in a binding medium. Stereology is an interdisciplinary field that deals with extracting 3D information from measurements of 2D planar cuts of the material. The input image is segmented into one mean profile image and a residual image. A set of 3D particles is manually constructed (figure 2.9 left), and arbitrary cuts are applied to them. Stereology is then used to determine the 3D distribution of the particles based on the
particle cuts and the profile image. All particles are placed into the volume,
without regard for overlap. An annealing process is used for collision detection
and relaxation of particle positions such that they do not overlap. The residual
image is then used to synthesize a noise volume, which is combined with the
particle distribution volume to form the final texture (figure 2.9 right).
Chang et al. [7] extend the work by Jagnow et al. with a method for
synthesizing the particles from the 2D image, and a deterministic method for
particle placement. The input image is first segmented, and then the smaller
segments are removed. The particles are sorted into 10 bins based on particle
area. A number of particle outlines from one bin are randomly selected and used
as cross sections for different viewing angles. Starting with a cubic volume, the
outlines then carve out the particle shape, using the idea of visual hulls. The
particles are synthesized one by one, and a histogram matching approach is used
to control which particle is selected for placement. The particles are placed following a best-fit policy.
Figure 2.9: The left image shows manually created particles, and the right image shows the solid texture created by Jagnow et al. [20]

For the synthesis of individual particles different methods can be used. In [20] the particles were manually constructed, and in [7] a visual hull was used. Jagnow et al. compare four different methods for synthesizing a 3D particle based on a 2D shape in [21]. They synthesize particles with spherical harmonics, CSG (constructive solid geometry), generalized cylinders and morphed generalized cylinders. Spherical harmonics use the deformation of a sphere to model
particles, and CSG carves the object from a solid block using orthogonal particle
profiles. The principle of a generalized cylinder uses a base shape and a sweep
curve. The base curve is moved along the z-axis, and scaled by the sweep curve.
Morphed generalized cylinder allows the use of more than one sweep curve. The
base curve is still moved along the z-axis, with interpolation between morph curves to create a continuous surface. For evaluation, different rock samples were
3D scanned, and slices from the 3D scans were used to synthesize particles.
2.2.3 Texture synthesis for volume illustration
Most of the work in solid texture synthesis has been done with a focus on photorealistic results or memory-efficient representations, but solid texture synthesis has also been used for creating illustrations from volumetric data. Lu and Ebert synthesize solid textures for volume illustrations in [29]. As a basis for
the synthesis method, 3D sample volumes from a high resolution volume dataset
such as the Visible Human Project [2] are used. 2D illustration textures are then
used to recolor the samples, using a color transfer process based on arranging
clusters in color histograms. The color adjusted 3D samples are then used to
synthesize a set of Wang cubes to be used in memory efficient volume rendering.
Dong and Clapworthy use solid texture synthesis on volumes with orientation
in their illustrative renderings of medical data in [13]. In a preprocessing step,
a vector field for the texture orientation of the volume is calculated, using edge
detection and Hough transform. A solid texture sample is generated from a
2D example image, and the texture sample is rotated along the vector field to
follow the texture orientation. The rotated texture sample is then used as basis
for voxel-based texture synthesis.
Wang and Mueller use texture synthesis to create sub-resolution zoom levels
of medical volume data in [42]. Example images of the different zoom levels are
color matched and segmented in a preprocessing step, and used as basis for the
synthesis process. They then synthesize volumes of different zoom levels, which
are blended using constrained texture synthesis to ensure coherence between the
different levels. Constrained texture synthesis is realized using different texture
synthesis approaches for different texture regions.
2.3 Perceptual issues of 3D visualization
When working with 3D volumetric visualization, there are two important limitations. One is related to the mapping from 3D data to the 2D computer screen, and the second is that a 3D world introduces occluding objects. Numerous approaches have been used to improve the 3D perception of a computer model, including virtual reality, motion, and depth cueing. Similarly, there have been numerous approaches for dealing with occluding structures, including
transparency, ghosting, contours, cutaways and slice rendering. In the following
we discuss these issues.
Figure 2.10: Examples of contour visualization of volumetric data created by [8]

Figure 2.11: Examples of renderings created by [22]
2.3.1 Contours
A contour is an abstraction of an object, where only the outlines of the object are visible. This technique is commonly found in technical illustrations, where the emphasis of important parts is necessary to effectively convey visual information. A true contour can be defined as the points on the object surface that have a normal perpendicular to the viewing direction. Contours can be used to deal with occluding surfaces. Csébfalvi et al. present a fast way to visualize object contours in [8]. They use a view dependent function s(P, V) = (1 − |∇(P) · V|)^n, where P is the position of the current voxel in the volume and vector V is the viewing direction. A windowing function g(|∇(P)|) is then multiplied with the view dependent function to produce an intensity function I(P, V) = g(|∇(P)|) · s(P, V). This allows them to visualize contours of objects in the volume with minimal occlusion (figure 2.10 left). Depth cueing and level lines are used to further enhance the 3D perception (figure 2.10 right).
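The intensity function can be evaluated per voxel as in the following sketch, where the gradient is normalized inside the function and window() stands for the windowing function g (both helper names are ours):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float window(float gradientMagnitude); // assumed windowing function g

    static float dot(const Vec3& a, const Vec3& b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // I(P, V) = g(|grad|) * (1 - |grad/|grad| . V|)^n, after Csebfalvi et al. [8].
    float contourIntensity(const Vec3& grad, const Vec3& viewDir, float n)
    {
        float len = std::sqrt(dot(grad, grad));
        if (len == 0.0f) return 0.0f; // homogeneous region, no contour
        Vec3 g = { grad.x / len, grad.y / len, grad.z / len };
        return window(len) * std::pow(1.0f - std::fabs(dot(g, viewDir)), n);
    }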
Contours based only on the product between the normal and the view vector have variable thickness. Commonly a contour is created if v · n is less than some limit. If a part of the object surface is almost flat, i.e. it has an area with normal vectors almost perpendicular to the view vector, this will be rendered as a thick contour, as seen in figure 2.11 (left). Kindlmann et al. [22] propose to solve the contour thickness problem using a curvature measure, with the use of second order derivatives. When the curvature in an area is small, only a narrow contour should be created. This is realized with the inequality |v · n| ≤ √(T·Kv(2 − T·Kv)), where Kv is the normal curvature in the view direction, and T is a user defined constant. Figure 2.11 shows contours with T = 1 (middle) and T = 2.5 (right).
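The test itself reduces to a small predicate; a sketch with Kv and T as defined above:

    #include <cmath>

    // Draw a contour where |v . n| <= sqrt(T * Kv * (2 - T * Kv)),
    // after Kindlmann et al. [22]; Kv is the normal curvature in the
    // view direction and T the user-defined thickness constant.
    bool onContour(float vDotN, float Kv, float T)
    {
        float r = T * Kv * (2.0f - T * Kv);
        return r > 0.0f && std::fabs(vDotN) <= std::sqrt(r);
    }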
DeCarlo et al. take the idea of contours one step further in [10]. Here they introduce the idea of suggestive contours, where they also draw contour lines that actually belong to a nearby viewpoint. The suggestive contours extend the true contours, and create a more expressive image. This can be seen in figure 2.12, where the left image shows a contour rendering, and the right shows contours and suggestive contours.

Figure 2.12: Examples of renderings created by [10]
2.3.2 Cutaways
Clipping and cutaways are also well known tools from illustrations. The basic
idea is to remove the less important occluding object, such that the more important hidden object becomes visible. A problem with such a simple approach is that the context can be lost. Diepstraten et al. present a set of rules for automatic generation of cutaway illustrations in [11]. They separate the cutouts into two classes: cutouts when dealing with large interior objects, and breakaways when dealing with smaller, closely placed interior objects.
Li et al. present an interactive system for creating cutaway illustrations in
[26]. A basic segmentation is needed, where the user classifies each part as a
geometric shape, and specifies a good viewpoint. An automatic cutaway is then
created, which the user can further manipulate. A CSG approach is used to
cut the model, and the system allows three types of cutting volumes, an object
aligned box cutting volume, a tubular cutting volume and a window cutting
volume.
When large cutouts are used, they can remove too much of the less important
objects. Burns and Finkelstein [5] use non-photorealistic rendering techniques to restore some of the context which was lost in the cutout. This is realized by keeping ghost lines in the cutout, i.e. rendering contours and creases of the parts of the objects that were cut away.
2.3.3 Transparency and ghosting
Another technique commonly found in illustrations to deal with occlusion is the use of semi-transparency. A fully transparent surface lacks shape and depth cues. Interrante et al. combine opaque elements with layered transparent surfaces in [19]. A stroke texture is applied along the principal direction of the
surface. Color coding is used to convey distance between layers, and stroke
length is used to convey curvature.
Ghosting is a technique commonly found in illustrations, where the artist
reduces opacity of non-important parts to display more important internal parts.
Bruckner et al. use a rendering approach inspired by ghosting in [3]. Their
idea is that flat surfaces facing the viewer can be treated as non-important. This makes it possible to create a ghosting effect on unsegmented data, where opacity is reduced based on a combination of gradient magnitude, shading intensity and eye distance.

Figure 2.13: Noisy seismic data shaded with gradient in the left image and ambient occlusion in the right image, images created by [31]
2.3.4 Slice rendering
Slice rendering, or 2D views, is a well-known technique for volume visualization. 2D views are used in commercial software to augment 3D models.
2.3.5 Color depth cues
Non-photorealistic rendering can be used to increase depth perception of 3D
models. Weiskopf and Ertl propose a depth cueing scheme where they combine
intensity and saturation in [44]. Intensity depth cueing is the standard depth
cueing, where colors are made darker or lighter as a linear or exponential function of the distance between camera, foreground and background. Depth cueing
by saturation creates a more subtle effect, where distant objects are rendered
with a desaturated color.
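An illustrative sketch of combining the two cues follows; this is a simple linear variant of our own, not the authors' exact model, with t the normalized depth in [0, 1] and both strengths assumed in [0, 1]:

    struct Color { float r, g, b; };

    // Darken with distance (intensity cueing) and blend towards gray
    // (desaturation) for distant fragments.
    Color depthCue(Color c, float t, float intensityStrength, float saturationStrength)
    {
        float darken = 1.0f - intensityStrength * t;  // linear intensity cueing
        float gray = (c.r + c.g + c.b) / 3.0f;        // simple luminance proxy
        float s = saturationStrength * t;             // amount of desaturation
        c.r = darken * ((1.0f - s) * c.r + s * gray);
        c.g = darken * ((1.0f - s) * c.g + s * gray);
        c.b = darken * ((1.0f - s) * c.b + s * gray);
        return c;
    }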
2.3.6 Ambient occlusion
Another way to increase depth perception is to change the lighting model. When
a typical shading model such as Phong is used to render 3D volume data, lighting is calculated from the position and gradient of the current voxel. Ambient
occlusion is a local approximation of a global lighting model, where the surrounding structure is taken into account. At each voxel location rays are shot
in every direction to determine how occluded the current voxel is. This method
has been used to enhance the visualization of noisy seismic volume data in [31] by Patel et al.
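A sketch of this sampling, where density() is an assumed volume lookup and the direction set and step parameters are illustrative:

    #include <vector>

    struct Vec3 { float x, y, z; };

    float density(const Vec3& p); // assumed volume lookup

    // Shoot rays from voxel p in many directions; a ray is blocked if it
    // hits dense material. Returns 1 for fully open, 0 for fully occluded.
    float ambientOcclusion(const Vec3& p, const std::vector<Vec3>& dirs,
                           int steps, float stepSize, float threshold)
    {
        int blocked = 0;
        for (const Vec3& d : dirs) {
            for (int i = 1; i <= steps; ++i) {
                Vec3 q = { p.x + d.x * i * stepSize,
                           p.y + d.y * i * stepSize,
                           p.z + d.z * i * stepSize };
                if (density(q) > threshold) { ++blocked; break; }
            }
        }
        return 1.0f - (float)blocked / (float)dirs.size();
    }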
Chapter 3
Creating 3D Textures
Figure 3.1: How to go from 2D textures to 3D textures
3.1 Intro
In the previous chapter a number of techniques for synthesizing 3D textures from
2D exemplars were presented. Among these were procedural synthesis, image
based synthesis and the analysis-synthesis process for textures with macroscopic
structures. A common challenge for many of the image based techniques is their ability to handle structural textures, whereas procedural techniques are often designed to handle only a small set of texture types.
Most 2D exemplars used in the methods presented in chapter 2 are photos
of some real world concept, or of photo-like quality. Our problem is somewhat
different, since the lithologic patterns we use as starting points for our synthesis process are illustrations of the rock structures found in the layers of the earth. This makes our starting point less well defined than in typical image based synthesis, so in many cases it is necessary to take into account what the lithologic patterns represent.
In this chapter we will describe our solution for extending 2D lithologic textures to 3D versions of the textures. Our process is similar to the work by
Jagnow et al. [20] in that we manually create each of the objects to be used
as building blocks for the texture synthesis process. Instead of using stereology and an annealing process for object placement, we place control over the integration process in an XML scheme. Before we go into the depths of our analysis-synthesis approach for extending the 2D lithological patterns to 3D versions, some more details about the lithological patterns are needed.

Figure 3.2: Different patterns, sizes and orientations: patterns 605 (a), 620 (b) and 616 (c), textures from FGDC [16]

Figure 3.3: Different densities and sizes: patterns 601 (a) and 602 (b), textures from FGDC [16]
3.1.1 Lithologic patterns
Geologists and geo-illustrators use lithologic patterns or textures to describe the physical characteristics of rock units. The textures can be seen as abstractions or illustrations of the rock units, and since the textures represent the rock types, lithological standards used by different people share some similarities. The US Federal Geographic Data Committee (FGDC) has described over 100 lithological textures in Digital Cartographic Standard for Geological Map Symbolization [16]. This standardization of the textures is used as a starting point for our sparse 3D textures. Examples of these textures can be found in figures 3.2 to 3.6. The lithological textures are 2D images, which work well with the traditional ways of viewing seismic data, such as slice views. But as some of the seismic datasets today are also generated in 3D form, we want to examine the use of 3D textures for seismic illustrations.
3.2 Design goals for the 3D textures
Lithologic patterns used by different geo-illustrators and geologists share a similar
structure, as mentioned in [33]. This similarity is something we want our 3D
textures to share as well.
Figure 3.4: Nested symbols (a, pattern 635) and layered (interbedded) patterns (b, pattern 679), textures from FGDC [16]

Figure 3.5: More complex layers: crossbedded (a, pattern 632), ripple-bedded (b, pattern 611) and irregular (c, pattern 709), textures from FGDC [16]

Figure 3.6: The two extremes of the textures: patterns 607 (a) and 627 (b), textures from FGDC [16]
3.2.1 Recognizable

Each 3D texture should have a clear connection to its corresponding 2D lithologic texture pattern. The textures should be recognizable from axis aligned cut planes, but preferably be recognizable from any cut plane that lies along the z-axis.
3.2.2 Sparseness

The 3D textures should have a sparse representation, so that it is possible to look into the texture. With a sparse representation we want to keep the overall structure of the texture, but represent it as a collection of 3D glyphs in a grid.
3.2.3 Deformable

The sparse representation of a texture should be able to be deformed without becoming unrecognizable, where the deformation is defined by a parametrization volume. A parametrization volume is a 3D volume that maps un-deformed 3D textures to the deformed layers found in an interpreted 3D seismic volume (more details in section 4.2). Deformation especially causes trouble when working with nested textures, i.e. textures with small details.
3.2.4 Implications

The design goals listed above create a few limitations on the types of textures we focus on. As we will show in later sections, for some textures we could not create a sparse representation that still keeps the texture recognizable.
3.3 Overview of the Analysis - Synthesis process

Figure 3.7 shows an overview of the 2D to 3D texture process. The first step is texture analysis, where we describe what characterizes the 2D textures. This includes describing the integration of the texture objects, and for some patterns it is necessary to include rock analysis, i.e. looking into what the 2D texture represents. In addition to a set of integration schemes, we end up with a set of 2D symbols. The next step is creating a 3D version of the 2D symbols. The 3D symbols are then used as building blocks for the integration schemes to make the final 3D texture.
3.3.1 Analysis

The two extremes in texture classification are stochastic textures, which look like noise, and regular textures, which have a clearly defined structure. Liu et al. [28] suggest five different texture classes: regular, near-regular, irregular, near-stochastic and stochastic. Many of the texture patterns defined by FGDC [16] can be classified into one of the two base types, stochastic or regular. These are the texture types we have focused on. An example of a regular texture is found in pattern 627 (figure 3.6 right), and similarly an example of a stochastic texture can be found in pattern 607 (figure 3.6 left).
A second characteristic of the lithologic patterns is the layered structure of
many of the patterns. The layers are horizontal, as seen in figure 3.4 (right),
crossbedded, as seen in figure 3.5 (left), or more complex, as seen in figure 3.5 (right). We have focused only on textures with horizontal layers. In addition some of the texture patterns use nested objects. Only one level of nesting is found in the textures, with up to 3 different nested objects found in the base object. Additionally, variations in size, density and rotation of the base objects are found. Our idea is to separate the base texture objects or glyphs from their integrations, such that the 3D synthesis becomes a 2-step process, where first we extend the 2D objects to their 3D counterparts, and then use these 3D objects to build up the final 3D texture. This leads to the following model of a texture: a set of symbols that can be scaled, rotated and nested, placed in a dense or sparse integration, and possibly separated by different layers.

Figure 3.7: Overview of the texture building process, where we start with 2D textures, go via an analysis process to a set of 2D symbols, extend these to 3D objects, and integrate the 3D symbols in a synthesis process to end up with 3D textures.
3.3.2 The texture model
As mentioned above, our textures are created by placing 3D objects in a texture
space. We define a layer as a continuous texture space of similar objects. And
when possible, we use repeating layers, as shown in figure 3.8.
Figure 3.8: Texture 2D layers, textures from FGDC [16]
In our texture model we only have horizontal texture layers, although the
layer concept could be extended to layers with irregular surfaces. This would
allow the texture model to handle more complex textures such as 2D textures
with cross-bedded patterns (figure 3.5 left) and irregular patterns (figure 3.5
right). We have not implemented this extension, as we believe that more complicated texture layers would make it more difficult to follow the deformation
of the texture.
Figure 3.9: Texture layers
3.3.3 Object modelling

In the last chapter a number of techniques for synthesizing 3D solid textures from 2D exemplars were presented. A common limitation for these techniques is dealing with structured textures. On the other hand, procedural techniques can in theory generate any kind of texture, but they are difficult to work with. That is why a somewhat different approach is chosen, where the 3D realizations of the 2D symbols are manually created. The 3D realizations of the 2D symbols are then used as building blocks to create the 3D solid texture.

Some of the 2D symbols are easily extendable to 3D representations. Examples of these are gravel and limestone. Other 2D symbols can be extended to 3D by looking into what the 2D symbol represents. The rock type shale breaks into thin sheets. The rock type breccia consists of broken rock fragments embedded in a fine grained matrix. Salt forms cubic crystals, so the 3D representation is a cube.
3.3.4 Synthesis

Having manually constructed a set of 3D symbols, the process of texture synthesis reduces to scaling, rotating, nesting and placing the objects so that they form a plausible solid texture. The object placement is separated into two different methods, which follow from the two base types of textures, stochastic and regular.

The method for stochastic textures uses a random approach. A location in space is given, and a simple check is performed to see if there is enough space for the current 3D symbol to be placed. Regular textures use a different approach: the method starts in one corner of the texture block, and places the objects one after another. If there is not enough space for an object, a smaller object may be tried in the same location. An XML scheme is used for control over the synthesis process.
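A sketch of the space check used in the random approach is given below; the bounding-box representation and names are illustrative, not the actual plugin interface:

    #include <vector>

    struct Box { float x, y, z, w, h, d; }; // position and extents of an object

    // Axis-aligned bounding-box overlap test.
    bool overlaps(const Box& a, const Box& b)
    {
        return a.x < b.x + b.w && b.x < a.x + a.w &&
               a.y < b.y + b.h && b.y < a.y + a.h &&
               a.z < b.z + b.d && b.z < a.z + a.d;
    }

    // Accept a randomly proposed object only if there is enough space.
    bool tryPlace(std::vector<Box>& placed, const Box& candidate)
    {
        for (const Box& b : placed)
            if (overlaps(b, candidate)) return false; // not enough space
        placed.push_back(candidate);                  // accept the object
        return true;
    }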
3.4 Analysis

Since we have now presented an overview of the analysis-synthesis process, it is time to go into more of the details. First we want to define the texture concepts. A texture is divided into layers. In our texture model we only operate with horizontal layers. One layer is allowed to fill the entire texture. When the texture consists of more than one layer, the layers are allowed to be repeating. The texture layers can be filled with 3D objects, or can stay empty. When working with nested objects, we distinguish between the nested objects and the containing object, which we call the base object.

Figure 3.10: XML scheme for textures
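To illustrate the kind of description the XML scheme holds, a hypothetical texture definition is sketched below; all element and attribute names are invented for this illustration, the actual schema being the one shown in figure 3.10:

    <texture name="fossiliferous-limestone">
      <layer height="1.0" repeat="true">
        <!-- base object with one nested object -->
        <object shape="limestone" size="0.2">
          <nested shape="fossil" size="0.05"/>
        </object>
      </layer>
    </texture>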
The FGDC document [16] defines a total of 117 lithological patterns, which is too many patterns to deal with in a master thesis. But as mentioned before, when you break the patterns down into details, many share the same structure. If we start with the main characteristic of the textures, whether a pattern is pseudo-random or has a structured layout, we can divide about 2/3 of the patterns into one of these two groups. With 117 patterns to work with, some exclusion is needed. We wanted to continue working with those that fit our texture model. This removes the patterns found in FGDC [16] part 37.2, which consist of metamorphic rock, igneous rock and vein matter. This leaves the 84 sedimentary rock patterns.
3.4.1 Integration

Starting with the outer structure of the textures, there are a few integration types that stand out. The most common is the "closed brick" pattern. There are more than 20 variations of this pattern. (If we include the "open brick" pattern as a brick pattern, this totals about 30 of the patterns.) The closed brick pattern is characterized by no space between the objects. Open brick patterns are similar to the closed brick pattern, but with small distances between objects.
Two of the integration patterns operate on textures instead of objects. These are the 6 crossbedded patterns and the 12 interbedded patterns. An example of a crossbedded pattern is found in figure 3.5 (a), and an example of an interbedded pattern is found in figure 3.4 (b). The interbedded patterns are easily realised using layers in the texture model, and assigning different textures to different layers. The crossbedded patterns are a bit more complicated. Although crossbedded patterns in their simplest form are beds of textures placed at an angle to the texture layer, the patterns found in the FGDC document do not have this layout. The crossbedded patterns have multiple curved segments, which are not easily extendable to a 3D representation. The book Cross-Bedding, Bedforms and Paleocurrents [39] discusses crossbedding, and there exists MATLAB code [40] that simulates cross-bedding, but we have decided not to work with crossbedding patterns.
Similarly, the 3 ripple bed patterns are not included, but a ripple as seen in the pattern in figure 3.5 (b) can be expressed as a sine wave. This sine wave defines the placement and bending of the texture objects, which are all very flat (sand and sheet objects). Extending this pattern to 3D can be realised with a sine plane, or by extending the sine wave along one axis, almost like sweeping.
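As a small sketch, the height offset of an object placed on such a sine plane could be computed as follows (amplitude and wavelength are illustrative parameters):

    #include <cmath>

    // Ripple bed as a sine plane: the wave runs along x and is extended
    // (swept) unchanged along the second axis.
    float rippleHeight(float x, float amplitude, float wavelength)
    {
        const float pi = 3.14159265f;
        return amplitude * std::sin(2.0f * pi * x / wavelength);
    }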
3.4.2 Objects and nested objects

In table 3.1 we find the set of 2D symbols we have selected to work with. The structures found here are either just basic geometrical shapes, or variations of them. As an example we have the shale variant dolomitic, which is characterized by its small line to the right, and similarly we have dolostone, as a modified limestone. Other textures, such as the gravel texture, consist of sand particles and larger objects which represent gravel. The nested objects are only found in 3 of the base objects: limestone, dolostone, and chert.
Table 3.1: Example symbols found in the patterns from FGDC [16]: Sand, Gravel, Breccia, Shale, Chalk, Dolomitic, Limestone, Dolomitic limestone, Dolostone, Chert 1, Chert 2, Fossil
3.4.3 Density, size and rotation

Density is only found in the pseudo-random textures, although if "open brick" and "closed brick" patterns are grouped together as a "brick pattern", density could control the space between each object. Some of the texture patterns use variations in size, and rotation is only found in pseudo-random textures.
3.4.4 Texture building structure

Having defined the concepts, we can start to build the 2D textures in a schematic view: a 3-level structure where the top level represents the integration, the next level represents the objects in the texture, and the third level represents the size of the base object or nested objects.
3.5 Object modeling

When extending a 2D lithological object such as the ones found in table 3.1 to a 3D object, the process is not straightforward. The 3D objects must be kept as simple as possible to avoid problems when the texture is deformed by the parametrization volume, but the objects still need to represent the information contained in the lithological patterns. To ensure these two design goals are met, we prefer to use basic geometrical shapes whenever possible, or construct new shapes with two of the three most common CSG operators, union and intersection.

Figure 3.11: The structure of two example textures
Figure 3.12: Intersection of the space between 3 sets of parallel lines (red, green
and blue) creates a 2D breccia object.
3.5.1 Breccia, Conglomerate and Sand

The rock types breccia and conglomerate are both characterized by having relatively large particles. Where they differ is in the shape of these particles: in breccia, the particles are broken angular fragments, whereas conglomerate has rounded fragments. This difference can easily be seen in table 3.1. The 3D objects representing breccia are created by a CSG approach as illustrated in figure 3.12, where a number of planes create the angular shape. For simplicity a diamond-like shape is used. If more advanced breccia representations are needed, it is possible to add more cut planes, vary the cut planes for each instance of the breccia object, and combine multiple breccia objects into one. This more advanced approach is not implemented, as uniform objects are important when deformed with the parametrization volume.
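A sketch of the point-membership test behind this CSG construction follows: a point lies inside the breccia object if it is between every pair of parallel cut planes (plane normals and half-widths are illustrative):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Intersection of slabs: each slab is the space between two parallel
    // planes through the origin with the given normal and half-width.
    bool insideBreccia(const Vec3& p, const Vec3* normals,
                       const float* halfWidth, int planeSets)
    {
        for (int i = 0; i < planeSets; ++i) {
            float d = p.x * normals[i].x + p.y * normals[i].y + p.z * normals[i].z;
            if (std::fabs(d) > halfWidth[i]) return false; // outside this slab
        }
        return true; // inside the intersection of all slabs
    }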
3.5.2 Salt and Limestone

The lithological pattern that represents salt is possibly the easiest one to create a 3D version of. As mentioned before, salt forms cubic crystals, thus the 3D object is a cube. There are no variations in size with this pattern.

The pattern for limestone is very close to a traditional brick pattern. A procedural 2D brick pattern is found in the work by Lefebvre and Poulin [25]. Such a pattern can be extended to 3 dimensions. Keeping with the separation of 3D objects and their integration, we choose a different approach. We treat each block as a 3D object, which is later placed in its correct position. In the current
implementation, the limestone objects have predefined size relations, i.e. each block has a quadratic base of size n, and a height which is half of that. As the textures are supposed to be viewed from arbitrary angles, the quadratic base makes it possible to create a brick pattern along two axes, where the brick pattern is defined as the traditional half-block overlap. The limestone pattern as shown in the document defined by FGDC [16] is not quite as simple as this. There is a slight variation along the x-axis, and additionally, a variation of the height of the rows of blocks.
Figure 3.13: 3D realisation of salt and limestone object. More advanced block
model.
A better solution for the block objects is to define them by all three dimensions. This would allow for the variation found in the limestone pattern, and include the salt object. Furthermore, if we allow the block object to be defined by both its dimensions and its bounding box, then this object can also be used to represent shale. Currently, shale is treated as a separate object.
3.5.3 Shale and Dolostone

The rock type shale is characterized by the way it breaks along its thin parallel layers. We choose to keep the quadratic base as found in the limestone object, and change the height of the object to 1/10 of the base length. If we wanted to create a more realistic approach to shale, a variation of a Voronoi diagram could be a starting point, where instead the edges of the polygons are irregular. Our quadratic abstraction may work better when the objects are deformed, and is sufficient for our use.
Dolostone is a rock type closely related to limestone. It is thought to be formed by replacing some of the calcium in limestone with magnesium, i.e. a modified limestone. Looking at the lithological patterns, this modification might be illustrated by shearing each limestone block. Thus the 3D representation of this object is a sheared limestone block.
3.5.4 Chert and Fossils

Similarly to dolostone, the lithologic symbol for chert does not appear to be based on how the rock type bedded chert looks, but more likely on the fact that chert fractures in a Hertzian cone when struck. Three variations of chert are found among the lithological patterns. Here the smoothest version (Chert 2 in table 3.1) is selected to be extended to a 3D object and represent chert. Once again, a 2D object can be extended to 3D in numerous ways; in this case the 2D object is centred in the yz plane, and then rotated about the z-axis.
Figure 3.14: Chert 3D realization path.
We are uncertain what the symbol for fossils is supposed to represent, possibly a shell. The crescent is kept as it is, and extended to 3D by moving it along one axis.
3.5.5 Nested objects
Nested objects are found in 17 of the 84 sedimentary rock patterns. Fortunately
there are only three different base objects used to contain the nested objects.
These base objects are limestone (rectangular cuboid), dolostone/dolomite (parallelepiped) and bedded chert (toroid with filled center). The problem with
nested objects is not how to create a 3D version of the nested 2D object, which
in most cases is just a scaled down version of our already defined 3D objects,
but rather how to ensure that the nested objects are always visible.
When adding a nested object to the base object, a few approaches are possible. We separate these approaches into two groups: one which creates nested objects in the 3D textures, and a second group where the nesting of the objects takes place after the 3D texture with base objects has been cut and deformed. We have not implemented any method from the second group.
Nested 3D textures
A general solution for nested objects is difficult to find. When we want to keep the frame to frame coherency when cutting the 3D texture, the best solution is possibly to sweep the nested object as a 2D object along the viewing direction, or the axis closest to the view direction (see figure 5.5 left). This method can be extended to all 3 axes, and works fairly well as long as no cutting of the base objects occurs. When the nested object is only swept along one axis, cuts up to about a 45 degree angle work in most cases. The problem with this method appears when a cut through the centre of the base objects is performed (see figure 5.6 right). A variation of this approach is to not sweep through the entire base object, but to sweep a small distance into the object, from four or all six sides (see figure 5.6 left).
There is also the possibility of making the base object transparent, so that the nested object is always visible. But in this case we quickly run into the problem of distinguishing between one layer a of nested objects contained in transparent base objects, and another layer b whose base objects are similar to the nested objects of a.
Finally we have the problem with deformation. When working with small textures and large deformations, the nested objects tend not to 'survive' the transformation, i.e. they are no longer recognizable. One way of dealing with this problem is described in the next paragraph, since the nested objects only contribute to identifying the 3D texture in its layer, and not so much to showing its deformation.

Figure 3.15: Dark grey nested ellipse object swept along one direction. Two directions create a cross pattern, which does not work when cut through the center.
Nested onto cut surface
A workaround of the problem with deformation is to paste images or 3D objects
onto or into the visible surface after the surface has been cut. Dischler and
Ghazanfarpour used a similar approach in their work [12], where they pasted
3D object onto polygonal 3D objects, and pasting 2D textures onto surfaces is
a well known technique. This kind of approach will probably give the nicest
result, but has not been tried here.
Chapter 4
Implementation
This solid texture system has been implemented in C++ and OpenGL, as a set of
plugins in VolumeShop [4]. VolumeShop is a flexible prototyping framework for
volume visualization, developed by Stefan Bruckner. Additionally, Matlab was
used for prototyping of the distance volumes for the 3D objects. An overview
of our implementation is as follows (as seen in figure 4.1):
• Read parameters from XML-file, resource and importer interface.
• Partition the texture volume into layers, and fill the layers with objects.
• Read the parametrisation volume and its corresponding volumetric dataset.
• Render solid texture mapped images.
4.1 Texture synthesis plugin
The texture synthesis importer plugin uses VolumeShop's built-in importer and resource interface. Depending on the number of resource components selected when adding a resource to VolumeShop, three different types of textures can be created. A one component volume is the basic one, where the alpha values represent the texture. A four component volume is used for storing precalculated gradients used for more advanced shading. The solid texture importer plugin then controls the size and scaling of the finished volume. The details of the solid texture are then read from an XML file given as input, whereas the size of the texture is defined in the plugin interface, as seen in figure 4.2, right image.

Figure 4.1: Program structure
4.1.1 Texture XML file
The XML file is where the texture specification is made. A simple XML Document Type Definition (DTD) is included, and can be used to check the correctness of the texture pattern specification in an XML editor that supports DTD. The XML parser included in VolumeShop only reads plain XML, so the DTD part needs to be commented out before use.
<?xml version="1.0" ?>
<!DOCTYPE PATTERN [
<!ELEMENT PATTERN (INTERLEAVE, LAYER+)>
<!ELEMENT INTERLEAVE EMPTY>
<!ELEMENT LAYER (INTEGRATION, SHAPE+)>
<!ELEMENT INTEGRATION EMPTY>
<!ELEMENT SHAPE EMPTY>
<!ATTLIST PATTERN
NAME CDATA #REQUIRED >
<!ATTLIST INTERLEAVE
TYPE (ONCE|REPEAT) #REQUIRED >
<!ATTLIST LAYER
SIZE (SMALL|MEDIUM_SMALL|MEDIUM|MEDIUM_BIG|BIG|DEBUG|CDATA) #REQUIRED>
<!ATTLIST INTEGRATION
TYPE (RANDOM|STRUCTURED) #REQUIRED>
<!ATTLIST SHAPE
TYPE (SPHERE|BLOCK|DISC|SHEET|PLUS|TORUS|BRECCIA|HEXABLOCK|SKEWBLOCK|PYRAMID|
FLATTORUS|CUBE|KINDLMANN|CONE|SOLIDBLOCK|GRID|BRICK|TEST) #REQUIRED
SIZE (VERY_SMALL|SMALL|MEDIUM_SMALL|MEDIUM|MEDIUM_BIG|BIG|DEBUG|CDATA) #REQUIRED
DENSITY (SPARSE|MEDIUM_SPARSE|MEDIUM|MEDIUM_DENSE|DENSE) #REQUIRED
ROTATION (NONE|FREE|ORTHOGONAL|FREE_OLDBOX|BRICKSHIFT) #REQUIRED
MAXOBJECTS CDATA >
]>
Interleave
The first control parameter is the type of interleaving of the layers in the texture; two different modes are allowed, repeated and once. Figure 4.3 shows the layer structure. This can be seen as four different layers, or as one small layer and one big layer repeated.
Layer
Then the layer definition follows. One texture can consist of up to 10 different layers. Each layer has a size definition, which controls the height of the layer. There are 6 predefined layer heights, or the height of the layer can be defined directly by a number.

Figure 4.2: Resource and loading interface for the solid texture importer

Figure 4.3: Textures are built up of layers

Figure 4.4: The 3 first objects placed with different object placement methods, bounding boxes shown
Integration
Each layer has a specified integration, either random or structured, which controls how the objects are placed in the current layer. See sections 4.1.3 to 4.1.5 for details.
Shape
Each layer then consists of different objects. A total of about 20 different objects are possible to use in the current implementation. Each object has the parameters type, size, density and rotation. Type is the 3D object building block, e.g. sphere, brick or sheet. Size defines the largest dimension of each object. There are 7 predefined sizes of objects. Additionally, the object size can be defined by a number. Density is a parameter that only works for random placement of objects. This controls how many texture objects are placed in a texture. More details about density are found in section 4.1.4.
Rotation has 5 possible values. Two of these need further explanation, free oldbox and brickshift. Free oldbox is a variation of free rotation, where the bounding box of the original axis aligned object is kept. Brickshift only works for structured placement, and is explained in more detail in section 4.1.5. And finally, maxobjects limits how many objects of this type are allowed in the current layer.
Nested objects
Nested objects are special cases of the basic objects, and work exactly as base objects in the current implementation.
4.1.2 Object definition
The second goal for our 3D textures was that the textures should have a sparse representation (3.2.2). To ensure this we have designed each 3D object to have a scalable representation. This makes it possible to modify the size of the 3D objects in the renderer, without changing the integration of the objects or the need to regenerate the 3D texture. Each 3D object is defined to have a value of 1.0 in its center and 0.0 at the border of the object. In figure 4.6 low resolution examples illustrate the interior of a 3D object.

Figure 4.5: Left: Bounding box creates distance between objects. Right: rotated bounding box creates a new larger bounding box.

Figure 4.6: Images of a sphere (gravel), a cube (salt), and a filled torus (chert2)
Common for all objects is a shift to a coordinate system where x = 0, y = 0 and z = 0 is placed in the center of the object. Since each 3D object is created by sampling a shifted distance function, this allows the coordinate system for the object to be rotated before the sampling takes place. This lets us have the correct value placed at each voxel without the need for error introducing or costly interpolation. Thus, the creation of a 3D object is as follows: for each object, a for loop runs through the entire space, i.e. the 3D object's bounding box, and samples the shifted distance function at each voxel.
For each (x, y, z):
Sphere: α = 1.0 − √(x² + y² + z²)
Cube: α = 1.0 − max(|x|, |y|, |z|)
Breccia: α = 1.0 − max(2.0·|x + y + z|, |x + y − z|, |x − y + z|, 2.0·|x + y|, |x − y − z|)
Most objects are defined in a similar manner, or by a pseudo-CSG approach. We don't use union or intersection as found in CSG, but we create the more complex objects in a similar way with the use of minimum, maximum, and absolute value operators.
4.1.3 Object placement
Our texture placement method consists of 3 parts: one preliminary part, and then for each layer in the 3D texture either structured or pseudo-random placement is used. In the preliminary part, the layers of the texture must first be defined so that they fit the texture volume. A texture has at least one layer, and this layer is used as a background layer regardless of its defined size. If the texture has more than one layer, the texture height is divided into parts corresponding to the layer sizes. If the layer pattern is repeating, the heights of all texture layers are used before they are repeated. When this process is finished, the texture placement iterates over each height of the texture, and selects either structured or pseudo-random placement for this location. This process is shown in algorithm 4. The iteration over each height value of the texture is to make it possible to have non-horizontal layers, which is not implemented.

Figure 4.7: Texture generating process. Select type represents the selection of placement method. Create boundaries selects the objects and creates their bounding boxes. Then the location is found and checked, before an object may be generated.
All placement of 3D objects is done with a bounding box, even for spheres. This is done to simplify the bounding box check, and thus to speed up the texture generation. The texture is typically generated as a 256³ voxel volume. In addition to the texture volume, an additional volume keeps track of whether a voxel space is used or not.
Memory
In the introduction to chapter 2 it was mentioned that one of the problems with 3D textures is their memory requirements. Although generating memory efficient textures is not the focus of this thesis, memory usage is a topic which needs to be handled, since much of this thesis was developed on a GeForce GTS 250 with 512 MB of video memory.

The solid textures are not color textures, because 3 color channels would require 3 times the memory, as shown in table 4.1. The color details are placed in the transfer function, i.e. the textures are color independent/grey-scale. This allows easy separation of the layers by assigning a different color to each layer. On the other hand, memory which is not located on the graphics card is cheap and upgradable, so there has not been much focus on limiting the memory usage during the texture building process.

The texture building process uses three equally sized memory blocks of size M*N*O: a solid texture block, an object block and a boolean block to check whether the current voxel is used or not.
           float     RGB
1 voxel    2 Bytes   6 Bytes
128³       4 MB      12 MB
256³       32 MB     96 MB
512³       256 MB    768 MB

Table 4.1: Memory required for some texture sizes
Figure 4.8: Voxel testing steps for empty texture space. The red cube corresponds to the bounding box of the object in the texture volume; the next 3 images show the testing of voxels inside the red bounding box.
Object space testing
When placing an object of size (a, b, c) in the texture volume, it is first necessary to check that the location is empty. This is performed with a brute force approach, starting at location (x0, y0, z0) and checking all the voxels in the texture volume from (x0, y0, z0) to (x0 + a, y0 + b, z0 + c). When the object placement is performed in an iterative way as found in structured placement (4.1.5), this is sufficient. But when the object placement is random (4.1.4), this brute force approach on its own is too slow.
Instead a 3-step procedure is used, as shown in figure 4.8. First the given (x, y, z) location is checked. Then the remaining 7 corner voxels of the object's bounding box are checked, before the entire bounding box volume is checked by iterating over all the voxels. This simple check of the corner voxels gives sufficient speed for our texture building process, and is shown in algorithm 1.
Algorithm 1 Algorithm for object space testing
Check that the objects first corner voxel is empty
If empty, check if the remaining 7 corner voxels are empty.
If all are empty, check all voxels of the objects bounding box.
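A minimal C++ sketch of this 3-step test, assuming a flat boolean occupancy volume of dimension dim and a candidate bounding box that lies fully inside it (the function name boxIsEmpty is hypothetical):

#include <vector>

// Sketch: 3-step emptiness test for a bounding box from (x0,y0,z0)
// to (x0+a, y0+b, z0+c) in the boolean occupancy volume 'used'.
bool boxIsEmpty(const std::vector<bool>& used, int dim,
                int x0, int y0, int z0, int a, int b, int c)
{
    auto at = [&](int x, int y, int z) { return used[(z * dim + y) * dim + x]; };

    // Step 1: the first corner voxel.
    if (at(x0, y0, z0)) return false;
    // Step 2: the remaining corner voxels of the bounding box.
    for (int dz : {0, c})
        for (int dy : {0, b})
            for (int dx : {0, a})
                if (at(x0 + dx, y0 + dy, z0 + dz)) return false;
    // Step 3: brute force over all voxels inside the bounding box.
    for (int z = z0; z <= z0 + c; ++z)
        for (int y = y0; y <= y0 + b; ++y)
            for (int x = x0; x <= x0 + a; ++x)
                if (at(x, y, z)) return false;
    return true;
}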
Bounding Box
When placing objects into the texture, each object occupies a size equal to its bounding box. A bounding box is defined by a position (X, Y, Z) and its dimensions (width, height, depth). The second image from the left in figure 4.8 shows an example of this, where the cube in the left corner represents the position of the bounding box. The use of a boolean texture limits how tightly the objects can be packed together. Another approach that was tested, but later dropped due to very slow texture generation (minutes instead of seconds to generate a texture), was the use of a 'soft' bounding box. As mentioned in section 4.1.2, objects are defined as distance volumes with values from 1.0 in the centre to 0.0 far from the centre. A value in between, let's say 0.5, is selected as the bounding box value. This allows the outer parts of two close objects to merge together, and thus a much denser packing of objects. For the two circles in figure 4.5 the dotted green lines would be the 0.5 boundary.
Figure 4.9: Texture space is virtually extended by the length a of the largest
3D object, such that the excess part b of the object can be wrapped around.
Rotation of the bounding box

An object is defined in an axis aligned coordinate system. When rotated objects are used in the texture, special care is needed to avoid a jagged effect. Since the objects are implicitly defined, it is possible to rotate the coordinate axes first, and then calculate correct values for the objects. Figure 4.5 left shows a 2D view of a worst case scenario for spheres with a bounding box. Rotating the bounding box creates a new larger bounding box.
Texture tiling
Since an increase in the resolution of a 3D texture quickly consumes the entire memory of a graphics card, a better solution is texture tiling, i.e. to repeat the texture along its x-, y-, and z-axes. Texture tiling introduces 2 problems, repetition and unwanted seams. Repetition is not addressed in this thesis, as it does not cause problems for our use of 3D textures.

To avoid unwanted seams when texture tiling is used, the objects in the texture must be allowed to extend beyond the texture space, which is realized by virtually extending the texture. This allows the placed object to wrap around, i.e. the excess part of the object is placed at the beginning of the texture. Figure 4.9 shows a 2D illustration of this concept.
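A small C++ sketch of such a wraparound write, under the assumption of a flat voxel layout (the name writeWrapped is hypothetical):

#include <vector>

// Sketch: write a voxel with wraparound, so the excess part of an object
// that extends beyond the virtual texture border reappears at the
// opposite side, giving seamless tiling.
inline void writeWrapped(std::vector<float>& texture, int dim,
                         int x, int y, int z, float alpha)
{
    // A positive modulus maps any coordinate back into [0, dim).
    auto wrap = [dim](int v) { return ((v % dim) + dim) % dim; };
    texture[(wrap(z) * dim + wrap(y)) * dim + wrap(x)] = alpha;
}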
Algorithm 2 Algorithm for finding bounding box of each object
calculate bounding box from object type and size
if (rotation)
rotate bounding box
if (new bounding box is selected)
//creates a larger bounding box
create new extended bounding box
Algorithm 3 Algorithm for generating each object
call find bounding box
for all i, j, k
map each voxel to its corresponding position in a rotated coordinate system
save the distance value of the object in the rotated coordinate system
Algorithm 4 Algorithm for generating a texture
define real values for predefined values for object & layer (SMALL...BIG)
count total number of objects
//layer setup
calculate height of all layers //layerTotalSpace
find start height of each layer //layerStartPosition
create array layerId of length height_of_texture
for all heights i
set layerId[i] to 0 //use first layer as background layer
if(more layers than one)
if (repeat_of_layers is false)
for all layers l
if(i>=layerStartPosition[l] && i < layerTotalSpace)
layerId[i] = l
else //repeat_of_layers is true
imod = modulus(i, layerTotalSpace)
for all layers l
if(imod>=layerStartPosition[l])
layerId[i] = l
//main loop
for all heights i
find first_object that have correct layer id
find total objects in this layer
check which layer type, go to structured or pseudo-random parts
if (layerId(i) is ’structured’)
create structured texture layer
else
create pseudo-random texture layer
4.1.4 Random placement
For the random placement method there is a loop running until a finite number of steps is reached, controlled by the texture size and the density of the layer. For each iteration, a random (X, Y, Z) location is fetched. A simple check is done to see if there is enough free space around this location to place the current object. If there is space, the object is placed; if not, a new location is tried. Figure 4.4 (left) shows an example of how the first 3 object bounding boxes can be placed. This procedure is shown in algorithm 5.

Figure 4.10: Worst case rotated square bounding box, 2D
Density estimate for random placement
The method for estimating the density of a texture when placing objects is simple but still efficient. The total texture space in each layer is calculated. Then an estimate of the bounding box for objects with rotation is calculated, by averaging a worst case bounding box and a best case bounding box. The worst case bounding box is found by calculating the distance from the center of the object to a corner. Figure 4.10 shows an example of this. For flat objects this differs quite a lot from the actual worst case bounding box, but once again, we only want an estimate of the worst case for the density calculation. For objects without rotation the best case is used.

We further estimate that a densely packed random texture has space for about half the number of objects that it is possible to place with a structured texture placement procedure, i.e. all objects as close as possible. This number is then divided into 5 intervals, to be used as limits on the number of objects in a sparse to dense texture. This results in a method that works, but has inconsistent results.
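The estimate can be summarised in a short C++ sketch; the structure follows the description above, but the function name and exact arithmetic are our own simplification:

#include <cmath>

// Sketch: estimate how many objects of dimensions (a, b, c) fit in a
// layer of 'layerVoxels' voxels. For rotated objects the average of the
// best case box and a worst case cube (side = twice the center-to-corner
// distance) is used; a dense random texture is assumed to hold about
// half of what structured placement would fit.
int estimateMaxObjects(int layerVoxels, float a, float b, float c, bool rotated)
{
    float best = a * b * c;
    float box = best;
    if (rotated)
    {
        float worstSide = std::sqrt(a * a + b * b + c * c); // 2 * half diagonal
        box = 0.5f * (best + worstSide * worstSide * worstSide);
    }
    return static_cast<int>(layerVoxels / box) / 2;
}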
When generating a texture of low density, i.e. a texture where the number of objects is much less than the estimated maximum number of objects, it is possible that all generated objects are placed in one corner of the texture, due to the random placing of objects. In this case we just regenerate the texture.
Algorithm 5 Algorithm for generating pseudo-random texture
//for all heights i
save original layer size
//for density estimate
if only one layer
extend layer to entire texture height
for all objects in this layer
while counter is less than 10000 //break at 10000 tested positions
calculate a random position (x,y,z) inside layer
estimate number of objects at selected density
check if number of objects < estimated density limit
check if texture space is empty
check if rotation is needed (recompute bounding box size)
findObjectSize (objectType, objectSize, rotation)
if height of object fit inside texture
check all corner voxels of bounding box
if no error, check all voxels in bounding box
if all voxels empty
generateObject (objectType, objectSize, rotation)
place object and add to object counter
4.1.5 Structured placement
For the structured method every voxel is traversed, first along the Z, then the Y and finally the X axis. For each location, once again a check is done to see if there is enough space to place the current object. If there is space, the object is placed. If not, depending on XML parameters, the object may be rotated orthogonally, or a smaller object may be tried in the same place. If the current layer consists of more than one 3D object, the building process alternates among the objects found in the layer, i.e. if the layer has 3 objects A, B, C, the objects are placed in an A, B, C, A, ... sequence. Figure 4.4 (right) shows how the first 3 object bounding boxes are placed. This procedure is shown in algorithm 6.
Brickshift
Brickshift is an input XML parameter used with structured placement. This
defines whether each row of objects needs to be shifted half the object size to
create the familiar brick pattern. To allow for variation in the brick textures,
a random value from 0 to 1/8 of the object length is added to the shift of the
texture. Brickshift only works in two directions. Additionally brickshift works
best with equally sized objects in the pattern.
Figure 4.11: Details of brick shift
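A tiny C++ sketch of the brickshift offset, assuming rand() for the variation (the function name is hypothetical):

#include <cstdlib>

// Sketch: every second row of objects is shifted half the object length,
// plus a random variation of up to 1/8 of the object length.
int brickShiftOffset(int row, int objectLength)
{
    if (row % 2 == 0)
        return 0;
    int jitter = std::rand() % (objectLength / 8 + 1);
    return objectLength / 2 + jitter;
}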
Figure 4.12: Top-down view of siltstone pattern
Figure 4.13: Chessboard pattern for siltstone
Chessboard placement
The chessboard pattern is an extension of the structured pattern. This extension was included to make it easier to represent alternating patterns of groups of objects of different sizes and numbers. Such texture patterns are found in 616, 617 and 618, which are variations of siltstone. Siltstone 616 consists of a repeating pattern of 3 small sand objects and one larger silt object, as shown in figure 3.2 (c). A 3D realization of this pattern is shown from a top down view in figure 4.12, with an alternating pattern of one large object and 9 small objects.

What separates the chessboard pattern from a basic structured pattern is that the object placement is a two-step process. The first object in each layer is used to divide the layer into a chessboard grid, where each cell has the same size as the first object, and then each black cell is filled with the first object. The chessboard placement then continues as a basic structured placement for the remaining objects in each layer. This two-step process thus allows the siltstone 616 pattern to be defined by just the two objects found in the pattern.
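The first step of the two-step process can be sketched in C++ as follows, where fillBlackCells and the placement callback are hypothetical names:

#include <functional>

// Sketch: divide the layer into a chessboard grid of cells with the size
// of the first object, and fill every 'black' cell with that object. The
// remaining objects are then placed with the basic structured method.
void fillBlackCells(int layerWidth, int layerDepth, int cellSize,
                    const std::function<void(int, int)>& placeFirstObject)
{
    for (int cz = 0; cz * cellSize < layerDepth; ++cz)
        for (int cx = 0; cx * cellSize < layerWidth; ++cx)
            if ((cx + cz) % 2 == 0) // 'black' cell
                placeFirstObject(cx * cellSize, cz * cellSize);
}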
Algorithm 6 Algorithm for generating structured texture layer
recompute numeric values if needed //(SMALL..BIG)
if (brickshift)
find size of first object in this layer
calculate offsets for brickshift
if (chessboard)
find size of first object in this layer
place the first object in a chessboard pattern
for all voxels at height i (i.e. for all j,k)
for object = objNr to number_of_objects_in_layer
check that max number of this object is not reached
check if space empty
check if rotation needed
findObjectSize (objectType, objectSize, rotation)
if height of object fit inside texture
check all corner voxels of bounding box
if no error, check all voxels in bounding box
if all voxels empty
generateObject (objectType, objectSize, rotation)
place object and add to object counter
if objNr < lastObject in layer //for alternating placement
objNr++
else
objNr = first object in layer
4.2 Data importer plugins
As shown in figure 4.1, our solution requires a couple of additional data volumes when we want to create more complex images than just images of our 3D texture alone. These 3 supplemental plugins use the built-in importer and resource interface of VolumeShop, just as the texture synthesis plugin does.
Seismic volume importer plugin
The seismic dataset we used for deformed visualization of our 3D textures consists of two parts: one volumetric dataset containing the seismic 3D volume, and a parametrization volume. To be specific, two plugins were implemented to read this dataset. The volumetric dataset is stored in the brk file format, where data is stored in 16³ blocks (x0...x15, y0...y15, z0...z15) that follow a header containing the (x0, y0, z0) reference for the block. The parametrisation volume is a 3 component 256³ volume, where the components correspond to a (u, v, w) parametrisation in an (x, y, z) system. This volume is stored in a straightforward way, with the 3 components next to each other. For details about creating a parametrisation volume, see Patel et al. [32].
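As a rough C++ sketch, reading one block of this layout could look as follows. The field types and the exact header layout are assumptions for illustration only, not a specification of the brk format:

#include <cstdint>
#include <fstream>

// Sketch: one 16^3 block preceded by its (x0, y0, z0) reference.
struct BrkBlock
{
    int32_t x0, y0, z0;            // assumed header fields
    uint8_t voxels[16 * 16 * 16];  // assumed voxel value type
};

bool readBlock(std::ifstream& in, BrkBlock& block)
{
    in.read(reinterpret_cast<char*>(&block.x0), sizeof block.x0);
    in.read(reinterpret_cast<char*>(&block.y0), sizeof block.y0);
    in.read(reinterpret_cast<char*>(&block.z0), sizeof block.z0);
    in.read(reinterpret_cast<char*>(block.voxels), sizeof block.voxels);
    return static_cast<bool>(in);
}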
Figure 4.14: Design of the synthetic data volume
Synthetic parametrisation volume plugin
The deformation of our 3D textures is done with the help of a parametrisation volume. This is a 3 component volume, where for each voxel location (x, y, z) an RGB value is stored, corresponding to the parametrisation at that location. To test out the concepts it was necessary to create a few synthetic parametrization datasets. As this is on the border of the thesis scope, the synthetic datasets were kept as simple as possible.
The first dataset consists of 2 cylindrical layers, where layer 2 overwrites layer 1. No deformation between the layers is implemented. By using cylindrical deformation along the i- and k-axes and just using one quadrant, the parametrisation can be calculated from the radii of the layers (circles) and the angle as shown between the red lines. At each location (i, k) we have k = (a − a1)/(a2 − a1) and i = (r − r1)/(r2 − r1), which gives us the i and k parameters. The j values are unchanged,
they follow the volume location.
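As a small worked sketch in C++, the two mappings are simple linear interpolations between the bounding angle and radius values (the names are hypothetical):

// Sketch: map the angle a and radius r of a voxel linearly between the
// bounding values of the cylindrical layer, giving the k and i
// parameters; j follows the volume location.
struct IK { float i, k; };

IK cylindricalParams(float a, float r,
                     float a1, float a2, float r1, float r2)
{
    IK p;
    p.k = (a - a1) / (a2 - a1);
    p.i = (r - r1) / (r2 - r1);
    return p;
}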
We also created a 2D deformed and a full 3D parametrisation volume, following the steps described in the work by Patel et al. [32]. The method handles 3D deformation between two surfaces, the upper surface and the lower surface. Our implementation iterates over each voxel between the surfaces, where first the closest distance to the upper and the lower surface is calculated. The distance from the lower surface is then divided by the total distance, and we have a height (z) volume with values between 0 and 1. We then select the upper volume as base, and project each xy location through the height volume, by using the gradient at each voxel location as direction. Debug steps of this process are shown in the left of figure 5.14, and we end up with an xyz parametrization with too much noise.
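The height volume computation reduces to one normalisation per voxel, sketched here in C++ (the function name is hypothetical):

// Sketch: the distance to the lower surface divided by the total distance
// between the surfaces gives a height value in [0, 1] for each voxel.
float heightValue(float distToLower, float distToUpper)
{
    return distToLower / (distToLower + distToUpper);
}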
Texture mask plugin
One goal when designing our 3D textures was that the 3D textures should be
recognizable when cut. For consistent testing of cuts through the 3D textures,
a couple of 3D texture masks were created. A basic texture mask is a 2D or 3D
boolean texture, which selects whether a pixel or voxel is included or not. An
example of a 2D texture mask is shown in figure 4.15.
Figure 4.15: (a) 2D boolean texture mask, (b) 2D gradient texture mask, both
shifted vertically by a sine wave.
Our texture masks are created by sweeping a 2D texture mask along the third axis to form a 3D volume, but instead of using a boolean 2D texture mask, we use a 2D gradient mask with values between 0 and 1. This allows us to move the cut surface in the renderer while keeping the shape or curvature of the cut. Although the use of a 3D texture as texture mask allows true 3D cuts, we have not implemented this, since the more irregular cuts make the texture design more complicated.
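A minimal C++ sketch of how such a gradient mask can be evaluated, assuming the 2D mask is stored as floats in [0, 1] (the function name is hypothetical):

#include <vector>

// Sketch: a voxel is visible when the gradient mask value at its (x, y)
// location is above the current cut threshold. Moving the threshold in
// the renderer moves the cut while keeping its shape; the swept third
// axis (z) does not enter the test.
bool voxelVisible(const std::vector<float>& mask2D, int width,
                  int x, int y, float cutThreshold)
{
    return mask2D[y * width + x] >= cutThreshold;
}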
4.3 Render plugin
The standard renderer in VolumeShop uses a transfer function [4] to map voxel values to colors and opacities to be used for direct volume rendering. We build upon the standard renderer, but implement a couple of additions necessary for use with our 3D textures. With the use of a couple of extra transfer functions, we can control scaling and transparency along the x- and y-axis. This is realised by letting each additional transfer function control the volume along one axis. Transfer functions are also used to handle texture masks, where the α value from the texture mask is multiplied with the rendered volume. To help with depth perception we add intensity shading and chroma-depth shading. The details of this shading are described in section 2.3.5. A few of the images found in chapter 5 use such depth based shading.
Contour shading
Our 3D textures are collections of 3D objects placed in the texture. For some of the 3D objects, in particular the breccia 3D objects, it is useful to have a shader that emphasizes the edges. To achieve this, we have implemented contour shading. Contours based on curvature use the second order derivative. In our implementation only the second order derivative is calculated on the GPU. The first order derivative is calculated before the rendering takes place, in the texture building step. This requires the use of a four component texture volume, where the gradients in the x, y and z directions are stored together with the texture alpha volume. With access to the second order derivative, we implement the thickness controlled curvature, as described by Kindlmann in [22].
Deformation
When we want to render our 3D textures with deformation, we use a parametrization volume as a lookup volume for the textures. This lookup process is straightforward, due to the structure of the parametrization volume. At each voxel location, we read the xyz-values, and then fetch the corresponding value from our 3D textures. When needed, a modulus operator is used on the xyz-values to handle tiling of our 3D texture. Unfortunately we are unable to use contour shading in combination with deformation, since the first order derivative is calculated on an undeformed 3D texture, as mentioned above.
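A C++ sketch of this lookup, assuming a nearest-voxel fetch and hypothetical names:

#include <vector>

// Sketch: the parametrization volume supplies (u, v, w) per voxel, which
// is scaled and wrapped with a modulus to index the tiled 3D texture.
float sampleDeformed(const std::vector<float>& texture, int texDim,
                     float u, float v, float w, float scale)
{
    auto wrap = [texDim](float p) {
        int i = static_cast<int>(p) % texDim;  // modulus handles tiling
        return i < 0 ? i + texDim : i;
    };
    int x = wrap(u * scale), y = wrap(v * scale), z = wrap(w * scale);
    return texture[(z * texDim + y) * texDim + x];
}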
Chapter 5
Results
In this chapter we want to present the results obtained with our implementation
described in the previous chapters. Following the way we build our textures, we
want to start with the basics, images of the 3D objects.
From chapter 3.5 we have the 3D objects based on the 2D lithological textures, shown in figure 5.1. The upper row of images shows our 3D objects for modified dolostone, dolostone and irregular limestone. The modified dolostone was added to be able to create a solid representation of dolostone, as used in figure 5.21, since our implementation only uses a cuboid bounding box. In the middle row we find variations over cuboids: silt, limestone and salt. And in the lower row we have the 3D objects for breccia, chert and gravel/sand. The right 6 images are one version of the nested objects, where the nested parts are carved into the objects. All nested objects use limestone as base object.
In addition to these 3D objects, we have created a second class of 3D objects, called test objects, shown in figure 5.2. These test objects are unrelated to the 2D lithological textures, and are used, as the name implies, to test concepts such as deformation and sparseness. Additionally, some of these test objects are used as 3D glyphs in a vector field, an expansion of the random placement method that did not go any further than the two images shown in figure 5.23. The quartic defined by Kindlmann [22] is included because this 3D object was used to test curvature shading.
We also use an empty 3D object, with a size similar to the limestone block, to be able to mark its area in the boolean check texture as used. This helps create space between the 3D objects when needed.
5.1 3D textures
Having shown the building blocks for our 3D textures, we then want to show how the familiar brick texture is defined in our XML format. This XML file is read by our texture synthesis plugin to create the limestone texture, shown in figure 5.3. Since our texture synthesis plugin uses the importer interface in VolumeShop, the texture size has to be defined in the load window, as shown in figure 4.2. This texture is of size 128*128*64, filled with 3D objects (blocks) of size 32*32*16. Compared with the 2D lithological pattern it is based on, our implementation lacks the variation in height found in Limestone 627 (shown in figure 3.6). We can adjust the size of the 3D objects, but since our 3D objects have a fixed width to height ratio, this will not create a better representation.

Figure 5.1: Some of the 3D building blocks, i.e. the 3D objects based on the 2D lithological symbols found in table 3.1. In the right image nested objects are carved into limestone base objects.

Figure 5.2: In this figure we show 3 rows of test objects, together with an image of iso-surfaces of the distance volumes of the test objects and those found in figure 5.1. The first row of objects contains the quartic defined by Kindlmann [22], and two 3D objects with defined directions. The second row of test objects are variations over grids.

Figure 5.3: The limestone 3D texture, as defined by its XML information.
<PATTERN NAME="LIMESTONE">
<INTERLEAVE TYPE="REPEAT"></INTERLEAVE>
<LAYER SIZE="BIG">
<INTEGRATION TYPE="STRUCTURED"></INTEGRATION>
<SHAPE TYPE="BLOCK" SIZE="32" DENSITY="DENSE" ROTATION="BRICKSHIFT">
</SHAPE>
</LAYER>
</PATTERN>
Cutting solid textures
Since the layout of the brick pattern found in limestone is well-known, we want to use this texture as our basis for dealing with cuts into the texture and nested objects. To help us make consistent cuts in the 3D textures, we use a couple of texture masks, as described in chapter 4.2. In figure 5.4 we see such a texture mask used to cut a brick texture. When the texture volume is treated as a solid block (space between objects is represented as black), pseudo-artefacts appear when the texture block is cut at non-optimal locations. This effect is less visible when the texture is rendered as 3D objects in a transparent space. Even when the cutting surface approaches a 45 degree angle to the texture, i.e. where the bricks are cut down to triangles at the top of the right image, we can still recognize the texture as a brick texture. But this is of course helped by the surrounding texture.
Nested limestone texture
When we introduce nested objects to the 3D object representing limestone, it soon becomes more complicated. In our first attempt, shown in figure 5.5, we carve the nested objects out of the limestone object in one direction. This works as long as the cutting surface does not move beyond about a 45 degree angle. The obvious disadvantage of this method is that the texture must be facing the cutting surface. If the texture is rotated 90 degrees, we either end up with a distorted nested part, or the nested part is not visible.
The next step is carving the nested objects from two directions, which is shown in figure 5.6. This allows the limestone blocks to appear as a block with a nested object when viewed from multiple angles. Cutting the texture at a non-ideal location can here introduce a new unwanted type of object, with a long horizontal stripe, as seen in the left image at the fourth, sixth, 10th and 16th row. The two black rows appear because the slice is placed between the limestone blocks. A different approach that deals with nested objects visible from two directions is shown in figure 5.6. Here the nested objects are carved as 5 separate parts, one from each side and the fifth in the center of the object. Once again, this introduces unwanted new patterns, as seen in the fourth and sixth row, and it is possible to cut the limestone object such that no nested object is visible.

Figure 5.4: Brick texture, treated as colors left and as objects right. Rendered with depth shading in the right image.

Figure 5.5: Nested blocks, where the nested part is carved out along depth.

Figure 5.6: Nested blocks, where the nested part is carved along two axes in the left image, and as 5 separate parts in the right image.

Figure 5.7: Sparse limestone pattern. When the pattern is rotated the texture is less recognisable.
Sparse representation of limestone and salt
A solid representation of a 3D texture is an opaque object, and does not help with the occlusion of the space we are texturing. One way to deal with occlusion is to create a more sparse representation of the texture. Unfortunately this ended up being a much more complex task than we had anticipated. Since all our 3D objects are designed as scalable objects, it is trivial to reduce the size of the limestone objects without changing the integration. In figure 5.7 we have done this, and while the left image is still recognizable as a limestone texture, at least when you know what you are looking at, its rotated version is not. This texture might as well have been created with our pseudo-random approach from chapter 4.1.4. As we see it, it is the integration of the 3D objects combined with the 3D objects that forms the texture, not the 3D objects on their own. The recognition of a sparse 3D texture can be further improved by scaling down only the area of interest, as shown in figure 5.8 left, and made less occluding with the use of transparency, shown in figure 5.8 right.
Other textures
Most of our concepts have been tested on the limestone texture or similar cuboid textures. The reason for this is that we believe they are the most recognizable textures, and when problems occur with the cuboid textures, they will most likely occur with other textures as well. But our implementation is still able to create other textures. One of these is the texture for pattern 606 breccia. Another one is the granite texture found in patterns 718 and 719, where the example textures are found in figure A.2. The breccia texture shown in figure 5.9 right is a 256³ texture volume filled with 3 different sizes of breccia 3D objects, placed with our pseudo-random placement method, where all objects are allowed random rotation.
In chapter 3.4 we said we wanted to exclude the non-sedimentary patterns because many of them do not work with our texture model, i.e. a model consisting of a set of 3D objects. Although we do not know what the small line segments found in the 2D patterns represent, we can still create a somewhat resembling 3D texture by filling the texture volume with small pipe objects, which is shown in figure 5.10 left. In the right image of figure 5.10 the pipe volume is rendered as a solid texture, by representing the empty space as white. This image shows one of the limitations of our choice of using an axis aligned bounding box. Since the rotated bounding box increases much in size when used with thin objects such as the rotated pipe objects, we are not able to pack them as tightly as we would prefer, as shown in the example granite patterns 718 and 719.

Figure 5.8: Sparse salt pattern. The center part of the left image is scaled down in our renderer such that the darker blocks behind are less occluded. In the right image all but the front light grey blocks are made transparent.
5.2 Synthetic datasets
Having created a few 3D textures and looked at some of the challenges we face when creating and cutting the textures, we want to use the 3D textures to texture deformed 3D volumes. Since interpreting a seismic 3D volume and creating its 3D parametrization volume is a large and time consuming process, and such seismic 3D volumes are not freely available, we have only one such dataset to work with. Thus we believed that creating a few 3D synthetic parametrization volumes would be an achievable task, which would help us test our 3D textures.
We start with a couple of simple synthetic parametrization volumes. In figure 5.11 left we use a sine wave to create a pseudo-parametrization volume. The only change is that the object positions are shifted vertically along the sine wave. In the right image of figure 5.11 we have used the method described in chapter 4.2 to create a 2D parametrization volume. Parametrization along one axis follows the arc of the curve, and parametrization along the second axis follows the diameter of the circle at each x, y location. Since the curvature is low, and we use the limestone texture on top of our mathematically defined parametrization volume, this works well.
The seismic 3D volume we use later in this chapter has a full 3D parametrization with areas of high curvature. Therefore we decided to try and implement
the full 3D parametrization method as described in [32], which in retrospect was not the best use of our time. We managed to create a 3D volume with a 2D parametrization between two curves, which can be seen in figure 5.12. Our implementation was not without its flaws; it has problems dealing with convex curves, as seen at the top of the figure. We have included two more images that show 3D textures used with this 2D parametrization volume. We then tried to use this method to create a full 3D parametrization volume, but our implementation was unsuccessful. Our 3D parametrization volume has too much noise to be of any use, which can be seen in figure 5.14 right. Maybe some of the noise can be traced to the low resolution (64³) that was used when the volume was created.

Figure 5.9: The pattern 606 Breccia, with example pattern (a) from FGDC [16]. In the upper left image contour shading is used to emphasise the sharp edges of the breccia objects. In the right image we have a 256³ voxel breccia texture. Intensity depth shading and contours are used to render the texture objects.

Figure 5.10: Approximation of the granite patterns 718 and 719 as found in figure A.2. Only the pipe objects are rendered in the left image, and the texture is rendered as a solid texture in the right image. Especially the short edge of the right image has a similar appearance to pattern 719, found in figure A.2. With an improved bounding box implementation, the large white spaces could be avoided.

Figure 5.11: In the left image a sine wave is used as the parametrization volume, where the left and right parts of the wave are shifted vertically. This is shaded with chroma-depth to improve depth perception of the grid test objects. Here some small render errors occur on the border between the parts. In the right image two cylinder parts are used to create the parametrization volume.
Figure 5.12: Parametrization volume created by methods defined in [32], only
2D variation works.
Figure 5.13: The left image shows the noise at the top where the space between the block objects is largest. The right image shows a transparent nested texture, which appears without noise. This is due to the transparent block being uniform over the noisy segment of the parametrization volume.

Figure 5.14: Creation of a 3D parametrization volume. The left image shows the bounding surfaces (red and grey) with debug output of the xy projection lines, the middle image shows the resulting parametrization volume, and the right image shows the noise that appears when a salt cube texture is applied to this parametrization volume.
Figure 5.15: The image shows the parametrization volume with a salt cube texture applied.
5.3 Deformation of 3D textures
Since generating a full 3D synthetic parametrization volume did not work as
planned, we only have one parametrization volume to test 3D deformation of
our 3D textures on. This parametrization volume was created by Patel et al. in
their work Illustrative Rendering of Seismic Data [32].
In figure 5.15 we see a salt cube texture deformed with this parametrization
volume, rendered as a solid texture. And in figure 5.16 we use the parametrization volume to deform a sparse salt cube texture. This image is pseudo-shaded,
i.e. raycasting is used to shade the objects without calculating the gradient.
Since we are using the parametrization volume as a lookup volume for access to
our 3D textures, removing the 6 additional lookups needed for the neighbouring
voxels to calculate the gradient improves our framerate. The pseudo-shaded effect is achieved by using a steep transfer function with a narrow space between
two colors, in this case light grey and dark grey. Since all objects in the scene
are built using similar distance volumes, this pseudo-shading works well for the
testing part.
Deformation of grid and nested limestone
A few pages above we examined cutting of nested 3D objects without ending up
with a good solution. Now that we have a parametrization volume to work with,
we can continue our examination of nested objects. Once again the limestone
texture is used as base texture for the nested objects. Before we show the images
of the deformed nested textures, it can be useful to visualize deformation of
the entire parametrization volume. In figure 5.17 we use a grid 3D texture to
visualize this deformation, rendered with chroma depth. A nice visualization of
the parametrization volume can also be achieved with the use of simple textures
consisting of either small or large cuboids, as seen in figure 5.18. In the following
two images, only about two-thirds of the parametrization volume is used.
In figure 5.19 we test filling a limestone object with nested breccia objects. Although this is not a combination found in the lithological patterns defined by FGDC [16], the results are transferable. Each limestone object contains 32 nested objects. Even when the limestone objects in the nested 3D texture are rather large, the nested breccia objects become unrecognisable in areas with high deformation. Reducing the number of nested objects, thus increasing their size, is the method we try in figure 5.20. This leads to better recognition of the nested objects, but the problem with distortion still remains in difficult areas, like the upper middle of the image.

Figure 5.16: Pseudo-shaded salt cubes deformed with the parametrization volume. The noise in the middle of the image is due to a fault, i.e. a break in the layers of the seismic data.
Multiple 3D textures applied to the parametrization volume

Before we move on to the additional results, we want to show a rotated image similar to the one used for the front page. In figure 5.21 we use four different 3D textures to create the image. The empty space in the sheet 3D texture and the salt 3D texture is created with the help of empty objects.

Although the image looks nice, when we want to display the interior of the parametrization volume, a grid texture as shown in figure 5.17 is a better solution. Combining a grid in the areas of interest with a solid or sparse texture in the areas not of interest is also a solution that could have been tried, but is not possible in our implementation.
5.4 Additional results
Our final collection of results consists of those that do not fit comfortably with the rest of the material. Perhaps some of the results belong to future work, but we still keep them here with the rest. When the 3D textures are designed as a collection of 3D objects, instead of cutting these objects it is possible to build the 3D texture around the cutting surface, as shown in figure 5.22. Here all the small 3D cubes are placed around the cutting cylinder. Given a fast enough implementation of the texture synthesis, this would eliminate the problem with unrecognisable objects at the surface of a cut. Our texture method is not optimized for speed, but creates a 256³ texture filled with 14827 randomly rotated 8³ voxel salt cubes in less than 2 seconds.
Figure 5.17: The image shows 3D grid objects rendered with chroma depth
When we created our pseudo-random placement method described in chapter 4.1.4, we tested letting the rotation of the 3D objects be controlled by a vector field. This can create nice effects, as seen in figure 5.23. In the left image the pyramid 3D objects are placed as random directional 3D glyphs. Since this testing was done with a hardcoded vector field, it is not included as part of our implementation. The right image in figure 5.23 is created in a similar way.

The final result is the use of a noise texture to introduce variation in the surface of the objects, which is shown in figure 5.24. The left texture is built as a brick texture consisting of limestone objects and salt objects, allowing orthogonal rotation of objects. The color is manually selected in the transfer function used with the renderer. The same effect is used in the right image. Since small details do not work well with deformation, we have not tried to use these textures with the parametrization volume.
Figure 5.18: The images show small and large cuboid objects used to visualize the deformation of the parametrization volume. The noise in the middle of the lower image is still due to the objects in the 3D texture covering a fault in the parametrization volume.
Figure 5.19: Nested blocks on the left, distortion almost makes the nested part
unrecognisable in the right image where the nested texture is used on a real
data volume.
Figure 5.20: More of the nested object information is kept when larger nested objects are used, but the distortion problems remain.
(a) Silt (b) Dolostone (c) Limestone (d) Salt

Figure 5.21: Four textures are used here, shale, dolostone, limestone and salt. Open parts are used in the middle of the two sparse textures. Brickshift is not used in the sparse textures because this occludes more of the structure.
Figure 5.22: Building the texture around a cut-out volume avoids cutting of objects.
Figure 5.23: 2D vector field used to place objects on the left and 3D vector field
on the right, created with a variation of the pseudo-random method.
Figure 5.24: Using a secondary noise texture for color independent of shape. The left texture is built as a brick texture with orthogonal rotation of objects. The object on the right is the implicit quartic polynomial used by Kindlmann et al. in [22].
Chapter 6
Summary, Conclusions and
Future Work
6.1 Summary
Illustrative techniques have been around for a long time. In recent years many
of these techniques have been applied to medical visualization of 3D images.
In the seismic domain 2D lithological patterns are well established as a way to
represent interpreted seismic data. With more advanced acquisition techniques,
the seismic datasets are generated as 3D images. Therefore we want to examine the use of specially designed 3D textures based on a set of 2D lithological
patterns as a way of representing interpreted 3D seismic data.
In this thesis we present the steps going from 2D lithological patterns to 3D
textures and the use of such 3D textures on an interpreted seismic dataset. We
define three design goals for our process in section 3.2, that the 3D textures
should be recognizable, deformable and have a sparse representation. Since the
2D lithological patterns are illustrations and not images, our first step is to
analyse the 2D lithological patterns and split a subset of them into two parts,
2D symbols and integration patterns. We then end up with our texture model,
a set of symbols that can be scaled, rotated and nested, placed in a dense or
sparse integration, and possibly separated by different layers. Then we extend
the 2D symbols to 3D representations in section 3.5, preferring the use of basic
geometrical shapes. Nested objects are found in the 2D lithological patterns,
and we then present a few possible ways to represent them.
We have implemented our solution as a set of plugins in the VolumeShop framework. To create the images a few supporting plugins have been implemented as well, but the main part is found in the texture synthesis plugin in section 4.1. The building process of a 3D texture is controlled by first defining the texture in an XML file. All our 3D objects are created as shifted distance functions, allowing the 3D objects in the textures to be scaled in the renderer. Our texture plugin places the 3D objects in the layered 3D texture using one of two placement methods, pseudo-random or structured placement.
The supporting plugins are a render plugin, a plugin that generates a synthetic parametrization volume, and a couple of loader plugins for reading the 3D seismic dataset used for creating the images in section 5.3. In the synthetic parametrization plugin we tried to generate a full 3D parametrization volume with the method described in [32], but we ended up with too much noise in our generated 3D volume. The render plugin uses the standard Phong renderer from VolumeShop. To this render plugin we added contour shading, and control over scaling and transparency of the 3D texture along two axes.
In chapter 5 we present a couple of 3D textures, and show some problems with cutting the textures and placing nested objects. We then show the result of our generation of a 3D parametrization volume. We then use the seismic parametrization volume to examine nested objects again, and visualization of the entire volume with sparse 3D textures compared to a 3D grid. In the end we look at 2 additional possibilities when the 3D texture is created from a collection of 3D objects, and one enhancement of the 3D textures.
6.2 Conclusions
The goal for this thesis was to create solid 3D texture representations based on 2D lithological patterns. Since the 2D patterns are illustrations of the rock types found in the earth, there were two ways to go about this challenge. One way is to look only at the 2D lithological patterns as they are found in the document defined by FGDC [16]. We created one such texture with our implementation, shown in figure 5.10. The other way is how we create our 3D textures, by trying to analyse the patterns and understand what they represent. In retrospect, our method would have benefited from a talk with a geoillustrator.
Our 3D objects may appear as very crude approximations of the 2D objects found in the lithological patterns, but as we see in figures 5.19 and 5.20, when the 3D objects are deformed, fewer small details are better. Our XML controlled texture synthesis process is both too flexible for our needs and not flexible enough. Too much time went into creating the texturing framework compared to our needs for this thesis, and there are still annoying ad-hoc solutions, such as the use of a dedicated empty block, or the use of a fixed size ratio on the 3D objects. But still, we are able to create some nice 3D textures based on the 2D patterns. However, the time spent on creating the plugin used to create full 3D synthetic parametrization volumes could have been used for something better, such as spending more time on using traditional visualisation tools for handling occlusion.
Using solid 3D textures to render the seismic data, as seen in the middle two layers of figure 5.21 and in figure 5.15, does not add sufficient value over the use of 2D textures such as the 2D lithological patterns. Especially the dolostone texture in 5.21 had to be carefully placed to avoid getting vertical artefacts, similar to those seen with the blocks in figure 5.4. And we believe nested objects cause more problems than they are worth. The nested objects in figures 5.19 and 5.20 are almost unrecognisable, since areas with much distortion and small details do not work well together.
On the other hand, when the seismic 3D data is rendered as a sparse volume, such as the upper and lower layers of 5.21 or as the grid in 5.17, this creates a different scenario. We believe that the possibility to display the interior of the volumes in such a way shows promising results. There is however room for improvements, and some of those are discussed in the next section.
As a final thought, we believe that the design goals we defined for our 3D textures in many cases work against each other. It is possible to create recognizable 3D textures, but deformation of them does not always work successfully, and viewing the interior of a deformed volume, which was our goal for creating sparse representations, is better achieved by grids or objects in a grid formation.
6.3 Future work
As mentioned in the conclusions, a talk with a geo-illustrator would be helpful to better understand the reasoning behind some of the more difficult lithological patterns. For this first prototype of the 3D lithological textures, the information at hand was sufficient, but if more of the patterns are to be extended to 3D versions, such a consultation would be necessary.
Better 3D objects and more flexible texture synthesis
Some of the 3D realisations of the lithological symbols would benefit from a better representation, among them the breccia object and the gravel object. As mentioned before, breccia consists of broken angular fragments of rock, but in the current definition of our object, no variation exists apart from size. Varying the number and orientation of the cut planes would help make the breccia objects look more similar to the randomness found in the 2D lithological patterns. The gravel object would also look more realistic if it were represented by something other than a sphere; an ellipsoid shape could be more suitable.
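As an illustration of the kind of variation we have in mind, the C++ sketch below generates a randomized set of cut planes for one breccia fragment. The Vec3 and Plane types and the final clipping step are our own assumptions; the actual object construction would depend on the mesh representation used.

    #include <cstdlib>
    #include <cmath>
    #include <vector>

    // Minimal vector and plane types; a real implementation would use
    // the mesh library's own classes.
    struct Vec3 { float x, y, z; };
    struct Plane { Vec3 point, normal; };

    static float unitRand() { return std::rand() / (float)RAND_MAX; }

    // Generate a randomized set of cut planes for one breccia fragment.
    // Each plane is placed near the surface of a unit cube centred at the
    // origin and oriented towards the centre, so clipping the cube against
    // all planes yields an irregular, angular fragment.
    std::vector<Plane> brecciaCutPlanes(int minCuts, int maxCuts) {
        int cuts = minCuts + std::rand() % (maxCuts - minCuts + 1);
        std::vector<Plane> planes;
        for (int i = 0; i < cuts; ++i) {
            // Random direction (not uniform over the sphere, but adequate
            // for illustration).
            Vec3 n = { unitRand() - 0.5f, unitRand() - 0.5f, unitRand() - 0.5f };
            float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z) + 1e-6f;
            n = { n.x / len, n.y / len, n.z / len };
            // Random distance from the centre controls fragment size.
            float d = 0.3f + 0.2f * unitRand();
            Vec3 p = { n.x * d, n.y * d, n.z * d };
            // The normal points inwards so that clipping keeps the interior.
            planes.push_back({ p, { -n.x, -n.y, -n.z } });
        }
        return planes; // the caller clips a cube mesh against each plane
    }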
It would also be useful to have a more flexible integration framework, where it is possible to define both the objects' dimensions and their bounding boxes. Similarly, the concept of a soft bounding box could be of interest, as mentioned in section 4.1.3, to allow more densely packed textures; a sketch of one possible interpretation follows below. This would require a better implementation of object space testing to keep the texture generation time acceptable. It could also be interesting to support loading of external building blocks, which would allow the use of better modelling tools, such as Autodesk 3D Studio Max or similar software, or some specially designed software.
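One way to realize a soft bounding box is to allow two axis-aligned boxes to interpenetrate by a configurable margin before they count as colliding. The margin semantics below are our assumption, not an existing part of the framework.

    #include <algorithm>

    struct AABB { float min[3], max[3]; };

    // "Soft" overlap test: two boxes only collide if they interpenetrate
    // by more than the given margin along every axis. A margin of zero
    // gives the usual hard AABB test; a positive margin lets objects
    // pack more densely.
    bool softOverlap(const AABB& a, const AABB& b, float margin) {
        for (int axis = 0; axis < 3; ++axis) {
            float penetration = std::min(a.max[axis], b.max[axis]) -
                                std::max(a.min[axis], b.min[axis]);
            if (penetration <= margin)
                return false; // separated, or within the allowed overlap
        }
        return true; // overlap deeper than the margin on all three axes
    }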
Nested objects as images
When dealing with nested objects, we believe that the deformation, and thus the interior of the seismic volume, can be represented by the base objects alone. Since the nested objects typically are smaller details than the base objects, they tend to get more distorted, as mentioned in the conclusions. A better solution might therefore be to synthesize them as images onto the surfaces of the deformed base objects.
Other models than 3D textures
The choice of solid textures as the way to represent the sparse textures is debatable. Perhaps a better solution would be to deform the textures before they are rendered, i.e. to create a large deformed volume equal in size to the seismic volume. Another option is to change to a polygon-based model, as the sparseness of the textures is well suited to the architecture of graphics cards, which are optimized for polygons.
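Baking such a pre-deformed volume could amount to an inverse lookup through the parametrization: for every voxel of the output volume, sample the undeformed solid texture at the texture coordinates stored for that position. The sketch below is a minimal illustration; texCoordAt and sampleSolidTexture are hypothetical stand-ins for the parametrization lookup and the texture sampler.

    #include <vector>

    struct Vec3 { float x, y, z; };

    // Hypothetical stand-in: lookup in the parametrization volume.
    Vec3 texCoordAt(int x, int y, int z) { return { x * 0.1f, y * 0.1f, z * 0.1f }; }
    // Hypothetical stand-in: trilinear sample of the synthesized 3D texture.
    float sampleSolidTexture(const Vec3& uvw) { return uvw.x + uvw.y + uvw.z; }

    // Bake a deformed copy of the solid texture, equal in size to the
    // seismic volume, so the renderer can sample it directly without
    // performing the deformation per fragment.
    std::vector<float> bakeDeformedVolume(int dimX, int dimY, int dimZ) {
        std::vector<float> out((size_t)dimX * dimY * dimZ);
        for (int z = 0; z < dimZ; ++z)
            for (int y = 0; y < dimY; ++y)
                for (int x = 0; x < dimX; ++x) {
                    // Inverse mapping: where in the undeformed texture
                    // does this voxel of the seismic volume sample from?
                    Vec3 uvw = texCoordAt(x, y, z);
                    out[((size_t)z * dimY + y) * dimX + x] = sampleSolidTexture(uvw);
                }
        return out;
    }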
Contour shading
In our implementation, contour shading uses pre-calculated first-order derivatives stored in a four-component volume. We chose this method to increase the framerate. A different implementation of contour shading that does not depend on pre-calculated values could be useful.
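One alternative, sketched below, is to estimate the gradient on the fly with central differences and derive the contour term from the angle between the gradient and the view direction. This is a generic formulation with a hypothetical sampleVolume stand-in, not the actual code of our renderer.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Hypothetical stand-in; the real renderer would fetch from the
    // seismic volume here.
    float sampleVolume(const Vec3& p) { return p.x * p.x + p.y * p.y + p.z * p.z; }

    // Contour shading without a pre-calculated derivative volume:
    // estimate the gradient with central differences at render time.
    // viewDir is assumed to be normalized; h is the sampling step.
    float contourTerm(const Vec3& p, const Vec3& viewDir, float h, float sharpness) {
        Vec3 g = {
            sampleVolume({ p.x + h, p.y, p.z }) - sampleVolume({ p.x - h, p.y, p.z }),
            sampleVolume({ p.x, p.y + h, p.z }) - sampleVolume({ p.x, p.y - h, p.z }),
            sampleVolume({ p.x, p.y, p.z + h }) - sampleVolume({ p.x, p.y, p.z - h })
        };
        float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
        if (len < 1e-6f) return 0.0f; // homogeneous region: no contour
        // Contours appear where the gradient is nearly perpendicular
        // to the viewing direction.
        float nDotV = std::fabs((g.x * viewDir.x + g.y * viewDir.y + g.z * viewDir.z) / len);
        return std::pow(1.0f - nDotV, sharpness);
    }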
Acknowledgements
This work has been partially done within the Geoillustrator research project,
which is funded by Statoil and the PETROMAKS programme of The Research
Council of Norway.
I would like to thank my supervisors Helwig Hauser and Daniel Patel for
their support and guidance through the work on this thesis. I would also like
to thank Stefan Bruckner for allowing the use of the VolumeShop framework.
Appendix
Figure A.1: The lithological patterns defined by FGDC, page 1 [16]
Figure A.2: The lithological patterns defined by FGDC, pages 2 and 3 [16]
Bibliography
[1] World consumption of primary energy by energy type and selected country groups, December 2008. http://www.eia.doe.gov/pub/international/iealf/table18.xls.
[2] M. J. Ackerman. The visible human project. Proceedings of the IEEE, 86(3):504–511, March 1998.
[3] S. Bruckner, S. Grimm, A. Kanitsar, and M. E. Gröller. Illustrative context-preserving exploration of volume data. IEEE Transactions on Visualization and Computer Graphics, 12(6):1559–1569, 2006.
[4] Stefan Bruckner and M. Eduard Gröller. VolumeShop: An interactive system for direct volume illustration. Visualization Conference, IEEE, 0:85, 2005.
[5] Michael Burns and Adam Finkelstein. Adaptive cutaways for comprehensible rendering of polygonal scenes. ACM SIGGRAPH, 2008.
[6] E. Catmull. A subdivision algorithm for computer display of curved surfaces. PhD thesis, Comptr. Sci. Dept., Univ. of Utah, 1974.
[7] Hsing-Ching Chang, Chuan-Kai Yang, Jia-Wei Chiou, and Shih-Hsien Liu.
Chaos and graphics: Synthesizing solid particle textures via a visual hull
algorithm. Comput. Graph., 33(5):648–658, 2009.
[8] B. Csébfalvi, L. Mroz, H. Hauser, A. König, and M. E. Gröller. Fast
visualization of object contours by non-photorealistic volume rendering.
Eurographics, 20(3), 2001.
[9] Barbara Cutler, Julie Dorsey, Leonard McMillan, Matthias Müller, and
Robert Jagnow. A procedural approach to authoring solid models. In
SIGGRAPH ’02: Proceedings of the 29th annual conference on Computer
graphics and interactive techniques, pages 302–311, New York, NY, USA,
2002. ACM.
[10] Doug DeCarlo, Adam Finkelstein, Szymon Rusinkiewicz, and Anthony
Santella. Suggestive contours for conveying shape. ACM Trans. Graph.,
22:848–855, July 2003.
[11] J. Diepstraten, D. Weiskopf, and T. Ertl. Interactive cutaway illustrations.
Computer Graphics Forum (Proceedings of Eurographics 2003), 22(3):523–
532, 2003.
[12] J.-M. Dischler and D. Ghazanfarpour. Interactive image-based modeling of macrostructured textures. IEEE Computer Graphics and Applications, 19(1):66–74, January 1999.
[13] F. Dong and G. Clapworthy. Volumetric texture synthesis for non-photorealistic volume rendering of medical data. The Visual Computer, 21(7):463–473, August 2005.
[14] Y. Dong, S. Lefebvre, X. Tong, and G. Drettakis. Lazy solid texture synthesis. Computer Graphics Forum, 27:1165–1174, 2008.
[15] David S. Ebert, F. Kenton Musgrave, Darwyn Peachey, Ken Perlin, and
Steven Worley. Texturing and Modeling: A Procedural Approach. Morgan
Kaufmann Publishers Inc., San Francisco, CA, USA, 2002.
[16] FGDC. Federal Geographic Data Committee, Digital cartographic standard for geological map symbolization, 2006. http://www.fgdc.gov/standards/projects/FGDC-standards-projects/geosymbol/FGDC-GeolSymFinalDraft.pdf/view.
[17] David J. Heeger and James R. Bergen. Pyramid-based texture analysis/synthesis. SIGGRAPH 95, pages 229–238, 1995.
[18] Wikipedia: .kkrieger. http://en.wikipedia.org/wiki/Kkrieger.
[19] V. Interrante, H. Fuchs, and S. M. Pizer. Conveying the 3d shape of smoothly curving transparent surfaces via texture. IEEE Transactions on Visualization and Computer Graphics, 3(2):98–117, April–June 1997.
[20] Robert Jagnow, Julie Dorsey, and Holly Rushmeier. Stereological techniques for solid textures. ACM Trans. Graph., 23(3):329–335, 2004.
[21] Robert Jagnow, Julie Dorsey, and Holly Rushmeier. Evaluation of methods
for approximating shapes used to synthesize 3d solid textures. ACM Trans.
Appl. Percept., 4(4):1–27, 2008.
[22] Gordon Kindlmann, Ross Whitaker, Tolga Tasdizen, and Torsten Moller.
Curvature-based transfer functions for direct volume rendering: Methods
and applications. In VIS ’03: Proceedings of the 14th IEEE Visualization
2003 (VIS’03), page 67, Washington, DC, USA, 2003. IEEE Computer
Society.
[23] J. Kopf, C.-W. Fu, D. Cohen-Or, O. Deussen, D. Lischinski, and T.-T. Wong. Solid texture synthesis from 2d exemplars. ACM Trans. Graph., 26(3), 2007.
[24] A. Lagae and P. Dutre. A procedural object distribution function. ACM
Trans. Graphics, 24(4), 2005.
[25] L. Lefebvre and P. Poulin. Analysis and synthesis of structural textures. Proceedings of Graphics Interface 2000, pages 77–86, May 2000.
[26] Wilmot Li, Lincoln Ritter, Maneesh Agrawala, Brian Curless, and David
Salesin. Interactive cutaway illustrations of complex 3d models. ACM
Transactions on Graphics, 26(3):31–40, 2007.
[27] Endre M. Lidal, Tor Langeland, Christopher Giertsen, Jens Grimsgaard,
and Rolf Helland. A decade of increased oil recovery in virtual reality.
IEEE Computer Graphics and Applications, 27(6):94–97, Nov./Dec. 2007.
[28] Yanxi Liu, Wen-Chieh Lin, and James Hays. Near-regular texture analysis
and manipulation. In SIGGRAPH ’04: ACM SIGGRAPH 2004 Papers,
pages 368–376, New York, NY, USA, 2004. ACM.
[29] A. Lu and D. S. Ebert. Example-based volume illustrations. In Proceedings
of IEEE Visualization 2005, pages 83–92, 2005.
[30] S. Owada, F. Nielsen, M. Okabe, and T. Igarashi. Volumetric illustration:
designing 3d models with internal textures. In Proceedings of the 2004
SIGGRAPH Conference, pages 322–328, 2004.
[31] D. Patel, S. Bruckner, I. Viola, and E. M. Gröller. Seismic volume visualization for horizon extraction. In Pacific Visualization Symposium (PacificVis), 2010 IEEE, pages 73–80, March 2010.
[32] Daniel Patel, Christopher Giertsen, John Thurmond, and Eduard Gröller.
Illustrative rendering of seismic data. Proceedings of Vision Modeling and
Visualization, pages 19–22, 2007.
[33] Daniel Patel, Øyvind Sture, Helwig Hauser, Christopher Giertsen, and M. Eduard Gröller. Knowledge-assisted visualization of seismic data. Computers & Graphics, 33(5):585–596, 2009.
[34] Darwyn R. Peachey. Solid texturing of complex surfaces. ACM SIGGRAPH, 19(3):279–286, July 1985.
[35] Ken Perlin. An image synthesizer. ACM SIGGRAPH Computer Graphics, 19(3):287–296, 1985.
[36] Nico Pietroni, Paolo Cignoni, Miguel A. Otaduy, and Roberto Scopigno.
Solid-texture synthesis: A survey. IEEE Computer Graphics and Applications, 30(4):74–89, July/August 2010.
[37] Nico Pietroni, Miguel A. Otaduy, Bernd Bickel, Fabio Ganovelli, and Markus Gross. Texturing internal surfaces from a few cross sections. Eurographics 2007, 26(3), 2007.
[38] Xuejie Qin and Yee-Hong Yang. Aura 3d textures. IEEE Transactions on Visualization and Computer Graphics, 13(2):379–389, March/April 2007.
[39] D. M. Rubin. Cross-bedding, bedforms and paleocurrents. Society of Economic Paleontologists and Mineralogists, 1987.
[40] D. M. Rubin and C. Carter. Bedforms 4.0: Matlab code for simulating bedforms and cross-bedding. Technical report, U.S. Geological Survey Open-File Report 2005-1272, 2005.
[41] Kenshi Takayama, Makoto Okabe, Takashi Ijiri, and Takeo Igarashi. Lapped solid textures: Filling a model with anisotropic textures. ACM Transactions on Graphics, 27(3), August 2008.
[42] Lujin Wang and Klaus Mueller. Generating sub-resolution detail in images
and volumes using constrained texture synthesis. In Proceedings of the
conference on Visualization ’04, VIS ’04, pages 75–82, Washington, DC,
USA, 2004. IEEE Computer Society.
[43] Li-Yi Wei, Sylvain Lefebvre, Vivek Kwatra, and Greg Turk. State of the
art in example-based texture synthesis. Eurographics ’09 State of the Art
Reports (STARs), 2009.
[44] D. Weiskopf and T. Ertl. A depth-cueing scheme based on linear transformations in tristimulus space. Technical report, Universität Stuttgart,
Fakultät Informatik, September 2002. TR-2002/08.