Improving Artwork Reproduction Through 3D-Spectral Capture and Computer Graphics Rendering
Project Overview - May 2006
Roy S. Berns, Ph. D.
Munsell Color Science Laboratory
Chester F. Carlson Center for Imaging Science
Rochester Institute of Technology
Research Homepage: http://art-si.org
[email protected]
Sponsored By
THE ANDREW W. MELLON FOUNDATION
Table of Contents
Executive Summary
Description of the Problem
Solving the Problem
Research Methodology
    Phase 1: Construct and Optimize a Practical BRDF System
    Phase 2: 3-D Scanning and Reduction to Practice
    The Virtual Museum
Phase 1: Construct and Optimize a Practical BRDF System
    Stage 1-A: Instrument Development
    Stage 1-B: Data Collection of Objects
    Stage 1-C: Implementing Rendering Algorithms
    Stage 1-D: Model Evaluation – Physics
    Stage 1-E: Model Evaluation – Psychophysics
    Timeline
Anticipated Outcomes
    Scholarly Publications
    Public-Domain Databases
Sustainability
The Munsell Color Science Laboratory
Munsell Color Science Laboratory Personnel
References
Appendix I: Phase 2 – 3-D Scanning and Reduction to Practice
Executive Summary
Cultural heritage is most commonly accessed in two ways: in real life or as a
reproduction in print or display. The latter has limited realism since it reduces the observer’s
interactive experience to one pre-defined by a photographer. That is, the complex interplay
between the lighting, work of art, and observer has been condensed to a single image based on a
photographer’s subjective decisions.
The purpose of this research project is to develop a practical methodology for imaging
cultural heritage that is not limited to a single subjective image. Instead, it will be a
comprehensive record of the object’s optical properties. This requires first measuring the
geometric and spectral properties of the art using an imaging gonio-spectrophotometer, an
instrument where a light source and spectral-based camera are moved independently in three-dimensional space around the object to capture data known as the bidirectional reflectance
distribution function, or BRDF. To render an object realistically, an instrument that can measure
its shape and the 3-D properties of its environment is also required. From these measurements,
mathematical models from the domain of computer graphics are used to render the object for an unlimited
set of viewing experiences. These can be presented interactively using computer-controlled
displays or statically, where different renderings are created for purposes such as documentation,
publication, conservation, and scholarship.
The research will produce a measurement system appropriate for use in a museum-imaging studio that captures sufficient spectral, geometric, and shape information to enable the
realistic rendering of paintings and drawings over typical viewing geometries. The research will
follow a strategy of staged development, beginning with basic research and continuing through instrument design,
visual experimentation, and reduction to practice. Two successive phases are envisioned where
each can be performed independently. The total time for both phases is five years.
Phase 1: Construct and Optimize a Practical BRDF System
Stage 1-A: Instrument Development
Stage 1-B: Data Collection of Objects
Stage 1-C: Implementing Rendering Algorithms
Stage 1-D: Model Evaluation – Physics
Stage 1-E: Model Evaluation – Psychophysics
Phase 2: 3-D Scanning and Reduction to Practice
Stage 2-A: Laser Scanner Acquisition and Incorporation
Stage 2-B: Defining User Needs
Stage 2-C: Data Collection of Museum Lighting Environments
Stage 2-D: Practical Implementation
Stage 2-E: System Verification
To date, funding has been obtained for Phase 1.
Description of the Problem
Imagine you are at The Museum of Modern Art standing in front of van Gogh’s The Starry
Night. You notice that it is behind glass. You move to avoid the glare. To see his build-up of
paint, you walk from side to side. Perhaps you stand next to the wall to see it from a grazing
angle. If this painting were executed on a wooden panel, you could see whether the wood was flat or
warped. This particular gallery has white walls, adding diffuse light to the experience. Clearly,
there is a complex interplay between the lighting, painting, and observer. If the painting is loaned
to a different museum with its unique environment, the observer’s experience will be different.
This is depicted in Figure 1 in the leftmost illustration.
A reproduction of The Starry Night, either as a poster, in a book, or on the museum website,
reduces the viewing experience dramatically. In essence, the museum photographer has
attempted to approximate this complex and interactive experience within a single image. This is
accomplished, to the best extent possible, by understanding the artist and art historical context
and using this knowledge to define lighting geometry. This is a subjective and aesthetic decision.
In fact, our recently completed Mellon Foundation project, Direct Digital Capture of Cultural
Heritage – Benchmarking American Museum Practices and Defining Future Needs [Berns
2005A], described how the ideal museum photographer has 10 to 15 years of experience and
expertise in information technology and art history, and how aesthetics are often deemed more
important than scientific rigor and reproducibility when imaging. These findings clearly support
the notion that reducing the experience of viewing artwork to a single image is both complex and
subjective.
Figure 1. Illustration depicting how an interactive experience (left) is reduced to a single image (right) that may be
printed (center).
Our project, Multi-Channel Visible Spectrum Imaging, Digital Archiving, and
Reproduction, has developed a practical approach to eliminating the need for visual editing in
order to achieve high color accuracy within typical imaging workflows. Berns [2005B] has
summarized the scientific approach and research accomplishments, reprinted in Appendix IV.
The research we now propose has a similar goal: developing a practical approach that eliminates
the need for subjective lighting decisions at the time of capture and for reducing the complex
interaction among the lighting, object, and observer to a single image.
In order to demonstrate the complexity and subjectivity of typical practices, a painting
was produced that varied greatly in surface topography and gloss. The painting was imaged
using two lighting approaches. The first was with directional lighting as practiced at the National
Gallery of Art [Berns 2005C], illuminating from about 60° from the normal on each side of the
painting and doubling the intensity on one side. The second lighting approach was diffuse
illumination, commonly used in scientific imaging when tracking long-term color changes
[Saunders 1993]. A detail of the painting is shown in Figure 2. The directional lighting casts
shadows, providing visual cues to the surface topography. However, the shadows obscure
information about color uniformity. Both images are important for understanding the physical properties of the
painting. If this painting were viewed in a gallery, neither would represent the viewing
experience; instead, the painting would be lit with a flood lamp from above along with some
ambient diffuse illumination. This is seen in Figure 1, where the gallery and studio lighting are
quite different.
Figure 2. Painting test target photographed under directional (left) and diffuse (right) illumination.
Solving the Problem
The solution draws upon computer graphics, an applied area of computer science
resulting in synthetic imagery [Glassner 1995]. This requires a geometric description of objects
in the scene; the location, geometry, and strength of light sources in the scene; a description of
how surfaces in the scene reflect and scatter incident light; and the location and field of view of
the detector or observer. Having this information and defining specific conditions, one can
render the art object realistically using a variety of computer graphics techniques.
Realistic rendering is often limited by a lack of information about the object’s shape and
how incident light is absorbed and scattered at each position on the object. The object’s shape
can be obtained directly using laser and structured-light scanners [Taylor 2002] or close-range
photogrammetry [Miranda-Duarte 2005], or inferred from the object’s shading [Zang 1999]. This
results in a three-dimensional digital model of the object’s shape that can be used along with its
surface properties and the lighting positions to create a rendered image as shown in Figure 3. The
rendered image contains portions illuminated by both diffuse and directional lighting.
Figure 3. Painting rendered into a virtual scene with both directional and diffuse lighting using the three-dimensional software package, Maya.
The absorption and scattering data are known as the bidirectional reflectance distribution
function, BRDF [Nicodemus 1977]. For each point on the object, there are four degrees of
freedom involved in the complete description of the BRDF as the light source and detector can
be moved anywhere on the surrounding hemisphere. This is depicted in Figure 4. Two of the
degrees of freedom correspond to the angles defining the position of the light source, θi and φi;
the other two degrees of freedom correspond to the angles defining the position of the observer
(or camera), θr and φr.
Figure 4. BRDF expressed in terms of viewing and illumination angles (after Nicodemus 1977).
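For reference, in the notation of Nicodemus [1977], and with the wavelength dependence written explicitly since spectral images are captured, the BRDF is

f_r(\theta_i,\phi_i;\theta_r,\phi_r;\lambda) = \frac{dL_r(\theta_r,\phi_r;\lambda)}{dE_i(\theta_i,\phi_i;\lambda)}

where L_r is the reflected radiance and E_i is the incident irradiance.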
Suppose that the light source and detector sample the hemisphere at 145 locations (and
are not co-located). This results in 20,880 combinations. Because we assume that the detector
and light are interchangeable via the Helmholtz reciprocity principle [Magda 2001], we can cut
this number of samples in half. However, since spectral information is desired, this is repeated at
each wavelength, resulting in 375,840 images (10,440 x 36 wavelength samples: 380 – 730 nm
in 10 nm increments). If each image were stored as a 16-bit TIFF file with a resolution of 4k × 5k
pixels (40MB), the BRDF would require over 14 terabytes of data storage. Our current spectral
capture system is able to record the spectral information using as few as two images (versus 36);
this would reduce the data storage requirement to 816 gigabytes. Despite the dimensionality
reduction, the time and data storage required for this approach are excessive for use in a museum
setting.
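The arithmetic behind these figures can be checked in a few lines of Python; this sketch simply reproduces the numbers quoted above and assumes binary prefixes (1 TB = 1024 × 1024 MB), which is how the 14 TB and 816 GB figures work out.

import numpy as np

# Storage estimate for a full spectral BRDF capture.
locations = 145                             # hemisphere samples for the source and for the detector
pairs = locations * (locations - 1)         # source/detector combinations, never co-located: 20,880
pairs //= 2                                 # Helmholtz reciprocity halves the count: 10,440

wavelengths = 36                            # 380-730 nm in 10 nm increments
mb_per_image = 40                           # 16-bit TIFF, 4k x 5k pixels

full_tb = pairs * wavelengths * mb_per_image / 1024**2   # about 14.3 TB for 375,840 images
reduced_gb = pairs * 2 * mb_per_image / 1024             # about 816 GB with two images per geometry
print(pairs * wavelengths, round(full_tb, 1), round(reduced_gb))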
There are several approaches to further reduce dimensionality and thus enable practical
object characterization. One method, polynomial texture mapping (PTM), coarsely samples the
hemisphere, often fixing the detector position, and uses direct interpolation to approximately
reproduce the appearance [Malzbender 2001]. This approach is object specific; the specular
component is not directly modeled and must be handled separately. Although useful for efficient
rendering and compact storage, this method is not directly suitable for re-rendering artwork
within a complex virtual environment.
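For reference, PTM represents the luminance of each pixel (u, v) as a biquadratic function of the projected light direction (l_u, l_v) [Malzbender 2001]:

L(u,v;l_u,l_v) = a_0 l_u^2 + a_1 l_v^2 + a_2 l_u l_v + a_3 l_u + a_4 l_v + a_5

where the six coefficients a_0 through a_5 are fitted per pixel from the coarsely sampled images taken at a fixed viewpoint.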
A more robust approach is to reduce the number of measurements, still spanning four degrees
of freedom, and fit a model that approximates the object’s actual BRDF. Models can be
empirical, without regard to conservation of energy [Phong 1975], or physics based, optically
modeling the absorption and scattering of various types of materials [He 1991]. Depending on
the particular object and rendering geometries (the geometry of lighting, position of the object,
and position of the observer), these techniques vary in accuracy. A portion of the painting shown
in Figures 2 and 3 was rendered using different BRDF models, shown in Figure 5.
Figure 5. Renderings of painting surface using, from left to right, anisotropic, Blinn, Lambertian, and Phong BRDF
algorithms.
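To make the modeling idea concrete, the sketch below (Python, with hypothetical parameters that would in practice be fitted to measured data) evaluates a combination of the Lambertian and Phong forms rendered in Figure 5:

import numpy as np

def phong_shade(n, l, v, k_d, k_s, alpha):
    # Classic empirical model [Phong 1975]: a Lambertian diffuse term plus a specular
    # lobe around the mirror direction. All direction vectors are unit length.
    n_dot_l = max(float(np.dot(n, l)), 0.0)
    r = 2.0 * n_dot_l * n - l                    # mirror reflection of the light direction
    specular = max(float(np.dot(r, v)), 0.0) ** alpha
    return k_d * n_dot_l + k_s * specular        # reflected intensity for a unit-strength source

# Hypothetical parameters for a glossy paint sample; the light is 60 degrees off the normal
# and the camera sits at the mirror angle, so the specular lobe is at its peak.
n = np.array([0.0, 0.0, 1.0])
l = np.array([np.sin(np.radians(60)), 0.0, np.cos(np.radians(60))])
v = np.array([-np.sin(np.radians(60)), 0.0, np.cos(np.radians(60))])
print(phong_shade(n, l, v, k_d=0.4, k_s=0.3, alpha=50.0))    # 0.5

The exponent alpha controls how sharply the highlight falls off; physics-based models such as [He 1991] replace this empirical lobe with terms derived from the physics of rough-surface scattering.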
Despite several projects to image artwork using similar techniques [Tominaga 2001,
Tonsho 2001, Hawkins 2001, Ju 2002, Akao 2004], there is very limited information available
about the BRDF properties of artist materials, paintings, and drawings.
Our approach will include building an apparatus that can measure the BRDF and spectral
properties of paintings, drawings, and their constituent materials. This is an imaging gonio-spectrophotometer, that is, an instrument capable of capturing images of objects as a function of
angle and wavelength. (A goniometer is an instrument that measures angles.) From an analysis of
these materials, evaluating the physical accuracy of various BRDF models, and conducting
visual experiments, a practical system (hardware and software) can be defined for use in a
museum-imaging department that produces an image dataset appropriate for computer graphics
rendering. With the addition of shape information from a separate measurement system, this
would enable a painting or drawing to be rendered for specific conditions, either in an interactive
way on display or statically in print. (Note that although this proposed project is aimed at
paintings and drawings, it is readily extendable to three-dimensional objects such as sculptures.)
Research Methodology
The proposed research will produce measurement systems appropriate for use in a
museum-imaging studio that captures sufficient BRDF and shape information to enable the
realistic rendering of paintings and drawings over typical viewing geometries. The research will
follow a strategy similar to that of our current Mellon Foundation-funded project, proceeding
from basic research through instrument design, visual experimentation, and reduction
to practice. A five-year research program is envisioned in which the research has two successive,
independent phases. The first phase has been structured so that it can stand alone, without Phase 2,
while still providing important new knowledge for interactive virtual museums.
Phase 1: Construct and Optimize a Practical BRDF System
Phase 1 is fundamental in nature. It is focused on measuring and modeling the BRDF
properties of typical artist materials used in paintings and drawings. This research results in
knowledge that can be applied to developing practical and optimized systems for measuring and
characterizing the BRDF properties of these materials. A database of these properties enables
more realistic computer graphic renderings. This is important when developing real-time
interactive virtual museums. This information can also be used for art conservation science in
deriving more accurate models of the optical behavior of these materials, helpful when
developing new conservation materials and techniques. Characterizing BRDF properties can be
used to document and monitor objects over time. It may be possible to use this information as a
type of “fingerprinting.” Phase 1 provides the fundamental underpinnings of three-dimensional
imaging of paintings and drawings. The project is divided into five stages:
Stage 1-A: Instrument Development
Stage 1-B: Data Collection of Objects
Stage 1-C: Implementing Rendering Algorithms
Stage 1-D: Model Evaluation – Physics
Stage 1-E: Model Evaluation – Psychophysics
The unique feature of Phase 1 is the inclusion of visual experimentation. The visual
results enable both measurement reduction and the definition of key geometries specific to the BRDF
characterization of paintings and drawings. The BRDF physical measurement data
and the visual psychophysical image data will be made public, enabling more realistic computer
graphics rendering of cultural heritage and advanced metrics on image difference, useful for
objectively quantifying the quality of digital photography. Details are described in the next
section.
Phase 2: 3-D Scanning and Reduction to Practice
Phase 2 is focused on realistically rendering one or more objects as they appear in a
specific gallery environment. This requires the Phase 1 measurement system and a second
instrument, a laser scanner (or equivalent). The scanner precisely measures the shape of the
object (including surface topography) and can also be used to measure the physical dimensions
of the gallery. This information, combined with knowledge or measurements of the lighting
geometry within the gallery, enables rendering the object as it would appear if viewed in the
physical location. In order to have a measurement and computational system that is appropriate
for museum usage, Phase 2 requires partnership with a museum. This phase also reduces the
fundamental knowledge from Phase 1 to practice. The project is divided into five stages:
Stage 2-A: Laser Scanner Acquisition and Incorporation
Stage 2-B: Defining User Needs
Stage 2-C: Data Collection of Museum Lighting Environments
Stage 2-D: Practical Implementation
Stage 2-E: System Verification
Phase 2 incorporates information about the work of art, its surrounding environment, and
the observer’s experience. Details are described in Appendix I.
The Virtual Museum
The completion of both phases provides a method to create a virtual museum where
objects captured using the imaging gonio-spectrophotometer in a photo studio are rendered to
appear as they would if viewed in the galleries of a real museum. If only the first phase is
undertaken, it is still possible to create the virtual museum; however, the correlation with the real
viewing experience will be reduced. This proposal seeks funding for Phase 1. If Phase 2 were not
implemented, the reduced realism would result from the need to estimate or assume rather than
measure the object’s shape and the gallery’s layout and lighting geometry. The work of art can
be rendered with all these assumptions as shown in Figures 3 and 5, though in these cases, the
painting’s BRDF properties are also assumed as the imaging gonio-spectrophotometer has not
yet been built.
Phase 1: Construct and Optimize a Practical BRDF System
Stage 1-A: Instrument Development
An imaging system will be built that can measure the absorption and scattering properties
of paintings and drawings in which the lighting position has two degrees of freedom, the object
position has four degrees of freedom, and the detector position has one degree of freedom. The
spectral-based camera developed in the current research program will be used as the detector.
This will be an imaging gonio-spectrophotometer. That is, spectral images will be recorded as a
function of angle. The guiding design principle is applicability in a museum-imaging department
where space is at a premium. That is, the system will be as compact as possible and enable large
works of art to be measured by translating the art and stitching together tiled images.
A preliminary drawing of the proposed imaging system is shown in Figure 6. The
artwork is attached to the movable stage that is capable of rotation in its own plane, vertical and
horizontal translation, and translation towards and away from the camera. These movements
enable the object surface to be aligned at the central apex. The lighting arm moves about this
apex in two directions while the camera moves in one direction. Object rotation and movements
by the lighting and camera arms result in the four degrees of freedom that are required to
measure an object’s BRDF. Also depicted in Figure 6 is a sequence of images with the camera
fixed and the light source sweeping about the object.
Figure 6. Imaging gonio-spectrophotometer instrument used to capture BRDF of artwork (left) and sample image
sequence (right) for one sweep of the light source at a fixed camera position.
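As a simple illustration of how one such sweep samples the BRDF, the sketch below enumerates in-plane geometries for a single pass of the light arm at a fixed camera position; the 10-degree step and angular limits are hypothetical and do not represent the instrument’s actual sampling plan.

# Hypothetical enumeration of one light-arm sweep at a fixed camera position (plane of incidence).
camera_theta = 0.0                                 # camera on the surface normal, for illustration only
sweep = []
for light_theta in range(-80, 81, 10):             # light arm swept from -80 to +80 degrees off the normal
    if light_theta == camera_theta:
        continue                                   # skip the co-located source/detector geometry
    sweep.append((float(light_theta), 0.0, camera_theta, 0.0))   # (theta_i, phi_i, theta_r, phi_r)

print(len(sweep), "capture geometries in this sweep")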
Stage 1-B: Data Collection of Objects
A representative set of samples will be created that exemplify the range of BRDF
properties most common to paintings and drawings. The set will be a combination of acquired
materials including works of art and materials constructed specifically for this research, likely in
the form of colored test targets. These materials will be measured using the imaging gonio-spectrophotometer. Some of these samples are listed in Table I. For paintings, these
combinations span highly varnished Old Masters, unvarnished post-impressionism with
significant impasto, and modern art having a wide range of gloss and impasto. Drawing materials
span the full range of surface characteristics and gloss, and thus represent the historical range of
paper substrates from illuminated manuscripts to photographs.
Table I. Representative materials to be measured using the imaging gonio-spectrophotometer.
Material                       Varnish   Gloss    Impasto (paint) or surface structure (paper)
Acrylic emulsion test target   No        Low      No
                               No        Low      Yes
                               Yes       Low      No
                               Yes       Medium   No
                               Yes       High     No
                               Yes       Low      Yes
                               Yes       Medium   Yes
                               Yes       High     Yes
Oil paintings                  No        Low      No
                               No        Low      Yes
                               Yes       Low      No
                               Yes       Medium   No
                               Yes       High     No
                               Yes       Low      Yes
                               Yes       Medium   Yes
                               Yes       High     Yes
Art paper test target          N/A       Low      No
                               N/A       Medium   No
                               N/A       High     No
                               N/A       Low      Yes
                               N/A       Medium   Yes
                               N/A       High     Yes
Drawings                       N/A       Low      No
                               N/A       Medium   No
                               N/A       High     No
                               N/A       Low      Yes
                               N/A       Medium   Yes
                               N/A       High     Yes
Stage 1-C: Implementing Rendering Algorithms
We will evaluate appropriate algorithms that estimate BRDF from limited measurements.
The goal is to test how well each approximates the physical properties of the sample paintings and
drawings as measured in Stage 1-B. We will also compile algorithms that render scenes. The
common algorithms, as collected in the Oregon BRDF Library [Westlund 2002], are listed in
Table II.
Table II. BRDF Algorithms to fit and test for accuracy in reproducing physical sample properties.

Model                                            Reference
Lambertian Diffuse Model                         [Lambert 1760]
Minnaert Limb Darkening Model                    [Minnaert 1941]
Blinn Cloud and Dusty Surface Model              [Blinn 1982]
Cook-Torrance Specular Microfacet BRDF           [Cook 1981]
Oren-Nayar Diffuse Microfacet BRDF               [Oren 1994]
He-Torrance Comprehensive Analytic Model         [He 1991]
Ward Anisotropic Model                           [Ward 1992]
Lafortune Generalized Cosine Lobe Model          [Lafortune 1997]
Beard-Maxwell Bidirectional Reflectance Model    [Maxwell 1973]
Phong Model                                      [Phong 1975]
Poulin-Fournier Anisotropic Model                [Poulin 1990]
iBRDF                                            [Westlund 2002]
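As a sketch of what fitting any of these models involves, the example below uses nonlinear least squares to recover the parameters of the simple Lambertian-plus-Phong form shown earlier from a short list of hypothetical in-plane measurements; in practice every pixel and wavelength measured in Stage 1-B would supply such data, and each model in Table II would be fitted in the same way.

import numpy as np
from scipy.optimize import least_squares

# Hypothetical in-plane measurements: (theta_i, theta_r) in degrees and the measured value.
geometries = np.radians([[60.0, 60.0], [60.0, 30.0], [45.0, 45.0], [30.0, 60.0], [20.0, 20.0]])
measured = np.array([0.50, 0.22, 0.58, 0.36, 0.68])

def model(params, geom):
    # Lambert-plus-Phong form restricted to the plane of incidence; the specular lobe
    # peaks when the viewing angle equals the mirror angle (theta_r equal to theta_i).
    k_d, k_s, alpha = params
    theta_i, theta_r = geom[:, 0], geom[:, 1]
    return k_d * np.cos(theta_i) + k_s * np.cos(theta_i - theta_r) ** alpha

fit = least_squares(lambda p: model(p, geometries) - measured,
                    x0=[0.3, 0.2, 10.0], bounds=([0, 0, 1], [1, 1, 500]))
print("fitted k_d, k_s, alpha:", np.round(fit.x, 2))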
Stage 1-D: Model Evaluation – Physics
Each model that estimates BRDF will be compared with the measured BRDF. This
comparison will be physics based. Furthermore, an analysis will be performed to minimize the
number of measurements and identify optimal geometries that maximize estimation accuracy for
these materials. This quantitative evaluation method is summarized in the Figure 7 flowchart.
Figure 7. Flowchart of physical evaluation of BRDF.
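A minimal example of such a physics-based comparison is a root-mean-square difference computed over all measured geometries and wavelengths; the arrays below are placeholders with the dimensions discussed earlier (10,440 geometries by 36 wavelengths), and other metrics will also be examined.

import numpy as np

def brdf_rms_error(measured, modeled):
    # RMS difference between measured and model-estimated values, indexed by (geometry, wavelength).
    return float(np.sqrt(np.mean((modeled - measured) ** 2)))

rng = np.random.default_rng(0)                           # placeholder data, not real measurements
measured = rng.uniform(0.0, 1.0, size=(10440, 36))
modeled = measured + rng.normal(0.0, 0.02, size=measured.shape)
print(brdf_rms_error(measured, modeled))                 # about 0.02 for this synthetic example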
Stage 1-E: Model Evaluation – Psychophysics
Categorized within the field of bio-statistics, psychophysics involves measuring cognitive
processes in order to create scales [Engeldrum 2000]. In our case, visual experiments are
performed. These experiments are aimed at determining which physics-based metrics are
appropriate when judging BRDF accuracy and at enabling dimensionality reduction that minimizes
visual artifacts. An analogy is image file compression and the term “visually lossless
compression.” In this case, image information is reduced in a manner that minimizes visually
observed artifacts. Psychophysics was used to determine compression metrics and develop
algorithms.
A number of visual experiments will be carried out. As described in Stage 1-C, BRDF
estimates will be compared with measured BRDF on a physical basis. We need to determine whether
these differences are observable, as well as the conditions under which these estimates break down. A
flowchart is shown in Figure 8. Having an object’s measured BRDF (Stage 1-B) and a
representative synthetic gallery, an image can be rendered for a specific set of geometries. Using
these same conditions, images can be rendered using various estimated BRDFs. Paired-comparison
experiments can be performed on a colorimetrically calibrated display to determine which estimate
is the closest visual match to the rendering based on the measured BRDF.
Figure 8. Flowchart of psychophysical evaluation of BRDF.
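For context, paired-comparison judgments of this kind are commonly converted to an interval scale using Thurstone’s law of comparative judgment, one of the scaling tools treated by Engeldrum [2000]; the sketch below applies Case V scaling to an entirely hypothetical choice matrix for three candidate renderings.

import numpy as np
from scipy.stats import norm

# Hypothetical choice matrix: counts[i, j] = observers who judged rendering i a closer
# match to the measured-BRDF rendering than rendering j.
counts = np.array([[ 0., 18., 25.],
                   [12.,  0., 21.],
                   [ 5.,  9.,  0.]])

totals = counts + counts.T                               # judgments collected for each pair
p = np.divide(counts, totals, out=np.full_like(counts, 0.5), where=totals > 0)
p = np.clip(p, 0.01, 0.99)                               # avoid infinite z-scores for unanimous pairs

z = norm.ppf(p)                                          # proportions to standard normal deviates
scale = z.mean(axis=1)                                   # Case V interval-scale value per rendering
print(np.round(scale - scale.min(), 2))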
Timeline
Phase 1 is a three-year program. An approximate timeline is shown in Table III.
Table III. Phase 1 timeline (approximate durations within the three-year program).

Phase 1 Activity                             Duration
A: Instrument Development                    1 year
B: Data Collection of Objects                6 months
C: Rendering Algorithm Implementation        2 years (post-doctoral fellow)
D: Model Evaluation – Physics                6 months
E: Model Evaluation – Psychophysics          6 months
Publish Final Results and Databases          2 months
Anticipated Outcomes
Scholarly Publications
This research supports three M.S.-level research theses, one Ph.D. dissertation, and a
two-year post-doctoral fellowship. These will result in scholarly publications including
conference proceedings, refereed journal articles, and MCSL technical reports.
Public-Domain Databases
A public-domain database of the BRDF measurements as a function of wavelength of all
materials measured will be created. Images as a function of geometry will be available for
representative materials (please see Figures 4 and 6). All of the image files will be archived and
made available by request. For each representative material (Table I), the optimized parameters
for each model (Table II) will be available. The database will facilitate more accurate computer-graphics rendering of virtual museums and their collections.
The psychophysical stage of the research program will result in image pairs that have
different match quality, that is, interval scales of accuracy and preference. The experimental
images, the raw visual data, and the statistically derived interval scales will comprise a second
public-domain database. These data enable image-quality model development and testing. One
application of these models is defining digital photography quality metrics that predict subjective
evaluation.
Sustainability
The public-domain databases enable the project results to be implemented by scientists
working in synthetic imagery and computer graphics, an active area of research. Because of our
close relationship with the museum community, we anticipate system duplication. In time, and
following the completion of Phase 2, we anticipate a shift in how cultural heritage is digitally
archived and accessed.
This research program revolves around several academic programs: the M.S. and the under-development Ph.D. in Color Science, and the M.S. and Ph.D. in Imaging Science. These research
techniques and instrumentation will become part of the graduate curriculum at RIT. (For
example, spectral estimation from multi-channel images is taught in several graduate courses in
the Center for Imaging Science.)
The Munsell Color Science Laboratory
The majority of the research will be carried out at RIT’s Munsell Color Science
Laboratory (MCSL), established in 1983 after the dissolution of the Munsell Color Foundation,
Inc. The aims and purposes of the Munsell Foundation as stated in its bylaws were “... to further
the scientific and practical advancement of color knowledge and, in particular, knowledge
relating to standardization, nomenclature and specification of color, and to promote the practical
application of these results to color problems arising in science, art, and industry.”
The following objectives guide the activities of the Munsell Color Science Laboratory:
1) To provide undergraduate and graduate education in color science,
2) To carry on applied and fundamental research,
3) To facilitate spectral, colorimetric, photometric, spatial, and geometric measurements at the state of the art, and
4) To sustain an essential ingredient for the success of the first three, namely, liaison with industry, academia, and government.
MCSL consists of five faculty members within our Center for Imaging Science, two of
whom hold endowed professorships, two staff, one visiting industrial scientist, and 16 full- and
part-time students. The estimated value of the instrumentation, computers, materials, and
literature in the laboratory is in excess of $2 million. The annual budget, excluding faculty
salaries, is between $1.25 and $1.5 million. Income originates from foundation, corporate, and
government grants, gifts in kind, cash gifts, visiting scientists, and our annual industrial summer
school.
The missions of MCSL and the Scholarly Communications Program of the Mellon
Foundation are perfectly aligned. Both aim at the development of scholarly resources through the
application of technology. Details about our academic and research programs can be found on
our web page: www.mcsl.rit.edu
Munsell Color Science Laboratory Personnel
All members of MCSL participate in research. (The MCSL website has details about all
the faculty and staff.)
The project director is Roy S. Berns, the Richard S. Hunter Professor in Color Science,
Appearance, and Technology and Graduate Coordinator of the Color Science degree programs
within the Center for Imaging Science at Rochester Institute of Technology. He received B.S.
and M.S. degrees in textile science from the University of California at Davis and a Ph.D. degree
in chemistry with an emphasis in color science from Rensselaer Polytechnic Institute. His
research includes spectral-based imaging, archiving, and reproduction of cultural heritage;
algorithm development for multi-ink printing; the use of color and imaging sciences for art
conservation science; and colorimetry. He is active in the International Commission on
Illumination, the Council for Optical Radiation Measurements, the Inter-Society Color Council,
and the Society for Imaging Science and Technology. He has authored over 150 publications
including the third edition of Billmeyer and Saltzman's Principles of Color Technology. During
the 1999-2000 academic year, he was on sabbatical at the National Gallery of Art, Washington,
DC as a Senior Fellow in Conservation Science. During 2000, Dr. Berns was invited to
participate in the Technical Advisory Group of the Star-Spangled Banner Preservation Project.
During 2005, Dr. Berns joined the Executive Committee of the International Association of
Colour (AIC). Details about Dr. Berns can be found in his CV, shown in Appendix III.
One MCSL color scientist in particular, Mr. Lawrence Taplin, will focus on this research
program. Lawrence Taplin received his B.S. degree in computer science from the University of
Delaware in 1996. He received an M.S. degree in Color Science at RIT and in 2001, joined the
MCSL research staff. He has been very involved in all of our Mellon Foundation-sponsored
research programs, having extensive conventional and digital photography experience. Mr.
Taplin developed a practical implementation of the multi-filter spectral imaging system invented
at MCSL. He is the principal software architect of this system.
Professor Mark Fairchild, Director of MCSL, and Dr. Garrett Johnson, Visiting Research
Assistant Professor, have expertise in spectral-based computer graphics rendering, both having
spent a one-year residency at the Cornell University Program of Computer Graphics. They will
provide guidance to ensure research efficiency. Dr. Fairchild also has significant expertise in
psychophysics. He will contribute to the experimental design of our visual experiments.
In order to improve our research efficiency and further develop ties with the computer
graphics community, we will hire a post-doctoral fellow with expertise in computer graphics
rendering and physical modeling of the BRDF properties of common materials.
References
[Akao 2004]
Akao Y, Tsumura N, Herzog PG, Miyake Y, Hill B. Gonio-Spectral Imaging of Paper and Cloth
Samples Under Oblique Illumination Conditions Based on Image Fusion Techniques. Journal of
Imaging Science and Technology 2004;48(3):227-234.
[Berns 2005A]
Berns RS, Frey FS, Rosen MR, Smoyer EP, Taplin LA. Direct Digital Image Capture of Cultural
Heritage – Benchmarking American Museum Practices and Defining Future Needs: Final Project
Report. Rochester, NY: RIT; 2005. 78 p.
[Berns 2005B]
Berns RS. Color-Accurate Image Archives Using Spectral Imaging. (Sackler NAS Colloquium)
Scientific Examination of Art: Modern Techniques in Conservation and Analysis (2005): The
National Academy of Sciences; 2005. p 105-119.
[Berns 2005C]
Berns RS, Taplin LA. Evaluation of a Modified Sinar 54M Digital Camera at the National
Gallery of Art, Washington DC during April, 2005. MCSL Technical Report: RIT; 2005.
[Blinn 1982]
Blinn JF. Light Reflection Functions For Simulation of Clouds and Dusty Surfaces. Computer
Graphics 1982;16(3):21-29.
[Cook 1981]
Cook RL, Torrance KE. A reflectance model for computer graphics. SIGGRAPH '81:
Proceedings of the 8th annual conference on Computer graphics and interactive techniques.
Dallas, Texas, United States: ACM Press; 1981. p 307-316.
[Engeldrum 2000]
Engeldrum PG. Psychometric scaling: a toolkit for imaging systems development. Winchester,
Mass.: Imcotek Press; 2000. xv, 185 p.
[Glassner 1995]
Glassner AS. Principles of digital image synthesis. San Francisco: Morgan Kaufmann
Publishers; 1995. 2 v.
[Hawkins 2001]
Hawkins T, Cohen J, Debevec P. A Photometric Approach to Digitizing Cultural Artifacts. 2nd
International Symposium on Virtual Reality, Archaeology, and Cultural Heritage. Glyfada,
Greece; 2001.
[He 1991]
He XD, Torrance KE, Sillion FX, Greenberg DP. A Comprehensive Physical Model for Light
Reflection. SIGGRAPH '91: Computer Graphics. Volume 25. Las Vegas: ACM; 1991. p 175-186.
[Ju 2002]
Ju DY, Yoo J-H, Seo KC, Sharp G, Lee SW. Image-Based Illumination for Electronic Display of
Artistic Paintings. Ann Arbor, MI: University of Michigan; 2002. Report nr CSE-TR-466-02. 7p.
[Lambert 1760]
Lambert JH. Photometria sive de mensura et gradibus luminis, colorum et umbrae; 1760.
[Magda 2001]
Magda S, Kriegman DJ, Zickler T, Belhumeur PN. Beyond Lambert: reconstructing surfaces
with arbitrary BRDFs. ICCV 2001: Eighth IEEE International Conference on Computer Vision.
Volume 2. Vancouver, BC: IEEE; 2001. p 391-398.
[Malzbender 2001]
Malzbender T, Gelb D, Wolters H. Polynomial Texture Maps. SIGGRAPH'01; 2001.
[Maxwell 1973]
Maxwell JR, Beard J, Weiner S, Ladd D. Bidirectional reflectance model validation and
utilization: Environmental Research Institute of Michigan (ERIM); October 1973.
[Minnaert 1941]
Minnaert M. The reciprocity principle in lunar photometry. Astrophysical Journal 1941;93:403-410.
[Miranda-Duarte 2005]
Miranda Duarte AA, von Altrock P. The Close Range Photogrammetry In The Documentation
Of The Rocks Art. Study Of Case Archaeological Site Santinho Norte I – SC/ Brazil. CIPA 2005
XX International Symposium. Torino, Italy; 2005.
[Nicodemus 1977]
Nicodemus FE, Richmond JC, Hsia JJ, Ginsberg IW, Limperis T. Geometrical Considerations and
Nomenclature for Reflectance: U.S. Department of Commerce, National Bureau of Standards;
October 1977.
[Oren 1994]
Oren M, Nayar SK. Generalization of Lambert's reflectance model. SIGGRAPH '94:
Proceedings of the 21st annual conference on Computer graphics and interactive techniques.
New York, NY, USA: ACM Press; 1994. p 239-246.
[Phong 1975]
Phong BT. Illumination for Computer Generated Pictures. Communications of the ACM
1975;18(6):311-317.
[Poulin 1990]
Poulin P, Fournier A. A model for anisotropic reflection. SIGGRAPH '90: Proceedings of the
17th annual conference on Computer graphics and interactive techniques. Dallas, TX, USA:
ACM Press; 1990. p 273-282.
[Saunders 1993]
Saunders D, Cupitt J. Image Processing at the National Gallery: The VASARI Project. National
Gallery Technical Bulletin 1993;14:72-85.
[Taylor 2002]
Taylor J, Beraldin JA, et al. NRC 3D Imaging Technology for Museums & Heritage. The First
International Workshop on 3D Virtual Heritage. Geneva, Switzerland; 2002. p 70-75.
[Tchou 2001]
Tchou C, Hawkins T, Cohen J, Debevec P. HDR-Shop. 1.0.3: University of Southern California;
2001.
[Tominaga 2001]
Tominaga S, Matsumoto T, et al. 3D Recording and Rendering of Art Paintings. Proc. Ninth
Color Imaging Conference. Scottsdale, AZ: IS&T; 2001. p 337-341.
[Tonsho 2001]
Tonsho K, Akao Y, Tsumura N, Miyake Y. Development of goniophotometric imaging system
for recording reflectance spectra of 3D objects. Proc. SPIE Color Imaging: Device-Independent
Color, Color Hardcopy, and Applications VII. Volume 4663. San Jose, CA, USA: SPIE; 2001. p
370-378.
[Ward 1992]
Ward GJ. Measuring and modeling anisotropic reflection. SIGGRAPH '92: Proceedings of the
19th annual conference on Computer graphics and interactive techniques. New York, NY, USA:
ACM Press; 1992. p 265-272.
Appendix I: Phase 2 – 3-D Scanning and Reduction to Practice
Stage 2-A: Laser Scanner Acquisition and Incorporation
A laser scanning system will be obtained that can measure three-dimensional shape at
a spatial resolution appropriate for paintings and drawings. This information will be added to
the BRDF data.
Stage 2-B: Defining User Needs
We will interact with curators, conservators, art historians, and connoisseurs to
understand how they look at paintings and drawings. Through these interactions, we will learn
what information they seek about an object’s appearance with respect to geometry.
Stage 2-C: Data Collection of Museum Lighting Environments
We will visit various museums and determine typical lighting practices and
environments. Images of a spherical mirror, captured with our spectral camera system, will be
used to create light probes using the software package HDR-Shop [Tchou 2001]. Virtual models
of the galleries will also be created using 3-D laser scanning or image-based modeling [RealViz
2005].
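For illustration, the geometric step behind a mirror-ball light probe is sketched below; this covers only the mapping from image position to incident-light direction (assuming an orthographic view of a perfectly specular sphere) and not the high-dynamic-range assembly performed in HDR-Shop.

import numpy as np

def mirror_ball_direction(u, v):
    # Map normalized mirror-ball image coordinates (u, v on the unit disk) to the world
    # direction of the light reflected at that pixel, for an orthographic camera on +z.
    r2 = u * u + v * v
    if r2 > 1.0:
        return None                                  # pixel lies outside the sphere's silhouette
    normal = np.array([u, v, np.sqrt(1.0 - r2)])     # sphere surface normal at this pixel
    view = np.array([0.0, 0.0, 1.0])                 # direction from the sphere toward the camera
    return 2.0 * np.dot(normal, view) * normal - view    # incident light direction (mirror reflection)

# The center of the ball reflects light arriving from the camera's own direction,
# while pixels near the rim sample light from behind the sphere.
print(mirror_ball_direction(0.0, 0.0))               # [0. 0. 1.]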
Stage 2-D: Practical Implementation
Based on the knowledge gained from the previous stages and practical limitations such as
system complexity, cost, training, and maintenance, we will develop a system that is capable of
estimating spectral, BRDF, and 3-D shape data of paintings and drawings. The system will have
reasonable accuracy within the constraints of typical viewing environments.
Stage 2-E: System Verification
Working with a museum, a small collection of either paintings or drawings will be
defined. These will be imaged using the practical system, and a database will be created along with
viewing software. This will form a website that documents the research program and demonstrates
the system’s capabilities. Sufficient detail will be provided to enable
the system to be duplicated in a museum, library, or archive.