Official Program Book
The Program Book, Proceedings, Website, Online Paper Submission and Review, and Online Registration are services/products of Techno-Info Comprehensive Solutions (TICS).
http://techno-info.com
M&C+SNA+MC 2015
April 19-23, 2015
Foreword
Dear Colleagues,
The Oak Ridge/Knoxville Section of the American Nuclear Society (ANS) welcomes you to Nashville, Tennessee, the home
of country music, for the first combined Mathematics and Computations (M&C), Supercomputing in Nuclear Applications
(SNA) and Monte Carlo (MC) international conference (M&C+SNA+MC 2015).
M&C is the latest in the series of topical meetings organized by the Mathematics and Computation Division of the American
Nuclear Society. Prior to 2010, SNA and MC existed as separate conferences. In 2010, SNA and MC combined and held
SNA+MC 2010 in Tokyo, Japan. This was followed by SNA+MC 2013 held in Paris, France.
We certainly appreciate the cooperation of the organizing committees of both SNA and MC. They have provided a strong
contribution from the international community to make the Nashville conference memorable. In particular, we recognize the
support of the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA), and the
Atomic Energy Society of Japan (AESJ).
Two other ANS entities also provided strong support for the conference – the Fusion Energy Division and the Young Members Group.
Our funding sponsors include the Department of Energy (DOE) National Nuclear Security Administration (NNSA), the Oak
Ridge National Laboratory (ORNL) Radiation Safety Information Computational Center (RSICC), Varian Medical Systems,
the Institute of Nuclear Energy Safety Technology, CAS·FDS Team, China, and Kirk Nuclear Information Services.
M&C+SNA+MC 2015 features several modeling and simulation (M&S) special sessions in addition to the regular sessions on
Monte Carlo Methods, Deterministic Methods, Reactor Physics, and Validation and Verification. M&S for Fusion Energy Systems, Nonproliferation, and Nuclear Safeguards are key features of the agenda.
Two poster sessions will be held on Monday and Tuesday nights. One of these sessions will present general papers
from the M&C technical program, and the other will offer a survey of recent developments and the newest capabilities in a
broad variety of Monte Carlo codes.
We will be offering a variety of workshops (13 in all) covering mathematical methods in several applications and
computer codes. Participating computer codes include SCALE 6.2, Attila4MC, ADVANTG, NESTLE 3D, PyNE, MCNP6,
and GEANT4; there will also be a workshop on medical physics applications.
Our registration numbers indicate a strong international presence of students, professors, national laboratories, and industry.
We are especially grateful to the many individuals on the Organizing Committee who have dedicated hours of labor to
this daunting task.
And thank you to all the participants.
General Chair: Bernie Kirk (Kirk Nuclear Information Services)
Assistant General Chair: Lawrence Heilbronn (University of Tennessee)
Technical Co-chairs: Bob Grove and Kevin Clarno (Oak Ridge National Laboratory)
Assistant Technical Chair: Chris Perfetti (Oak Ridge National Laboratory)
Acknowledgements
The Organizing Committee of M&C+SNA+MC 2015 recognizes the following sponsors and appreciates their funding
support:
Platinum – Department of Energy (DOE) National Nuclear Security Administration (NNSA)
Gold – Oak Ridge National Laboratory (ORNL) Radiation Safety Information Computational Center (RSICC)
Silver – Varian, and Institute of Nuclear Energy Safety Technology, CAS·FDS Team, China
Supporting – Kirk Nuclear Information Services
In addition, we recognize the following co-sponsors:
Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA)
Atomic Energy Society of Japan (AESJ)
ANS Oak Ridge/Knoxville Local Section
ANS Mathematics and Computation Division
ANS Fusion Energy Division
ANS Young Members Group
Organizing Committee
General Chair: Bernadette Kirk, Kirk Nuclear Information Services
Assistant General Chair: Lawrence Heilbronn, University of Tennessee
Honorary Chair: Enrico Sartori, Retiree, OECD Nuclear Energy Agency
Technical Co-Chairs: Bob Grove, ORNL; Kevin Clarno, ORNL
Assistant Technical Chair: Chris Perfetti, ORNL
Workshops: Charles Daily, ORNL
Arrangements: Peggy Emmett, Retiree, ORNL
Session Evaluation & Feedback: Irina Popova, ORNL
Technical Tours: Mark Baird, RSICC, ORNL
Publicity: Shaheen Dewji, ORNL
Treasurer: Trent Primm, Primm Consulting, LLC
Registration: Charles Daily, ORNL; Hanna Shapira, TICS
Hospitality: Anne Primm
Publications: Ahmad Ibrahim, ORNL
Student Representative: Cole Gentry, University of Tennessee
Web: Hanna Shapira, TICS
Corporate Sponsorship: Chris Robinson, Y-12; Charles Daily, ORNL
International Liaison: Imre Pazsit, Chalmers University
Technical Program Committee
Marvin Adams, Texas A&M
Cory Ahrens, Colorado School of Mines
Dmitriy Anistratov, North Carolina State University
Brian Aviles, Knolls Atomic Power Laboratory
Maria Avramova, Pennsylvania State University
Yousry Azmy, North Carolina State University
Teresa Bailey, Lawrence Livermore National Laboratory
Randy Baker, Los Alamos National Laboratory
Ricardo Barros, Universidade do Estado do Rio de Janeiro
Troy Becker, Knolls Atomic Power Laboratory
Keith Bledsoe, Oak Ridge National Laboratory
Patrick Brantley, Lawrence Livermore National Laboratory
Forrest Brown, Los Alamos National Laboratory
Thomas Brunner, Lawrence Livermore National Laboratory
Jae Chang, Los Alamos National Laboratory
Kevin Clarno, Oak Ridge National Laboratory
Matt Cleveland, Los Alamos National Laboratory
Ben Collins, Oak Ridge National Laboratory
Stephen Croft, Oak Ridge National Laboratory
Charles Daily, Oak Ridge National Laboratory
Greg Davidson, Oak Ridge National Laboratory
Mark Dehart, Idaho National Laboratory
Jeffrey Densmore, Bettis Atomic Power Laboratory
Shaheen Dewji, Oak Ridge National Laboratory
Tim Donovan, Knolls Atomic Power Laboratory
Cliff Drumm, Sandia National Laboratory
Sandra Dulla, Politecnico di Torino, Italy
Thomas Evans, Oak Ridge National Laboratory
Ron Ellis, Oak Ridge National Laboratory
Andrea Favalli, Los Alamos National Laboratory
Jeffrey Favorite, Los Alamos National Laboratory
Erin Fichtl, Los Alamos National Laboratory
Benoit Forget, Massachusetts Institute of Technology
Brian Franke, Sandia National Laboratory
Barry Ganapol, University of Arizona
Jess Gehin, Oak Ridge National Laboratory
Nick Gentile, Lawrence Livermore National Laboratory
Daniel Gill, Bettis Atomic Power Laboratory
Hans Gougar, Idaho National Laboratory
Dave Griesheimer, Bettis Atomic Power Laboratory
Robert Grove, Oak Ridge National Laboratory
Alireza Haghighat, Virginia Tech
Steven Hamilton, Oak Ridge National Laboratory
Yassin Hassan, Texas A&M
Stephen Hess, Electric Power Research Institute
Paul Hulse, Sellafield Ltd.
Ahmad Ibrahim, Oak Ridge National Laboratory
Kostadin Ivanov, Pennsylvania State University
Matt Jessee, Oak Ridge National Laboratory
Wei Ji, Rensselaer Polytechnic Institute
Jim Gulliford, OECD Nuclear Energy Agency
Seth Johnson, Oak Ridge National Laboratory
Dmitry Karpeev, Argonne National Laboratory
Brian Kiedrowski, University of Michigan
Jaakko Leppänen, VTT
Yunzhao Li, Xi’an Jiaotong University
Michael Loughlin, International Thermonuclear Experimental Reactor (ITER)
Ryan McClarren, Texas A&M
Scott McKinley, Lawrence Livermore National Laboratory
Jim Morel, Texas A&M
Scott Mosher, Oak Ridge National Laboratory
Brian Nease, Bettis Atomic Power Laboratory
David Nigg, Idaho National Laboratory
Todd Palmer, Oregon State University
Tara Pandya, Oak Ridge National Laboratory
Ryosuke Park, Los Alamos National Laboratory
Imre Pazsit, Chalmers University of Technology
Andreas Pautz, École Polytechnique Fédérale de Lausanne
Shawn Pautz, Sandia National Laboratory
Douglas Peplow, Oak Ridge National Laboratory
Chris Perfetti, Oak Ridge National Laboratory
Josh Peterson, Oak Ridge National Laboratory
Markus Piro, Canadian Nuclear Laboratories
David Pointer, Oak Ridge National Laboratory
Jeffrey Powers, Oak Ridge National Laboratory
Shikha Prasad, Indian Institute of Technology
Trent Primm, Primm Consulting, LLC
Anil Prinja, University of New Mexico
Jean Ragusa, Texas A&M
Farzad Rahnema, Georgia Tech
Piero Ravetto, Politecnico di Torino
William Rider, Sandia National Laboratory
Paul Romano, Massachusetts Institute of Technology
Massimiliano Rosa, Los Alamos National Laboratory
Glyn Rossiter, National Nuclear Laboratory
Richard Sanchez, Commissariat à l'Énergie Atomique (CEA)
Hyung Jin Shim, Seoul National University
Glenn Sjoden, Air Force Technical Applications Center (AFTAC)
Rachel Slaybaugh, UC Berkeley
Richard Smedley-Stevenson, Atomic Weapons Establishment
(AWE)
Kord Smith, Massachusetts Institute of Technology
Chris Stanek, Los Alamos National Laboratory
Randall Summers, Sandia National Laboratory
Dion Sunderland, Anatech
Thomas Sutton, Knolls Atomic Power Laboratory
Kurt Terrani, Oak Ridge National Laboratory
Allen Toreja, Lawrence Livermore National Laboratory
Jean-Christophe Trama, CEA, Saclay
Tim Trumbull, Knolls Atomic Power Laboratory
Paul Turinsky, North Carolina State University
John Turner, Oak Ridge National Laboratory
Todd Urbatsch, Los Alamos National Laboratory
Gert Van Den Eynde, SCK·CEN, Belgium
Rene van Geemert, Areva
Aaron Watson, Knolls Atomic Power Laboratory
Wil Wieselquist, Oak Ridge National Laboratory
Paul Wilson, University of Wisconsin
Brian Wirth, University of Tennessee, Knoxville
Allan Wollaber, Los Alamos National Laboratory
Zeyun Wu, NIST
Ce Yi, Georgia Tech
Joseph Zerr, Los Alamos National Laboratory
Qiong Zhang, Baker Hughes
International Technical Program Committee
Carolina Ahnert, Universidad Politécnica de Madrid (UPM)
Kenneth Burn, ENEA Bologna
Christophe Calvin, CEA Saclay France
Mario Carta, ENEA C.R. Casaccia
Frédéric Damian, CEA Saclay France
Christophe Demazière, Chalmers University of Technology
Cheikh Diop, CEA Saclay SERMA
Jan Dufek, AlbaNova University Centre
Eric Dumonteil, CEA Saclay France
Matthew Eaton, Imperial College
Juan Galan, OECD Nuclear Energy Agency Data Bank
Kevin Hesketh, National Nuclear Laboratory (NNL)
Andras Kereszturi, Hungarian Academy of Science
Jan Leen Kloosterman, Delft University of Technology
Arjan Koning, Nuclear Research and Consultancy Group (NRG)
Riitta Kyrki-Rajamäki, Lappeenranta University of Technology
Yi-Kang Lee, CEA Saclay France
Masahiko Machida, Japan Atomic Energy Agency
Fausto Malvagi, CEA Saclay SERMA
Kiyoshi Matsumoto, OECD Nuclear Energy Agency Data Bank
Norihiro Nakajima, Japan Atomic Energy Agency
Andreas Pautz, Laboratory of Reactor Physics and Systems Behaviour (EPFL-LRS)
Christine Poinot-Salanon, CEA Saclay France
Simone Santandrea, CEA Saclay France
Didier Schneider, CEA Saclay SERMA
Hiroshi Takemiya, Japan Atomic Energy Agency
Jean-Christophe Trama, CEA Saclay SERMA
Pedro Vaz, Instituto Superior Técnico (IST)
Kiril Velkov, Gesellschaft für Anlagen- und Reaktorsicherheit (GRS)
Martin Zimmerman, Paul Scherrer Institute
Igor Zmijarevic, CEA Saclay France
Andrea Zoia, CEA Saclay France
General Information
Registration
Registration is required for all attendees and presenters.
Badges are required for admission to all events.
The Full & Emeritus Conference Registration fee for
Members and Non-Members includes: conference handouts,
the Sunday night reception with cash bar, continental breakfasts,
coffee breaks, lunches Monday-Thursday, and the banquet on
Wednesday evening.
The One Day Conference fee for Member and Non-Member includes: Conference handouts and events of the day.
The Student Registration Fee includes: Conference
handouts, the Sunday reception, lunches, and the banquet.
Spouse/Guest Registration includes: Sunday reception,
banquet, and daily coffee service.
The Meeting Registration Desk is at the Plantation Lobby:
Sunday: 1:00 PM - 5:00 PM
Monday-Wednesday: 7:00 AM - 3:00 PM
Thursday: 7:30 AM - 9:00 AM
Session Chair Information
Please complete and return a “Session Chair Sign-in Form.”
Please attend the breakfast (7-8 AM) on the day of your session and be in your session room at least 15 minutes
before the session starts. This will allow you to greet the
speakers, coordinate media arrangements, and collect
biographical sketches. For the sake of meeting
attendees, PLEASE keep the session exactly on the schedule
shown in this final program. For “no shows,” simply adjourn
the session until the next allotted time (i.e., do not shift papers to
earlier slots to fill a void). You may find it helpful to bring your
own laptop and upload the speakers’ presentations during
the breakfast or pre-session meetings; alternatively, please
ensure there is a laptop available for presentations
during your entire session. A student assistant will be assigned to
help you during the meeting; you may use his or her assistance
to drive the presentations, help with A/V, etc. He or she will check in with you prior to the start of the session and ask you to sign
a confirmation of assistance at the end of the session.
Technical Workshops
A list of workshops is provided on the conference website at
http://mc2015.org
Oral Presentation Guidelines
Presenters will have 20 minutes to present their work plus 5
minutes for questions. Presenters must strictly follow these
time allotments, as sessions must stay on time for conference events to function smoothly. Windows laptops will be
provided in each of the oral presentation rooms to play the
presentations; it is recommended that all presenters convert
their presentations to PDF format to avoid computer compatibility issues. Please upload your presentation to the conference website prior to the day of your presentation; it will be
available on the conference laptop when you arrive for your
session. All presenters are expected to be at their sessions
10 minutes before the start of the session to verify there are
no technical challenges. If you choose to present using your
own laptop, plan to test it on the system during a break
between earlier sessions.
Poster Presentation Guidelines
Poster boards will be 8 feet wide with 4 feet of usable height
(starting 2 feet above the floor and extending to 6 feet above the
floor). Presenters may use all 8 ft × 4 ft of space on ONE side
of the poster board for their material. Materials for affixing
posters to the poster boards will be provided.
Gelbard Scholarship Fundraising Event
Monday, April 20, at M&C+SNA+MC 2015
Time: 7:30 PM - 9:30 PM
Location: Tulip Grove (E/F) Ballroom, Sheraton Music City Hotel
Cost: $50
More information: http://mc2015.org/events-gelbard.html
Tours
April 20, 2015
• Discover Nashville
• Shopping Shuttle for Green Hills Mall
April 21, 2015
• Belle Meade Plantation, Lunch and Winery
• Adventure Science Center
• Grand Ole Opry with Transportation
Details on http://mc2015.org/events.html
Tuesday, April 21, 2015
Technical Tour
Technical Tour of Vanderbilt University Institute of Imaging Science Laboratory
• Free
• Tours at 10 AM and 2 PM (approximate travel and tour time 90 minutes)
The Vanderbilt University Institute of Imaging Science (VUIIS) is a University-wide interdisciplinary initiative that unites
scientists whose interests span the spectrum of imaging research. The VUIIS has a core program of research related to
developing new imaging technology based on advances in physics, engineering, and computer science. In addition to high-field MRI and MR spectroscopy, ultrasound, optical, and other modalities in human subjects, the VUIIS offers state-of-the-art
options for small animal imaging in all modalities. In 2007 Vanderbilt completed a four-floor, state-of-the-art facility adjacent
to Medical Center North to house the VUIIS. The $28 million project ($21 million for construction) provides a 42,000-square-foot facility to integrate current activities in imaging research and provide research space for 42 faculty members and more
than 80 graduate students and postdoctoral fellows in biomedical science, engineering, and physics.
For more information, visit https://www.vuiis.vanderbilt.edu/
Monday, April 20, 2015
Plenary Session
Hermitage A-D
8:30 - 9:30 AM
Welcome, Presentation of Awards
9:30 - 10:30 AM
Jess C. Gehin
Director, Consortium for Advanced Simulation of Light Water Reactors
CASL: Progress on Light Water Reactor Modeling and Simulation and Plans for its
Second Phase
Biographical Sketch
Dr. Jess Gehin joined the Oak Ridge National Laboratory (ORNL) in 1992 and is currently the Director of the Consortium for Advanced Simulation of Light Water Reactors (CASL). Previous positions
at ORNL include leading Reactor Technology R&D Integration, Senior Program Manager, and Lead
of the Reactor Analysis Group. His primary areas of expertise are nuclear reactor physics, advanced
reactors, and fuel cycle technology. Dr. Gehin earned a B.S. degree in Nuclear Engineering in 1988
from Kansas State University and M.S. (1990) and Ph.D. (1992) degrees in Nuclear Engineering
from the Massachusetts Institute of Technology. Dr. Gehin also holds the position of Joint Associate
Professor in the Nuclear Engineering Department and the Bredesen Center for Interdisciplinary Research and Graduate
Education at the University of Tennessee.
Abstract
The Consortium for Advanced Simulation of Light Water Reactors (CASL) was established in 2010 as the first U.S. Department of Energy Innovation Hub. CASL’s mission is to develop advanced modeling and simulation (M&S) capabilities
that can help address light water reactor operational and safety performance challenges. In its first five years (Phase
1), CASL developed an M&S capability called the Virtual Environment for Reactor Applications (VERA) that integrates
simulation capabilities for key physical phenomena in pressurized water reactors (PWRs), with a focus on in-vessel physics: neutronics, thermal-hydraulics, chemistry, and material performance.
Key accomplishments in Phase 1 include a transport capability to model fuel-pin-resolved core detail, enhanced computational performance for subchannel thermal-hydraulics, improved physics models for two- and three-dimensional fuel
performance assessment, enhanced chemistry treatment for deposition of corrosion products, and a CFD capability that
better utilizes HPC resources. Significant progress has been made in coupling and integrating these physics areas and applying them to CRUD-induced power shift (CIPS), pellet-clad interaction (PCI), and departure from nucleate boiling (DNB).
Further, VERA has been deployed to early adopters through the CASL Test Stand program and applied to such tasks as
modeling the AP1000® startup, comparisons to industry-standard fuel performance codes, and modeling flow throughout
a reactor vessel.
CASL has recently been renewed for a second five-year phase (Phase 2). In this second phase, VERA activities on
PWR modeling will be expanded, along with broader research for light-water-based small modular reactors (SMRs) and
boiling water reactors (BWRs). This brings new development areas such as natural circulation, multiphase thermal-hydraulics, and neutronics modeling of BWRs. In addition, CASL will continue to pursue deployment of its
capabilities for broad use and application.
Monday, April 20, 2015
Plenary Session
Continued...
10:30 - 11:30 AM
Charlie Fazzino
Manager of High Performance Computing for ExxonMobil
2015 Supercomputing in Nuclear Applications - Oil and Gas Overview
Biographical Sketch
Years of Experience: 25
Areas of Expertise: Technical Computing Application Development, Data Management and Support,
Geographic and Geospatial Information Systems, and Operations Reliability
Mr. Charlie Fazzino holds a Bachelor of Science degree in Computer Science from Texas A&M
University. He joined ExxonMobil in 1989 as a software engineer developing mapping and geologic
modeling applications. Mr. Fazzino has held a variety of Supervision, Planning, and Management
positions associated with Upstream Technical Computing. Mr. Fazzino has also had assignments in
Business Relationship Management and IT Operations. In 2004, Mr. Fazzino moved to Lagos, Nigeria to become the Upstream Technical Computing Manager at Mobil Producing Nigeria. In 2007, he moved to Calgary, Canada as the Director
of Information Technology for Imperial Oil Canada.
Mr. Fazzino became the Manager of High Performance Computing for ExxonMobil in July 2014, working closely with the
geophysics research organization on the development, deployment, and uptake of advanced imaging technologies for
ExxonMobil’s global Upstream Companies.
Abstract
The oil and gas business is a large, capital-intensive, and complex industry organized into Upstream and Downstream segments. The Upstream segment is responsible for finding, developing, and producing hydrocarbons; the Downstream segment refines, distributes, and markets hydrocarbon products. The business carries significant geopolitical, technical, financial, safety, health, and environmental risks and challenges.
High Performance Computing is applied across several oil and gas business processes to help manage these risks. Advanced seismic imaging technologies require the most compute and data-processing capacity within the industry and are
key to the Upstream business segment’s success. The scale of the seismic imaging problem, coupled with its impact
on reducing commercial and technical risk in exploration, development, and production, is the major driver behind the
sector’s long investment history in high performance computing. ExxonMobil is credited with inventing 3D seismic technology and has been a leader in applying seismic technologies for many years.
There are many other applications for HPC in the industry. While they do not operate at the same scale as advanced
seismic applications, they are able to leverage these investments to solve problems at increasingly large scales that deliver
more accurate models for improved simulations. Building an HPC program and then achieving broad impact across the
organization is challenging. In this talk I will provide a short history of supercomputing in the oil and gas industry, describe
the key drivers and applications for HPC in this industry, and share some of the challenges and solutions associated with
building an HPC program and leveraging the capability across the enterprise.
Monte Carlo Methods
Monday, April 20, 2015
1:30 PM
Hermitage C
Chairs: Dr. Thomas M. Sutton, Dr. David P. Griesheimer
22
Application of a Discretized Phase Space Approach to the Analysis of Monte Carlo Uncertainties
Thomas M. Sutton
Knolls Atomic Power Laboratory - Bechtel Marine Propulsion Corporation, Schenectady, New York, USA
In the study of Monte Carlo statistical uncertainties for iterated-fission-source calculations, an important distinction is made between the ‘real’ and ‘apparent’ variances.
The former is the actual variance of a Monte Carlo calculation result, while the latter is an estimate of the former obtained using the results of the fission generations in
the formula for uncorrelated random variates. That the apparent variance is a biased estimate of the real variance has been known—and the reason for the bias
understood—for years. More recently, several authors have noted various interesting phenomena regarding the apparent and real variances and the relationship
between them. Some of these are: an increase in the apparent variance near surfaces with reflecting boundary conditions, a non-uniform spatial distribution of the
ratio of the apparent-to-real variance, the dependence of this ratio on the size of the region over which the result is tallied, and a rate of convergence of the real
variance that is less than the inverse of the number of neutron histories run. This paper discusses a theoretical description of the Monte Carlo process using a
discretized phase space, and then uses it to explain the causes of these phenomena.
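As a brief numerical illustration of why the apparent variance is biased low (a hypothetical Python sketch, not the author's code: an AR(1) series stands in for correlated generation tallies, and all parameters are illustrative):

    import numpy as np

    # Generation tallies in iterated-fission-source calculations are
    # autocorrelated, so the "apparent" variance (the uncorrelated-variate
    # formula) underestimates the "real" variance of the final mean.
    rng = np.random.default_rng(1)
    n_gen, corr, n_rep = 200, 0.8, 2000   # generations, correlation, replicas

    def generation_tallies():
        x = np.empty(n_gen)
        x[0] = rng.standard_normal()
        for i in range(1, n_gen):
            x[i] = corr * x[i - 1] + np.sqrt(1.0 - corr**2) * rng.standard_normal()
        return x

    means, apparent = [], []
    for _ in range(n_rep):
        x = generation_tallies()
        means.append(x.mean())
        apparent.append(x.var(ddof=1) / n_gen)  # formula for uncorrelated data

    print("apparent variance of the mean:", np.mean(apparent))
    print("real variance of the mean:    ", np.var(means, ddof=1))  # much larger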
36
Estimating the Effective Neutron Generation Time with Monte Carlo Correlated Sampling
David P. Griesheimer and Thomas P. Goter
Bettis Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, West Mifflin, PA
In this paper we describe the use of correlated sampling for estimating the effective neutron generation time (Λeff) during Monte Carlo neutron transport simulations.
The proposed methodology builds upon the established result that Λeff is proportional to the change in system reactivity resulting from a uniform 1/v perturbation to
the macroscopic absorption cross section throughout the system. In this study, correlated sampling is used to estimate both the reference and perturbed system
reactivities simultaneously, using only a single set of neutron histories. The resulting correlation between the perturbed and un-perturbed reactivity values minimizes
the effects of stochastic noise when calculating Λeff, and enables the method to resolve reactivity differences for very small perturbations in absorption cross section.
Implementation details for a continuous-weight-adjustment correlated sampling algorithm are provided, along with consistent track-length and collision estimators for
reaction rates (including eigenvalue) in the perturbed system. The recommended correlated sampling algorithm is easy to implement and accounts for direct
perturbation effects on neutron transport as well as indirect effects on the fission source distribution for eigenvalue calculations. Numerical results for a suite of
standard benchmark problems demonstrate that estimates of Λeff and α from the proposed correlated sampling methodology agree well with experimental
measurements for both critical and non-critical systems.
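For orientation, the established result the abstract builds on can be stated compactly (standard adjoint-weighted perturbation theory, not text from the paper), with φ the flux, φ† the adjoint flux, and F the fission production operator. A uniform 1/v absorber of strength δ changes the reactivity by

    \[
      \Delta\rho
        = -\,\frac{\langle \phi^{\dagger},\,(\delta/v)\,\phi\rangle}
                  {\langle \phi^{\dagger},\,F\,\phi\rangle}
        = -\,\delta\,\Lambda_{\mathrm{eff}},
      \qquad\text{so}\qquad
      \Lambda_{\mathrm{eff}} = -\lim_{\delta\to 0}\frac{\Delta\rho}{\delta},
    \]

which is why resolving Δρ for very small δ, as the correlated-sampling estimator does, yields Λeff directly.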
40
New Algorithm for Monte Carlo Particle-Transport Simulation to Recover Event-by-Event Kinematic Correlations of Reactions Emitting Charged Particles
T. Ogawa, T. Sato, and S. Hashimoto (1), K. Niita (2)
1) Japan Atomic Energy Agency, Tokai, Ibaraki, Japan, 2) Research Organization for Information Science and Technology, Tokai, Ibaraki, Japan
We develop a new radiation transport calculation algorithm that recovers event-by-event quantities from inclusive cross-section data while conserving energy and
momentum. In radiation transport calculations based on inclusive cross-section data, conventional algorithms can predict the average behavior of the particles, but
they cannot capture fluctuations around the average. Moreover, the kinematic correlations among secondary particles and recoil residues produced in a reaction
are beyond their scope. The new algorithm reproduces particle emission in each reaction in exact accordance with the chosen reaction channel. It
makes it possible to predict event-by-event quantities, such as nuclear recoil and secondary-particle energy spectra, in all kinds of reaction channels. To
evaluate the impact of the new algorithm, it was applied to various simulation studies, including dose conversion coefficient evaluation, soft-error analysis, and radiation
damage prediction. The calculated data show that the new algorithm is indispensable for such simulation studies.
284
Methods and Techniques for Monte Carlo Physics Validation
Gabriela Hoff (1), Tullio Basaglia (2), Chansoo Choi, Min Cheol Han, Chan Hyeong Kim, Han Sung Kim, Sung Hun Kim (3), Maria Grazia Pia,
Paolo Saracco (4), and Marcia Begalli (5)
(1) CAPES Foundation, Ministry of Education of Brazil, Brasilia, Brazil, (2) CERN, CH-1211 Genève 23, Switzerland, (3) Department of Nuclear Engineering, Hanyang University, Seoul 133-791, Korea,
(4) INFN Sezione di Genova, Genova, Italy, (5) State University Rio de Janeiro, Rio de Janeiro, Brazil,
This paper summarizes experience and results collected by the authors in the process of developing and testing physics models related to Geant4, and discusses
them in the context of establishing a common epistemology and openly available tools for the validation of the physics implemented in Monte Carlo particle transport
codes. It reviews basic concepts pertinent to the validation of the physics of Monte Carlo transport systems, and discusses the interplay between software design and
the test process. It illustrates some recent results in the validation of Geant4 electromagnetic physics and elucidates the methodology enabling their achievement.
Deterministic Transport Methods
Monday, April 20, 2015
1:30 PM
Hermitage D
Chairs: Dr. Jean C. Ragusa, Dr. Tara M. Pandya
126
An Explicit, Positivity-Preserving Flux-Corrected Transport Scheme for the Transport Equation Using Continuous Finite Elements
Joshua Hansel and Jean Ragusa (1), Jean-Luc Guermond (2)
1) Department of Nuclear Engineering, Texas A&M University, College Station, TX, 2) Department of Mathematics, Texas A&M University, College Station, TX
Entropy viscosity, in conjunction with flux-corrected transport (FCT), is applied to a P1 continuous finite element (CFEM) discretization of the time-dependent transport
equation. Fully explicit time discretizations are employed, including explicit Euler and strong-stability-preserving Runge-Kutta (SSPRK) schemes such as the 3-stage,
3rd-order-accurate Shu-Osher scheme (SSPRK33). The FCT scheme described in this paper satisfies a discrete maximum principle and is stable in the discrete L-infinity norm. Results are presented for 1-D test problems.
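For reference, the SSPRK33 (Shu-Osher) update named in the abstract is a convex combination of forward-Euler stages; a minimal sketch follows, with a hypothetical 1-D upwind advection operator standing in for the paper's entropy-viscosity/FCT discretization:

    import numpy as np

    def ssprk33_step(u, L, dt):
        """One 3-stage, 3rd-order strong-stability-preserving RK step for du/dt = L(u)."""
        u1 = u + dt * L(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
        return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

    # Illustrative operator: first-order upwind advection on a periodic grid.
    nx = 100
    dx = 1.0 / nx
    L = lambda u: -(u - np.roll(u, 1)) / dx
    u = np.exp(-200.0 * (np.linspace(0.0, 1.0, nx, endpoint=False) - 0.3) ** 2)
    for _ in range(50):
        u = ssprk33_step(u, L, 0.5 * dx)   # CFL number 0.5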
256
Numerical and Analytical Studies of the Spectrum of Parallel Block Jacobi Iterations for Solving the Weighted Diamond Difference Form of the S_N Equations
Yousry Y. Azmy and Dmitriy Anistratov (1), R. Joseph Zerr (2)
We examine the iterative spectrum of the Parallel Block Jacobi (PBJ) iterative solution method for the Weighted Diamond Difference (WDD) form of the SN equations
in a two-dimensional infinite homogeneous medium, with two objectives: (1) determine the dependence of the spectral radius on the WDD spatial weights; and (2)
examine the effect on the spectral radius of also lagging the scattering source. We find that while the iterations are unconditionally unstable for the standard
Diamond Difference (DD) formulation, the spectral radius decreases with increasing cell optical thickness for the Arbitrarily High Order Transport method of the Nodal
type and 0th order (AHOT-N0). However, in the latter case an increasing scattering ratio raises the spectral radius close to unity, so that the iterations become unstable
again for non-absorbing media even for very thick cells. We also find that lagging the scattering source results in a loss of iterative robustness for c<1 with
increasing cell size and increasing value of c.
89
A New Multiple Balance Method for Spatially Discretizing the SN Equations
Ben C. Yee and Edward W. Larsen
Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI
In this paper we develop the MB-3 method, a modified version of the Primitive Multiple Balance (PMB) method (Morel, 1989 [1]), for spatially discretizing the SN
equations. To our knowledge, the MB-3 method is the first spatial discretization scheme designed with the objective of making the auxiliary equations algebraically
consistent with a finite difference approximation to the first angular moment of the continuous transport equation. We achieve this by introducing correction factors,
residing on the spatial cell edges, to the PMB auxiliary equations. Because of the consistency between the auxiliary equations and the first angular moment, the MB-3
method can be accelerated using "diffusion" equations that result from combining first and zeroth angular moments of the MB-3 equations, and we expect this
acceleration scheme to be unconditionally stable. Our numerical results indicate that the MB-3 method consistently yields more accurate results in shielding problems
and is robust in the thick diffusion limit. However, the MB-3 method converges slowly for fine spatial grids unless extra low-order calculations are performed between
source iterations. The purpose of this paper is to discuss the potential of the MB-3 approach, its apparent advantages and disadvantages, and our attempts to mitigate
the disadvantages.
179
A Non-negative, Non-linear Petrov-Galerkin Method for Bilinear Discontinuous Differencing of the Sn Equations
Peter G. Maginot, Jean C. Ragusa, and Jim E. Morel
Department of Nuclear Engineering, Texas A&M University, College Station, TX
We have developed a new, non-negative, non-linear, Petrov-Galerkin bilinear discontinuous finite element differencing of the 2-D Cartesian geometry Sn equations for
quadrilaterals on an unstructured mesh. This work is an extension of a scheme we previously developed for use with linear discontinuous (LD) differencing of the 2-D
Sn equations for rectangular mesh cells. We present the theory and equations that describe the new method. Additionally, we numerically compare the accuracy of
our proposed method to the accuracy of unlumped bilinear discontinuous (UBLD) differencing and the subcell corner balance method (equivalent to a “fully” lumped
bilinear discontinuous scheme) for a test problem that causes the UBLD scheme to generate negative angular flux solutions.
Reactor Physics
Monday, April 20, 2015
1:30 PM
Hermitage A-B
Chairs: Dr. Travis J. Trahan, Dr. Benjamin S. Collins
98
Variationally-Derived Discontinuity Factors for the Asymptotic, Homogenized Diffusion Equation
Travis J. Trahan (1), Edward W. Larsen (2)
1) Los Alamos National Laboratory, Los Alamos, New Mexico, USA, 2) Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, Michigan, USA
In this work, we derive and test variational discontinuity factors for the asymptotic, homogenized diffusion equation. We begin with a functional for optimally estimating
the reactor multiplication factor, then introduce asymptotic expressions for the forward and adjoint angular fluxes, and finally require that all first-order error terms
vanish. Thus, the reactor multiplication factor can be calculated with second-order error. The analysis leads to (i) an alternate derivation of the asymptotic,
homogenized diffusion equation, (ii) variational boundary conditions for large, periodic systems, and (iii) variational discontinuity factors to be applied between
adjacent periodic regions (e.g., fuel assemblies). Numerical tests show that applying the variational discontinuity factors to the asymptotic, homogenized diffusion
equation yields the most accurate estimates of the reactor multiplication factor compared to other discontinuity factors for a wide range of problems. However, the
resulting assembly powers are less accurate than those obtained by using other discontinuity factors for many realistic problems.
166
Considering the up-scattering in resonance interference treatment in APOLLO3®
Li Mao, Richard Sanchez, and Igor Zmijarevic
CEA-Saclay, DEN, DM2S, SERMA, Gif-sur-Yvette, France.
The use of exact elastic scattering in the resonance domain introduces neutron up-scattering, which must be taken into account in a deterministic transport code.
The existing resonance interference treatment in APOLLO3® cannot account for the resonance up-scattering phenomenon, since it
employs the asymptotic scattering kernel in the calculation of the infinite homogeneous medium reaction rates of the mixture. It is known that using the asymptotic
kernel instead of the realistic free-gas model has a non-negligible impact on the calculated results. In order to consider both the resonance interference phenomenon
and resonant up-scattering, the resonance interference factor method was implemented in APOLLO3®. The numerical results showed that this method gives good
results for both k-eff values and reaction rates. An improved method was also proposed for the solution of the mixture heterogeneous equation by the fine-structure
self-shielding method. Compared to the existing method, it requires less storage memory and less solution time, while giving the same numerical results as the
existing method.
196
Doppler Coefficients Using MC2-3, MC2-2 and MCNP-XT
Zhiwen Xu, Graham Malmgren, Nick Touran and Chuck Whitmer (1), Changho Lee(2)
(1) TerraPower, LLC., Bellevue, WA, (2) Argonne National Laboratory, Argonne, IL
Doppler coefficient calculations are performed in this paper for three chosen Traveling Wave Reactor (TWR) fuel compositions using the MC2-2, MC2-3, and MCNP-XT codes. Based on the same ENDF/B-V.2 (E5R2) data, the MC2-3 and MC2-2 results are compared, showing close agreement that verifies the MC2-3 code’s new
numerical algorithms and modeling methodologies. Based on the same ENDF/B-VII.0 (E7R0) data, the MC2-3 and MCNP-XT results are compared, again showing
close agreement that verifies the MC2-3 solutions against independent Monte Carlo solutions. The differences between results obtained with the E5R2 and E7R0 data
faithfully reflect the differences in the data. Given more than two decades of continuous nuclear data development, the E7R0 data are expected to have better
quality, such as smaller biases and uncertainties, than the E5R2 data. Nevertheless, experimental benchmark calculations remain to be performed to
quantify the data differences and confirm the expected advantages of the E7R0 data over the E5R2 data. The E5R2 data can still be
used in the MC2-3 code if desired. Overall, the code migration effort from MC2-2 to MC2-3 is reasonably justified. In addition, the Doppler coefficient evaluation
methodology is investigated, and the assumption of 1/T temperature dependence is challenged. Several alternative approaches are tested to find the
actual temperature dependence. Based on preliminary test results, an improved approach is proposed and recommended for a future Doppler
feedback model in safety analysis.
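As a sketch of how an assumed temperature dependence can be tested (synthetic data and parameters, not the paper's): fit dρ/dT = A·T^(-n) on a log-log scale and compare the fitted exponent with the 1/T model's prediction of n = 1.

    import numpy as np

    T = np.array([600.0, 900.0, 1200.0, 1500.0, 1800.0])   # K (illustrative)
    drho_dT = -3.0e-5 * T ** -0.85                          # synthetic "data"

    # Linear fit of ln(-drho/dT) versus ln(T); the slope is -n.
    slope, intercept = np.polyfit(np.log(T), np.log(-drho_dT), 1)
    print(f"fitted exponent n = {-slope:.2f}  (1/T model predicts n = 1)")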
282
Conservative Nonlinear Diffusion Acceleration Applied to the Unweighted Least-Squares Transport Equation in
MOOSE
Jacob R. Peterson, Hans R. Hammer, Jim E. Morel, Jean C. Ragusa (1), and Yaqi Wang (2)
(1) Department of Nuclear Engineering, Texas A&M University, College Station, TX
(2) Idaho National Laboratory, Idaho Falls, ID
Many second-order forms of the transport equation are not usable in voids and experience numerical convergence difficulties in near-voids. Here we consider a
recently introduced least-squares form of the transport equation that is compatible with voids. Our purpose is to describe a nonlinear diffusion acceleration scheme
that we have developed for a multidimensional multigroup form of this equation, implemented in Idaho National Laboratory’s finite-element code MOOSE. A
deficiency of the least-squares equation is that it is not conservative. We compensate for this lack of conservation by coupling it with a conservative low-order drift-diffusion equation. Upon iterative convergence, the two equations do not necessarily yield the same solutions for the scalar flux and current, except in the limit as the
spatial mesh is increasingly refined. The low-order solution is generally found to be more accurate than both the pure least-squares solution and the coupled high-order solution. Preliminary computational results are presented demonstrating the accuracy of the low-order solution and the iterative effectiveness of the acceleration
method relative to a similar implementation for the SAAF transport equation.
M&S for Fusion Energy Systems
Monday, April 20, 2015
1:30 PM
Two Rivers
Chairs: Dr. Ahmad M. Ibrahim, Dr. Arkady Serikov
132
An Advanced MC Modeling and Multi-Physics Coupling System for Fusion Applications
Yuefeng Qiu, Lei Lu and Ulrich Fischer
Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
The design of fusion reactor systems and components requires coupled multi-physics analyses to be carried out on complex models. An integrated system has been
developed at the Karlsruhe Institute of Technology (KIT) for complex Monte Carlo (MC) geometry modeling and multi-physics coupling. In this system, an advanced
MC modeling module provides the conversion of CAD geometry data to hybrid Constructive Solid Geometry (CSG), tessellated solids and mesh MC geometry
representations. A generic multi-physics coupling module provides data mapping and interfacing for the MC codes MCNP5/6, TRIPOLI-4 and Geant4, the CFD codes
Fluent and CFX, and the Finite Element (FE) simulation platform ANSYS Workbench. These two modules have been integrated into the open-source simulation
platform SALOME, which provides them with CAD modeling, mesh generation, and data visualization capabilities. This integrated system has been verified by a
series of test models and was concluded to be reliable for fusion applications.
137
Monte Carlo Based Method for Shutdown Dose Rates Calculations: Functionality, Validation, Application
P. Pereslavtsev and U. Fischer
Institute for Neutron Physics and Reactor Technology, Karlsruhe Institute of Technology (KIT), Germany
An advanced method was developed for precise shutdown dose rate calculations. It takes advantage of the Monte Carlo technique, which enables robust and
detailed simulation of the nuclear processes resulting in the formation of decay gammas in a nuclear facility. The spatial distributions of both neutrons and decay
photons are obtained by making use of the mesh tally technique available with MCNP5. This procedure has no limitation on the complexity of the geometry used for
particle transport simulations. As a novelty, the present approach makes use of a newly developed Monte Carlo based routine, linked to the MCNP5 code, that
detects the geometry cells, materials, and their volume fractions in each mesh cell. The new approach is verified by means of benchmark calculations for
JET and compared to the Direct 1 Step (D1S) method. The results fit the experimental data very well. The new interface was successfully applied to
shutdown dose rate assessments in the NBI ports of ITER.
200
Quality and Performance of a Pseudo-Random Number Generator in Massively Parallel Plasma Particle Simulations
Seikichi Matsuoka(1), Shinsuke Satake(2), Yasuhiro Idomura(3), and Toshiyuki Imamura(4)
(1) Research Organization for Information Science and Technology, 1-5-7 Minatojima-minamimachi, Chuo-ku, Kobe, Japan, (2) National Institute for Fusion Science, Toki, Gifu, Japan, (3) Japan Atomic
Energy Agency, Kashiwa, Chiba, Japan, (4) RIKEN Advanced Institute for Computational Science, 7-1-26 Minatojima-minamimachi
The quality and performance of a parallel pseudo-random number generator (PRNG), KMATH_RANDOM, are investigated using a Monte Carlo particle simulation
code for plasma transport. The library is based on the Mersenne Twister with jump routines and provides a numerical tool that is suitable and easy to use on
massively parallel supercomputers such as the K computer. The library enables the particle code to scale up to several thousand processes without
losing the quality and performance of the PRNG. As a result, the particle code can successfully remove unphysical phenomena caused by numerical noise.
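The jump-ahead idea can be sketched with NumPy's Mersenne Twister (KMATH_RANDOM itself is a separate Fortran library; this is not its API): each parallel rank jumps the shared stream a rank-dependent number of times, giving non-overlapping substreams.

    import numpy as np

    def rank_rng(seed, rank):
        """Independent substream for a given rank via MT19937 jump-ahead."""
        bg = np.random.MT19937(seed)
        if rank:
            bg = bg.jumped(rank)        # advance the state by 2**128 * rank draws
        return np.random.Generator(bg)

    streams = [rank_rng(12345, r) for r in range(4)]   # e.g., four MPI ranks
    print([s.random() for s in streams])               # non-overlapping draws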
202
Accelerating Fusion Reactor Neutronics Modeling by Automatic Coupling of Hybrid Monte Carlo/Deterministic
Transport on CAD Geometry
Elliott Biondo(1), Ahmad M. Ibrahim, Scott W. Mosher, and Robert E. Grove (2)
(1) University of Wisconsin at Madison Madison, WI, (2) Oak Ridge National Laboratory Oak Ridge, TN
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety,
assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo
(MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with
MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD
geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG
was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This
was done by adding a new ray-tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This
new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven
Importance Sampling (CADIS) method and the Forward-Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be
the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as
high as a factor of 59.6).
Transport in Stochastic Media
Monday, April 20, 2015
1:30 PM
Belmont
Chair: Dr. Shawn D. Pautz
33
On the Accuracy of the Non-Classical Transport Equation in 1-D Random Periodic Media
Richard Vasques (1), Kai Krycki (2)
1) PROMEC - School of Engineering, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil, 2) Department of Mathematics - Center for Computational Engineering Science, RWTH
Aachen University, Aachen, Germany
We present a first numerical investigation of the accuracy of the recently proposed non-classical transport equation. This equation contains an extra independent
variable (the path length s) and models particle transport taking place in random media in which a particle’s distance to collision is not exponentially distributed. To
solve the non-classical equation, one needs to know the s-dependent ensemble-averaged total cross section, or its corresponding path-length distribution function
p(s). We consider a 1-D spatially periodic system consisting of alternating solid and void layers, randomly placed on the infinite line. In this preliminary work, we assume
transport in rod geometry: particles can move only in the directions +1 and -1. We obtain an analytical expression for p(s) and use this result to compute the
corresponding s-dependent total cross section. Then we proceed to solve the non-classical equation for different test problems. To assess the accuracy of these
solutions, we produce “benchmark” results obtained by (i) generating a large number of physical realizations of the system, (ii) numerically solving the transport
equation in each realization, and (iii) ensemble-averaging the solutions over all physical realizations. We show that the results obtained with the non-classical equation
accurately model the ensemble-averaged scalar flux in this 1-D random system, generally outperforming the widely used atomic mix model for problems with low
scattering. We conclude by discussing plans to extend the present work to slab geometry, as well as to more general random mixtures.
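The non-exponential path-length distribution at the heart of the paper is easy to observe numerically; the following hypothetical sketch (our construction, with illustrative layer widths and cross section) samples distance-to-collision in a randomly phased solid/void periodic medium:

    import numpy as np

    rng = np.random.default_rng(0)
    ell_s, ell_v, sigma_t = 1.0, 1.0, 1.0      # layer widths and solid cross section
    period = ell_s + ell_v

    def distance_to_collision():
        eta = rng.exponential(1.0)             # optical depth to the next collision
        x = rng.uniform(0.0, period)           # random phase of the periodic medium
        s = 0.0
        while True:
            if x < ell_s:                      # traversing the solid layer
                seg = ell_s - x
                if eta <= sigma_t * seg:
                    return s + eta / sigma_t   # collision inside this layer
                eta -= sigma_t * seg
                s += seg
                x = ell_s
            s += period - x                    # void layer: no attenuation
            x = 0.0

    samples = np.array([distance_to_collision() for _ in range(50_000)])
    # A histogram of `samples` approximates p(s) and is visibly non-exponential.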
78
An Improved Deterministic Method for the Solution of Stochastic Media Transport Problems
Shawn D. Pautz and Brian C. Franke
Sandia National Laboratories, Albuquerque, NM
We present an improved deterministic method for analyzing transport problems in random media. In the original method realizations were generated by means of a
product quadrature rule; transport calculations were performed on each realization and the results combined to produce ensemble averages. In the present work we
recognize that many of these realizations yield identical transport problems. We describe a method to generate only unique transport problems with the proper
weighting to produce identical ensemble-averaged results at reduced computational cost. We also describe a method to ignore relatively unimportant realizations in
order to obtain nearly identical results with further reduction in costs. Our results demonstrate that these changes allow for the analysis of problems of greater
complexity than was practical for the original algorithm.
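The deduplication idea can be illustrated in a few lines (a hypothetical sketch, not the authors' implementation): quadrature points parameterizing, say, an interface position often quantize to the same mesh-resolved material layout, so each unique layout is solved once with the summed quadrature weight.

    import numpy as np
    from collections import defaultdict

    nodes, wts = np.polynomial.legendre.leggauss(16)   # quadrature on [-1, 1]
    positions = 0.5 * (nodes + 1.0)                    # interface position in [0, 1]
    mesh = np.linspace(0.0, 1.0, 6)                    # a 5-cell transport mesh

    unique = defaultdict(float)
    for x, w in zip(positions, 0.5 * wts):
        cell = int(np.searchsorted(mesh, x) - 1)       # quantized realization
        unique[cell] += w                              # accumulate the weight

    # Solve each unique problem once; the weighted sum is the ensemble average.
    print(len(positions), "quadrature points ->", len(unique), "unique problems")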
79
A Generalized Levermore-Pomraning Closure for Stochastic Media Transport Problems
Shawn D. Pautz and Brian C. Franke
Sandia National Laboratories, Albuquerque, NM
Stochastic media transport problems have long posed challenges for accurate modeling. Brute force Monte Carlo or deterministic sampling of realizations can be
expensive in order to achieve the desired accuracy. The well-known Levermore-Pomraning (LP) closure is very simple and inexpensive, but is inaccurate in many
circumstances. We propose a generalization to the LP closure that may help bridge the gap between the two approaches. Our model consists of local calculations to
approximately determine the relationship between ensemble-averaged angular fluxes and the corresponding averages at material interfaces. The expense and
accuracy of the method are related to how “local” the model is and how much local detail it contains. We show through numerical results that our approach is more
accurate than LP for benchmark problems, provided that we capture enough local detail. Thus we identify two approaches to using ensemble calculations for
stochastic media calculations: direct averaging of ensemble results for transport quantities of interest, or indirect use via a generalized LP equation to determine those
same quantities; in some cases the latter method is more efficient. However, the method is subject to creating ill-posed problems if insufficient local detail is included
in the model.
148
Accuracy of the Chord Length Sampling Method Near Boundaries of 1-D Finite Stochastic Materials
C. Russell Willis (1), David P. Griesheimer (2), Erich Schneider (1)
1) Mechanical Engineering Department, University of Texas at Austin, Austin, TX, 2) Bettis Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, West Mifflin, PA
Models for transport through binary stochastic media typically assume that the distance between successive inclusions in the material follows an exponential
distribution. However, previous research in this area has shown that the separation between inclusions can be non-exponential near the boundaries of finite stochastic
materials. In this paper we characterize the distribution of the first and last inclusions within a 1-D finite stochastic material, considering a variety of distinct edge
treatments for the material. An equivalent analysis is performed on finite material realizations generated on-the-fly using the chord length sampling (CLS) method. The
results show that realizations generated by CLS do not accurately represent the distribution of inclusions near boundaries of a stochastic material. Furthermore, the
CLS realizations show significant differences between distributions of the first and last inclusions in a finite stochastic material. Several simple modifications to the
original CLS method are proposed to improve the accuracy of the method for finite problems. Numerical results demonstrating the increased accuracy of the modified
CLS method are provided for a simple 1-D system with several different edge treatments.
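For readers unfamiliar with CLS, the baseline method being corrected can be sketched as follows (illustrative parameters; the paper's boundary modifications are not shown):

    import numpy as np

    rng = np.random.default_rng(2)
    mean_chord = {"matrix": 2.0, "inclusion": 0.5}   # hypothetical mean chord lengths
    slab_width = 10.0

    def sample_realization():
        """On-the-fly 1-D realization: exponential segment lengths, alternating materials."""
        segments, x, mat = [], 0.0, "matrix"
        while x < slab_width:
            length = rng.exponential(mean_chord[mat])
            segments.append((mat, min(length, slab_width - x)))
            x += length
            mat = "inclusion" if mat == "matrix" else "matrix"
        return segments

    print(sample_realization())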
Monte Carlo Methods
Monday, April 20, 2015
3:40 PM
Hermitage C
Chair: Dr. Thomas M. Sutton
259
Mean Free Path Based Kernel Density Estimators for Capturing Edge Effects in Reactor Physics Problems
Timothy P. Burke, Brian C. Kiedrowski, and William R. Martin
Department of Nuclear Engineering and Radiological Sciences, University of Michigan
Previous applications of Kernel Density Estimators (KDEs) to Monte Carlo neutronics calculations have mainly focused on globally integrated scalar flux results in
homogeneous or simple heterogeneous materials, with minimal attention to reaction rates. Recently, KDEs have been applied to heterogeneous 1-D reactor physics
problems in continuous energy for the estimation of reaction rates; however, KDEs were unable to accurately capture distributions at material interfaces when neutrons
were streaming from a highly scattering material into a highly absorbing material. This work introduces a KDE that is based on the number of mean free paths (MFPs),
or optical distance, between a tally point and a sample location rather than the physical distance between the two points. Results are shown for a 1-D representation of reactor
geometry in continuous energy. An extension of the MFP-based KDE to 2-D is also presented, and results are compared to the 2-D distance-based KDE for one-group
problems. The performance of the new MFP-based KDE is drastically improved near material interfaces relative to the distance-based KDE, and it can accurately capture
edge effects in 1-D geometries in continuous energy and in 2-D geometries for one-group problems.
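The central idea translates to 1-D in a few lines; in this hypothetical sketch (our grid, cross sections, and bandwidth, not the authors' code) the kernel is evaluated in optical distance rather than physical distance:

    import numpy as np

    edges = np.linspace(0.0, 10.0, 101)                 # 1-D mesh (100 cells)
    sigma_t = np.where(edges[:-1] < 5.0, 2.0, 0.02)     # scatterer | absorber

    def optical_distance(x0, x1):
        """Optical path (in mean free paths) between two points on the mesh."""
        a, b = sorted((x0, x1))
        overlap = np.clip(np.minimum(edges[1:], b) - np.maximum(edges[:-1], a),
                          0.0, None)
        return float(np.sum(sigma_t * overlap))

    def mfp_kernel(x_sample, x_tally, h=0.5):
        """Epanechnikov kernel evaluated in optical distance, bandwidth h in MFPs."""
        u = optical_distance(x_sample, x_tally) / h
        return 0.75 * (1.0 - u * u) / h if abs(u) < 1.0 else 0.0

    print(mfp_kernel(4.9, 5.1))   # a sample/tally pair straddling the interface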
87
Kernel Density Estimation for Grey Implicit Monte Carlo Radiation Transport
A.M. Holgado (1), R.T. Holladay (2), A.B. Wollaber, M.A. Cleveland, T.J. Urbatsch (3), and R.G. McClarren (4)
1) Department of Physics and Astronomy, Texas A&M University , College Station, TX, 2) Department of Mechanical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA, 3)
Los Alamos National Laboratory , Los Alamos, NM, 4) Department of Nuclear Engineering, Texas A&M University, College Station, TX
The implicit Monte Carlo (IMC) method is used widely for solving thermal radiative transfer (TRT) problems. The IMC method, like any other Monte Carlo method,
suffers from statistical noise, which can mask physical behavior and causes solution instabilities. Previous efforts to mitigate statistical noise and to improve the IMC
method have mainly focused on implementing variance reduction techniques (VRTs) and modifying the IMC algorithm. In this work, we introduce kernel density
estimation (KDEn) to the IMC method for TRT problems to obtain solution estimates with less noise than histogram tallies. We address two difficulties of using kernel
density estimators (KDEs): how to select an appropriate bandwidth to estimate the solution and how to correct the loss in symmetry in boundary regions. A locally
adaptive bandwidth (LAB) is implemented with the KDEs to accurately resolve steep gradients in the solution and reflective corrections are made to the KDEs so that
their estimates conserve the energy sourced into the problem. We demonstrate successful coupling of KDEn with continuous energy deposition, a commonly used
VRT in IMC codes. We find that solutions obtained using KDEs are smoother and exhibit substantially less statistical noise than traditional histogram tallies.
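The boundary correction mentioned above is the standard reflection technique; a minimal sketch (hypothetical parameters, not the authors' implementation) reflects the kernel mass that would fall outside the domain back across the boundary:

    import numpy as np

    def reflected_kde(x_eval, samples, h, lo=0.0):
        """Gaussian KDE with reflection about the boundary x = lo."""
        gauss = lambda u: np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)
        direct = gauss((x_eval[:, None] - samples) / h)
        mirror = gauss((x_eval[:, None] - (2.0 * lo - samples)) / h)
        return (direct + mirror).sum(axis=1) / (len(samples) * h)

    rng = np.random.default_rng(3)
    samples = rng.exponential(1.0, 10_000)      # density peaked at the boundary
    x = np.linspace(0.0, 4.0, 9)
    print(reflected_kde(x, samples, h=0.2))     # ~exp(-x), with no dip at x = 0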
102
Progress on Long Period Random Number Generator in Monte Carlo Code RMC
Feng Yang, Jingang Liang, Kan Wang, and Ganglin Yu (1), Ding She (2)
1) Department of Engineering Physics, Tsinghua University, Beijing, P.R. China, 2) Institute of Nuclear and New Energy Technology, Tsinghua University
Recently, the period of the Reactor Monte Carlo code RMC's random number generator (RNG) was extended from 2^63 to 2^126, based on the Linear Congruential
Algorithm (LCA). The number of particle histories that RMC can simulate is thereby greatly increased, giving the code the capacity for large-scale complex problems. This article first
discusses RNGs in Monte Carlo programs and briefly introduces the linear congruential algorithm. Theoretical and empirical tests are then applied to verify
the new long-period RNG. The test results are in very good agreement with those of the former RNG, and the efficiency losses are at an acceptable level. As a result of the current study, the
new long-period RNG can be considered feasible for practical calculations.
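The key primitive behind such long-period LCG schemes is O(log k) skip-ahead, which lets each particle history start at a fixed stride along the sequence; a generic sketch follows (illustrative 63-bit parameters and stride, not necessarily RMC's):

    def lcg_skip(state, k, a, c, m):
        """Advance x_{n+1} = (a*x_n + c) mod m by k steps in O(log k)."""
        a_k, c_k = 1, 0                            # accumulated map x -> a_k*x + c_k
        while k:
            if k & 1:
                a_k, c_k = (a_k * a) % m, (c_k * a + c) % m
            a, c = (a * a) % m, (c * a + c) % m    # square the one-step map
            k >>= 1
        return (a_k * state + c_k) % m

    a, c, m = 2806196910506780709, 1, 2**63        # illustrative LCG parameters
    x0, stride = 1, 152917                         # per-history stride (illustrative)
    assert lcg_skip(x0, stride, a, c, m) == \
           lcg_skip(lcg_skip(x0, stride - 1, a, c, m), 1, a, c, m)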
Deterministic Transport Methods
Monday, April 20, 2015
3:40 PM
Hermitage D
Chairs: Dr. Tara M. Pandya, Dr. Troy L. Becker
113
Theoretical Investigation of Noise Propagation in Low-Order Equation for the Scalar Flux
Anil K. Prinja
Department of Nuclear Engineering, University of New Mexico, Albuquerque, NM
Spatial and temporal noise in the Eddington factor, simulating noise arising in hybrid numerical schemes, is modeled as a Gaussian stochastic process, and its effect on the scalar flux is investigated theoretically. In the small correlation time limit, a nonstandard closed equation for the mean scalar flux is obtained that contains a fourth
order derivative of the scalar flux. In an infinite medium setting, this term is shown to have a destabilizing effect on the solution. Specifically, any spatial Fourier mode
with wavelength smaller than a critical value, which depends on the noise characteristics, amplifies in time without bound, in contrast to the corresponding nonrandom
case which is dissipative for all modes. An asymptotic solution is obtained which shows that the noise effect disappears at late times and the scalar flux limits to the
deterministic solution.
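As a hedged illustration of the stated mechanism (the paper's closed equation is not reproduced in this abstract, so the model below only mirrors its qualitative structure), a diffusion equation augmented by a destabilizing fourth-order term behaves exactly this way:

    \partial_t \phi = D\,\partial_x^2\phi + \epsilon\,\partial_x^4\phi, \qquad \phi \propto e^{ikx + \lambda t} \;\Rightarrow\; \lambda(k) = -D k^2 + \epsilon k^4 .

Modes with k > \sqrt{D/\epsilon}, i.e., wavelengths below the critical value 2\pi\sqrt{\epsilon/D}, grow without bound, while the noise-free case (\epsilon = 0) is dissipative for every mode.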
207
Evaluation of PWR and BWR Calculational Benchmarks from NUREG/CR-6115 Using the TRANSFX Nuclear Analysis
Software
B. P. Richardson
TransWare Enterprises Inc., Sycamore, IL
Evaluations have been performed for two calculational benchmarks described in NUREG/CR-6115 using the TRANSFX Nuclear Analysis Software. TRANSFX uses a
deterministic, three-dimensional, multigroup nuclear particle transport theory code (TRANSRAD) that performs neutron and gamma flux calculations. TRANSFX
couples the nuclear transport method with a general geometry modeling capability to provide a flexible and accurate tool for determining fluxes for any light water
reactor design. TRANSFX supports the method of characteristics solution technique, a three-dimensional ray-tracing method based on combinatorial geometry, a fixed
source iterative solution with anisotropic scattering, thermal-group upscattering treatments, and a nuclear cross-section data library based upon the ENDF/B-VI data
file. These benchmarks are identified in U.S. NRC Regulatory Guide 1.190 for the purpose of qualifying a methodology for performing reactor pressure vessel fast
fluence calculations. It is noted that the reference results are based on a 3D synthesis method and TRANSFX is a full 3D method, so some differences are expected.
The overall comparison of results gives a calculated-to-reference ratio of 1.09 with a standard deviation of ±0.11. This is within the uncertainty associated with the
reference values, and within the 20% uncertainty allowed by Reg. Guide 1.190, demonstrating that the TRANSFX Software is capable of performing neutron transport
calculations for evaluating RPV neutron fluence.
195
Domain Decomposition Method for 2D and 3D Transport Calculations Using Hybrid MPI/OPENMP Parallelism
R. Lenain, E. Masiello, F. Damian, R. Sanchez
CEA Saclay - DEN/DANS/DM2S/SERMA, Gif-sur-Yvette, France
In this paper we analyze the efficiency of the Domain Decomposition Method associated with the Coarse Mesh Finite Difference method. We evaluate the effectiveness of the algorithm for shared-memory parallelism. We also present the advantages of hybrid OpenMP/MPI parallelism for performing high-fidelity, large-scale calculations. We show that the CPU time for a best-estimate 2D whole-core calculation can be reduced from several days to a few minutes on a computer cluster. Finally, a high-fidelity 3D full assembly cluster calculation is compared to a Monte Carlo simulation. This case shows the challenges of advanced neutron transport simulation.
Reactor Physics
Monday, April 20, 2015
3:40 PM
Hermitage A-B
Chairs: Dr. Brendan M. Kochunas, Dr. Emily R. Shemon
289
Three Dimensional Benchmark Specification Based off of the Integral Inherently Safe Light Water Reactor (I2S-LWR)
Concept
Gabriel Kooreman, Ryan Hon, Farzad Rahnema, and Bojan Petrovic
Georgia Institute of Technology Nuclear and Radiological Engineering Atlanta, Georgia
The Integral, Inherently Safe light water reactor (I2S-LWR) is a new pressurized water reactor concept being developed by a multi-institutional team led by Georgia
Tech. The reactor's geometry is based on a Westinghouse 2-loop pressurized water reactor (PWR) design, while its power is similar to that of current large commercial PWRs (~1000 MWe). The paper describes the formulation of a whole-core benchmark problem based on the I2S-LWR concept. The core was simplified
into 58 distinct material regions for the generation of cross sections. The lattice physics code HELIOS version 1.10 was used to generate cross sections in 2, 4, 8, and
47 energy groups. A preliminary solution to the whole-core benchmark problem was then calculated using the Monte Carlo Code MCNP5 for 2, 4, 8, and 47 energy
groups.
198
VERA Core Simulator Methodology for PWR Cycle Depletion
Brendan Kochunas(1), Benjamin Collins(2), Daniel Jabaay, Shane Stimpson, Aaron Graham (1), Kang Seog Kim, William Wieselquist, Kevin
Clarno (2), Scott Palmtag(3), Thomas Downar(1) and Jess Gehin(2)
(1) Department of Nuclear Engineering and Radiological Sciences, University of Michigan, (2) Oak Ridge National Laboratory, Oak Ridge, Tennessee, (3) Core Physics Inc.
This paper describes the methodology developed and implemented in MPACT for performing high fidelity pressurized water reactor (PWR) multi-cycle core physics
calculations. MPACT is being developed primarily for application within the Consortium for the Advanced Simulation of Light Water Reactors (CASL) as one of the
main components of the VERA Core Simulator, the others being COBRA-TF and ORIGEN. The methods summarized in this paper include a methodology for
performing resonance self-shielding and computing macroscopic cross sections, 2-D/1-D transport, nuclide depletion, thermal-hydraulic feedback, and other
supporting methods. These methods represent a minimal set needed to simulate high fidelity models of a realistic nuclear reactor. Results demonstrating this are
presented from the simulation of a realistic model of the first cycle of Watts Bar Unit 1. The simulation, which approximates the cycle operation, is observed to be
within 50 ppm boron (ppmB) reactivity for all simulated points in the cycle and approximately 15 ppmB for a consistent statepoint. The verification and validation of the
PWR cycle depletion capability in MPACT is the focus of two companion papers [1,2].
262
Initial Verification of the High-Fidelity Neutron Transport Code PROTEUS for Heterogeneous Geometry Problems
Emily R. Shemon, Changho Lee, and Michael A. Smith
Argonne National Laboratory, Argonne, IL
The unstructured mesh-based neutron transport code PROTEUS was developed at Argonne National Laboratory under the DOE Nuclear Energy Advanced Modeling
and Simulation (NEAMS) program to provide a heterogeneous geometry neutron transport capability for coupled multi-physics nuclear reactor applications. As a code
verification effort, two 2D heterogeneous geometry reactor problems, the Advanced Test Reactor (ATR) and the C5 benchmark problem based on the C5G7 OECD
benchmark, are analyzed. Eigenvalue and flux solutions of PROTEUS are compared with multigroup Monte Carlo (MCNP) solutions using the same 23-group
cross sections in order to eliminate the source of error arising from the heterogeneous cross sections themselves. This procedure allows the solver itself to be verified.
Agreement is within 299 pcm for the ATR eigenvalue and within 4.6% for fuel plate fluxes. For the C5 core, eigenvalues from both codes agree within 125 pcm and
show excellent agreement for pin cell fluxes. This work verifies the accuracy of PROTEUS given a sufficiently refined finite element mesh and angular cubature. The
next phase of this work will focus on code verification with heterogeneous cross section generation.
M&S for Fusion Energy Systems
Monday, April 20, 2015
3:40 PM
Two Rivers
Chairs: Dr. Ahmad M. Ibrahim, Dr. Arkady Serikov
232
Applications of Supercomputing in Fusion Neutronics of ITER Diagnostic Ports
A. Serikov, U. Fischer (1), A. Suarez, R. Barnsley, L. Bertalot, R. O’Connor, R. Thenevin, V.S. Udintsev (2)
(1) Karlsruhe Institute of Technology (KIT), Institute for Neutron Physics and Reactor Technology, Eggenstein-Leopoldshafen, Germany, (2) ITER Organization, Saint Paul-lez-Durance, France
High performance computing resources of the HELIOS supercomputer have been harnessed for neutronics computational support of ITER design work. Accomplishments of this work have been realized in new design features of the shielding structure applied to the ITER Diagnostic Ports, particularly the Upper and Equatorial Port Plugs (UPP and EPP). The main objective of the nuclear analyses of the ports is to guarantee radiation shielding for personnel access to the Port Interspace (PI) area behind the port, while complying with the project restrictions on weight and maintaining consistency with the port interfaces to the blanket and vacuum vessel. For the shielding optimization of the ports, the objective function was the Shut-Down Dose Rate (SDDR) at the PI, with the target being the minimization of SDDR following the ALARA (As Low As Reasonably Achievable) principle. The paper presents new results of CAD-based neutronics analyses performed with the Monte Carlo MCNP5 code and the Direct 1-Step (D1S) method for SDDR calculations. Emphasis is given to the computational and methodological aspects of the analyses. The paper briefly presents a number of design solutions for the ITER port plugs (radiation stoppers, labyrinths, collimators, as well as the selection of shielding and low-activation materials) which, owing to the possible universality of the ports, might save R&D resources in the future. Particular shielding improvements are presented for the Diagnostic UPP #18, which encompasses three diagnostic systems in the full-size MCNP model of ITER (B-lite), and for the local MCNP model of EPP #17, which includes only one diagnostic, the Core-Imaging X-ray Spectrometer (CIXS). While global models like B-lite are more realistic, local models are faster for computation and parametric analysis, producing relative results in the search for the best shielding performance by varying geometry and material parameters.
141
Modelling of the Remote Handling Systems with MCNP – JET Fusion Reactor Example Case
Luka Snoj, Igor Lengar, Aljaž Čufar (1), Brian Syme, Sergey Popovichev (2), Sean Conroy (3), Lewis Meredith (2), and JET Contributors (1)
EUROfusion Consortium, JET, Culham Science Centre, United Kingdom, 1) Jožef Stefan Institute, Reactor Physics Department, Ljubljana, Slovenia, 2) Culham Centre for Fusion Energy, Culham Science
Centre, Abingdon, United Kingdom, 3) VR Association, Uppsala University, Department of Physics and Astronomy, Uppsala, Sweden
During the 2010-2011 shutdown the Joint European Torus (JET) has undergone a major transition in the first wall material from the coated CFC to ITER-Like Wall
(Be/W/C). After the transition the neutron detectors were experimentally recalibrated as different materials have vastly different neutron transport properties and thus
big changes in the material composition can significantly affect the responses of the neutron monitors. During the experimental campaign the JET remote handling
system (RHS) deployed the 252Cf neutron source on more than 200 positions inside the tokamak but as this is a massive system its effects on the neutron monitors
had to be evaluated. For this purpose we developed a simplified MCNP model of the RHS and, in order to generate the input files for all the neutron source positions
more easily, a script that translates the RHS movement data into the transformations that position individual parts of the RHS on the correct positions in the model. To
ensure that the movements of the RHS is correct we performed a series of tests including visual benchmarks on certain positions and comparison of the positions of
the neutron source that the RHS operators provided with positions calculated with our script. After the required agreement between the positions was achieved we
were able to calculate the effect that the RHS has on the neutron monitors and also provide some feedback to the RHS operators in order to decrease its effect.
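A minimal Python sketch of the kind of helper script described, assuming a hypothetical movement-data format of (x, y, z, rotation about the vertical axis); the TR card layout (translation followed by a 3x3 rotation matrix) follows standard MCNP conventions.

    import math

    def tr_card(n, x, y, z, theta_deg):
        # MCNP TRn card: a translation (x, y, z) followed by a 3x3 rotation
        # matrix, here a rotation by theta_deg about the vertical (z) axis.
        c = math.cos(math.radians(theta_deg))
        s = math.sin(math.radians(theta_deg))
        rot = (c, s, 0.0, -s, c, 0.0, 0.0, 0.0, 1.0)
        return "TR%d %g %g %g " % (n, x, y, z) + " ".join("%.6f" % r for r in rot)

    # one card per recorded RHS position (coordinates below are made up)
    moves = [(296.0, 120.5, -30.0, 12.0), (301.4, 118.2, -30.0, 14.5)]
    for i, (x, y, z, th) in enumerate(moves, start=1):
        print(tr_card(i, x, y, z, th))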
174
Monte Carlo Simulation Software SuperMC2.2 for Fusion and Fission Applications
Yican Wu, Jing Song, Liqin Hu, Pengcheng Long, Lijuan Hao, Mengyun Cheng, Tao He, Huaqing Zheng, Shengpeng Yu, Yuetong Luo,
Guangyao Sun, Zhenping Chen, Bin Wu, Quan Gan, Wen Wang, Dong Wang, Peng Ge, Chaobin Chen, Jun Zou, Zihui Yang, Jinbo Zhao, Ting
Li, Liu Hong, Hui Wang, Ling Fang
Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031, China
The Monte Carlo (MC) method is often regarded as the last resort for dealing with difficult reactor problems. However, many challenges prevent the application of MC methods in real fusion and fission engineering problems. SuperMC is a CAD-based MC program for integrated simulation of nuclear systems that makes use of hybrid MC-deterministic methods and advanced computer technologies. SuperMC 2.2, the latest version, can perform neutron, photon, and coupled neutron-photon transport calculations and is equipped with automatic modeling and visualization functions. In this paper, the main functions and features are introduced, including automatic geometry and physics modeling, the hybrid MC-deterministic transport method, advanced acceleration methods in the transport calculation, and the visualization and virtual simulation capabilities of SuperMC 2.2. The calculation results and calculation times of four representative fusion and fission cases from the SuperMC 2.2 benchmark suite, covering a fusion reactor, a fast reactor, an ADS, and a PWR, are presented and compared with MCNP. The results are in accordance, and the calculation speed of SuperMC is faster. The intelligent pre-processing and post-processing functions reduce the human effort required.
Mathematical Methods in Nuclear Nonproliferation and Safeguards Applications
Monday, April 20, 2015
3:40 PM
Belmont
Chairs: Prof. Shikha Prasad, Dr. Andrea Favalli, Dr. Shaheen A. Dewji, Dr. Stephen Croft
178
An effect of capture gammas, photofission and photonuclear neutrons to the neutron-gamma Feynman variance-to-mean ratios (neutron, gamma and total)
Dina Chernikova, Imre Pázsit (1), Stephen Croft (2), and Andrea Favalli (3)
(1) Chalmers University of Technology, Department of Applied Physics, Nuclear Engineering, Göteborg, Sweden (2) Oak Ridge National Laboratory (ORNL), TN (3) Los Alamos National Laboratory
(LANL), New Mexico
Two versions of the neutron-gamma variance-to-mean (Feynman-alpha) formula, for separate gamma detection and for total neutron-gamma detection, were recently derived and evaluated by Chernikova et al. [1]. However, the neutrons and gammas emitted in a photofission reaction, and the gammas released in certain thermal neutron capture reactions, were not included in the theoretical models of Chernikova et al. [1]. In this paper, in order to evaluate the influence of these types of reactions on the values of the neutron-gamma Feynman variance-to-mean ratios (neutron, gamma and total), we derive enhanced Feynman-alpha formulae for separate neutron detection, separate gamma detection, and total neutron-gamma detection. The theoretical derivation is based on the Chapman-Kolmogorov equation with the inclusion of general reactions, photofission, and capture gammas. The quantitative evaluation of the effect of capture gammas and photonuclear neutrons on the neutron-gamma Feynman variance-to-mean ratios (neutron, gamma and total) is carried out using reaction intensities obtained from MCNPX simulations. The new enhanced formulas and their impact on the final values of the different variance-to-mean ratios are the main subject of the discussion in the present paper.
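For orientation, the basic (uncorrected) Feynman variance-to-mean statistic can be computed from gated counts as in the Python sketch below; the paper's enhanced formulae add terms for photofission and capture gammas that are not reproduced here.

    import numpy as np

    def feynman_Y(gate_counts):
        # Y(T) = variance/mean - 1 of counts per gate of width T: zero for an
        # uncorrelated (Poisson) source, positive when fission chains correlate
        # the detections.
        c = np.asarray(gate_counts, dtype=float)
        return c.var(ddof=1) / c.mean() - 1.0

    rng = np.random.default_rng(1)
    print(feynman_Y(rng.poisson(lam=40.0, size=100_000)))  # ~0 for Poisson counts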
299
Spent Fuel Modeling and Simulation Using ORIGAMI for Advanced NDA Instrument Testing
Jianwei Hu, Ian Gauld, Andrew Worrall (1), Henrik Liljenfeldt (2), Se-Hwan Park (3), Anders Sjöland (2), In-Chan Kwon, and Ho-Dong Kim (3)
(1) Reactor and Nuclear Systems Division, Oak Ridge National Laboratory, Oak Ridge, TN, United States (2) Swedish Nuclear Fuel and Waste Management Company (3) Korea Atomic Energy Research
Institute, Daejeon, Republic of Korea
The Next Generation Safeguards Initiative Spent Fuel project funded by the National Nuclear Security Administration is at the final phase of developing several
advanced nondestructive assay (NDA) instruments to provide improved capabilities for spent fuel safeguards, including detection of partial defects, verification of
operator declarations, and quantification of plutonium content. Field tests of several instruments have been completed at facilities in the Republic of Korea and Japan.
Recent activities have focused on 25 PWR and 25 BWR spent fuel assemblies with diverse attributes at the Central Spent Fuel Interim Storage Facility in Sweden.
Measurement campaigns have been initiated and will continue over the next two years, including gamma-ray spectroscopy, Fork detector, calorimetry, and several
advanced NDA instruments such as Differential Die-away Self Interrogation, Differential Die-away, and Californium Interrogation Prompt Neutron methods. As part of
the field testing, high-fidelity computer models of the spent fuel assemblies and the NDA instruments are needed to help assess the instrument performance. Such
models are also essential for correlating the response of one instrument to that of another. These assembly models should provide well-characterized nuclide
inventories and the passive neutron and gamma ray emission spectra. A new three-dimensional assembly depletion capability, ORIGAMI, has been developed
recently at Oak Ridge National Laboratory to provide a convenient interface to the ORIGEN code in SCALE. This paper will describe how ORIGAMI was used to
develop the assembly models using detailed fuel assembly design and reactor operating data, and demonstrate some results generated by this code. Comparison of
these results with experimental data will be reported once the experimental data are published in the near future.
82
Calculation of Prompt Neutron Decay Constant with Monte Carlo Differential Operator Sampling
Yasunobu Nagaya
Nuclear Science and Engineering Center, Japan Atomic Energy Agency
A new method to calculate the prompt neutron decay constant (alpha value) with the Monte Carlo method is proposed. It is based on the conventional alpha-k search
scheme but no iteration is required for the alpha value search. A 1/v poisoning perturbation is considered for a system where only prompt fission neutrons are
generated. The k eigenvalue is then expressed in a truncated Taylor series with regard to alpha; the differential coefficients are calculated with the differential operator
sampling, which is one of the Monte Carlo perturbation techniques. The first- and second-order Taylor approximations are considered in the present work and the
alpha value is determined analytically such that the k eigenvalue is unity. In order to examine the applicability of the proposed method, verification has been performed
for simple geometries: the bare fast system Godiva and the unreflected thermal system STACY. Comparisons have been made with a pulsed neutron source (PNS) simulation and with a direct calculation from the definition of the alpha value. In the PNS simulation, the alpha value is obtained by least-squares fitting of the time-dependent neutron flux. The results of the proposed method show good agreement with the reference PNS simulation and the direct calculation.
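The final algebraic step can be illustrated with a short Python sketch: given the k-eigenvalue and its first and second derivatives with respect to alpha (obtained in the paper by differential operator sampling), solve the truncated Taylor expansion k(alpha) = 1 analytically. The numbers below are illustrative only.

    import math

    def alpha_first_order(k0, dk):
        # solve k0 + dk * a = 1
        return (1.0 - k0) / dk

    def alpha_second_order(k0, dk, d2k):
        # solve k0 + dk * a + 0.5 * d2k * a**2 = 1; keep the root closest to
        # the first-order estimate
        a1 = alpha_first_order(k0, dk)
        disc = dk * dk - 2.0 * d2k * (k0 - 1.0)
        roots = [(-dk + s * math.sqrt(disc)) / d2k for s in (1.0, -1.0)]
        return min(roots, key=lambda r: abs(r - a1))

    print(alpha_second_order(k0=0.98, dk=-2.0e-5, d2k=1.0e-10))  # toy numbers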
Technical Poster Session
Monday, April 20, 2015
Plantation Lobby
Chair: Christopher Perfetti
325
Neutronics Analyses for the SNS Second Target Station
Igor Remec, Franz X. Gallmeier, Mark J. Rennich, Thomas J. McManamy, Wei Lu
Oak Ridge National Laboratory, Oak Ridge, TN 37831-6476
The first round of optimization of the target/moderator/reflector assembly for the Spallation Neutron Source Second Target Station (STS) at Oak Ridge National Laboratory was completed, with the goal of achieving cold neutron pulses with outstanding brightness. The simulations were performed with the MCNPX particle transport code within a global optimization procedure. Based on the projected STS neutron scattering instruments, preferential treatment was given to the coupled moderators. Promising results were obtained, mostly due to the explicit optimization for peak brightness and the resulting compact target and moderator design. For the STS operating at 467 kW beam power and 10 Hz, the projected gains in peak brightness are 10 to 13 times for the coupled para-H2 moderators, 3 times for the decoupled para-H2 moderator, and 4 times for the decoupled H2O moderator with respect to the First Target Station operating at 2 MW beam power and a 50 Hz pulse repetition rate.
258
Simulation of a Commercial Siemens PET Scanner Using the Monte Carlo-based GATE Code
J.J. Giner-Sanz, S. Gallardo, G. Verdú (1), C. Torrijo (2)
(1) Instituto de Seguridad Industrial, Radiofísica y Medioambiental (ISIRYM), Universitat Politècnica de València, Valencia, Spain, (2) Depto. Medicina Nuclear Hospital Casa de Salud, Valencia, Spain
In this work, a computational model of the Siemens Biograph 2 CT/PET scanner located at “Hospital La Salud” (Valencia, Spain) was elaborated using GATE (Geant4 Application for Tomographic Emission). The NEMA standard NU-2 2001 protocol was used to validate this computational model. The parameters used for the model validation were the spatial resolution, the scatter coincidence rate, and the random coincidence rate. These parameters were selected since they are key parameters for the assessment of PET scanner performance. The definition considered for these parameters was that given in the NEMA standard NU-2 2001. The different NEMA protocol tests were simulated using the built computational model, and the values of these parameters were compared to the parameters obtained experimentally or provided by the scanner supplier. Moreover, a particular phantom acquisition was simulated using the computational model, and the reconstructed image was compared to the experimental phantom. Finally, the energy spectrum of the photons detected by the scanner detectors was analyzed. All these verifications showed that the elaborated computational model behaves as the real scanner does.
56
A Domain Decomposition Method in APOLLO3® Solver, Minaret
Nans Odry, Jean-François Vidal, and Gérald Rimpault (1), Anne-Marie Baudron and Jean-Jacques Lautard (2)
1) CEA, DEN, CADARACHE, SPRC-LEPh, St Paul Les Durance, France, 2) CEA, DEN, SACLAY, SERMA-LLPR, Gif sur Yvette cedex, France.
The aim of this paper is to present the latest developments of the Domain Decomposition Method inside the APOLLO3 core solver MINARET. The fundamental idea involves splitting a large boundary value problem into several similar but smaller sub-problems. Since the sub-problems are only connected through their boundary conditions, each of them can be solved independently. The Domain Decomposition Method is therefore a natural candidate for introducing more parallel computing into deterministic schemes. Yet the real originality of this work does not rest on the well-tried Domain Decomposition Method, but on its implementation inside the Sn transport solver MINARET. The first verification elements show perfect agreement between the Domain Decomposition and the standard whole-core schemes, in terms of both the effective multiplication factor and the flux mapping. A relatively low increase of computation time due to Domain Decomposition can be observed. This is very encouraging for future performance, particularly once parallelization and diffusion-based acceleration are implemented. This combination will make the new scheme an efficient tool, able to deal with the large variety of geometries needed for nuclear core concepts.
116
Dose Estimation for Complex Urban Environments Using RUGUD, SWORD, ADVANTG and Denovo
Andy Li and George Lekoudis (1), Scott Mosher, Tom Evans, and Seth Johnson (2)
1) Applied Research Associates, Inc., Arlington, VA, 2) Oak Ridge National Laboratory, Oak Ridge, TN
This paper proposes a novel dose estimation methodology using a combination of RUGUD, SWORD, ADVANTG, and Denovo to estimate radiation doses to populations residing in complex urban environments. Previous methodologies use MCNP in combination with geometries generated with proprietary software or in-house fast-running tools. In comparison, the proposed method uses a combination of software that is being actively developed for governmental users. In the proposed method, the urban geometry is represented by a RUGUD-generated CTDB, SWORD is used to add scenario-specific information to the geometry, and subsequently ADVANTG and Denovo are used to perform a set of discrete ordinates calculations to determine the flux map resulting from the source and geometry. Subsequent processing tools can be used to convert the relevant flux maps to doses and other quantities of interest. The proposed methodology requires minimal user input and a relatively short computation time (~5 hours wall time); it represents an alternative way to compute flux and dose in urban environments. In this paper, the RUGUD-SWORD-ADVANTG-Denovo toolchain is first described. Then, a use-case scenario is presented to examine the dose to the outdoor population associated with a hypothetical nuclear detonation in Washington, D.C. Finally, the limitations and future directions of this work are discussed.
32
A Control Theory Approach to Adaptive Stepsize Selection for Lattice Physics Depletion Simulations
Daniel J. Walter and Annalisa Manera
Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, Michigan
A control theory approach is adopted to determine the temporal discretization during lattice physics depletion simulations. The primary benefit of automated and
adaptive stepsize control is realized in high-fidelity multiphysics simulations, e.g. lattice depletion loosely coupled with other physics, where the coupled physics are
nonlinear in time and stepsize changes may be necessary to obtain an accurate coupled solution. A conventional predictor-corrector method is used to address the
nonlinearity of the nuclide transmutation and neutron flux. The one-group scalar neutron flux is monitored at both the predictor and corrector steps to approximate the
convergence residual of the nonlinear solution. User-specified tolerances on changes in the scalar neutron flux are utilized by the stepsize controller. A proportional-integral controller is parameterized for two-dimensional pressurized water reactor 17x17 fuel pin assemblies. Three distinct fuel loadings are considered,
including no burnable absorbers, Integral Fuel Burnable Absorber, and gadolinium fuel pins. The required depletion stepsizes, as predicted throughout the cycle by the
controller, are compared with a very small stepsize (0.01 MWd/kgHM) reference solution and a solution obtained by a typical rule of thumb depletion stepsize
sequence.
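As a hedged sketch of the general idea (the paper's gains and error measure are not reproduced here), a proportional-integral stepsize controller in Python might look like the following, with the monitored flux change playing the role of the error.

    def pi_stepsize(h, err, err_prev, tol, kP=0.075, kI=0.175, order=2):
        # Classic PI stepsize rule: grow/shrink the step so the monitored error
        # (e.g., the flux change between predictor and corrector) tracks tol.
        factor = (tol / err) ** (kI / order) * (err_prev / err) ** (kP / order)
        return h * min(2.0, max(0.5, factor))  # limit how fast the step changes

    h, err_prev = 0.5, 1.0e-3  # MWd/kgHM and an initial error estimate
    for err in (8.0e-4, 2.0e-3, 5.0e-4):
        h = pi_stepsize(h, err, err_prev, tol=1.0e-3)
        err_prev = err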
95
Performance of Predictor-Corrector Algorithms in Modelling Gadolinium Burn Out
W. Haeck, R. Ichou, and L. Jutier
Institut de Radioprotection et de Sûreté Nucléaire (IRSN), Fontenay-aux-Roses, France
A study of the impact of the time step size and the application of predictor-corrector algorithms on the calculation of gadolinium burn-out in the EGBUC Phase IIIC Benchmark is presented. Initially, the standard calculation procedure applied for VESTA calculations (which was also applied to the benchmark calculation) used time steps of 1 MWd kgHM−1 with a predictor only. On the benchmark case, this led to an underestimation of the kinf value below 16 MWd kgHM−1 as well as an overestimation of the burnup at which the peak kinf value was obtained. It has been concluded that the initial calculation procedure is not capable of correctly capturing the spectral changes due to gadolinium burn-out in the benchmark. As a result, a new standard procedure for VESTA calculations has been proposed for PWR and BWR applications. The new procedure uses time steps of 0.5 MWd kgHM−1 with a projected predictor-corrector algorithm. A comparison with a reference calculation with an extremely small time step size (of the order of 0.01 MWd kgHM−1) has shown that the new procedure produces results that are statistically equivalent to the reference calculation.
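For readers unfamiliar with the terminology, the following Python sketch shows the textbook predictor-corrector depletion pattern; the "projected" variant used in VESTA adds its own projection of the reaction rates, which is not reproduced here, and the rates_of function is a hypothetical stand-in for a flux solve.

    import numpy as np
    from scipy.linalg import expm

    def deplete(n0, A, dt):
        # advance the nuclide vector n0 over dt with a constant burnup matrix A
        return expm(A * dt) @ n0

    def predictor_corrector_step(n0, rates_of, dt):
        A_bos = rates_of(n0)             # burnup matrix from a BOS flux solve
        n_pred = deplete(n0, A_bos, dt)  # predictor: constant BOS rates
        A_eos = rates_of(n_pred)         # rates re-evaluated at end of step
        n_corr = deplete(n0, A_eos, dt)  # corrector: constant EOS rates
        return 0.5 * (n_pred + n_corr)   # averaged end-of-step composition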
146
The SCALE 6.2 ORIGEN API for High-Performance Depletion
W. A. Wieselquist
Oak Ridge National Laboratory, Oak Ridge, Tennessee
In recent years, the SCALE 6.2 development efforts have included modernization of key components of the modular SCALE code system. The ORIGEN depletion/decay module has received extensive improvements, including an application programming interface (API) for both C++ and Fortran with a modern object-oriented design, and various solver enhancements. This paper highlights the API capabilities that are currently available in beta release for embedding ORIGEN depletion calculations in other codes, e.g. for coupled transport/depletion calculations. One important highlight is the new ability to create small, special-purpose burnup/decay chains from the general-purpose ORIGEN burnup/decay chain of ~2200 nuclides. These simplified burnup chains are especially important when ORIGEN is used in high-performance, coupled transport/depletion problems with limited memory resources, such as full-core, pin-resolved deterministic transport in the CASL core simulator MPACT.
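The chain-reduction idea can be illustrated conceptually in a few lines of Python (this is not the actual ORIGEN API, whose C++/Fortran interfaces are described in the paper): keep the nuclides of interest plus everything reachable from them.

    def reduce_chain(chain, keep):
        # chain: {parent: [daughters]}; keep: nuclides of interest. Returns the
        # sub-chain containing 'keep' and everything reachable from it.
        reachable, stack = set(), list(keep)
        while stack:
            nuc = stack.pop()
            if nuc not in reachable:
                reachable.add(nuc)
                stack.extend(chain.get(nuc, []))
        return {p: [d for d in ds if d in reachable]
                for p, ds in chain.items() if p in reachable}

    toy = {"U238": ["U239"], "U239": ["Np239"], "Np239": ["Pu239"], "U235": ["U236"]}
    print(reduce_chain(toy, keep=["U238"]))  # drops the U235 branch entirely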
- 16 -
MC2015 : M&C + SNA + MC 2015
53
A Multigrid Method for the Self-Adjoint Angular Flux Equation Based on Cellwise Block Jacobi Iteration
Jeffery D. Densmore and Daniel F. Gill (1), Justin M. Pounders (2)
1) Bettis Atomic Power Laboratory, West Mifflin, PA, USA, 2) Nuclear Engineering Program, University of Massachusetts Lowell, Lowell, MA
We present a multigrid method for the Self-Adjoint Angular Flux (SAAF) equation in two-dimensional Cartesian geometry. For smoothing, we employ a multistage
smoother based on cellwise block Jacobi iteration. In addition, we find that simply applying the same discretization on coarse grids as used for the fine grid, i.e., direct
coarse-grid approximation, can severely limit the overall rate of convergence. Instead, we employ a discretization constructed in part using Galerkin coarse-grid
approximation. With a set of numerical examples, we demonstrate that our multigrid method is effective under a wide range of conditions.
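A hedged numpy sketch of the two ingredients named above is given below; the matrices here are generic stand-ins for the SAAF discretization, and P is an assumed prolongation (coarse-to-fine interpolation) matrix supplied by the caller.

    import numpy as np

    def block_jacobi_smooth(A, b, x, block, sweeps=2):
        # cellwise block Jacobi: solve each small diagonal block against the
        # current residual
        n = A.shape[0]
        for _ in range(sweeps):
            for i in range(0, n, block):
                sl = slice(i, min(i + block, n))
                r = b - A @ x
                x[sl] += np.linalg.solve(A[sl, sl], r[sl])
        return x

    def two_grid_cycle(A, b, x, P, block):
        x = block_jacobi_smooth(A, b, x, block)          # pre-smoothing
        Ac = P.T @ A @ P                                 # Galerkin coarse operator
        x += P @ np.linalg.solve(Ac, P.T @ (b - A @ x))  # coarse-grid correction
        return block_jacobi_smooth(A, b, x, block)       # post-smoothing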
175
Scalability Benchmarking Methodology for Hybrid Parallel Core Calculations with the Code nTRACER
S. Canepa, M. Krack, H. Ferroukhi (1), A. Pautz (1,2)
(1) Paul Scherrer Institut, PSI, Switzerland (2) Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland
As part of the collaboration with Seoul National University, the code nTRACER was recently adopted at the Paul Scherrer Institut as a candidate solver for 3D direct pin-by-pin core transport calculations. Because of its hybrid parallel MPI/OpenMP implementation, testing the code's scalability on the in-house mid-range computing cluster MERLIN-4 was considered a key element to guide the assessment activities and to adapt the overall code validation and verification strategy. Therefore, a scalability benchmarking methodology for nTRACER has been set up. The study is performed by monitoring the wall-clock execution time of some of the tasks performed by the code, relying on the code's internal measurement method. Most of the monitored tasks include serial and parallel implementations of the executed routines. In addition, particular attention has been paid to the evaluation of the thermal-hydraulic feedbacks, which do not directly affect the total computational time but have an impact on the other tasks performed by the code, in particular the resonance treatment. As a result, the code shows good scalability when the spatial domain is decomposed and assigned to different MPI processes. However, the tasks solved in parallel with several OpenMP threads show limited scalability. The limit on the maximum number of threads that can be used efficiently is most probably related to memory allocation that disregards the presence of two NUMA nodes in each physical computing node. Finally, the code shows proper weak scaling, opening the possibility of increasing the detail of the solution whenever more computational resources become available.
186
Fourier Analysis of a Nonlinear Two-Grid Method for Multigroup Neutron Diffusion Problems
Dmitriy Y. Anistratov, Luke R. Cornejo, and Jesse P. Jones
Department of Nuclear Engineering, North Carolina State University, Raleigh, NC
We analyze a nonlinear acceleration method for solving multigroup diffusion equations in multidimensional geometry. It uses two energy grids: (i) original energy
groups and (ii) one coarse group. We perform theoretical studies of stability of the nonlinear two-grid (NTG) iteration method for fixed-source and k-eigenvalue
problems. The Fourier analysis is applied to the NTG equations linearized near the solution of infinite-medium problems. The developed analysis of the NTG method
enables us to predict its convergence properties in various types of neutron diffusion problems. Numerical results of problems in 2D Cartesian geometry are presented
to confirm theoretical predictions.
223
Non-Linear Iterative Method for Radiative Transfer Problems
Anthony P. Barbu and Marvin L. Adams
Texas A&M University, College Station, TX
We present thermal radiation-transport solution techniques that use gray (one-group) diffusion low-order equations to speed iterative convergence. In the Gray
Diffusion Acceleration (GDA) method, a diffusion adaptation of Grey Transport Acceleration (GTA) by Larsen, a gray diffusion equation is used to accelerate or
precondition linear iterations for the absorption-rate density (ARD). We discuss theoretical considerations and present results from test problems that include Marshak waves and the well-known "tophat" problem.
279
Current Coupling Collision Probability Method with Orthonormal Flux Expansion for Unstructured Geometries
Dzianis Litskevich and Bruno Merk
Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
An advanced methodology based on the current coupling collision probability method with an orthonormal expansion of the flux is proposed and applied to the case of an outer arbitrary convex polygon with annular zones inside. The Gram-Schmidt procedure is applied for the orthogonalization of the polynomials. The results of the calculations for single unit cells show good agreement with the results of Monte Carlo calculations. The results of pin-power reconstruction using the proposed methodology coupled into a nodal solver for a test mini-core demonstrate good agreement with the results of full-core transport calculations.
316
Convergence study of Rattlesnake solutions for the two-dimensional C5G7 MOX benchmark
Yaqi Wang, Mark D. DeHart, Derek R. Gaston, Frederic N. Gleicher, Richard C. Martineau, Javier Ortensi, John W. Peterson, and Sebastian
Schunert
Idaho National Laboratory, Idaho Falls, ID, USA
This work presents a convergence study of the Rattlesnake self-adjoint angular flux (SAAF) implementation for the two-dimensional C5G7 benchmark problem.
Angular direction and space are discretized using the discrete ordinates (SN) method and continuous finite element methods (CFEM), respectively. Convergence results with respect to angular and spatial refinement are reported. The Rattlesnake computed eigenvalue and power distribution converge to the reference solution to within 10 pcm and 0.001% root mean square error. The largest calculation features in excess of 10 billion degrees of freedom and executes in under 6 hours on 2376 CPUs. This demonstrates the ability of MOOSE applications to solve extremely large problems in an efficient manner.
323
Preserving Positivity of Solutions to the Diffusion Equation for Higher-Order Finite Elements in Under-Resolved Regions
Thomas A. Brunner
Lawrence Livermore National Laboratory, Livermore, CA 94550
Higher order finite element methods hold a lot of promise for doing more useful work per memory accessed and stored, which might be advantageous on future
computer architectures. But in problems where there are under-resolved features such as boundary layers, the higher order methods often produce non-physical
solutions. Traditionally, for linear finite elements, the mass matrix is lumped to preserve positivity. But this technique fails to restore positivity for higher-order elements.
We propose a different solution, where the higher-order zone is refined and a low-order method is used to discretize it, all while keeping the number of unknowns and their locations fixed so that they can be interpreted using both the low-order and high-order basis functions. This restores positivity in a simple test problem while preserving the convergence rate of the high-order method.
204
Evaluation of the RACER Monte Carlo Fuel Depletion Benchmark Using TRANSLAT Version 2.10
Dean B. Jones
TransWare Enterprises Inc., Sycamore, Illinois
TRANSLAT is a lattice physics code developed by TransWare Enterprises Inc. This paper evaluates the eigenvalue, reaction rate, and fuel depletion methods of the
code against the results of a Monte Carlo fuel depletion benchmark. The benchmark problem is a highly heterogeneous BWR, 8x8, D-lattice fuel assembly design with
varying fuel enrichments, a heavy loading of gadolinium, a large central water rod, and wide and narrow water gap regions. Results are provided for calculated k-infinites and pin-wise fission density distributions over a burnup history of 17,000 hours. The fuel depletion benchmark was evaluated with TRANSLAT using two
transport methods and two fuel depletion methods. The results show that the TRANSLAT methods fall within the confidence intervals of the Monte Carlo k-infinite
calculations for 18 of 36 burnup steps using a 2-flux per burnup step depletion method. TRANSLAT also predicts the same peak pin locations, with differences in the
peak pins ranging from 0.5% to 0.7%. The RMS values calculated for the pin-wise fission density distribution ranged from 1.3% to 1.6%. The MCNP Monte Carlo code
was used to cross-check the TRANSLAT-RACER results. It is shown that the RACER and MCNP k-infinites agree to within 32 pcm. TRANSLAT over-predicted the
MCNP k-infinite by 79 pcm using collision probabilities and 121 pcm using the method of characteristics. Comparisons of the TRANSLAT-MCNP pin-wise reaction
rate distributions show RMS values of 0.8% for the fission rates and 0.5% for the capture rates.
153
Full-Core PWR Transport Simulations on Xeon Phi Clusters
David Ozog and Allen D. Malony (1), Andrew Siegel (2)
(1) Department of Computer and Information Science, University of Oregon, Eugene, OR (2) Center for Exascale Simulation of Advanced Reactors, Argonne National Laboratory, Argonne, IL
Recent work has shown that the Intel Xeon Phi platform satisfactorily accelerates Monte Carlo neutron transport calculations. In that work, an event-based algorithm
attains very high performance gains on simplified simulations, but it is difficult to realize a full featured implementation that incorporates sophisticated physical
phenomena such as incoherent inelastic scattering, unresolved resonance range calculations, and thermal motion effects. On the other hand, it is possible to
incorporate all physics in a scalable Xeon Phi implementation by running MPI in symmetric mode with slightly modified code. It is prudent to understand and optimize
this full-physics implementation on the Xeon Phi. In symmetric mode, the best performance occurs when balancing load across the CPU and Xeon Phi device(s) in
congruence with their different calculation rates. Because the Xeon Phi calculates up to 2.2 times faster than the CPU, it can handle approximately twice the number of particles per neutron batch. The value of this load balance ratio between CPU and coprocessor depends on the number of materials (or nuclide isotopes) in the input
geometry: it increases with the total number of isotopes in the nuclear fuel. This suggests that high-fidelity reactor core simulations will perform well on the Xeon Phi
and will scale to a large number of processors. This paper analyzes this phenomenon and shows the best performance gains achieved with 4 Xeon Phi devices per
node using appropriate choices of the load balance ratio.
300
Status of ARCHER – A Monte Carlo Code for the High-Performance Heterogeneous Platforms Involving GPU and MIC
Tianyu Liu, Noah Wolfe, Christopher D. Carothers, Wei Ji, and X. George Xu
Rensselaer Polytechnic Institute, Troy, NY 12180
Accelerators such as Graphics Processing Units (GPUs) and Many Integrated Core (MIC) coprocessors are advanced computing devices with outstandingly high
computing performance and energy efficiency. The Monte Carlo transport simulation community views these advanced devices as an opportunity to effectively reduce
the computation time for performance-critical applications. In this paper, we report on our recent progress in developing ARCHER (Accelerated Radiation-transport
Computations in Heterogeneous EnviRonments), an innovative parallel Monte Carlo code for accurate and fast dosimetry applications on the CPU, GPU and MIC
platforms.
301
Concurrent CPU, GPU and MIC Execution Algorithms for ARCHER Monte Carlo Code Involving Photon and Neutron
Radiation Transport Problems
Noah Wolfe and Christopher Carothers (1), Tianyu Liu and X. George Xu (2)
(1) Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY (2) Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY
ARCHER-CT and ARCHER-Neutron are Monte Carlo photon and neutron transport applications that have now been updated to utilize CPU, GPU and MIC computing
devices concurrently. ARCHER detects and simultaneously utilizes all CPU, GPU and MIC processing devices available. A different device layout and load-balancing
algorithm is implemented for each Monte Carlo transport application. ARCHER-CT utilizes a new "self service" approach that efficiently and effectively allows each
device to independently grab portions of the domain and compute concurrently until the entire CT phantom domain has been simulated. ARCHER-Neutron uses a
dynamic load-balancing algorithm that distributes the particles in each batch to each device based on its particles per second rate for the previous batch. This
algorithm allows multiple architectures and devices to execute concurrently. A near linear scaling speedup is observed when using only GPU devices concurrently.
New timing benchmarks using various combinations of Intel and NVIDIA devices are made and presented for each application. A speedup of 16x for
ARCHER-Neutron and 44x for ARCHER-CT has been observed when utilizing an entire 4U, 9 device heterogeneous computing system composed of an Intel CPU, an
Intel MIC and 7 NVIDIA GPUs.
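A hedged Python sketch of the dynamic load-balancing rule described: split the next batch across devices in proportion to each device's measured particles-per-second rate from the previous batch. The device names and rates are illustrative only.

    def split_batch(n_particles, rates):
        # rates: particles/s measured for each device on the previous batch
        total = sum(rates.values())
        alloc = {dev: int(n_particles * r / total) for dev, r in rates.items()}
        fastest = max(rates, key=rates.get)
        alloc[fastest] += n_particles - sum(alloc.values())  # rounding remainder
        return alloc

    print(split_batch(1_000_000, {"cpu": 1.1e5, "mic": 1.9e5, "gpu0": 6.4e5}))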
308
Application of CMFD with Wielandt Method on Continuous Energy Monte Carlo Simulation for Eigenvalue Problems
Hyunsuk Lee and Deokjung Lee
Ulsan National Institute of Science and Technology, Ulsan, 689-798, Korea
The Coarse Mesh Finite Difference (CMFD) method has been widely used to accelerate the convergence of deterministic methods. Recently, there has been research on applying CMFD to accelerate Monte Carlo (MC) calculations as well. Unlike deterministic methods, MC uses inactive cycles to converge the fission source and active cycles to tally the results. It has been shown that the CMFD method can accelerate the fission source convergence of MC, meaning that the number of inactive cycles can be reduced. It was also expected that CMFD would benefit the active cycles if it could cut the inter-cycle correlation; however, it turns out that CMFD itself carries inter-cycle correlation, since the parameters for the CMFD calculation are generated by MC. To reduce the inter-cycle correlation of CMFD-accelerated MC, the Wielandt method was adopted. In this paper, the inter-cycle correlations of CMFD, the Wielandt method, and CMFD combined with the Wielandt method are studied.
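As a hedged illustration on a small matrix stand-in (not the authors' Monte Carlo implementation), Wielandt-shifted power iteration for the k-eigenvalue problem looks like the following; the shift k_e reduces the dominance ratio, which is what weakens the cycle-to-cycle correlation.

    import numpy as np

    def wielandt_k(A, F, k_e=1.2, cycles=100):
        # power iteration for A x = (1/k) F x with part of the fission source
        # shifted to the left-hand side via 1/k_e (k_e must exceed the true k)
        x = np.ones(A.shape[0])
        M = A - F / k_e                          # shifted loss operator
        for _ in range(cycles):
            y = np.linalg.solve(M, F @ x)        # one "cycle" with the shift
            mu = (F @ y).sum() / (F @ x).sum()   # eigenvalue of M^-1 F
            x = y / np.linalg.norm(y)
        return 1.0 / (1.0 / k_e + 1.0 / mu)      # undo the shift to recover k

    A = np.array([[1.0, -0.2], [-0.2, 1.0]])     # toy loss matrix
    F = np.array([[0.6, 0.3], [0.3, 0.6]])       # toy fission matrix
    print(wielandt_k(A, F))                      # ~1.125 for this toy pair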
236
First-Principles Calculation Studies on Cesium in Environmental Situations: Hydration Structures and Adsorption on
Clay Minerals
Masahiko Machida, Masahiko Okumura, and Hiroki Nakamura (1), Kazuhiro Sakuramoto (2)
(1) CCSE, Japan Atomic Energy Agency, Kashiwa, Chiba, 278-0871 Japan, (2) Foundation for Promotion of Material Science and Technology of Japan, Tokyo 157-0067, Japan.
In order to clarify the physicochemical behavior of radioactive Cs released into the environment from the Fukushima Daiichi nuclear power plants, we study two issues, i.e., the hydration structure of Cs+ and its adsorption on a specific edge of a clay particle (mica), by employing first-principles calculations. After the fallout on land, radioactive Cs+ inside water droplets was mainly adsorbed on clay surfaces and transferred to a specific edge called the “Frayed Edge Site”, which strongly stabilizes Cs. Thus, Cs mainly migrates together with the clay particles by water flow, while a part of the Cs is dissolved into water as a cation again. Relevant to these transport processes, we find that the above two situations are important as the chemical forms of Cs in the environment. However, since the two forms are too small to study experimentally in detail, first-principles calculation is a powerful tool. In this paper, we first report on the hydration structure of Cs+ obtained using Born-Oppenheimer first-principles molecular dynamics. Our striking finding is that Cs+ has no clear second hydration shell, in contrast to the other alkali cations. Secondly, we construct a model of the Frayed Edge Site and confirm, through the calculation of the ion-exchange energy, that the model actually becomes selective for Cs when the interlayer distance is expanded from that of the original crystal structure.
327
Uncertainty Analysis for Non-Destructive Assay with Application to an On-Line Enrichment Monitor
Kenneth D. Jarman, L. Eric Smith, Richard S. Wittman, Mital A. Zalavadia (1), Stephen Croft (2), and Tom Burr (3)
(1) Pacific Northwest National Laboratory, Richland, WA, USA (2) Oak Ridge National Laboratory, Oak Ridge, TN, USA (3) International Atomic Energy Agency, Vienna, Austria
Simulation tools, validated by real measurements across multiple scenarios, represent an important component of non-destructive assay technology development for
nuclear safeguards applications. Such tools are needed because realistic models of fielded measurement methods are too complicated to be expressed with
conventional assumptions about variations in data and/or model parameters. Especially important is the ability in simulations to characterize all relevant sources of
random and systematic uncertainty and identify the major contributing factors to overall uncertainty in the parameters to be assayed, for example uranium enrichment
or plutonium mass. The view of uncertainty quantification as presented here is to add to a growing set of modeling, simulation, and analysis tools that are easily
implemented by technology developers to support exploration of different instrument designs and data analysis methods. The combination of simulation and
uncertainty quantification, referred to here as an “uncertainty emulator,” is intended to enable rapid exploration of uncertainty under different measurement scenarios
and conditions. This approach can, for example, help to identify which designs and algorithms meet IAEA requirements, and help identify where to focus effort to
reduce uncertainty. We describe the general concept and demonstrate it on the IAEA’s prototype On-Line Enrichment Monitor. We illustrate the utility of the uncertainty
quantification process by comparing different formulations of enrichment and uncertainty estimates and consider tradeoffs between them.
66
The Geant4 Version 10 Series
Makoto Asai, Andrea Dotti, Dennis H. Wright (1), Gabriele Cosmo, Vladimir Ivantchenko,
Albert Ribon (2), Laurent Garnier (3), Ivana Hřivnáčová (4), Sebastien Incerti (5), and Marc Verderi (6)
1) SLAC National Accelerator Laboratory, 2) CERN, 3) Laboratoire de L’accélérateur Linéaire, IN2P3, 4) Institut de Physique Nucléaire Orsay, Université Paris-Sud, IN2P3, 5) Centre d’Etudes nucléaires
de Bordeaux Gradignan, IN2P3, 6) Laboratoire Leprince-Ringuet - École polytechnique, IN2P3
A major release of Geant4 version 10.0 was made as scheduled in December 2013, which included the adoption of multithreading, improved treatment of isomers, the
extension and improvement of physics models, enhancements in variance reduction options, improvements to the geometry modeler with a revised implementation of
most geometrical primitives, advances in histogramming tools, and visualization and graphical user interfaces. In December of 2014, version 10.1 was released, which
built on the advances of release 10.0. The implementation of multithreading and its benchmarking results will be discussed. Certain electromagnetic and hadronic
model extensions added in 10.0 and 10.1 will also be discussed along with their effects on validation results with experimental data. Also, new variance reduction
options, an embedded histogramming tool with multithreading capability, and visualization and graphical user interface improvements will be highlighted. Finally, the prospects for short- and longer-term refinements of the toolkit beyond version 10.1 will be discussed.
131
Comparisons of 3D Heterogeneous PWR Full-Core Transport Calculations by RMC and the SN DOMINO Solver from the COCAGNE Platform Using Multi-Group Cross-Sections
Shichang Liu, Yishu Qiu, Kan Wang, Jingang Liang
Department of Engineering Physics, Tsinghua University, Beijing, P.R. China
With the development of computational technology, three-dimensional (3D) transport calculations with both deterministic and Monte Carlo methods can be applied to full-core problems. Code-to-code verification on full-core benchmarks is essential for both deterministic and Monte Carlo methods. EDF R&D and the Department of Engineering Physics of Tsinghua University collaborate on code-to-code comparisons between the Cartesian SN DOMINO solver of the COCAGNE platform and the Monte Carlo code RMC. This study focuses on three new 3D PWR full-core benchmarks, to compare the DOMINO and RMC solvers on configurations with anisotropic scattering and on configurations that are asymmetric due to the presence of control rods. The results of DOMINO and RMC agree well. It can be concluded that both DOMINO and RMC are capable of 3D PWR full-core transport solutions with heterogeneous asymmetry and anisotropic scattering.
163
New Features and Enhancements of Reactor Monte Carlo Code RMC
Kan Wang, Jingang Liang, Jiankai Yu, Xiao Fan, Yishu Qiu, Shichang Liu, Feng Yang, Xiaotong Shang, Gaochen Wu, Qicang Shen, Ouwen
Yexin, Zonghuan Chen and Ganglin Yu (1), Qi Xu(2), Ding She, Zeguang Li, Songyang Li (3)
(1) Department of Engineering Physics, Tsinghua University, Beijing, P.R. China (2) Institute of Applied Physics and Computational Mathematics, Beijing (3) Institute of Nuclear and New Energy
Technology, Tsinghua University, Beijing
This paper summarizes the new features and enhancements of the Reactor Monte Carlo code RMC. The random number generator's period has been extended to 2^126 by implementing 128-bit integer multiplication based on the Linear Congruential Algorithm. On-the-fly Doppler broadening methods are explored in two feasible ways, i.e., pre-Doppler broadening before the transport calculation and stochastic sampling Doppler broadening based on the Maxwell-Boltzmann distribution for the target nuclei agitation. Three approaches, including the Random Lattice Method, Chord Length Sampling, and explicit modeling with mesh acceleration, are implemented in RMC for stochastic medium simulations. Photon transport has been added, and coupled photon-neutron transport calculations are achieved for broader applications of RMC. The iterated fission probability (IFP) method and the Wielandt method are employed to give RMC sensitivity and uncertainty analysis capabilities. RMC is approaching 3D full-core burnup calculations with combined tally and depletion data decomposition. A kinetics simulation capability is implemented in RMC based on the predictor-corrector quasi-static method.
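A hedged Python sketch of the stochastic sampling idea, simplified for illustration: a production free-gas treatment additionally weights the target-velocity sample by the neutron-target relative speed, which is omitted here.

    import numpy as np

    def sample_relative_energy(E_n, A, kT, rng):
        # E_n: incident neutron energy; A: target-to-neutron mass ratio; kT in
        # the same units as E_n. Uses units where the neutron mass is 1.
        v_n = np.sqrt(2.0 * E_n)                          # neutron speed
        v_t = rng.normal(0.0, np.sqrt(kT / A), size=3)    # Maxwell-Boltzmann target
        v_rel = np.array([v_n, 0.0, 0.0]) - v_t
        return 0.5 * (v_rel @ v_rel)          # energy at which to evaluate sigma

    rng = np.random.default_rng(0)
    kT_600K = 2.53e-8 * (600.0 / 293.6)       # MeV, scaled from room temperature
    print(sample_relative_energy(1.0e-6, 238.0, kT_600K, rng))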
192
A Volume-Dependent Fleck Factor for Added Robustness in Implicit Monte Carlo Calculations
Jacob T. Landman and Ryan G. McClarren
Department of Nuclear Engineering, Texas A&M University, College Station, TX
In this article we discuss a particular interaction of Monte Carlo noise and mesh effects in x-ray radiative transfer simulations using the implicit Monte Carlo method
and implicit capture when there are large changes in zone volume, such as in RZ geometry or on AMR meshes. In such a situation it is possible to have noisy
solutions near the axis that result in "anomalous heating". To address this issue, we decrease the Fleck factor when a particle enters a zone of smaller volume than
where it was born. Results on a test problem in RZ geometry demonstrate that our volume-dependent Fleck factor produces a much less noisy solution and effectively
removes anomalous heating on the axis, where the zone volume is the smallest.
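As a hedged sketch (the volume-ratio scaling below is an assumption for illustration; the paper defines its own form), the modification can be as simple as:

    def fleck_factor(beta, sigma_a, c, dt, alpha=1.0):
        # standard IMC Fleck factor f = 1 / (1 + alpha * beta * c * sigma_a * dt)
        return 1.0 / (1.0 + alpha * beta * c * sigma_a * dt)

    def volume_adjusted_fleck(f, vol_birth, vol_current):
        # decrease f when the particle has streamed into a zone smaller than
        # its birth zone; leave it unchanged otherwise
        return f * min(1.0, vol_current / vol_birth)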
326
penORNL: A Parallel Monte Carlo Photon and Electron Transport Package Using Penelope
Kursat B. Bekar, Thomas M. Miller, Bruce W. Patton, and Charles F. Weber
Oak Ridge National Laboratory, Oak Ridge, TN, U.S.A.
The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning
electron microscope (SEM) simulations on high performance computing systems. This paper discusses the implementations, capabilities and parallel performance of
the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new
features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors,
and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with
several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for
SEM simulations upon completion of the new pulse-height tally implementation.
42
Coupled Neutronics and Thermal-Hydraulics Transient Calculations Based on a Fission Matrix Approach: Application
to the Molten Salt Fast Reactor
A. Laureau, M. Aufiero, P.R. Rubiolo, E. Merle-Lucotte, and D. Heuer
LPSC, Université Grenoble-Alpes, CNRS/IN2P3, Grenoble Cedex, France
This work presents a time-dependent version of the fission matrix method, named Transient Fission Matrix (TFM), developed to perform kinetics calculations. Coupled neutronics and thermal-hydraulics transient calculations are studied using the TFM approach and a Computational Fluid Dynamics (CFD) code. The matrices are generated with the Monte Carlo neutronics code SERPENT ahead of the transient calculation. The neutronics module and the coupling are directly implemented in the open-source CFD code OpenFOAM. An application case is presented for the Molten Salt Fast Reactor (MSFR). This system is a circulating liquid fuel reactor characterized by a two-meter core cavity and a fast spectrum. The present approach is thus well suited, since accurate distributions of the velocity of the liquid fuel circulating in the cavity and of the delayed neutron precursor transport are required. A reactivity insertion incident of around 2.5 $ is presented, showing the good behavior of the MSFR in such a case. A load-following transient from 50% to nominal power generation is also discussed.
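For orientation, the static fission matrix underlying the TFM approach can be illustrated with a small Python power iteration; the matrix entries, tallied by Monte Carlo in the paper, are toy values here.

    import numpy as np

    def fission_matrix_keff(G, iters=200):
        # G[i, j]: next-generation fission neutrons born in region i per
        # fission neutron born in region j; power iteration yields k-eff
        # and the fission source shape
        s = np.ones(G.shape[1]) / G.shape[1]
        k = 1.0
        for _ in range(iters):
            s_new = G @ s
            k = s_new.sum() / s.sum()
            s = s_new / s_new.sum()
        return k, s

    G = np.array([[0.6, 0.2], [0.2, 0.6]])  # toy two-region matrix
    print(fission_matrix_keff(G))           # k-eff -> 0.8 here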
115
Development of a High Order Boron Transport Scheme in TRAC-BF1
Teresa Barrachina (1), Amparo Soler (2), Nicolás Olmo-Juan, Rafael Miró, Gumersindo Verdú (1), Alberto Concejal (2)
1) Institute for Industrial, Radiophysical and Environmental Safety, Universitat Politècnica de València (UPV), Valencia, Spain, 2) Iberdrola Ingeniería y Construcción S.A.U., Madrid, Spain.
In pressurized water reactors (PWRs) reactivity control is accomplished through movement of the control rods and boron dilution, whereas in boiling water reactors (BWRs) the importance of boron transport lies in maintaining core integrity during ATWS-type severe accidents, in which under certain circumstances a boron injection is required. This is the reason for implementing boron transport models in NRC thermal-hydraulic codes such as TRAC-BF1, RELAP5, and TRACE. The boron transport models implemented in these codes are based on a first-order accurate upwind difference scheme, and this model needs to be reviewed and improved. Four numerical schemes that solve the boron transport model have been analysed and compared with the analytical solution provided by the Burgers equation. The studied numerical schemes are: first-order upwind, second-order Godunov, second-order modified Godunov with an added physical diffusion term, and third-order QUICKEST with the ULTIMATE universal limiter (UL). The modified Godunov scheme has been implemented in the TRAC-BF1 source code. The results using these new schemes are presented in this paper.
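To make the baseline concrete, the first-order upwind scheme that the abstract identifies as the current NRC-code treatment can be sketched in a few lines; the sketch below is purely illustrative (constant velocity, fixed grid, hypothetical parameters) and is not the TRAC-BF1 implementation. Its strong numerical diffusion of the boron front is what motivates the higher-order schemes studied in the paper.

    import numpy as np

    # Illustrative first-order upwind advection of a boron front at
    # constant velocity u > 0 (not the TRAC-BF1 implementation).
    nx, u, dx, dt = 200, 1.0, 0.01, 0.004    # CFL = u*dt/dx = 0.4
    c = np.zeros(nx)
    c[: nx // 4] = 1.0                       # initial boron slug

    for _ in range(300):
        c[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])
        # the front smears progressively: first-order numerical diffusion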
277
Effects of Various Treatments of Temperature-Dependent Cross Sections on HTTR Criticality Calculations
Ta-Wei Lin, Rong-Jiun Sheu*, and Yen-Wan Hsueh Liu (1), Yang-Chien Tien (2)
(1) Institute of Nuclear Engineering and Science, Department of Nuclear Engineering and System Science, National Tsing-Hua University, Taiwan, (2) Taiwan Power Company, Taiwan
This study investigates the effects of various treatments of temperature-dependent cross sections on a hot HTTR criticality calculation, which involves many nuclides in the core configuration and a detailed temperature distribution. Using MCNP, the authors considered and compared five popular cross-section treatments: (1) approximating temperatures by rounding up or down to the nearest available temperature in the data libraries, (2) a pseudo-material method based on interpolation through mixing of nuclides at two temperatures, (3) the makxsf utility, with Doppler broadening and interpolation to create customized libraries, (4) an on-the-fly methodology to create Doppler-broadened data sets, and (5) the fundamental NJOY nuclear data processing system to generate data libraries at problem-specific temperatures. The MCNP results with the NJOY-processed cross sections were taken as the reference base against which the accuracies of the other cross-section treatments were evaluated. The eigenvalue comparisons show that both the on-the-fly and makxsf treatments gave satisfactory results, with small differences of approximately 20-70 pcm; the approximate-temperature and pseudo-material methods led to slightly larger discrepancies of approximately 100-300 pcm. Looking into the details of the axial flux distribution, the comparisons indicate that the makxsf treatment provided the result most consistent (<0.5%) with NJOY. makxsf creates nuclide datasets at new temperatures by considering Doppler broadening of resolved resonances and interpolation of both the unresolved-resonance probability tables and the S(a,b) thermal scattering data. The on-the-fly method showed a slightly larger discrepancy of approximately 1.5% in the flux distribution because it has no specific treatment for temperature-dependent unresolved probability tables and S(a,b); corresponding data at approximate temperatures were used instead. The flux discrepancies caused by cross sections generated using the pseudo-material and approximate-temperature methods increased further, to approximately 2.5% and 3.5%, respectively. Most of these differences were identified as resulting from approximations of the temperature-dependent S(a,b) data for the large amount of graphite in the HTTR core.
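As an illustration of the pseudo-material method compared above, the sketch below mixes two library datasets to mimic an intermediate temperature. Interpolation in sqrt(T) is a common choice for Doppler-broadened data, though the exact fractions used in the paper are not specified here, and the function name is ours.

    from math import sqrt

    def pseudo_material_fractions(T, T1, T2):
        # Mixing fractions for nuclide datasets at bracketing library
        # temperatures T1 < T < T2; sqrt(T) interpolation is a common
        # (illustrative) choice, not necessarily the paper's.
        f2 = (sqrt(T) - sqrt(T1)) / (sqrt(T2) - sqrt(T1))
        return 1.0 - f2, f2

    # e.g., a material at 550 K built from 500 K and 600 K libraries
    f500, f600 = pseudo_material_fractions(550.0, 500.0, 600.0)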
55
Neutron Flux Measurement and Calculation Behind VVER-1000 Reactor Pressure Vessel Simulator Placed in LR-0
Reactor
Michal Košťál, Martin Schulc, Vojtěch Rypar, Marie Švadlenková, Evžen Losa, Ján Milčák (1), František Cvachovec (2), Sergey Zaritskyi (3)
1) Research Center Rez, Husinec Rez 250 68, Czech Republic, 2) Department of Physics, University of Defence, Czech Republic, 3) Department of Reactor Physics, RRC Kurchatov Institute, Moscow,
Russia
The neutron flux in the reactor pressure vessel is an important physical quantity affecting material degradation, which is reflected in the residual lifetime. The fluxes are normalized per 1 nA of monitor current, which corresponds to 0.01 W, namely 3.631E8 fissions/s. The scaling factor for this evaluation was determined from the neutron flux in a reference position, evaluated by means of the reaction rate in a well-defined activation detector. This scaling factor was verified by means of gamma spectroscopy of irradiated fuel. This independent method is based on the proportionality between the net peak area of a selected fission product and the released energy. The 92Sr fission product is used on the LR-0 reactor owing to its suitable gamma energies with no parasitic peaks and a half-life allowing reasonable manipulation time after irradiation. 92Sr also has no coincidence photons with measurable activity under the defined irradiation conditions, and it shows little difference between the theoretical and measured decay. Both neutron and photon transport were performed with the MCNPX 2.6.0 code with different nuclear data libraries. The fast fluxes behind the RPV simulator were calculated using a fixed-source model with a power density across the core calculated by MCNPX. The results clearly show a distinguishable effect of the nuclear data libraries. The effect is most notable in the flux over 5 MeV, where the JENDL 4 results overestimate the experiment by 46% while JEFF 3.1 does so by 9.5%.
169
Monte Carlo Simulation on Hard X-ray Dose Produced in Interaction Between High Intensity Laser and Solid Target
Bo Yang, Rui Qiu, Hui Zhang, Wei Lu, Xin Wang, Zhen Wu, Junli Li
Department of Engineering Physics, Tsinghua University, Beijing, China
Key Laboratory for Particle and Radiation Imaging, Ministry of Education
Key Laboratory for High Energy Radiation Imaging Fundamental Science for National Defense
The X-ray dose produced in the interaction between a high-intensity laser and a solid target was studied by Monte Carlo simulation. The calculation model was verified against experimental results. This model was used to study the effect on X-ray dose of different electron temperatures, target materials (including Au, Cu, and PE), and thicknesses. The results indicate that the X-ray dose is mainly determined by the electron temperature and is also affected by the target parameters. The X-ray dose of Au is about 1.2 times that of Cu, and about 5 times that of PE (polyethylene). In addition, when the target thickness is 0.5 RCSDA, where RCSDA is the continuous-slowing-down-approximation range for an electron with the average energy, the X-ray dose is larger than for any other target thickness. These results provide a reference for evaluating the ionizing radiation dose of laser devices.
315
Design and Feasibility Study of a Compact Neutron Source for Extra-terrestrial Geochronology Applications
Madicken Munk, Rachel Slaybaugh, and Karl Van Bibber (1), Leah Morgan, Brett Davidheiser-Kroll, and Darren Mark (2)
(1) Department of Nuclear Engineering University of California, Berkeley, CA (2) Scottish Universities Environmental Research Centre East Kilbride G75 0QF, UK
The 40Ar/39Ar radiometric dating technique is an attractive option for future Martian age-dating applications. However, in-situ 40Ar/39Ar radiometric dating on Mars presents unique challenges to the design of a device capable of achieving sufficient precision on geological samples obtained on the Martian surface. For this application, a fast neutron source with a low thermal neutron flux is ideal for inducing the 39K(n,p)39Ar reaction with few competing reactions that require age-correction factors. This paper explores the design of a neutron-emitting device specifically for in-situ geochronological applications on Mars. We have determined that the most feasible design is likely a 252Cf spontaneous fission source shielded by polyethylene layered with a strong thermal neutron absorber. Although boosting options, such as induced fission sources and (alpha,n) sources, are available, they do not provide sufficient neutron multiplicity to justify the increased mass of the device. Furthermore, shielding the rover from the neutron source will likely comprise the largest fractional mass of the device; this can be reduced by shielding only a small solid angle of the source. While we have determined that it is possible to design such a neutron source, other instrumentation will compete for a mass fraction of the rover instrument payload, which may make it difficult to design a device that achieves the required mass and fluence limitations for a future mission. This work provides an initial path forward in determining a workable design.
227
The NESTLE 3D Nodal Core Simulator: Modern Reactor Models
Nicholas P. Luciano, Keith E. Ottinger, P. Eric Collins, Cole Gentry, Nathan George, A.J. Pawel, Kelly Kenner, Shane Harty, Ondrej Chvala, and G. Ivan Maldonado (1), Filip Fejt (2)
(1) The University of Tennessee, Department of Nuclear Engineering, Knoxville, TN, (2) Czech Technical University in Prague, Faculty of Nuclear Sciences and Physical Engineering, Department of Nuclear Reactors
The NESTLE reactor core simulator was originally developed in the late 1980s at NC State University under the direction of Prof. Paul J. Turinsky and has been used widely over the last twenty years. NESTLE utilizes the nodal expansion method for eigenvalue, adjoint, fixed-source steady-state, and transient problems. A collaboration among the University of Tennessee, Oak Ridge National Laboratory, and NC State University during the last five years has led to a new and improved version of NESTLE, written in modern Fortran and developed with modern software engineering practices. New features include a simplified input format, a drift-flux model for high-slip two-phase thermal hydraulics, advanced depletion and isotope tracking using ORIGEN, output files compatible with the VisIt visualization software, and compatibility with SCALE, SERPENT, and CASMO lattice physics. The new features have expanded NESTLE's versatility from large pressurized water reactors to new core models, including boiling water reactors, small modular reactors, and fluoride-salt-cooled high-temperature reactors. Options to perform nuclear fuel management optimization for single and multiple cycles are also under implementation.
230
Accuracy of the Linear Discontinuous Galerkin Method for 3D Reactor Analysis with Resolved Fuel Pins
Carolyn N. McGraw, Marvin L. Adams, W. Daryl Hawkins, and Michael P. Adams (1), Timmie Smith(2)
(1) Department of Nuclear Engineering, Texas A&M University, College Station, Texas, USA, (2) Department of Computer Science, Texas A&M University, College Station, Texas, USA
Significant literature exists on the accuracy of the Method of Characteristics (MOC) for solving the transport equation for reactors with realistic representations of geometries. The same is not true for Discontinuous Finite Element Methods. We present a resolution study and error analysis detailing how the Linear DFEM (LD) spatial discretization method performs on the well-known three-dimensional C5G7 benchmark problem as a function of spatial and angular resolution, for spatial meshes that conform to the pin geometries. We compare pin powers and k-eigenvalues against reference MCNP results as a function of spatial and angular resolution. We use "product" Gauss-Chebyshev quadrature sets that range from 12 to 32 polar levels and 32 to 192 azimuthal angles. Our x-y spatial resolution ranges from 64 to 576 quadrilateral cells per pincell, and our z resolution ranges from 8 to 80 cells. We find that the LD method performs well, with k and pin-power results within a few percent of the reference results.
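The "product" quadrature sets mentioned above combine a polar rule with an equally weighted azimuthal rule. A minimal sketch of such a construction (Gauss-Legendre in the polar cosine, Chebyshev-like equal weights in azimuth; the details of the authors' sets may differ, and the function name is ours) is:

    import numpy as np

    def product_quadrature(n_polar, n_azi):
        # Gauss-Legendre abscissae/weights in the polar cosine mu;
        # equally spaced, equally weighted azimuthal angles phi.
        mu, w_mu = np.polynomial.legendre.leggauss(n_polar)
        phi = (np.arange(n_azi) + 0.5) * 2.0 * np.pi / n_azi
        sin_t = np.sqrt(1.0 - mu**2)
        # each row: (mu, Omega_x, Omega_y) for one discrete ordinate
        omega = np.array([(m, s * np.cos(p), s * np.sin(p))
                          for m, s in zip(mu, sin_t) for p in phi])
        weights = np.outer(w_mu, np.full(n_azi, 1.0 / n_azi)).ravel()
        return omega, weights   # weights sum to 2; renormalize as needed

    omega, w = product_quadrature(12, 32)   # smallest set in the study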
263
MC2-3/DIF3D Analysis for the ZPPR-15 Doppler and Sodium Void Worth Measurements
Micheal A. Smith, Richard M. Lell, and Changho Lee
Argonne National Laboratory, Argonne, IL
This manuscript covers validation efforts for the deterministic codes at Argonne National Laboratory. The experimental results come from the ZPPR-15 work in 1985-1986, which focused on the accuracy of physics data for the integral fast reactor concept. Results for six loadings are studied in this document, focusing on Doppler sample worths and sodium void worths. The ZPPR-15 loadings are modeled using the MC2-3/DIF3D codes developed and maintained at ANL and the MCNP code from LANL. The deterministic models are generated by processing the as-built geometry information, i.e., the MCNP input, and generating MC2-3 cross-section-generation instructions and a drawer-homogenized equivalence problem. The Doppler reactivity worth measurements use small heated samples that insert very small amounts of reactivity into the system (< 2 pcm). The results generated by the MC2-3/DIF3D codes were excellent for ZPPR-15A and ZPPR-15B and good for ZPPR-15D, compared with the MCNP solutions. In all cases, notable improvements were made over the analysis techniques applied to the same problems in 1987. The sodium void worths from MC2-3/DIF3D were quite good at 37.5 pcm, while the MCNP result was 33 pcm and the measured result was 31.5 pcm.
270
Continuous-Energy Monte Carlo Method for Reactor Transient Analysis Based on Quasi-Static Methods
YuGwon Jo, Bumhee Cho, and Nam Zin Cho
Korea Advanced Institute of Science and Technology, Daejeon, Korea
As computing power increases, the Monte Carlo method is becoming popular in nuclear reactor physics analysis owing to its capability for scalable parallelization and its handling of complex geometry and continuous-energy nuclear data. This paper describes and compares Monte Carlo methods based on two quasi-static methods: 1) the improved quasi-static method and 2) the predictor-corrector quasi-static method. In both methods, a linear approximation of the fission source distribution during a time step is used to provide delayed neutron sources. In the numerical results, the two quasi-static methods for Monte Carlo calculation are compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer codes. Transient analyses via the two quasi-static methods are then presented and compared on a continuous-energy problem.
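In quasi-static factorizations of this kind, the expensive transport solve updates the flux shape, while a cheap amplitude equation (point kinetics with shape-weighted parameters) is integrated within the time step. A minimal sketch of that amplitude step, with one delayed-neutron group and purely illustrative parameters (not the authors' code), is:

    # Illustrative point-kinetics amplitude integration between shape
    # updates; one delayed group, made-up parameters.
    rho, beta, lam, Lam = 0.001, 0.0065, 0.08, 1.0e-4
    n = 1.0                        # flux amplitude
    C = beta / (lam * Lam)         # equilibrium precursor concentration
    dt = 1.0e-4
    for _ in range(10000):         # advance 1 s with explicit Euler
        dn = (rho - beta) / Lam * n + lam * C
        dC = beta / Lam * n - lam * C
        n, C = n + dt * dn, C + dt * dC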
312
Feasibility of Albedo-corrected Parameterized Equivalence Constants for Nodal Equivalence Theory
Woosong Kim and Yonghee Kim
Department of Nuclear & Quantum Engineering Korea Advanced Institute of Science and Technology (KAIST) Republic of Korea
The conventional Simplified Equivalence Theory (SET) is based on a single-assembly lattice calculation with a net-zero-current boundary condition. However, SET cannot reflect the actual node interface currents in the equivalence constants, so it shows discrepancies with respect to the reference solution. For more accurate reactor core analysis, the conventional SET was modified by introducing Albedo-corrected Parameterized Equivalence Constants (APEC). The equivalence constants are functionalized using lattice calculation results from several boundary conditions and are updated during the iterative whole-core calculation without compromising the computing time. The correction of discontinuity factors (DFs) is not yet considered in this study. A test calculation for a two-dimensional small PWR core showed that APEC provided considerably more accurate results: the keff error was reduced by up to 25%, and the normalized assembly power distribution error was reduced by 84% in the maximum value and 71% in the RMS value.
318
A Fast and Self-consistent Approach for Multi-cycle Equilibrium Core Studies Using Monte Carlo Models
Zeyun Wu(1,2) and Robert E. Williams(1)
(1) NIST Center for Neutron Research Gaithersburg, MD USA (2) Department of Materials Science and Engineering University of Maryland College Park, MD USA
A fast and self-consistent approach, the PRELIM approach, is described in this paper with the aim of quickly delivering a multi-cycle equilibrium core configuration using Monte Carlo models. The approach is based on simple reactor physics and is easily incorporated into standard Monte Carlo core design tools. The primary purpose of this study is to provide an efficient approach enabling routine calculations for feasibility studies of nuclear reactor design concepts based on Monte Carlo methods. The approach is capable of realistic conceptual core design calculations in a repeated manner with sufficient accuracy in key performance parameters. The validity of the PRELIM approach is demonstrated by benchmarking its results against solutions from a higher-order approach with detailed core calculations for a given example problem.
324
Coupled Neutron-Photon Two-Dimensional Whole Core Depletion Calculation using Paragon with Ultra-Fine Energy
Mesh Methodology
Mohamed Ouisloumen
Nuclear Fuel Westinghouse Electric Company LLC Cranberry Twp, PA
A new version of the PARAGON lattice physics code has been developed. This version uses an ultra-fine energy mesh for the flux solution with a cross-section library in which the resonance scattering model is employed for all isotopes. A coupled neutron-photon transport calculation capability was also implemented in this code. We used PARAGON to simulate the two-dimensional AP1000® Pressurized Water Reactor whole core and were able to deplete this core to high burnup while maintaining highly detailed representations of all neutron and photon transport variables. In this paper we demonstrate that the simulation of the depletion of a two-dimensional PWR full core with high resolution in the energy, angle, and space variables is feasible using modest computing power in a reasonable amount of running time. We also analyze the effect of the resonance scattering model on reactivity and on local variables such as multi-group fluxes, pin powers, and the heat generated by gamma rays.
Key Words: Whole Core, Depletion, Transport
72
Progress on Sensitivity and Uncertainty Analysis Function in Continuous-Energy Monte Carlo Code RMC
Yishu Qiu, Dan Shen, Jiankai Yu, Kan Wang (1), Ding She (2)
1) Department of Engineering Physics, Tsinghua University, Beijing, P.R. China, 2) Institute of Nuclear and New Energy Technology, Tsinghua University
Recently, the Reactor Monte Carlo code RMC has developed a new capability to compute uncertainties of the effective multiplication factor, keff, due to cross-section covariance data, as well as a capability to produce sensitivity coefficients, including the constrained fission-chi sensitivity coefficients that are indispensable for uncertainty analysis. Two nuclear data covariance libraries, the ENDF/B-VII.1 covariance data processed by NJOY and the 44-group covariance library in the SCALE6.1 code package, are used to perform the uncertainty calculations. After reading the problem-dependent covariance data, RMC performs a continuous-energy forward calculation to generate the necessary sensitivity coefficients found in a pre-generated dictionary file and then folds them with the covariance matrix to obtain the uncertainty information. A multi-group infinite homogeneous medium and a polyethylene sphere are used to verify the new capabilities in RMC against analytic solutions, MCNP6, and TSUNAMI-3D.
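The folding step described above is the standard "sandwich rule": the relative variance of keff is the sensitivity vector contracted twice with the relative covariance matrix. A minimal sketch (array names are ours) is:

    import numpy as np

    def keff_rel_std(S, C):
        # S: sensitivity coefficients of keff to the group-wise data;
        # C: relative covariance matrix of those nuclear data.
        rel_var = S @ C @ S       # sandwich rule, S^T C S
        return np.sqrt(rel_var)   # relative standard deviation of keff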
322
Seismic Response Analysis of Reactor Building and Equipment with Hazard Consistent Ground Motions by 3D-FE
Model
Akemi Nishida and Ken Muramatsu (1), Tsuyoshi Takada(2)
(1) Center for Computational Science and e-Systems, Japan Atomic Energy Agency Kashiwa, Chiba 277-0871, Japan (2) Department of Architecture, The University of Tokyo Tokyo 113-8654, Japan
Research and development on next-generation seismic probabilistic risk assessment (PRA) using 3D vibration simulators is ongoing, to evaluate the seismic safety performance of nuclear plants with high reliability. Most structural PRA uses probabilistic schemes, such as those based on probabilistic seismic hazard and fragility curves. Even when earthquake ground motions are required in Monte Carlo simulations (MCS), they are generated to fit specified response spectra, such as uniform hazard spectra at a specified exceedance probability. However, these ground motions are not directly linked with their corresponding seismic source characteristics. In this context, the authors propose a methodology based on MCS to reproduce a set of input ground motions in order to develop an advanced PRA scheme. This advanced scheme can explain the exceedance probability and the sequence of functional loss in a nuclear power plant. This paper reports the methodology for reproducing a set of input ground motions and the analytical results for a nuclear plant building using those input ground motions.
150
2-T and 3-T High-Energy-Density Thermal Radiative Transfer Benchmarks
D.A. Holladay and R.G. McClarren
Department of Nuclear Engineering, Texas A&M University, College Station, TX
[email protected]; [email protected]
High-energy-density thermal radiative transfer benchmark solutions are presented for a 1-D slab geometry using a three-temperature (electron, ion, and radiation) model and for a 1-D spherical geometry using the standard 2-T (material, radiation) model. In the 3-T model, full transport is used to model the radiation, a conduction model is used for the electrons, and ion motion is assumed negligible. These benchmarks are useful in the verification and testing of simulation codes for laboratory astrophysics as well as high-energy-density physics. The solutions require linearization of the coupled equations and are obtained via specific cubic functional forms (in temperature) for the heat capacities and the electron-ion coupling factor. These solutions are semi-analytic in that their exact forms can be written down, but 2-D integrals must be computed numerically for each point in space and time. These integrals are slowly convergent, so a numerical integration routine was developed in OpenCL to take advantage of the high throughput that heterogeneous computing offers.
58
Comparison Between Sample Size and Computational Uncertainty in Propagating Manufacturing Uncertainties Using
Sampling Based Method and MCNPX
Daniel Campolina and Claubia Pereira
Centro de Desenvolvimento da Tecnologia Nuclear, Cidade Universitária - Pampulha - Belo Horizonte - MG - Brazil
Sample size and computational uncertainty were varied in order to investigate the sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate an LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. The mean range, standard deviation range, and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 replicates of n samples was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor keff was obtained using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, it was found, for the example under investigation, that to reduce the variance of the propagated uncertainty it is preferable to double the sample size rather than to double the number of particle histories in the MCNPX Monte Carlo process.
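The sampling-based method being benchmarked reduces to a short loop: draw n samples of the uncertain input, run the transport code once per sample, and take the spread of the outputs. In the sketch below, run_keff is a placeholder for a full MCNPX execution, not an MCNPX API, and the parameter names are ours:

    import numpy as np

    def propagate(run_keff, r_mean, r_sigma, n_samples=93, seed=0):
        # Sample the uncertain input (e.g., a burnable poison radius),
        # evaluate keff for each sample, report the 1-sigma spread.
        rng = np.random.default_rng(seed)
        radii = rng.normal(r_mean, r_sigma, n_samples)
        keff = np.array([run_keff(r) for r in radii])
        return keff.std(ddof=1)   # propagated uncertainty on keff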
298
Initial 1-D Single Phase Liquid Verification of CTF
Chris Dances and Dr. Maria Avramova (1), Dr. Vince Mousseau (2)
(1) Department of Mechanical and Nuclear Engineering The Pennsylvania State University University Park, PA, USA (2) Computer Science Research Institute Sandia National Laboratories, Albuquerque,
NM 87123, USA
Nuclear engineering codes are being used to simulate more challenging problems, and at higher fidelities, than they were initially developed for. In order to expand the capabilities of these codes, state-of-the-art numerical methods and computer science need to be implemented. One of the key players in this effort is the Consortium for Advanced Simulation of Light Water Reactors (CASL), through development of the Virtual Environment for Reactor Applications (VERA). The sub-channel thermal-hydraulic code used in VERA, COBRA-TF (Coolant-Boiling in Rod Arrays - Three Fluids), is partially developed at the Pennsylvania State University by the Reactor Dynamics and Fuel Management Research Group (RDFMG); the RDFMG version of COBRA-TF is referred to as CTF. In an effort to help meet the objectives of CASL, a version of CTF has been developed that solves a residual formulation of the one-dimensional single-phase conservation equations. Formulating the base equations as residuals allows for the isolation of different sources of error and is a good tool for verification purposes. This paper outlines initial verification work on both the original version of CTF and its residual formulation. The verification problem is a simple 1-D single-phase liquid channel with no heat conduction, friction, or gravity. A transient boundary condition is applied that alters the inlet density and temperature while keeping the velocity within the channel constant. The constant velocity simplifies the modified equation analysis, and the order of accuracy is readily obtained. A Richardson extrapolation is performed over the temporal and spatial step sizes to determine the convergence and order of accuracy of the discretization error. While extensive validation work exists for CTF, little to no verification work had previously been performed.
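For reference, the grid-convergence machinery used here reduces to two short formulas: the observed order of accuracy from three systematically refined solutions, and the Richardson-extrapolated estimate of the exact solution. A generic sketch (not the CTF tooling) is:

    import math

    def observed_order(f_coarse, f_medium, f_fine, r):
        # Observed order p from three solutions with constant
        # refinement ratio r; assumes monotone convergence.
        return math.log((f_coarse - f_medium) /
                        (f_medium - f_fine)) / math.log(r)

    def richardson(f_medium, f_fine, p, r):
        # Estimate of the exact solution from the two finest grids.
        return f_fine + (f_fine - f_medium) / (r**p - 1.0)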
314
Further Investigation of Error Bounds for Reduced Order Modeling
Mohammad Abdo, Congjian Wang, and Hany S. Abdel-Khalik
School of Nuclear Engineering, Purdue University, West Lafayette, IN
This manuscript investigates the level of conservatism of the bounds developed in earlier work to capture the errors resulting from reduced order modeling (ROM). Reduced order modeling is premised on the fact that large areas of the input and/or output spaces can be safely discarded from the analysis without affecting the quality of predictions for the quantities of interest. For this premise to be credible, ROM models must be equipped with theoretical bounds that can guarantee the quality of the ROM model predictions. Earlier work devised an approach in which a small number of oversamples is used to predict such bounds. Results indicated that the bound may sometimes be too conservative, which would negatively impact the size, and hence the efficiency, of the ROM model.
313
Scaling of Intrusive Stochastic Collocation and Stochastic Galerkin Methods for Uncertainty Quantification in Monte
Carlo Particle Transport
Aaron J. Olson(1), Brian C. Franke(2), and Anil K. Prinja(1)
(1) Department of Nuclear Engineering University of New Mexico Albuquerque, NM
(2) Sandia National Laboratories Albuquerque, NM
A Monte Carlo solution method for the system of deterministic equations arising in the application of stochastic collocation (SCM) and stochastic Galerkin (SGM) methods in radiation transport computations with uncertainty is presented for an arbitrary number of materials, each containing two uncertain random cross sections. Moments of the resulting random flux are calculated using an intrusive and a non-intrusive Monte Carlo based SCM and two different SGM implementations, each with two different truncation methods, and compared with the brute-force Monte Carlo sampling approach. For the intrusive SCM and SGM, a single set of particle histories is solved, and weight adjustments are used to produce flux moments for the stochastic problem. Memory and runtime scaling of each method are compared for increased complexity in stochastic dimensionality and moment truncation. Results are also compared for efficiency in terms of a statistical figure of merit. The memory savings of the total-order truncation method prove significant over the full tensor-product truncation. Scaling shows a relatively constant cost per moment calculated for SCM and tensor-product SGM. Total-order truncation may be worthwhile despite poorer runtime scaling because it achieves better accuracy at lower cost. The figure-of-merit results show that all of the intrusive methods can improve efficiency for calculating low-order moments, but the intrusive SCM approach is the most efficient for calculating high-order moments.
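The memory gap between the two truncations reported above follows from a simple count: a total-order truncation of degree p in d stochastic dimensions retains C(p+d, d) basis terms, whereas the full tensor product retains (p+1)^d. A one-line check (function name ours):

    from math import comb

    def n_terms(p, d):
        # terms kept by total-order vs full tensor-product truncation
        return comb(p + d, d), (p + 1) ** d

    # e.g., degree 4 in 10 uncertain cross sections:
    # total-order keeps 1,001 terms, tensor product 9,765,625
    total_order, tensor = n_terms(4, 10)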
Monte Carlo Methods
Tuesday, April 21, 2015
8:30 AM
Hermitage C
Chair: Dr. Edmund Caro
122
Development of a Monte-Carlo Based Method for Calculating the Effect of Stationary Fluctuations
E. E. Pettersen, C. Demazière, K. Jareteg (1), T. Schönfeldt, E. Nonbøl and B. Lauritzen (2)
1) Chalmers University of Technology, Department of Applied Physics, Division of Nuclear Engineering, Gothenburg, Sweden, 2) Center for Nuclear Technologies, Technical University of Denmark,
Roskilde, Denmark
This paper deals with the development of a novel method for performing Monte Carlo calculations of the effect, on the neutron flux, of stationary fluctuations in
macroscopic cross-sections. The basic principle relies on the formulation of two equivalent problems in the frequency domain: one that corresponds to the real part of
the neutron balance, and one that corresponds to the imaginary part. The two equivalent problems are similar in nature to two subcritical systems driven by external
neutron sources, and can thus be treated as such in a Monte Carlo framework. The definition of these two equivalent problems nevertheless requires the possibility to
modify the macroscopic cross-sections, and we use the work of Kuijper, van der Marck and Hogenbirk to define group-wise macroscopic cross-sections in MCNP. The
method is illustrated in this paper at a frequency of 1 Hz, for which only the real part of the neutron balance plays a significant role. A semi-analytical diffusion-based
solution is used to verify the implementation of the method on a test case representative of light water reactor conditions in an infinite lattice of fuel pins surrounded by
water. The test case highlights flux gradients that are steeper in the Monte Carlo-based transport solution than in the diffusion-based solution. Compared to other
Monte Carlo-based methods earlier proposed for carrying out stationary dynamic calculations, the presented method does not require any modification of the Monte
Carlo code.
110
Real Variance Estimation of the BEAVRS Benchmark in McCARD Monte Carlo Eigenvalue Calculations
Ho Jin Park, Hyun Chul Lee, Jin Young Cho (1) Hyung Jin Shim, Chang Hyo Kim (2)
1) Korea Atomic Energy Research Institute, Korea, 2) Department of Nuclear Engineering, Seoul National University, Seoul, Korea
The McCARD code has several real variance estimation methods, such as Gelbard's batch method, Ueki's method, the Fission Source Distribution (FSD) method, and the History-based Batch (HB) method. The real variances of local tallies, such as pin-wise and assembly-wise fission power, were estimated using the real variance estimation methods implemented in McCARD for the BEAVRS fresh core problem, which is known to have a high dominance ratio. The results show that the apparent variance of a local MC tally estimate tends to be smaller than the real one, whereas the apparent variance of a global MC tally such as keff is similar to the real one. Moreover, it was observed that the real-to-apparent standard deviation (SD) ratio in assembly-wise fission power is larger than that in pin-wise fission power. The large real-to-apparent SD ratio in the assembly-wise fission power was explained using the correlation coefficients between the local tallies.
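Of the estimators listed above, the batching idea is the simplest to sketch: group correlated cycle-wise tally estimates into batches and use the variance of the batch means, which approaches the real variance once the batches are long enough to decorrelate. A generic sketch (not the McCARD implementation; names ours):

    import numpy as np

    def batched_variance_of_mean(cycle_tallies, batch_size):
        # Gelbard-style batching: variance of the grand mean estimated
        # from the spread of batch means over consecutive cycles.
        n = (len(cycle_tallies) // batch_size) * batch_size
        means = np.asarray(cycle_tallies[:n]).reshape(-1, batch_size).mean(axis=1)
        return means.var(ddof=1) / len(means)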
124
Accelerated Path Generation and Visualization for Numerical Integration of Feynman Path Integrals for Radiative
Transfer
Paul Kilgo and Jerry Tessendorf
School of Computing, Clemson University, Clemson, S.C.
Previous work in framing radiative transfer in terms of Feynman path integrals (FPI) is extended. The path sampling algorithm is extended with a constant time target
function for root finding. Benchmarking for root finding schemes is reported. Additional analysis of the path weighting scheme is given. Utilization for the Metropolis
algorithm is discussed. A log-normal fit for convergence of the integral sum is found. A visualization tool for attributed path data is presented.
133
Relativistic Kinematics for Photoneutron Production in Monte Carlo Transport Calculations
Edmund Caro and Paul K. Romano (1), Stephen C. Marin and David P. Griesheimer (2)
1) Bechtel Marine Propulsion Corporation, Knolls Atomic Power Laboratory, Schenectady, NY, 2) Bechtel Marine Propulsion Corporation, Bettis Atomic Power Laboratory, West Mifflin, PA
In this paper, an algorithm for the production of photon-induced neutrons is described. Relativistic two-body kinematics are used to derive exact expressions for the
secondary energy and angle distribution of photoneutrons; this treatment is important in cases where the evaluated photonuclear data does not give an explicit energy
distribution. Comparisons of the relativistic relations were made to approximations in MCNP5 and TRIPOLI-4, highlighting the magnitude of the error of those
approximations. Finally, photon transport simulations including photoneutron production were carried out in both MCNP5 and MC21 and demonstrate generally good
agreement between the two codes except for cases where relativistic effects are important.
Reactor Physics
Tuesday, April 21, 2015
8:30 AM
Hermitage D
Chairs: Dr. Benjamin R. Betzler, Dr. Alain Hebert
297
CRANE: A New SCALE Super-Sequence for Neutron Transport Calculations
Congjian Wang and Hany S. Abdel-Khalik (1), Ugur Mertyurek (2)
(1) School of Nuclear Engineering, Purdue University (2) Oak Ridge National Laboratory
A new “super-sequence” called CRANE has been developed to perform automated sensitivity analysis and uncertainty quantification (SA/UQ) and reduced order modeling (ROM) for any SCALE MG sequence. This new sequence is designed to support computationally intensive analyses that require repeated execution of reactor models with variations in their design parameters and nuclear data. This manuscript provides a brief overview of CRANE and demonstrates its application to representative reactor physics calculations. Our goal is to provide a prototypic capability that allows users to further explore and investigate ROM in their respective domains and helps guide further development of the methodology and evolution of the tools. The input space reduction, intersection subspace for UQ, and EPGPT methodologies are described, and example applications are shown for spent-fuel analysis.
17
Sensitivity of Reactor Transient Analysis on 3-D Effective Beta Core Model
S. Kalcheva and E. Koonen
SCK•CEN, BR2 Reactor, Boeretang, Belgium
The effect of spatial fluctuations of the effective beta fraction on reactor transient characteristics is investigated in this paper. A detailed 3-D effective delayed neutron fraction core model has been developed and applied to the transient analysis of the BR2 reactor, taking into account the detailed 3-D power and burn-up distributions in the core. Due to fuel depletion, the effective beta fraction changes in time and forms a complex 3-D distribution profile in the reactor core. Two detailed 3-D effective beta core models have been developed, using a standard MCNP method and a new MCNP-based method. The new method is an alternative to the standard MCNP approach (β_eff = 1 - k_p/k_p+d, the prompt-to-total eigenvalue ratio) and is based on tally calculations of fission integrals and the actual delayed neutron fraction, β_m^k, for each fissile isotope m in a fuel type k. The standard 3-D effective beta model is characterized by high spatial fluctuations, caused by MC statistical errors in small calculation meshes, while the model developed with the new MCNP method has a smoothed spatial profile. The new MCNP method has also been applied to estimate the effective delayed photoneutron fraction in the beryllium-reflected BR2 reactor core. The new method has been validated against measurements of the delayed neutron and delayed photoneutron fractions in the BR02 mock-up criticality facility. The different effective beta models are tested for the sensitivity of the reactor response to power, temperature, and energy release distributions. The study is performed for protected and unprotected reactivity insertion transients, assuming zero reactivity feedback coefficients.
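For comparison, the standard prompt-ratio approach mentioned above needs only two eigenvalue runs, one with all neutrons and one with prompt neutrons only (e.g., via the TOTNU card in MCNP); the tally-based method of the paper replaces this with direct fission-integral tallies. A sketch of the standard estimate (values illustrative):

    def beta_eff_prompt_ratio(k_total, k_prompt):
        # Standard MCNP-style estimate: beta_eff = 1 - k_p / k_(p+d).
        return 1.0 - k_prompt / k_total

    # e.g., k_total = 1.00000, k_prompt = 0.99280 -> beta_eff ~ 0.0072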
28
Optimization of Depletion Modeling and Simulation for the High Flux Isotope Reactor
B. R. Betzler, B. J. Ade, D. Chandler, G. Ilas, and E. E. Sunny
Oak Ridge National Laboratory, Oak Ridge, TN
Monte Carlo–based depletion tools used for the high-fidelity modeling and simulation of the High Flux Isotope Reactor (HFIR) come at a great computational cost;
finding sufficient approximations is necessary to make the use of these tools feasible. The optimization of the neutronics and depletion model for the HFIR is based on
two factors: (i) the explicit representation of the involute fuel plates with sets of polyhedra and (ii) the treatment of depletion mixtures and control element position
during depletion calculations. A very fine representation (i.e., more polyhedra in the involute plate approximation) does not significantly improve simulation accuracy.
The recommended representation closely matches the physical plates and ensures sufficient fidelity in regions with high flux gradients. Including the fissile targets in the central flux trap of the reactor as depletion mixtures has the greatest effect on the calculated cycle length, while localized effects (e.g., the burnup of specific
isotopes or the power distribution evolution over the cycle) are more noticeable consequences of including a critical control element search or depleting burnable
absorbers outside the fuel region.
34
Accuracy of a Subgroup Method for Pressurized Water Reactor Fuel Assembly Models
Axel Canbakan and Alain Hébert (1), Jean-François Vidal (2)
1) École Polytechnique de Montréal, Montréal Qc. CANADA, 2) CEA, DEN, DER, SPRC, LEPh, Saint-Paul-lez-Durance, France
We investigate the accuracy of a self-shielding model based on a subgroup method for pressurized water reactor (PWR) fuel assembly models. Until now, the APOLLO2 lattice code has used the Sanchez-Coste method, based on an equivalence in dilution with the 281-group Santamarina-Hfaiedh energy mesh (SHEM). Here, we validate a subgroup approach with an improved 361-group SHEM at zero burnup and with isotopic depletion. The aim is to show that this new self-shielding technique is more precise than the current one and leads to simpler production computational schemes by avoiding complicated correction algorithms for the mutual resonant self-shielding effects. Compared with a Monte Carlo reference case, the new approach leads to encouraging results in almost every case. This subgroup technique is proposed as a short-term replacement for the Sanchez-Coste method used in production computational schemes dedicated to the production of multi-parameter cross-section reactor databases.
Next Generation Sn Mesh Sweeps
Tuesday, April 21, 2015
8:30 AM
Hermitage A-B
Chairs: Dr. Robert J. Zerr, Dr. Jae H. Chang
37
Research at AWE on the Exploitation of Many-Core Technologies for the Efficient Solution of Parallel Deterministic
Transport Problems
Richard P. Smedley-Stevenson, David J. Barrett, Wayne P. Gaudin, Andrew W. Hagues, Simon R. Merton, Iain R. Miller, and David M. Turland
AWE PLC, Berkshire, UK
This paper provides an overview of the "Path to Many Core" (PtMC) strategy adopted by AWE as the route forward for exploiting future high performance computing architectures, with an emphasis on the deterministic transport work-stream, which is focused on discrete-ordinates-based solvers for both structured and unstructured meshes. Recent progress has been made by EPCC in collaboration with AWE on porting structured-grid deterministic transport solvers to GPUs and FPGAs, paving the way for exploiting new architectures. Performance tuning of an existing unstructured-grid solver on current multi-core processors is also described, illustrating the potential for vectorisation to improve the performance of the code, but highlighting the significant limitations imposed by the lack of memory bandwidth. Performance tuning of large-scale applications is difficult due to their complexity, so significant effort is being expended on developing small-scale (< 10K lines of code) applications - mini-apps - which are representative of the key computational kernels that form the workload on the AWE HPC platforms, but small enough to allow wholesale re-writes in languages specifically targeted at particular platforms. Similar efforts are underway at the US National Laboratories, and a key aspect of the strategy is to work together with external partners to maximize the impact of the research.
57
An SN Algorithm for Modern Architectures
Randal S. Baker
Los Alamos National Laboratory, Los Alamos, NM
LANL discrete ordinates transport packages are required to perform large, computationally intensive time-dependent calculations on massively parallel architectures,
where even a single such calculation may need many months to complete. While KBA methods scale out well to very large numbers of compute nodes, we are limited
by practical constraints on the number of such nodes we can actually apply to any given calculation. Instead, this paper describes a modified KBA algorithm that
allows realization of the reductions in solution time offered by both current and future architectural changes within a compute node.
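The wavefront idea underlying KBA-style sweeps is easy to state: for a discrete ordinate pointing toward +x, +y, every cell on the diagonal i + j = d depends only on the previous diagonal, so all cells on a diagonal can be processed concurrently. A 2-D toy sketch of that ordering (not the LANL package; solve_cell is a placeholder):

    # Toy 2-D wavefront ordering for one sweep direction.
    nx, ny = 8, 6
    for d in range(nx + ny - 1):
        diagonal = [(i, d - i) for i in range(nx) if 0 <= d - i < ny]
        # all cells on this diagonal are independent -> parallel work
        # for i, j in diagonal: solve_cell(i, j)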
250
Three-Dimensional Discrete Ordinates Reactor Assembly Calculations on GPUs
Thomas M. Evans, Wayne Joubert, Steven P. Hamilton, Seth R. Johnson, John A. Turner, Gregory G. Davidson, and Tara M. Pandya
Oak Ridge National Laboratory, Oak Ridge, TN
In this paper we describe and demonstrate a discrete ordinates sweep algorithm on GPUs. This sweep algorithm is nested within a multilevel communication-based decomposition based on energy. We demonstrate the effectiveness of this algorithm on detailed three-dimensional critical experiments and PWR lattice problems. For these problems we show improvement factors of 4-6 over conventional communication-based, CPU-only sweeps. These sweep kernel speedups resulted in a factor of 2 improvement in total time-to-solution.
138
Parallel Sn Sweeps on Adapted Meshes
Bruno Turcksin
Department of Mathematics, Texas A&M University, College Station, TX
We study parallel sweeps on adaptively refined meshes. Unlike parallel sweeps on regular grids, there is no known optimal parallel sweep on unstructured meshes, and thus multiple heuristics have been proposed over the years. In this paper, we study the CAP-PFB (Cut Arc Preference - Parallel Forward Backward) algorithm on regular grids and adaptively refined meshes. We begin by recalling the CAP-PFB heuristic, then explain how it can be applied on adapted meshes. After that, we compare the sweeps produced by CAP-PFB when different initial sweeps are used on regular and adapted meshes. We show that on regular grids, CAP-PFB finds an optimal sweep independently of the initial sweep. On adapted meshes, the best results are obtained when using a serial initial sweep for CAP-PFB. This is somewhat unexpected: the "worst" initial sweep leads to the best result. We conclude that a good initial sweep for CAP-PFB on adapted meshes should take into account the interaction of sweeps along different directions before trying to minimize the number of stages required.
Computational Thermal Hydraulics and Fluid Dynamics
Tuesday, April 21, 2015
8:30 AM
Two Rivers
Chairs: Dr. Yassin Hassan, Lane B. Carasik
303
Application of the Reactor System Code RELAP-7 to Single- and Two-Phase Flow Water-Hammer Problems
Marc O. Delchini and Jean C. Ragusa (1), Ray A. Berry, David Andrs, and Richard Martineau (2)
(1) Department of Nuclear Engineering Texas A&M University College Station, TX (2) Idaho National Laboratory Idaho Falls, ID
The primary basis of the RELAP-7 governing theory includes the single-phase Euler equations and the 7-equation two-phase flow model. It is well established that these hyperbolic conservation laws can develop shocks and discontinuities and thus require a stabilizing numerical method. The all-Mach-flow Entropy Viscosity Method is now employed in RELAP-7 as the stabilization method for both of the above flow models. The entropy viscosity technique is a viscous regularization technique: adequate dissipation terms (viscous fluxes) are added to the governing laws while ensuring that the entropy minimum principle still holds. Viscosity coefficients modulate the magnitude of the added dissipation such that it is large in shock regions and vanishingly small elsewhere. The stabilization capabilities of the Entropy Viscosity Method are demonstrated in the system code RELAP-7 by simulating 1-D single- and two-phase water-hammer problems.
260
PETSc-Based Parallel Semi-Implicit CFD Code GASFLOW-MPI Applied to Hydrogen Safety Analysis in the Containment
of a Nuclear Power Plant
Jianjun Xiao (1), John R. Travis (2), Peter Royl, Anatoly Svishchev, Thomas Jordan, and Wolfgang Breitung (1)
(1) Institute of Nuclear and Energy Technologies, Karlsruhe Institute of Technology, Karlsruhe, Germany, (2) Engineering and Scientific Software Inc., Santa Fe, New Mexico
GASFLOW is a CFD software solution used to predict fluid dynamics, heat and mass transfer, chemical kinetics, aerosol transport, and other related phenomena during a postulated severe accident in the containment of a nuclear power plant (NPP). The generalized 3-D transient, two-phase, compressible Navier-Stokes equations for multiple species are solved in GASFLOW using a proven semi-implicit pressure-based algorithm of the Implicit Continuous Eulerian - Arbitrary Lagrangian-Eulerian (ICE'd-ALE) methodology. GASFLOW has been intensively validated against international experimental benchmarks and has been widely used in hydrogen explosion risk analyses involving NPP containments. The simulation results of the GASFLOW code have been widely accepted by the nuclear authorities in several European and Asian countries. GASFLOW was originally designed as a serial supercomputer code and could only be run on vector machines with a single processor. With increasing requirements from users in the nuclear industry, detailed geometrical and physical models came to be used in GASFLOW simulations. Industrial users suffered from extremely long computational times on a single processor, up to as much as 3-4 months, which was unacceptable for most of them. Therefore, a project was initiated in 2013 to parallelize GASFLOW using the paradigms of the Message Passing Interface (MPI) and domain decomposition. The data structures and parallel linear solvers of the Portable, Extensible Toolkit for Scientific Computation (PETSc) were employed in the parallel version, GASFLOW-MPI. The strategy for parallelizing the GASFLOW serial version is briefly discussed, and the scaling of GASFLOW-MPI was studied. GASFLOW-MPI is validated using benchmarks well accepted by the CFD community. GASFLOW-MPI was also applied to real large-scale nuclear containments, and very good agreement was obtained with the results of the GASFLOW sequential version. The computational time can be dramatically reduced, depending on the size of the problem and the high-performance computing (HPC) cluster. The parallelization of GASFLOW adds tremendous value to large-scale containment simulations by enabling high-fidelity models, including more geometric detail and more of the complex physical phenomena that occur during a severe accident, yielding detailed and precise insights. GASFLOW-MPI will be further developed as a high-performance engineering CFD code for thermal-hydraulics and safety analyses in NPP containments and other large-scale industrial applications.
121
Development and Test of a Transient Fine-Mesh LWR Multiphysics Solver in a CFD Framework
Klas Jareteg, Rasmus Andersson, and Christophe Demazière
Chalmers University of Technology, Department of Applied Physics, Gothenburg, Sweden
We present a framework for fine-mesh, transient simulations of coupled neutronics and thermal-hydraulics for Light Water Reactor (LWR) fuel assemblies. The
framework includes models of single-phase fluid transport for the coolant and conjugate-heat transfer between the coolant and the fuel pins, complemented by a
neutronic solver. The thermal-hydraulic models are based on a CFD approach, resolving the pressure and velocity coupling via an iterative algorithm. Similarly, the
neutronics is formulated in a fine-mesh manner with resolved fuel pins. The neutronic and thermal-hydraulic equations are discretized and solved in the same
numerical framework (foam-extend-3.1). A test case of a quarter of a fuel pin is used to test the transient behavior of the code for a set of different initial reactivities.
The same geometry is used to simulate a decrease of the inlet temperature, which demonstrates the response both in the CFD and the neutronics for an increase in
reactivity. Furthermore, a system of 7x7 fuel pins is simulated with the same inlet temperature decrease and we present the temporal development of the temperature
as well as an analysis of the heterogeneities captured by the fine-mesh approach. The solver is shown to capture the transient multiphysics couplings and
demonstrates the numerical and computational applicability based on the presented cases.
119
Numerical Investigation of Instabilities in the Two-Fluid Model for CFD Simulations of LWRs
Klas Jareteg (1), Henrik Ström, Srdjan Sasic (2), and Christophe Demazière (1)
1) Division of Nuclear Engineering, Department of Applied Physics, Chalmers University of Technology, Gothenburg, Sweden, 2) Division of Fluid Dynamics, Department of Applied Mechanics, Chalmers
University of Technology, Sweden
We present a two-fluid framework for simulation of adiabatic gas-liquid flow. The aim of the investigation is to confirm and analyze phase instabilities and meso-scale
flow patterns for the vapor phase arising due to instabilities in the two-fluid model. For this purpose, the solver is applied to a set of two-dimensional, periodic problems
with initially flat velocity and void fraction distributions. We demonstrate the occurrence of such instabilities and we analyze the temporal development of the void
fraction. The instabilities are shown to emerge from the initially uniform distribution of void, via a numerically unstable but non-physical distribution leading to the
appearance of meso-scale structures. The importance of the equation discretization schemes is evaluated and it is shown that the lower order schemes postpone the
emergence of the instabilities. Furthermore, horizontally confined systems of different widths are studied and it is shown that the instabilities do not occur below a
certain system width with the current model formulation and conditions. We also investigate different formulations of the void fraction equation and we show that not all
the proposed formulations are able to capture the meso-scale structures. The presented results and analysis suggest that the appearance of mesoscopic structures and void instabilities in a typical two-fluid model can be pronounced, and these thus need to be recovered in order to accurately model the liquid-vapor flow in nuclear reactors.
Mathematical Methods in Nuclear Nonproliferation and Safeguards Applications
Tuesday, April 21, 2015
8:30 AM
Belmont
Chairs: Prof. Shikha Prasad, Dr. Andrea Favalli, Dr. Shaheen A. Dewji, Dr. Stephen Croft
13
Passive Neutron Interrogation for Fissile Mass Estimation in Systems with an Unknown Detection Efficiency
C. Dubi (1), B. Pedersen (2), A. Ocherashvilli, and H. Ettegui (1)
1) Physics Department, Nuclear Research Center of the Negev, 2) JRC Laboratory, Ispra, Italy
Passive neutron interrogation for fissile mass estimation, relying on neutrons coming from spontaneous fission events, is considered a standard NDT procedure in the nuclear safeguards and safety community. Since most structural materials are (relatively) transparent to neutron radiation, passive neutron interrogation is considered highly effective in the analysis of dirty, poorly characterized samples. On the other hand, since a typical passive interrogation assembly is based on 3He detectors embedded in a moderating medium, neutrons from additional neutron sources (mainly (alpha,n) reactions and induced fissions in the tested sample) cannot be separated from the main spontaneous fission source through energy spectral analysis. Therefore, applying passive interrogation methods requires the implementation of Neutron Multiplicity Counting (NMC) methods to separate the main fission source from the additional sources. Applying standard NMC methods requires a well-characterized system, in the sense that both the system die-away time and the detection efficiency must be well known (and, in particular, independent of the tested sample). Hence, the implementation of passive neutron interrogation methods on systems with a poorly characterized detection efficiency (such as systems with a large uncertainty on the position of the sample, or samples containing neutron absorbers) is not trivial. In the present study we introduce a new NMC method in which the detection efficiency is computed directly from the measurement itself, without the need for a prior calibration. Such a method might prove extremely useful in the above-mentioned cases. From a theoretical point of view, we have developed explicit formulas for the quadruples rate in the detection signal, allowing us to consider a fourth unknown. The method has been implemented on a fairly large data set (about 20 measurements), showing both promising and interesting results.
16
Estimating Moments of the Neutron Chain Size Distribution in Multiplying Items
Nick Hengartner (1), Tom Burr (2), Stephen Croft (3), and Mark Smith-Nelson (4)
1) Theoretical Biology and Biophysics Group, Los Alamos National Laboratory, Los Alamos, NM, 2) Statistical Sciences Group, MS F600, Los Alamos National Laboratory, Los Alamos, NM, 3) Safeguards and Security Technology, Oak Ridge National Laboratory, Oak Ridge, TN, 4) Advanced Nuclear Technology, Los Alamos National Laboratory, Los Alamos, NM
In many nuclear nonproliferation applications, one seeks to detect and/or characterize items containing special nuclear materials (SNM). In some items, this can be achieved by detecting neutrons emitted from the SNM-containing item of interest. Some materials (e.g., the even isotopes of Pu) can spontaneously fission, emitting neutrons in bursts. A wider range of alpha-emitting nuclear materials can generate (α,n) reactions. When released in a sub-critical multiplying item, these primary random events initiate a distribution of fission chains of finite length, with an associated multiplicity distribution of emergent neutrons that carries information about the primary fission rate, the (α,n) rate, and the induced fission processes taking place inside the item. This paper develops a new method to estimate moments of the distribution of the chain size occurring in neutron chain reactions that uses the actual neutron detection times (so-called "list mode" data). Our key result uses tools from the theory of point processes to provide a new method, with a flexible test-function option, for using time-tagged detected neutrons to infer the chain size distribution. If one uses an intuitive special case of the test function, then traditional histograms of counts in neutron-triggered time bins are used to infer moments of the neutron chain size distribution. By using a different test function, we illustrate, for example, that inference quality can be improved by accounting for correlations between counts in neighboring time bins. A numerical example on simulated data that follows the so-called point model is provided to illustrate the practical limitations of the new method.
25
Evaluation of the True Coincidence Summing Effects on Uranium Enrichment Measurements Using a Monte Carlo
Approach
A. Bosko and A. Berlizov
International Atomic Energy Agency (IAEA), Vienna, Austria
Uranium enrichment measurement is one of the key assays in non-proliferation and safeguards applications. Various measurement techniques are used to determine the isotopic composition of uranium items. Non-destructive measurements of X-ray and gamma-ray emissions with high-purity germanium (HPGe) detectors are among the most popular methods. Several computer codes are available to determine uranium enrichment from analysis of the gamma spectra resulting from such measurements. The Multi-Group Analysis code for Uranium (MGAU) is one such code, widely used by the safeguards community. The MGAU analysis utilizes the complex and highly overlapping 90-100 keV region of uranium spectra, which requires a counting system providing high energy resolution. Traditionally, this type of measurement has been performed using small-volume planar low-energy germanium (LEGe) detectors. During recent years, however, large-volume Broad Energy Germanium (BEGe) detectors have become a common option in many applications, including safeguards, environmental, forensics, and radioactive waste measurements. While BEGe detectors provide an energy resolution comparable to that of LEGe detectors, their main benefit is a considerably higher absolute detection efficiency across a wide energy range, which helps achieve a desired accuracy with much shorter counting times. In many cases, however, high detection efficiency may cause a bias in the measured intensity of gamma- and X-ray lines due to the true-coincidence summing effect. Examination of the decay schemes of the uranium isotopes and their daughters suggests that several prominent peaks in the 90-100 keV region of a uranium spectrum suffer from this effect, which may result in an observable negative bias in MGAU analysis results. In this paper we review the true coincidence summing effect for uranium measurements and its possible implications for enrichment assays. A Monte Carlo approach using the MCNP-CP code, along with experimentally measured data, is used to estimate the magnitude of this effect.
30
A Method to Determine Detector Response Functions in a Heavily Shielded Environment and Application to Spent Fuel
Measurements with Cadmium Zinc Telluride Detectors
A. Borella, R. Rossa, K. van der Meer
SCK•CEN Belgian Nuclear Research Centre, Mol, Belgium
SCK•CEN is developing an instrument for the measurement of neutron and gamma signatures from a spent fuel element. The MCNPX code is used to design and
optimize the system. One of the quantities being studied is the expected gamma-ray energy spectrum from a Cadmium Zinc Telluride detector in the presence
of a spent fuel element. Because of the highly shielded configuration, the small detector, and the extended source, standard Monte Carlo methods are very
inefficient and time consuming. The electron transport needed when simulating gamma-ray spectra also significantly slows down the computation. In
addition, the need to determine the detector response for different source terms imposes a further computational burden.
To tackle this problem, an original approach was developed and applied. We de-coupled the transport problem into two sub-problems: in a first simulation, the gamma-ray spectrum impinging on the detector as a function of the source gamma-ray energy is determined; in a separate simulation, the intrinsic detector response function
is computed; the results from these independent simulations are then combined to calculate the expected detector response to a generic gamma-ray source. The
originality of the approach presented here lies in the fact that it relies entirely on Monte Carlo calculations, without interpolating the data with analytical
functions. In this paper, we show that the computational time is reduced by a factor of 40, with good agreement with the results obtained when the full particle
transport is carried out. The proposed methodology is also applied to determine the detector response when a Compton suppression system is used in a heavily
shielded environment.
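The combination step can be sketched as a simple matrix product (a minimal illustration of the de-coupling idea with hypothetical names, not the authors' implementation): if S[j, i] is the spectrum of photons impinging on the detector in energy bin j per source photon emitted in bin i, and D[k, j] is the intrinsic pulse-height response in bin k per impinging photon in bin j, then the expected pulse-height spectrum for an arbitrary source q is D (S q).

    import numpy as np

    def expected_pulse_height(D, S, q):
        # D: intrinsic detector response (pulse-height bin x impinging bin)
        # S: impinging spectrum per source photon (impinging bin x source bin)
        # q: generic gamma-ray source spectrum (source bin)
        # Both D and S come from independent Monte Carlo simulations.
        return D @ (S @ q)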
Monte Carlo Methods
Tuesday, April 21, 2015
10:40 AM
Hermitage C
Chair: Dr. Robert Grove
210
Variance Reduction Applications Using Visual MCNP
Randy Schwarz(1), Oyeon Kum(2), Angel Licea(3), Mauritius Hiller(4)
(1) Visual Editor Consultants, Richland, WA, (2) Korea Institute of Radiological and Medical Sciences, Seoul, Korea, (3) Canadian Nuclear Safety Commission, Montreal, Canada, (4) Helmholtz Zentrum
München, Neuherberg, Germany
The Visual MCNP Editor provides visual tools that can help solve complex variance reduction problems. This paper presents several applications in which Visual MCNP has been
used to solve real-world problems.
149
Explicit Modelling of Double-Heterogeneous Pebble-Bed Reactors with the RMC Code
Ding She, Fei Xie, Fu Li (1), Shichang Liu, Kan Wang (2)
1) Institute of Nuclear and New Energy Technology, Collaborative Innovation Center of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of
Education, Tsinghua University, Beijing, China 2) Department of Engineering Physics, Tsinghua University,
Beijing, China
HTR-10 and HTR-PM are pebble-bed high-temperature gas-cooled reactors designed by the Institute of Nuclear and New Energy Technology (INET), Tsinghua University.
Pebble-bed reactors contain randomly located pebbles in the core and coated fuel particles (CFPs) in the pebbles, a feature known as stochastic double
heterogeneity. Because of the difficulties in geometric modelling, our previous Monte Carlo (MC) simulations of pebble-bed reactors were usually done using
regular lattice approximations, without fully considering the stochastic double heterogeneity. In this paper, an explicit-modelling approach is applied to model pebble-bed
reactors with the reactor Monte Carlo code RMC and in-house packing tools. The explicit-modelling approach is examined through physical analyses of HTR-10. Good
agreement is observed among the explicit-modelling calculation with RMC, the deterministic calculation with VSOP, and the criticality experiment results. In addition,
the stochastic effects of randomly distributed CFPs and pebbles are analyzed using independent explicit-modelling MC calculations.
296
Using Fission Chain Analysis to Inform Probability of Extinction/Initiation Calculations with MCATK
Steven Nolen
Los Alamos National Laboratory, Los Alamos, NM
A probability of initiation algorithm has been implemented in the Los Alamos National Laboratory's Monte Carlo Application ToolKit (MCATK) library. The algorithm is
based on Booth's probability of extinction (POE) method but uses an importance function different from the one proposed by Booth. To develop the alternative
importance function, we initially developed a parallel, zero-dimensional computer code to model and analyze the stochastic evolution of fission chains in idealized
systems. By varying the probability of fission, we produced a detailed sampling of fission chain characteristics for problems ranging from sub- to near-critical.
The distributions were then studied to identify patterns from which we derived a new approach for determining the likelihood of a chain terminating while propagating
in a supercritical system. We then use this new importance estimate while applying Booth's POE methodology to continue the chain analysis for several supercritical
systems. The new, alternative importance function is now available within MCATK, allowing similar analyses to be performed with realistic nuclear data and non-trivial
geometries. Results from a MCATK-based application are presented for a variety of problems, including Booth's original problem and a series of bare spheres. A
comparison of the results with previously published approaches indicates an improved prediction of system behavior near critical, especially with respect to
convergence.
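In the zero-dimensional setting, the extinction probability underlying POE is the fixed point of the probability generating function of the neutrons-per-neutron distribution; a minimal sketch (illustrative parameters, not MCATK's implementation) is:

    def extinction_probability(p_fission, nu_pmf, tol=1e-12, max_iter=100_000):
        """0-D model: a neutron induces fission with probability p_fission
        (otherwise it is absorbed or leaks) and then yields k neutrons with
        pmf nu_pmf[k].  Solve q = g(q) by fixed-point iteration, where
        g(q) = (1 - p_fission) + p_fission * sum_k nu_pmf[k] * q**k."""
        q = 0.0
        for _ in range(max_iter):
            g = (1.0 - p_fission) + p_fission * sum(
                pk * q**k for k, pk in enumerate(nu_pmf))
            if abs(g - q) < tol:
                return g
            q = g
        return q

    # Illustrative multiplicity pmf with nu-bar ~ 2.5 (hypothetical values):
    nu = [0.03, 0.16, 0.33, 0.30, 0.15, 0.03]
    p_initiation = 1.0 - extinction_probability(p_fission=0.45, nu_pmf=nu)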
Response Methods for Particle Transport Modeling and Simulation
Tuesday, April 21, 2015
10:40 AM
Hermitage D
Chairs: Dr. Alireza Haghighat, Dr. Farzad Rahnema
243
Response Matrix Solution to Discrete Ordinates Approximation of the 1D Monoenergetic Neutron Transport Equation
Barry D. Ganapol
Department of Aerospace and Mechanical Engineering, University of Arizona
For nearly 70 years, the discrete ordinates approximation to the 1D monoenergetic neutron transport equation has been an effective approximation.
During that time, the method has experienced numerous improvements as numerical and computational techniques have matured. Here, we propose a new,
consistent expression of the analytical solution to the 1D monoenergetic discrete ordinates equations, called the Response Matrix DOM (RM/DOM), which is an
improvement over past forms. The approach takes advantage of the second-order form of the discrete ordinates approximation to express the solution in terms of
hyperbolic functions rather than ordinary exponentials. A highly anisotropic radiative transfer benchmark demonstrates the precision of the solution.
We then establish a new high-order benchmark for scattering in a purely hydrogenous medium and apply RM/DOM to general monoenergetic elastic scattering.
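Schematically (an illustration of the hyperbolic reformulation only, not the paper's exact RM/DOM construction), the within-group solution in a homogeneous slab is a superposition of modes \(e^{\pm x/\nu_j}\), which the second-order form lets one write as

\[ \psi_m(x) \;=\; \sum_j f_m(\nu_j)\,\bigl[A_j \cosh(x/\nu_j) + B_j \sinh(x/\nu_j)\bigr], \]

with the even/odd hyperbolic pair replacing the separately growing and decaying exponentials.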
264
An Adaptive Comet Method Solution to a Configuration of the C5G7 Benchmark Problem
Kyle Remley and Farzad Rahnema
Georgia Institute of Technology, Atlanta, Georgia, United States
The COMET method has been used to solve whole-core reactor eigenvalue and flux distribution problems. The accuracy of the method has been shown to be on par
with Monte Carlo techniques, but at a computational speed several orders of magnitude faster. The efficiency of the method relies upon a flux expansion that
preserves accuracy but is of sufficiently low order not to prohibitively slow calculations. In an effort to further improve this computational efficiency, an adaptive
expansion technique is developed, in which different coarse meshes in a problem can be expanded to different orders, whereas before, the flux expansion was
truncated to the same order in all meshes in a problem. Improvements to this technique since its previous introduction are made, and the result is used to solve the
Rodded B configuration of the C5G7 benchmark problem. The agreement between the standard COMET solution and the adaptive COMET solution is excellent. The
eigenvalue difference between solutions falls within the solutions' combined uncertainty, and the average pin fission density difference is much less than 1%. The
gain in computational efficiency for the adaptive case over the standard COMET method was a factor of 2.1, which, after adjusting for a slowed convergence
pattern, improves to a factor of 3.7. These encouraging results call for extension of the method to full-core problems and suggest that the computational efficiency of
the COMET method can be improved while minimizing the need for user intuition in determining expansion orders.
319
A Fission Matrix Approach to Calculate Pin-wise 3-D Fission Density Distribution
William Walters, Nathan Roskoff, and Alireza Haghighat
Nuclear Engineering Program, Department of Mechanical Engineering, Virginia Tech, Arlington, VA, USA
This paper presents the utilization of the fission matrix (FM) methodology to analyze a spent fuel pool. The FM approach utilizes a pre-calculated MCNP-generated
database of fission matrix coefficients created at different burnups and cooling times. Certain simplifying assumptions are made based on geometric
considerations, greatly reducing the amount of pre-computation. This approach is capable of quickly and accurately determining the pin-wise, axial fission density
distribution and the subcritical multiplication (M) or criticality (k) of a spent fuel pool, in any arrangement, without recalculating FM coefficients. This paper examines the
use of the FM approach for different test pool arrangements and conditions. Excellent agreement with an MCNP reference calculation has been achieved, with a
significant reduction in computation time.
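The core of any FM solve is a power iteration on the fission matrix; a minimal sketch (illustrative names; the paper's method additionally draws the coefficients from a database over burnup and cooling time) is:

    import numpy as np

    def fission_matrix_solve(F, tol=1e-10, max_iter=10_000):
        """Power iteration on a fission matrix F, where F[i, j] is the expected
        number of fission neutrons produced in region i per fission neutron
        born in region j.  Returns (k, normalized fission source)."""
        s = np.full(F.shape[1], 1.0 / F.shape[1])
        k = 1.0
        for _ in range(max_iter):
            s_new = F @ s
            k_new = s_new.sum()      # since s is normalized to sum to 1
            s_new /= k_new
            if abs(k_new - k) < tol and np.linalg.norm(s_new - s, 1) < tol:
                return k_new, s_new
            k, s = k_new, s_new
        return k, s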
Next Generation Sn Mesh Sweeps
Tuesday, April 21, 2015
10:40 AM
Hermitage A-B
Chairs: Dr. Robert J. Zerr, Dr. Jae H. Chang
320
Provably Optimal Parallel Transport Sweeps with Non-Contiguous Partitions
Michael P. Adams, Marvin L. Adams, Carolyn N. McGraw, and Andrew T. Till (1), Teresa S. Bailey(2)
(1) Department of Nuclear Engineering Texas A&M University College Station, TX
(2) Lawrence Livermore National Laboratory
We have found provably optimal algorithms for full-domain discrete-ordinates transport sweeps in 2D and 3D Cartesian geometry for partitionings that assign non-contiguous spatial subdomains to each processor. We describe these algorithms and show theoretically that they always execute the full eight-octant sweep in the
minimum possible number of stages, provided that the partitioning satisfies conditions that we derive. Computational results from a sweep-emulation code agree with
our theoretical results, showing that our optimal scheduling algorithm does execute sweeps in the minimum possible stage count whenever the partitioning meets the
defined conditions. Previous work has shown that sweeps can be executed with high parallel efficiency on core counts approaching 10^6 given a different class of
partitionings with contiguous subdomains assigned to each processor. Our results here show that non-contiguous subdomains, an example of a "domain overloading"
technique, can allow even higher efficiencies at higher processor counts, because in many cases they allow sweeps to complete in fewer stages than is possible with
contiguous processor domains. Key Words: transport sweeps, parallel transport, domain overloading, performance models.
254
KRIPKE - A Massively Parallel Transport Mini-App
Adam J. Kunen, Teresa S. Bailey, and Peter N. Brown
Lawrence Livermore National Laboratory, Livermore, California
As computer architectures become more complex, developing high-performance computing codes becomes more challenging. Processors are gaining more cores,
which tend to be simpler, to support multiple hardware threads, and to have ever-wider SIMD (vector) units. Memory is becoming more hierarchical, with more diverse
bandwidths and latencies. GPUs push these trends to an extreme. Existing simulation codes that performed well on the previous generation of computers will
most likely not perform as well on new architectures. Rewriting existing codes from scratch is a monumental task; refactoring them is often more tractable.
Proxy applications are proving to be valuable research tools that help explore the best approaches to use in existing codes. They provide a much smaller code that
can be refactored or rewritten at little cost, yet provide insight into how the parent code would behave under a similar (but much more expensive) refactoring effort. In
this paper we introduce KRIPKE, a mini-app developed at Lawrence Livermore National Laboratory, designed to be a proxy for a fully functional discrete-ordinates
(SN) transport code. KRIPKE was developed to study the performance characteristics of data layouts, programming models, and sweep algorithms. KRIPKE
supports different in-memory data layouts and allows work to be grouped into sets in order to expose more on-node parallelism. Different data layouts
change the way in which software is implemented, how that software is compiled for a given architecture, and how the generated code ultimately performs on that
architecture.
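A toy illustration of the data-layout question (numpy, synthetic sizes; not KRIPKE's actual API): the same group-wise scaling kernel touches memory in a very different order depending on which dimension is contiguous.

    import numpy as np

    G, Z, D = 16, 1024, 32                  # groups, zones, directions (illustrative)
    sigma = np.random.rand(G, Z)
    psi_gzd = np.random.rand(G, Z, D)       # layout 1: directions contiguous

    # layout 2: same data, zones contiguous
    psi_dgz = np.ascontiguousarray(psi_gzd.transpose(2, 0, 1))

    out1 = psi_gzd * sigma[:, :, None]      # vectorizes over the direction index
    out2 = psi_dgz * sigma[None, :, :]      # vectorizes over the zone index

    # Identical arithmetic; the stride patterns, and hence the cache and SIMD
    # behavior on real hardware, differ between the two layouts.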
80
Parallel Deterministic Transport Sweeps of Structured and Unstructured Meshes with Overloaded Mesh
Decompositions
Shawn D. Pautz (1), Teresa S. Bailey (2)
1) Sandia National Laboratories, Albuquerque, NM, 2) Lawrence Livermore National Laboratory, Livermore, CA
The efficiency of discrete-ordinates transport sweeps depends on the scheduling algorithm, domain decomposition, the problem to be solved, and the computational
platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh
partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured
meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We
find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the
test problems examined on up to 10^5 processor cores.
Computational Methods using HPC
Tuesday, April 21, 2015
10:40 AM
Two Rivers
Chair: Dr. Marvin L. Adams
61
Radiation Hydrodynamics with a High-Order, Low-Order Method
A.B. Wollaber, H. Park, R.B. Lowrie, R. Rauenzahn, and M.A. Cleveland
Los Alamos National Laboratory, Los Alamos NM
Recent efforts at Los Alamos National Laboratory to develop a moment-based, scale-bridging algorithm (High-Order, Low-Order, or HO-LO) for solving a large variety
of transport (kinetic) systems have shown promising results. Part of our ongoing effort is incorporating this methodology into the framework of the Eulerian
Application Project (EAP) in order to achieve algorithmic acceleration of radiation-hydrodynamics simulations in production software. Starting from the thermal
radiative transfer equations with a "simple" material-motion correction, we derive a discretely consistent energy balance equation (LO equation). We demonstrate that
the corresponding LO system for the Monte Carlo HO solver is closely related to the original LO system without material-motion corrections. We test the
implementation on a radiative shock problem and show consistency between the energy densities and temperatures in the HO and LO solutions, as well as agreement
with the semi-analytic solution. We also test the approach on a more challenging 2-D problem and demonstrate accuracy enhancements and algorithmic speedups.
84
High-performance simulation of sediment transport in rivers in the Fukushima area: Parallelization of the 2D river simulation code Nays2D
Susumu Yamada, Akihiro Kitamura, Hiroshi Kurikami, and Masahiko Machida
Japan Atomic Energy Agency, Kashiwa, Chiba, Japan
We aim to understand the movement of the cesium released by the Fukushima Daiichi Nuclear Power Plant (FDNPP) accident in aquatic systems such as rivers
and lakes. Since cesium is strongly sorbed by soil particles, we treat cesium transport as sediment transport and simulate it using a two-dimensional (2D) river flow
simulation code that can capture the detailed behavior of the transport. In general, 2D codes require a huge amount of calculation time; we therefore propose a
parallelization strategy for the 2D simulation code Nays2D. Our parallelized code achieves about a 10-fold speedup using 16 cores of a Fujitsu PRIMERGY BX900.
This result shows that our parallelized code is effective for large-area simulations with a huge number of grid points. Moreover, the parallel code enables us to
simulate, in a realistic time, the sediment transport in the Ogaki Dam reservoir, located about 16 km northwest of FDNPP. The simulation of the reservoir
demonstrates that as the water level of the reservoir rises, the amount of clay discharged from the reservoir decreases and the amount deposited within it increases.
This outcome indicates that tuning the water level of the reservoir can control the behavior of the cesium transport.
86
A Study on the Parallel, Iterative Solution of Systems of Linear Equations Appearing on Analytical Nodal Schemes for
Two-dimensional Cartesian Geometry Discrete Ordinates Problems
Rudnei Dias da Cunha (1), Anderson Tres and Liliane Basso Barichello (2)
1) Instituto de Matemática, Universidade Federal do Rio Grande do Sul, Brazil, 2) Programa de Pós-Graduação em Matemática Aplicada, Instituto de Matemática, Universidade Federal do Rio Grande
do Sul, Brazil
In this work we present our approach to the solution of the large linear systems that arise in connection with the solution of two-dimensional fixed-source transport
problems by the ADO method, based on the use of established iterative methods such as Conjugate Gradients, Generalized Minimum Residuals, Loose Generalized
Minimum Residuals, and Transpose-Free Quasi-Minimal Residuals applied to the Jacobi-preconditioned normal-equations form of the system. This work was motivated by
the use of quadrature schemes alternative to the classical level-symmetric quadrature scheme. These systems are large and sparse, and their solution was obtained
with our own MPI parallel implementations of the iterative methods above. The results obtained show that our approach was successful in solving the systems and that the
parallel implementations provided good scalability with respect to the number of processors.
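A minimal serial sketch of the preconditioned normal-equations approach (SciPy; illustrative random system standing in for the ADO matrices, which are not reproduced here):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, LinearOperator

    n = 2000                                 # illustrative problem size
    A = (sp.random(n, n, density=5.0 / n, random_state=1) + 4.0 * sp.eye(n)).tocsr()
    b = np.random.default_rng(1).random(n)

    # Normal equations (A^T A) x = A^T b, applied matrix-free
    AtA = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v))

    # Jacobi preconditioner: diag(A^T A)_j = sum_i A_ij^2 (column sums of A**2)
    d = np.asarray(A.multiply(A).sum(axis=0)).ravel()
    M = LinearOperator((n, n), matvec=lambda v: v / d)

    x, info = cg(AtA, A.T @ b, M=M)
    print(info, np.linalg.norm(A @ x - b))   # info == 0 on convergence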
Mathematical Methods in Nuclear Nonproliferation and Safeguards Applications
Tuesday, April 21, 2015
10:40 AM
Belmont
Chairs: Prof. Shikha Prasad, Dr. Andrea Favalli, Dr. Shaheen A. Dewji, Dr. Stephen Croft
41
Progress on Monte Carlo Simulations for IAEA Safeguards Verification
Tae Hoon Lee, Andrey Berlizov, David H. Beddingfield and Alain Lebrun
International Atomic Energy Agency, Vienna International Centre, Vienna, Austria
Monte Carlo simulations have been used for International Atomic Energy Agency (IAEA) safeguards verification for many years. This includes calibration and
optimization of neutron and gamma-ray Non-Destructive Assay (NDA) measurement systems, analysis of matrix effects such as neutron poisons and
impurities, and investigation of the sensitivity of NDA systems to possible nuclear diversion scenarios. Accurate, benchmarked Monte Carlo models of NDA systems
have been developed by the IAEA, and the use of MCNP code simulations for NDA systems has strengthened safeguards (SG) verification. This paper
summarizes recent IAEA progress in the development of Monte Carlo modeling and simulation of neutron and gamma-ray NDA systems for SG verification.
71
Sensitivity Analysis and Uncertainty Quantification of Neutron Multiplicity Statistics using Perturbation Theory
Sean O’Brien, John Mattingly, and Dmitriy Anistratov
North Carolina State University, Raleigh, NC
It is frequently important to estimate the sensitivity and uncertainty of measured and computed detector responses of subcritical experiments and simulations. These
uncertainties arise from the physical construction of the experiment, from uncertainties in the transport parameters, and from counting uncertainties. In particular, the
detector response is geometrically sensitive to the fission neutron yield distribution. The aim of our work is to apply sensitivity analysis and uncertainty quantification
(SA/UQ) to the statistics of subcritical neutron multiplicity counting distributions using first-order perturbation theory. For multiplicity counting experiments, knowledge
of the higher-order counting moments and their uncertainties is essential for a complete SA/UQ. We compute the sensitivity of neutron multiplicity counting moments
to arbitrarily high order. Each moment is determined by solving an adjoint transport equation with a source term that is a function of the adjoint solutions for lower-order
moments. This enables moments of arbitrarily high order to be determined sequentially and shows that each moment is sensitive to the uncertainties of all lower-order
moments. We derive moment-closing forward transport equations that are functions of the forward flux and the lower-order moment adjoint fluxes. We validate our
calculations for the first two moments by comparison with multiplicity measurements of a subcritical plutonium metal sphere. This work will enable a new method to
adjust the evaluated values of nuclear parameters using subcritical neutron multiplicity counting experiments, and it enables a more detailed sensitivity and uncertainty
analysis of subcritical multiplicity counting measurements of fissionable material.
173
Energy Correlations of Prompt Fission Neutrons in the Laboratory Frame
Imre Pázsit and Zsolt Elter
Chalmers University of Technology, Department of Applied Physics, Division of Nuclear
Engineering, Göteborg, Sweden
Correlations between the energies and emission angles of prompt fission neutrons are of significance for all methods that use the statistics of detection events to
determine subcritical reactivity in reactor cores or for non-destructive assay of nuclear materials for safeguards purposes. There is no experimental knowledge
available on the existence or properties of such correlations. Therefore, increasing attempts have recently been made to determine these correlations from the properties of
the fission process. One possible source of such correlations between fission neutron energies and angles in the laboratory system is the fact that the prompt
neutrons are emitted from moving fission fragments, even if their energies and emission angles are independent in the moving frame of the fragment. In this
paper this concept is investigated analytically and through numerical simulations. It is shown that such correlations are due to the random properties (energy and
direction of motion) of the fission fragments, and the magnitude of the covariance depends on the second-order moments of the fission fragment parameters.
Preliminary numerical simulations show that the correlations in energy generated this way are rather small.
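The kinematic origin of the effect follows from Galilean velocity addition: if a neutron is emitted with energy \(E\) and direction cosine \(\mu\) in the frame of a fragment whose velocity corresponds to a neutron-mass-scaled kinetic energy \(E_f = \tfrac{1}{2} m_n V_F^2\), the laboratory energy is

\[ E_{\mathrm{lab}} \;=\; E + E_f + 2\sqrt{E\,E_f}\,\mu , \]

so fission-to-fission fluctuations in \(E_f\) and in the fragment direction correlate the laboratory energies of neutrons born on the same fragment, even when \(E\) and \(\mu\) are sampled independently in the fragment frame.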
Next Generation Parallelism for Monte Carlo
Tuesday, April 21, 2015
1:30 PM
Hermitage C
Chairs: Mr. Jean-Christophe P. Trama, Dr. Forrest Brown
75
MCNP6 Monte Carlo Code Optimization
Forrest Brown
Monte Carlo Codes Group, LANL, Los Alamos, NM
The MCNP6.1 Monte Carlo code, released in 2013, offers many features not available in previous versions, but runs 20-30% slower for most problems and 2-5x
slower for some problems. The MCNP 2020 initiative was established in mid-2013 to address code performance and other issues. This paper reviews the initial
performance improvements to MCNP6.1 that have been incorporated into the 2014 update, MCNP6.1.1. The performance improvements to date have included both
classic code optimizations and algorithm improvements. Testing on a variety of problems has demonstrated that the performance improvements were effective,
yielding speedups of 1.2x-4x over MCNP6.1, depending on the type of problem. For criticality problems, MCNP6.1.1 runs 1.5x-1.7x faster than
MCNP6.1. Much more work is planned to improve MCNP6 performance, structure, and algorithms.
117
Advanced Computing Architecture Challenges for the Mercury Monte Carlo Particle Transport Project
Patrick S. Brantley, Shawn A. Dawson, Michael Scott McKinley, Matthew J. O’Brien, David E. Stevens, Bret R. Beck, Eugene D. Brooks III (1),
Ryan C. Bleile (2)
1) Lawrence Livermore National Laboratory, Livermore, CA, 2) Department of Computer and Information Science, University of Oregon, Eugene, OR
We describe the challenges posed to the Mercury Monte Carlo particle transport code development team from emerging and future advanced computing
architectures. We review recent work to scale Mercury to large numbers of MPI processes as well as to improve compute node parallelism via OpenMP threading and
demonstrate these capabilities using a reactor eigenvalue calculation. We then describe initial progress for enabling Mercury for the Intel Xeon Phi-based MIC
architecture. We present preliminary results of research investigations into the use of event-based algorithms in a Monte Carlo test code for application to GPU
architectures. We then briefly describe work to enable storage of nuclear data in shared memory and to enable the use of the Generalized Nuclear Data format in
Mercury via the General Interaction Data Interface.
136
High Performance Monte Carlo Computing with Tripoli: Present and Future
Francois-Xavier Hugot, Emeric Brun, Fausto Malvagi, Jean-Christophe Trama, Thierry Visonneau
CEA Saclay - DANS/DM2S/SERMA, Gif-sur-Yvette, France
Although Monte Carlo (MC) codes are natural users of the fast-growing capacities in High Performance Computing (HPC), adapting production-level codes such
as TRIPOLI-4® to the exascale is very challenging. We present here the dual strategy we follow: new thoughts and developments for the next versions of
TRIPOLI-4®, as well as insights into a prototype of a next-generation Monte Carlo (NMC) code designed from the beginning with the exascale in mind. The code's
random number generators will also be presented, as well as the strategy for verifying the parallelism.
139
Maximum Efficiency in Massively Parallel Execution of Monte Carlo Criticality Calculations
J. Eduard Hoogenboom (1), Aleksandar Ivanov and Victor Sanchez (2)
1) Delft Nuclear Consultancy, The Netherlands, 2) Institute for Neutron Physics and Reactor Technology, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
The parallel efficiency of Monte Carlo codes in reactor criticality calculations deteriorates at large numbers of processor cores. Many useful improvements to the
structure of Monte Carlo codes to improve parallel efficiency are discussed. Major improvements are running almost-independent tasks on different computer
nodes and limiting the execution time of the simulations per node to a maximum wall-clock time. In a demonstration calculation with an improved version of MCNP5 for
a full-size reactor core, the parallel efficiency when using 2,048 computer nodes, each with 16 processor cores, was 99% of the theoretical maximum.
Validation, Verification, and UQ
Tuesday, April 21, 2015
1:30 PM
Hermitage D
Chairs: Dr. Hany S. Abdel-Khalik, Dr. Ugur Mertyurek
295
Physics-guided Coverage Mapping (PCM): A New Methodology for Model Validation
Hany S. Abdel-Khalik (1) and Ayman I. Hawari (2)
(1) School of Nuclear Engineering, Purdue University, West Lafayette, IN (2) Department of Nuclear Engineering, North Carolina State University, Raleigh, NC
This manuscript deals with a fundamental question in any reactor model validation practice: given a body of available experiments and an envisaged domain of
reactor operating conditions (referred to as the reactor application), can one develop a quantitative measure of the portion of the prior uncertainties of the
reactor application that is covered by the available experiments? Coverage here means that the uncertainties of the reactor application originate from, and
behave in exactly the same way as, those observed at the experimental conditions. This approach is valuable because it provides a scientifically defensible criterion by
which experimentally measured biases can be credibly extrapolated (i.e., mapped) to biases for the reactor applications. Our proposed approach is referred to as
physics-guided coverage mapping (PCM), and in this introductory manuscript we demonstrate its application to fission reactor criticality safety applications. We discuss
the potential advantages of PCM over the similarity-index, data assimilation, and model calibration methods commonly employed in the nuclear community.
52
Depletion Calculation and Uncertainty / Sensitivity Analysis for a Sodium-Cooled Fast Spectrum Fuel Assembly
A. Aures, F. Bostelmann, V. Hannstein, K. Velkov, W. Zwermann (1), N. Guilliard, J. Lapins, W. Bernnat (2)
1) Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) mbH, Garching, Germany, 2) Institut für Kernenergetik und Energiesysteme (IKE), Universität Stuttgart, Stuttgart, Germany
The impact of the nuclear data libraries on the multiplication factor and the nuclide densities during depletion of a sodium-cooled fast system is investigated by
performing depletion calculations for one cycle. The model used in this study represents a fuel assembly of the inner core region of the Generation-IV sodium-cooled fast reactor concept MOX-3600. On the basis of this benchmark, comparative analyses are also carried out by the OECD/NEA Working Group SFR-FT. In this
paper, the nuclear data libraries ENDF/B-VII.0 / -VII.1 and JEFF-3.1.1 / -3.1.2 / -3.2 are used in continuous-energy format by the Monte Carlo codes MCNP-6 and
Serpent 2. Additionally, multi-group calculations are performed with TRITON/NEWT and TRITON/KENO using the 238-group ENDF/B-VII.0 library and with HELIOS using
the 190-group ENDF/B-VI library. A reactivity difference of about 500-600 pcm between JEFF-3.1.2 and JEFF-3.2 is observed. For most actinides, the various
depletion sequences obtain similar nuclide densities, except for U-234, Np-237, Am, and Cm, where the deviations are larger. Furthermore, systematic uncertainty and
sensitivity analyses concerning nuclear data uncertainties are performed with the sampling-based XSUSA methodology and with the TSUNAMI module of the SCALE 6.1
code system. In contrast to light water reactor systems, the uncertainty of the multiplication factor is significantly larger. The sensitivity analysis showed that the main
contribution originates from the uncertainty of the inelastic scattering of U-238 and, due to a strong correlation, also from the elastic scattering.
321
Probabilistic Error Bounds for Reduced Order Modeling
Mohammad G. Abdo, Congjian Wang, and Hany S. Abdel-Khalik
School of Nuclear Engineering, Purdue University, West Lafayette, IN 47906
Reduced order modeling (ROM) has proven to be an effective tool when repeated execution of reactor analysis codes is required. ROM operates on the assumption that the
intrinsic dimensionality of the associated reactor physics models is sufficiently small compared to the nominal dimensionality of the input and output data
streams. By employing a truncation technique with roots in linear-algebra matrix decomposition theory, ROM effectively discards all components of the input and
output data that have negligible impact on the reactor attributes of interest. This manuscript introduces a mathematical approach to quantify the errors resulting from the
discarded ROM components. As supported by numerical experiments, the introduced analysis proves that the contribution of the discarded components can be
upper-bounded with an overwhelmingly high probability. Conversely, this implies that the ROM algorithm can self-adapt to determine the level of
reduction needed such that the maximum resulting reduction error is below a user-specified tolerance limit.
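The flavor of such bounds can be illustrated with the standard randomized a posteriori estimate for a subspace truncation (a generic Halko-Martinsson-Tropp-style sketch, not the authors' specific bound): with r Gaussian probe vectors, the discarded-component norm is bounded, with probability at least 1 - 10^(-r), by 10 sqrt(2/pi) times the largest probed residual.

    import numpy as np

    def truncation_error_bound(A, Q, r=10, rng=None):
        # Probabilistic bound on ||(I - Q Q^T) A||_2 via r random probes
        rng = rng or np.random.default_rng()
        Y = A @ rng.standard_normal((A.shape[1], r))
        Y -= Q @ (Q.T @ Y)                   # remove the retained components
        return 10.0 * np.sqrt(2.0 / np.pi) * np.linalg.norm(Y, axis=0).max()

    # Usage: retain the leading k left singular vectors and bound the remainder
    rng = np.random.default_rng(0)
    A = rng.standard_normal((400, 300)) * (0.7 ** np.arange(300))  # decaying spectrum
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Q = U[:, :20]
    print(truncation_error_bound(A, Q, rng=rng), "vs true", s[20])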
111
Development of a Genetic Algorithm for Neutron Energy Spectrum Adjustment
Richard M. Vega and Edward J. Parma
Sandia National Laboratories, Albuquerque, NM
We describe a new method for neutron energy spectrum adjustment that uses a genetic algorithm to minimize the difference between calculated and measured
reaction probabilities. The measured reaction probabilities are found using neutron activation analysis. The method adjusts a trial spectrum provided by the user,
typically calculated using a neutron transport code such as MCNP. Observed benefits of this method over currently existing methods include the reduction of
unrealistic artifacts in the spectral shape as well as a reduced sensitivity to increases in the energy resolution of the derived spectrum. The method has thus far been
used to perform spectrum adjustments on several spectrum-modifying environments in the central cavity of the Annular Core Research Reactor (ACRR) at Sandia
National Laboratories, NM. Presented in this paper are the adjustment results for the polyethylene-lead-graphite (PLG) bucket environment, along with a comparison to
an adjustment obtained using the code LSL-M2, which uses a logarithmic least squares approach. The genetic algorithm produces spectrum-averaged reaction
probabilities in agreement with measured values and comparable to those resulting from LSL-M2. The true benefit of this method, the reduction of shape artifacts in
the spectrum, is difficult to quantify but can be clearly seen in the comparison of the final adjustments.
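A minimal GA sketch of the adjustment idea (all names, operators, and the objective are illustrative, not the paper's actual encoding): individuals are spectra, and fitness penalizes the mismatch between predicted and measured reaction probabilities.

    import numpy as np

    def ga_adjust(R, measured, trial, pop=200, gens=500, sigma=0.05, seed=0):
        """R[i, g]: response of reaction i per unit flux in energy group g.
        Evolve multiplicative perturbations of the trial spectrum to minimize
        the relative misfit of predicted reaction probabilities."""
        rng = np.random.default_rng(seed)
        P = trial * rng.lognormal(0.0, sigma, (pop, trial.size))   # population
        for _ in range(gens):
            misfit = np.square((P @ R.T - measured) / measured).sum(axis=1)
            elite = P[np.argsort(misfit)[: pop // 5]]              # selection
            kids = elite[rng.integers(0, len(elite), pop - len(elite))]
            kids = kids * rng.lognormal(0.0, sigma, kids.shape)    # mutation
            P = np.vstack([elite, kids])
        misfit = np.square((P @ R.T - measured) / measured).sum(axis=1)
        return P[np.argmin(misfit)]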
Deterministic Transport Methods
Tuesday, April 21, 2015
1:30 PM
Hermitage A-B
Chairs: Dr. Jean C. Ragusa, Dr. Troy L. Becker
99
Stabilization Methods for CMFD Acceleration
M. Jarrett, B. Kelley, B. Kochunas, T. Downar, E. Larsen
Department of Nuclear Engineering, University of Michigan, Ann Arbor, MI
The Coarse Mesh Finite Difference (CMFD) method is one of the most widely used methods for accelerating the convergence of numerical transport solutions.
However, in some situations, iterative methods using CMFD can become unstable and fail to converge. In this paper we evaluate several different modifications of the
CMFD scheme that are known to stabilize the iterative method. We perform Fourier analysis on a linearized version of each scheme applied to an idealized
(monoenergetic 1D infinite homogeneous medium planar SN) problem to characterize the stability and rate of convergence. We also compare the effectiveness of the
methods numerically by applying each to a 2D benchmark problem and a 2D/1D solution of a standard 3D benchmark problem using the MPACT code. We show that
several methods are capable of stabilizing a 2D MOC solution with CMFD acceleration, and examine the advantages and disadvantages of each. The numerical
results show that there is potential for significant reductions in MPACT run time using improved CMFD stabilization methods.
118
A Linear Stability Analysis of the Multigroup High-Order Low-Order (HOLO) Method
T. S. Haut, R. B. Lowrie, H. Park, R. M. Rauenzahn, and A. B. Wollaber
Computational Physics and Methods Group, Los Alamos National Laboratory, Los Alamos, NM
The thermal radiative transfer (TRT) equations are a nonlinear system of PDEs that describe the interaction of radiation with a high-energy background material. Their
high dimensionality and numerical stiffness often render traditional time-stepping methods prohibitively expensive. The multigroup high-order low-order (HOLO)
scheme is a recently developed moment-based acceleration scheme for the time evolution of the TRT equations, and can achieve orders-of-magnitude speedup over
traditional time-stepping methods. However, numerical evidence suggests that the HOLO method can become unstable in certain parameter regimes. To better
understand this phenomenon, we provide a linear stability analysis of the HOLO method when the solution is close to equilibrium. The result of the analysis is a
dispersion relation connecting the (exponential) decay/growth rate in time with the spatial frequency, the time step, the equilibrium temperature, and the TRT
parameters. We use the dispersion relation to explore the stability of the HOLO scheme as a function of the time step and the TRT parameters, and validate the analysis
against direct numerical simulation.
209
Nonlinear Diffusion Acceleration Method with Multigrid Multiplicative Corrections For Multigroup Eigenvalue
Transport Problems
Luke R. Cornejo and Dmitriy Y. Anistratov
Department of Nuclear Engineering North Carolina State University, Raleigh, North Carolina
Nonlinear diffusion acceleration (NDA) methods for solving multigroup k-eigenvalue problems in multidimensional geometry are developed. These methods are
defined by multigrid multiplicative corrections that use low-order NDA (LONDA) solutions on coarse energy grids to accelerate the multigroup iterations. Two-grid and three-grid
methods are presented. The performance of the methods is analyzed using tests based on C5G7 cross sections and geometry. Numerical results demonstrating the
performance of these multigrid methods are presented.
212
Techniques for stabilizing Coarse-Mesh Finite Difference (CMFD) in Methods of Characteristics (MOC)
Lulu Li, Kord Smith, and Benoit Forget
Department of Nuclear Science & Engineering, Massachusetts Institute of Technology, Cambridge, MA
The Coarse-Mesh Finite Difference (CMFD) method has been widely used to effectively accelerate neutron transport calculations. It has, however, been found to be at times
unstable in the presence of strong heterogeneities. The common practice for improving stability is to employ a damping factor on the non-linear diffusion coefficient
terms, but there is no method for determining the optimal damping factor for a practical reactor problem prior to the calculation. This paper investigates two problem-agnostic techniques that stabilize reactor calculations that would otherwise diverge with undamped CMFD. The first technique is to perform additional energy sweeps
over the upscattering group region during the high-order MOC calculation to generate more accurate information to pass into the CMFD calculation. The second
technique extends the traditional scalar flux prolongation to provide spatial variation inside each acceleration cell. This study uses the 2D C5G7 problem and the
Babcock & Wilcox 1810 series critical experiment benchmark to evaluate these methods. Numerical simulations showed that both techniques stabilize CMFD, and that
the linear prolongation technique incurs no additional computational cost compared to the optimally damped conventional method.
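For reference, the damping discussed in this session under-relaxes the nonlinear diffusion coefficient between transport iterations; a common form (illustrative notation; θ = 1 recovers undamped CMFD) is

\[ \hat{D}^{(n)} \;=\; (1-\theta)\,\hat{D}^{(n-1)} + \theta\,\hat{D}^{(n)}_{\mathrm{transport}}, \qquad 0 < \theta \le 1 . \]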
Monte Carlo with CAD and Complex Geometries
Tuesday, April 21, 2015
1:30 PM
Two Rivers
Chairs: Dr. Paul Hulse, Dr. Paul P.H. Wilson
88
Convex-based Void Filling Method for CAD-based Monte Carlo Geometry Modeling
Shengpeng Yu, Mengyun Cheng, Song Jing, Pengcheng Long, Liqin Hu
Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, China
For generating complex Monte Carlo (MC) calculation geometry, automatic geometry modeling is more efficient than the manual description approach, which is
tedious, labor-intensive, and error-prone. For MC codes such as MCNP, all space must be described for accuracy and efficiency, including both the solids and the
void spaces between them. For systems with complicated geometry, the void space modeling is time consuming and error-prone, and it affects the efficiency of both
modeling and calculation. This paper proposes an advanced void-filling method named Convex-based Void Filling (CVF), based on convex volumes and a sub-space
subdivision strategy that takes splitting quality into account. The method iteratively subdivides the entire transport space into sub-spaces, tests against each sub-space
the convex volumes generated by decomposing the solids in the CAD model, and then generates the description of the void spaces as the complement of the
volumes in each sub-space. The method has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested with a
neutron transport calculation of the International Thermonuclear Experimental Reactor (ITER) Alite model. The test results demonstrate the higher efficiency of the
proposed method for both geometry conversion and MC calculation.
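A schematic sketch of the subdivision-plus-complement idea (pure illustration, not SuperMC/MCAM code; `intersects` is a user-supplied convex-volume/box overlap test):

    from dataclasses import dataclass

    @dataclass
    class Box:
        lo: tuple
        hi: tuple
        def split(self):
            # bisect along the longest axis (one simple "splitting quality" rule)
            ax = max(range(3), key=lambda a: self.hi[a] - self.lo[a])
            mid = 0.5 * (self.lo[ax] + self.hi[ax])
            hi1, lo2 = list(self.hi), list(self.lo)
            hi1[ax] = lo2[ax] = mid
            return Box(self.lo, tuple(hi1)), Box(tuple(lo2), self.hi)

    def fill_voids(box, convexes, intersects, max_per_cell=8, depth=0, max_depth=12):
        """Recursively subdivide the transport space; in each final sub-space
        the void is described as the box minus the convex volumes intersecting it."""
        hit = [c for c in convexes if intersects(c, box)]
        if len(hit) <= max_per_cell or depth == max_depth:
            return [(box, hit)]
        left, right = box.split()
        return (fill_voids(left, hit, intersects, max_per_cell, depth + 1, max_depth)
                + fill_voids(right, hit, intersects, max_per_cell, depth + 1, max_depth))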
90
Advanced Geometry Navigation Methods without Cavity Representation for Fusion Reactors
Bin Wu, Zhenping Chen, Jing Song, Pengcheng Long, Liqin Hu
Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, China
Particle transport simulation with many complex and irregular geometric shapes is a great challenge in the neutronics design and analysis of fusion reactors. In
MCNP, the most commonly deployed tool for fusion reactors, all space in the particle transport universe must be described, including cavities. The quality of the
cavity description directly affects the calculation accuracy and efficiency. In view of the difficulty of describing cavities and the resulting instability of calculation
efficiency, a new geometry representation method without cavity description was developed in the Super Monte Carlo Simulation Program for Nuclear and Radiation
Process (SuperMC). Advanced geometry navigation was also developed in SuperMC to achieve high performance for particle transport in fusion reactors. The ITER
benchmark model, a validation model released by the ITER International Organization, was used to verify the accuracy and efficiency of the geometry representation
and navigation methods in SuperMC. The SuperMC results were in excellent agreement with MCNP, and the computation speed was faster than that of MCNP.
129
CAD-Based Geometry Type in Serpent 2 -- Application in Fusion Neutronics
Jaakko Leppänen
VTT Technical Research Centre of Finland, Espoo, Finland
This paper presents a practical demonstration of the CAD-based geometry type developed for the Serpent 2 Monte Carlo code. The geometry is constructed of
three-dimensional solid bodies, with boundaries defined by a triangulated surface mesh. The data is read in the STL file format, which can be exported by most CAD design
tools. Cell search and other geometry routines developed for handling the triangulated surfaces are introduced, and the methodology is demonstrated with a
complicated full-scale model of the ITER fusion reactor. The calculations involve verification of the complex geometry and assessment of computational performance.
It is concluded that Serpent can be considered a viable simulation tool for fusion neutronics applications. The work continues with the development of source sampling
and variance reduction methods.
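Cell search on a watertight triangulated (STL) body commonly reduces to a parity test: cast a ray from the query point and count triangle crossings, with an odd count meaning inside. A minimal sketch using the Möller-Trumbore intersection test (illustrative only; production routines such as Serpent's also use acceleration structures):

    import numpy as np

    def ray_hits_triangle(orig, direc, v0, v1, v2, eps=1e-12):
        """Moller-Trumbore: does the ray orig + t*direc (t > 0) cross the triangle?"""
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direc, e2)
        det = e1 @ p
        if abs(det) < eps:
            return False                     # ray parallel to triangle plane
        t_vec = orig - v0
        u = (t_vec @ p) / det
        if u < 0.0 or u > 1.0:
            return False
        q = np.cross(t_vec, e1)
        v = (direc @ q) / det
        if v < 0.0 or u + v > 1.0:
            return False
        return (e2 @ q) / det > eps          # intersection at positive t

    def point_inside(point, triangles, direc=np.array([1.0, 0.0, 0.0])):
        """Parity of ray crossings for a watertight mesh: odd means inside."""
        hits = sum(ray_hits_triangle(point, direc, *tri) for tri in triangles)
        return hits % 2 == 1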
185
Preliminary Investigation of MCNP6 Unstructured Mesh Geometry for Radiation Flux Calculations Involving Space
Environment
Kristofer Zieb, Hui Lin, Wei Ji, Peter F. Caracappa, X. George Xu
Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY
The latest release of MCNP6 contains the capability to represent geometry with unstructured meshes. The unstructured mesh features, however, have been tested with
only limited examples to date. The aim of this paper is to examine the use of the new unstructured mesh features for space radiation dose calculations involving a
space habitat during a solar particle event. High-energy proton transport, along with its secondary particles (a modeling capability integrated from MCNPX), was tested
with MCNP6's unstructured mesh feature to gain insight into the potential uses and limitations of MCNP6's development. Abaqus was used to generate an
unstructured tetrahedral mesh of a space habitat structure, which was then used with MCNP6.1.1 Beta to simulate a Solar Particle Event (SPE) consisting of a high
flux of protons with energies up to 500 MeV. Trial simulations were performed using 1st- and 2nd-order tetrahedral meshes; however, it is concluded that high-energy
proton transport still requires further development. Key Words: Space, Unstructured Mesh, MCNP6, Solar Particle Event.
Whole-Core Modeling and Simulation
Tuesday, April 21, 2015
1:30 PM
Belmont
Chair: Dr. Benjamin S. Collins
81
Analysis of BEAVRS Benchmark Problem by Using Enhanced Monte Carlo Code MVP with JENDL-4.0
Motomu Suzuki, Yasushi Nauchi
Nuclear Technology Research Laboratory, Central Research Institute of Electric Power Industry (CRIEPI), Tokyo, Japan
A continuous-energy Monte Carlo code, MVP, was enhanced to use large memory on a 64-bit OS for application to large-scale calculations such as the pin-power
evaluation of a whole core. To validate the enhanced MVP, the PWR whole core of the MIT BEAVRS benchmark problem was precisely modeled, and the initial startup
tests at the Hot Zero Power (HZP) condition were calculated with the nuclear data library JENDL-4.0. In a preliminary calculation, the eigenvalue and the pin-power
distribution were evaluated to investigate the convergence of the calculation. A symmetrical power distribution was calculated, as expected. The calculated results for
the critical boron concentration, the control rod bank worth, the isothermal temperature coefficient (ITC), and the in-core detector signals were compared with the
measurement data included in the benchmark problem. For the criticality and the control rod bank worth, the calculated results agree well with the
measurement data. The ITC results of the enhanced MVP give fairly good predictions, considering that the calculations do not incorporate an exact
resonance scattering model such as the Doppler Broadening Rejection Correction (DBRC). The axially integrated in-core detector signals showed a horizontal tilt
similar to the trends reported for other codes.
205
Improved Diffusion Coefficients for SPn Axial Solvers in the MPACT 2D/1D Method Applied to the AP1000® PWR Start-Up Core Models
Shane Stimpson(1), Fausto Franceschini(2), Benjamin Collins, Andrew Godfrey, Kang Seog Kim (3), Aaron Graham, and Thomas Downar (1)
(1) Department of Nuclear Engineering and Radiological Sciences University of Michigan, Ann Arbor, Michigan, (2) Westinghouse Electric Co. LLC Cranberry Township, Pennsylvania. (3) Oak Ridge
National Laboratory, Oak Ridge, Tennessee
As part of the Virtual Environment for Reactor Applications Core Simulator (VERA-CS), the 2D/1D capability in the MPACT code is being developed collaboratively by
Oak Ridge National Laboratory and the University of Michigan. MPACT was used to model the AP1000® reactor start-up cores. One of the major shortcomings
observed in initial results was the inability to accurately resolve the pin power distributions for cases with partial-length burnable poison pins. The primary source of the
errors was determined to be the diffusion coefficients used in the axial transport solvers of the 2D/1D scheme. The work here demonstrates the deficiency of
the previous method, which used the out-scatter approximation to calculate the transport cross sections. New results are
obtained by using the in-scatter approximation and what is being termed the "Neutron Leakage Conservation" approximation to more accurately determine the transport cross
sections used to construct the diffusion coefficients. Additionally, the methods were applied to both 3D assembly and core problems, and the
Nodal Expansion Method and Simplified PN axial transport solvers were compared using these improved diffusion coefficients. Significant improvements are observed, particularly in
the single-assembly cases with partial-length Wet Annular Burnable Absorber pins. In the quarter-core cases, the improvements are less apparent because the power
distribution is flatter, though the results demonstrate that it is still worthwhile to incorporate these corrections.
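For reference, the two approximations named above collapse the P1 scattering moment differently into the transport cross section that defines \(D_g = 1/(3\sigma_{tr,g})\); in standard textbook notation (not necessarily MPACT's exact implementation):

\[ \text{out-scatter:}\quad \sigma_{tr,g} = \sigma_{t,g} - \sum_{g'} \sigma_{s1,\,g\to g'}, \qquad \text{in-scatter:}\quad \sigma_{tr,g} = \sigma_{t,g} - \frac{1}{J_g}\sum_{g'} \sigma_{s1,\,g'\to g}\,J_{g'}, \]

where \(J_g\) is an approximate current (leakage) spectrum used as the weighting function.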
291
Comet Whole Core Solution to an I2S-LWR Benchmark Problem
Dingkang Zhang, Farzad Rahnema, Ryan Hon, Gabriel Kooreman, Bojan Petrovic
Georgia Institute of Technology Atlanta, GA
In this paper, the coarse mesh radiation transport (COMET) method was used to obtain the whole-core solution for an Integral Inherently Safe Light Water Reactor
(I2S-LWR) benchmark problem with UO2 fuel. The benchmark problem contains 121 fuel assemblies and 40,656 fuel pins. In the COMET calculations, a set of
fixed-source local problems with incident fluxes imposed on the boundary was first solved to obtain the response coefficients for all unique coarse meshes. These
response coefficients were then compiled into a library used in the whole-core calculation to compute the core eigenvalue and pin fission density distribution. The
COMET results were compared with those from the Monte Carlo code MCNP5. The comparison indicates that both the core eigenvalue and the pin fission density
distribution predicted by COMET agree very well with the MCNP reference solution when the orders of the angular flux expansion in the two spatial variables and the
polar and azimuthal angles on the mesh boundaries are 4, 4, 2, and 2. The average and mean differences in the pin fission density distribution are 0.64% and 0.59%,
respectively. These pin fission density discrepancies are within the 3-sigma uncertainty of the MCNP reference solution. The eigenvalue difference between the two
calculations is 36 pcm. The comparison indicates that COMET can achieve accuracy comparable to Monte Carlo. It is also found that COMET's computational speed
is 3-4 orders of magnitude faster than MCNP's.
302
Analysis of the BEAVRS Benchmark Using MPACT
Benjamin Collins, Andrew Godfrey
Oak Ridge National Laboratory Oak Ridge, TN
MPACT is the primary reactor simulation tool being developed by researchers at Oak Ridge National Laboratory and the University of Michigan as an advanced
pin-resolved transport capability within the Virtual Environment for Reactor Analysis (VERA). VERA is the end-user reactor simulation tool being developed by the
Consortium for Advanced Simulation of Light Water Reactors (CASL). MPACT is used to perform the Benchmark for Evaluation and Validation of Reactor
Simulations (BEAVRS), which provides two cycles' worth of operating power history, along with a full, detailed description of the geometry, measured critical boron
concentrations, and flux maps.
Next Generation Parallelism for Monte Carlo
Tuesday, April 21, 2015
3:40 PM
Hermitage C
Chairs: Mr. Jean-Christophe P. Trama, Dr. Robert Grove
151
Strategies and Algorithms for Hybrid Shared-Memory/Message-Passing Parallelism in Monte Carlo Radiation
Transport Codes
David P. Griesheimer, Brian R. Nease (1), Peter S. Dobreff, Paul K. Romano(2), Daniel F. Gill(1)
Bechtel Marine Propulsion Corporation (1) Bettis Atomic Power Laboratory, West Mifflin, PA (2) Knolls Atomic Power Laboratory, Schenectady, NY
Presently, most high-performance Monte Carlo (MC) radiation transport solvers use a hybrid message-passing and shared-memory approach to parallelism, which
enables the codes to utilize available computing resources, both between and within compute nodes on a cluster. In this paper we review several strategies,
algorithms, and best practices for improving efficiency and scalability of hybrid shared-memory/message-passing parallelism within a MC radiation transport code. The
code-design strategies and best-practices presented within are largely based on experience obtained while developing MC21, an in-house, continuous-energy MC
solver specifically designed for large-scale reactor analysis simulations on massively-parallel computing systems. In addition, the paper provides details on three novel
parallel algorithms for operations that play a fundamental role in MC transport simulations: random number generation, fission source renormalization, and fixed-source sampling. The parallel efficiency and scalability of these methods are established through a series of scaling studies, performed with MC21, which cover a
variety of representative scenarios for large-scale reactor analysis simulations. Results from these studies demonstrate that MC21, using the methods described in
this paper, is able to handle extremely large problem sizes (up to 1 TB of memory, including 1.1×10^11 depletable nuclides) and scales well through thousands of
processors.
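Of the three algorithms, the random number generation strategy is the most self-contained to illustrate: LCG skip-ahead gives each history its own reproducible substream regardless of which processor simulates it. A minimal sketch (generic algorithm with illustrative 63-bit parameters, not MC21's actual generator):

    def lcg_skip(x, n, g, c, m):
        """Advance an LCG x -> (g*x + c) mod m by n steps in O(log n) work,
        by repeated squaring of the affine map (standard skip-ahead)."""
        G, C = 1, 0                  # accumulated transform: x -> G*x + C
        while n > 0:
            if n & 1:
                G, C = (G * g) % m, (C * g + c) % m
            g, c = (g * g) % m, (c * (g + 1)) % m   # double the step
            n >>= 1
        return (G * x + C) % m

    # Illustrative parameters and per-history stride (hypothetical values):
    g, c, m = 2806196910506780709, 1, 2**63
    def seed_for_history(master_seed, history, stride=152917):
        return lcg_skip(master_seed, history * stride, g, c, m)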
180
Shift: A Massively Parallel Monte Carlo Radiation Transport Package
Tara M. Pandya, Seth R. Johnson, Gregory G. Davidson, Thomas M. Evans, and Steven P. Hamilton
Oak Ridge National Laboratory, Oak Ridge, TN
This paper discusses Shift, the massively parallel Monte Carlo radiation transport package developed at Oak Ridge National Laboratory, and reviews the capabilities,
implementation, and parallel performance of the code package. The package is designed to scale well on high-performance architectures. Scaling results
demonstrate very good strong and weak scaling when applied to LWR analysis problems. Benchmark results from various reactor problems also show that Shift
compares well with other contemporary Monte Carlo codes and experimental results.
238
Influence of the memory subsystem on Monte Carlo code performance
Paul K. Romano(1), Andrew R. Siegel and Ronald O. Rahaman(2)
(1) Bechtel Marine Propulsion Corporation, Knolls Atomic Power Laboratory, Schenectady, NY, (2) Argonne National Laboratory, Theory and Computing Sciences, Argonne, IL
In this study, a detailed look at how miss rates and latencies in a multi-level memory hierarchy can have significant effects on the performance of a Monte Carlo code
is presented. Simulations of the Monte Carlo performance benchmark were run, and hardware performance counters were collected using the Performance API
(PAPI). The results of the simulations and an accompanying analysis suggest that for light-water reactor depletion problems, the most important factor that determines
performance is the effective memory latency accounting for characteristics of the L2 cache, L3 cache, and main memory. Observed performance in multi-socket
NUMA architectures was also explained by the performance counters collected.
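The effective latency in question is essentially the classical average-memory-access-time recurrence recovered from the counters; in illustrative notation,

\[ t_{\mathrm{eff}} \;=\; t_{L1} + m_{L1}\bigl(t_{L2} + m_{L2}\,(t_{L3} + m_{L3}\,t_{\mathrm{DRAM}})\bigr), \]

where \(m_i\) is the miss ratio at cache level \(i\) and \(t_i\) the additional latency incurred on reaching that level.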
Validation, Verification, and UQ
Tuesday, April 21, 2015
3:40 PM
Hermitage D
Chair: Dr. Hany S. Abdel-Khalik
271
JSI TRIGA Fission Rate Experimental Benchmark
Žiga Štancar, Luka Snoj(1), Loic Barbot and Christophe Domergue(2)
(1) Jožef Stefan Institute Ljubljana, Slovenia, (2) CEA, DEN, DER, Instrumentation Sensors and Dosimetry Laboratory Cadarache, Saint-Paul-Lez-Durance, France
In recent years an effort has been made to improve the Monte Carlo calculational model of the TRIGA Mark II reactor at the Jožef Stefan Institute (IJS). An
important step in the experimental verification of the model was an experiment measuring fission rate profiles using miniature fission
chambers developed by the Commissariat à l'Énergie Atomique (CEA). These were inserted into the core of the reactor and axial scans of absolute fission reaction
rates in multiple positions were measured. The experiment was consequently modelled in detail with the Monte Carlo method and a comparison between the
measured and calculated fission rates was performed. It has been shown that the agreement between the absolute reaction rates is relatively good for all measuring
positions, with the average relative discrepancies being below five percent. In order to complete the validation process of the reactor Monte Carlo model an extensive
evaluation of experimental and calculational uncertainties has been performed. This included the study of fission chamber positioning uncertainties, material
composition perturbations and the evaluation of other uncertainty sources such as the use of different nuclear data libraries, core temperature effects, etc. Current analyses
show that the total experimental uncertainty is sufficiently low that the experiment can be considered as an experimental benchmark and will be proposed for inclusion
into the International Reactor Physics Experiment Evaluation (IRPhE) Project handbook.
35
Modeling and Simulation of Hanford B Reactor Experiments
Germina Ilas, Ian Gauld, Eva Sunny, Mike Westfall (1), Jennifer Nguyen (2)
1) Oak Ridge National Laboratory, Oak Ridge, TN, USA, 2) Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI USA
Experimental data on isotopic concentrations in irradiated nuclear fuel are essential for the validation of computational methods and nuclear data applied in the reactor
modeling and simulations used in spent fuel safety and nuclear safeguards. This study investigates the potential use of declassified experimental data from the
Hanford B reactor as a reactor and spent fuel benchmark. Unlike most spent fuel benchmarks involving commercial fuel, the Hanford B experimental data include
unique measurements for very low exposure production fuel of less than 3 GWd/MTU. Details are provided on analysis results using lattice physics methods as well
as preliminary findings from full-core models.
73
Impact of Fission Yield Correlations on Burnup Problems
Luca Fiorito (1,2), Alexey Stankovskiy, Gert Van den Eynde (1), Carlos J. Diez, and Oscar Cabellos (3)
1) Institute for Advanced Nuclear Systems, SCK•CEN, 2400 Mol, Belgium, 2) ULB, Université Libre de Bruxelles, Bruxelles, Belgium, 3) OECD Nuclear Energy Agency (NEA)/Data Bank, Issy-les-Moulineaux, France
Nuclear data libraries generally provide independent fission yield estimates along with their uncertainties, but devoid of complete covariance matrices. Such
uncertainties should be considered in the uncertainty quantification of burnup responses (e.g. isotopic inventory, k-eigenvalue). However, several incongruities were
detected amongst the evaluated fission yield uncertainties, which could impact uncertainty quantification (UQ) studies. As part of this work, we resolved the data
inconsistencies found in the JEFF-3.1.1 library by introducing fission yield correlations. Such correlations were produced using a generalised least-squares updating
approach, with conservation equations acting as fitting models. The process was iterative: fission yield estimates and covariances were revised, each time
introducing specific sets of measured values, when available, or evaluated conservation criteria. We conveyed the information of the new covariance dataset into
randomly perturbed files, ready for random sampling calculations. The number of samples was large enough to ensure convergence of the first two moments. Then, we
quantified the uncertainty of the isotopic inventory and keff of the PWR fuel rod sample of the REBUS international program, first using updated and then original data.
This procedure included data sampling followed by depletion calculations using ALEPH, the SCK•CEN burnup code, which simulated the irradiation history. The
response uncertainty estimate, obtained through a statistical analysis of the results, showed a sharp drop when using correlated fission yields.
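Checking convergence of the first two moments, as done above, amounts to tracking the running mean and variance of the sampled responses. A minimal sketch using Welford's one-pass update (the k-eff sample values below are hypothetical placeholders):

    // Running mean/variance (Welford's algorithm) over sampled responses.
    #include <cstdio>
    #include <cmath>
    #include <vector>

    int main() {
        std::vector<double> keff = {1.0012, 0.9987, 1.0005, 0.9991, 1.0020};
        double mean = 0.0, m2 = 0.0;
        long n = 0;
        for (double x : keff) {
            ++n;
            double delta = x - mean;
            mean += delta / n;           // update running mean
            m2 += delta * (x - mean);    // update sum of squared deviations
        }
        double variance = m2 / (n - 1);  // unbiased sample variance
        std::printf("mean = %.5f, std dev = %.5f (n = %ld)\n",
                    mean, std::sqrt(variance), n);
        return 0;
    }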
Deterministic Transport Methods
Tuesday, April 21, 2015
3:40 PM
Hermitage A-B
Chairs: Dr. Troy L. Becker, Dr. Erin Fichtl
94
An Analytical Discrete Ordinates Solution for One-Speed Slab Geometry Adjoint Transport Problems with Isotropic
Scattering
Cássio B. Pazinatto, Solange R. Cromianski (1), Ricardo C. Barros (2), and Liliane B. Barichello (3)
1) Programa de Pós-graduação em Matemática Aplicada, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brasil, 2) Programa de Pós-graduação em Modelagem Computacional, Instituto
Politécnico, Universidade do Estado do Rio de Janeiro, Nova Friburgo, RJ, Brasil, 3) Instituto de Matemática, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brasil
The adjoint neutral-particle transport equation in discrete ordinates formulation is solved by the analytical discrete ordinates method (ADO). An explicit solution in
terms of the spatial variable is derived for isotropic scattering problems in heterogeneous slabs with reflective boundary conditions. The computed adjoint discrete
ordinates solution is used to evaluate the absorption of neutral particles for source-detector problems. Numerical results for two typical model problems are given to
illustrate the efficiency and accuracy of the proposed method.
38
A Benchmark for Assessing the Effectiveness of Diffusion Synthetic Acceleration Schemes
Richard P. Smedley-Stevenson and Andrew W. Hagues (1), József Kópházi (2)
1) AWE PLC, Reading, Berkshire, UK, 2) Department of Mechanical Engineering, Imperial College London, South Kensington Campus, London SW7, UK
This paper sets out a proposal for a benchmark problem aimed at quantifying the effectiveness of acceleration schemes applied to sweep-based discrete ordinates
codes for strongly heterogeneous media, modeled with optically thick cells. We investigate the steady-state limit of the pipeflow test problem, which is designed to test
the ability of transport codes to solve complex thermal radiation transport problems, but can be run with any transport code which supports vacuum boundary
conditions and an external isotropic source term. We use this problem to study the effectiveness of two different variants of diffusion synthetic acceleration (DSA)
applied to a linear discontinuous finite element spatial discretisation. One scheme is based on the asymptotic diffusion limit (ADL) behaviour of the transport
discretisation, while the other is based on a modified form of the symmetric interior penalty (MIP) discretisation of the diffusion equation. The MIP equations are
symmetric positive definite and we are therefore able to use powerful blackbox multi-grid algorithms which offer a scalable solution strategy (where the solver time
scales linearly with the number of unknowns), but we observe degraded convergence of the transport iteration, so this solution strategy is not necessarily optimal. The
overall effectiveness of the two DSA schemes depends crucially on the computational time spent in both the linear solve and performing the transport sweeps, and the
ADL-DSA equations remain superior in terms of their acceleration properties.
45
A Surface Integral Based Momentum Advection Scheme for Neutron Transport in Moving Materials
Erin D. Fichtl, Randal S. Baker, and Jon A. Dahl (1), Jim E. Morel (2)
1) Los Alamos National Laboratory, Computational Physics and Methods, CCS-2, Los Alamos, NM, 2) Department of Nuclear Engineering, Texas A&M University, College Station, TX
A new momentum advection scheme for neutron transport in moving materials is developed and tested for 1-D spatial geometries. In previous work, the momentum
advection term was discretized by taking integrals over velocity cell volumes. The new scheme replaces these integrals over cell volumes with surface integrals by
applying the divergence theorem. The surface integral approach is shown to be accurate and numerically stable and is additionally easier to implement and extend to
multi-dimensional geometries than the old scheme. Preconditioning strategies that exploit the structure of the momentum advection matrix equation and can be
applied to either advection scheme are also developed and shown to accelerate the convergence of the GMRES solver. The new advection scheme and
preconditioning strategies are tested on a problem of interest, namely the r-process in collapsing proto-neutron stars.
Monte Carlo with CAD and Complex Geometries
Tuesday, April 21, 2015
3:40 PM
Two Rivers
Chairs: Dr. Paul Hulse, Dr. Paul P.H. Wilson
231
CAD-Based High Energy Particle Transport Using the DAGMC Toolkit
Andrew Davis and Paul P.H. Wilson (1), Kerry T. Lee(2)
(1) The University of Wisconsin-Madison, Madison, WI,
(2) National Aeronautics and Space Administration Johnson Space Center, Houston, TX
The codes Geant4 & FLUKA have been interfaced with the DAGMC toolkit, resulting in FluDAG (FLUKA with DAGMC) and DagSolid (a DAGMC geometry class within
Geant4). Simple comparisons between DAG-MCNP5, DagGeant4 & FluDAG show acceptable agreement given the limitations of the problem. A detailed comparison
between Fluka & FluDAG was performed in complex geometry and excellent agreement was found between the codes for all parameters determined.
269
Meteor: CAD, CSG and Woodcock Models
K Searson, F Fleurot, D Dewar, S Connolly and P Hulse
Sellafield Ltd., Risley, Warrington, UK
Meteor is a new criticality code developed by Sellafield Ltd., which supports fast direct tracking through CAD models, including those with NURBS surfaces, without
relying on model simplifications or facetting. Meteor is currently in the testing phase and this paper presents the current k-effective and speed comparisons against the
MONK criticality code and k-effective comparisons to experimental cases. The tests show very good statistical agreement between Meteor and MONK’s k-effective
values. Meteor also shows higher calculation speed, being on average about 2.5 times faster on the MONK validation set. With the CAD models currently tested, the
speeds are either comparable or not significantly slower (1.5 times slower for the model presented here) than CSG models. This last result is encouraging as
traditionally direct CAD tracking is believed to be orders of magnitude slower.
278
The CAD to MC Geometry Conversion Tool MCCAD: Recent Advancements and Applications
L. Lu, U. Fischer, Y. Qiu, and P. Pereslavtsev
Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen, Germany
The latest advancements implemented to KIT’s McCad geometry conversion tool are presented in this paper. These include improved core conversion algorithms and
an improved interface. McCad has also been integrated as a module into the Open Source computation platform SALOME, which provides a convenient interactive
environment for geometry modeling, visualization and the coupling to multi-physics calculation tools. The current version of McCad is shown to be suitable for the
conversion of rather large and complex models such as ITER, including various detailed components and sub-systems, and the European Demonstration power reactor
with a complex breeding blanket configuration.
Theoretical Topics in Neutron Transport Theory
Tuesday, April 21, 2015
3:40 PM
Belmont
Chair: Dr. Barry D. Ganapol
48
Stability of SN K-Eigenvalue Iterations using CMFD Acceleration
Kendra P. Keady and Edward W. Larsen
Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI
In this paper, a Fourier analysis is employed to assess the stability of discrete ordinates k-eigenvalue calculations with coarse-mesh finite difference (CMFD)
acceleration using a fixed number of inner iterations per outer (hereafter referred to as the SN-CMFD method). We describe the SN-CMFD iteration equations for a
representative one-dimensional k-eigenvalue transport problem. Since the k-eigenvalue iteration is inherently nonlinear, we linearize the system to produce equations
amenable to Fourier analysis. This linearization is carried out for a homogeneous problem with periodic boundary conditions and an integer number of fine spatial cells
per coarse cell. The subsequent Fourier analysis yields a matrix system of equations that can be solved to obtain the theoretical spectral radius. We compare the
theoretical value to numerical estimates obtained using a 1-D SN-CMFD simulation for a large slab problem with vacuum boundaries. The experimental values
compare favorably with theoretical predictions for most cases, but discrepancies are present when the scattering ratio is high and the number of inner iterations per
outer (N) is small. The Fourier analysis correctly predicts that the spectral radius decreases as N increases.
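Numerical spectral radius estimates of the kind compared above are typically formed from ratios of successive iterate differences. A generic sketch (apply_iteration is a placeholder standing in for one outer iteration, here a toy linear map with known spectral radius 0.6):

    // Estimate a fixed-point iteration's spectral radius from successive
    // iterates: rho ~ ||x_{k+1} - x_k|| / ||x_k - x_{k-1}||.
    #include <vector>
    #include <cmath>
    #include <cstdio>

    std::vector<double> apply_iteration(const std::vector<double>& x) {
        std::vector<double> y(x.size());
        for (size_t i = 0; i < x.size(); ++i) y[i] = 0.6 * x[i];  // toy map
        return y;
    }

    double dist(const std::vector<double>& a, const std::vector<double>& b) {
        double s = 0.0;
        for (size_t i = 0; i < a.size(); ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
        return std::sqrt(s);
    }

    int main() {
        std::vector<double> prev(100, 1.0);
        std::vector<double> curr = apply_iteration(prev);
        for (int k = 0; k < 10; ++k) {
            std::vector<double> next = apply_iteration(curr);
            std::printf("iter %d: rho estimate = %.4f\n",
                        k, dist(next, curr) / dist(curr, prev));
            prev = curr;
            curr = next;
        }
        return 0;
    }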
235
A New Derivation of the Doppler-Broadened Kernel for Elastic Scattering and Application to Upscattering Analysis in
the Resonance Range of 238U
Richard Sanchez
Department of Nuclear Engineering, Seoul National University, Gwanak-gu, Seoul, 151-744, Republic of Korea
A new independent derivation of the classical Blackshaw-Murray formula for the Doppler-broadened elastic scattering kernel is given. This derivation includes the
effects of anisotropy of scattering in the center of mass, and it is much shorter and easier to understand than the previous one. We also introduce an asymptotic
ordering, which treats thermal agitation and resonance effects on the same footing, to analyze and explain the behavior of the Doppler-broadened kernel near the lower
resonances of heavy isotopes. The ordering is complemented by a detailed analysis for the case of collinear scattering. Besides the well-known divergence for the
singular case when the neutron velocity does not change, the behavior of the Doppler-broadened kernel is characterized by the predominance of head-on
backscattering with secondary energies exhibiting a fast transition from a small increase in downscattering to a large, dominant upscattering as the initial neutron
energy increases on the left wing of a resonance, while the opposite behavior is observed on the right wing.
208
An Azimuthal, Fourier Moment-Based Transverse Leakage Approximation for the MPACT 2D/1D Method
Shane Stimpson(1), Benjamin Collins(2), and Thomas Downar(1)
(1) Department of Nuclear Engineering and Radiological Sciences University of Michigan, Ann Arbor, Michigan, USA, (2) Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA
The MPACT code being developed collaboratively by Oak Ridge National Laboratory and the University of Michigan is the primary neutron transport solver within the
Virtual Environment for Reactor Applications Core Simulator (VERA-CS). The 2D/1D scheme is the most commonly used method for solving three-dimensional
problems. Several axial solvers in this scheme assume isotropic transverse leakages, but work with the axial SN solver has extended these leakages to include both
polar and azimuthal dependence. However, explicit angular representation can be burdensome, both in terms of run time and memory requirements. The work here
alleviates this burden by assuming that the azimuthal dependence of the angular flux and transverse leakages is represented by a Fourier series expansion. At the
heart of this is a new axial SN solver that takes in a Fourier expanded radial transverse leakage and generates the angular fluxes used to construct the axial
transverse leakages used in the 2D-MOC calculations. These new capabilities are demonstrated for the rodded Takeda LWR benchmark problem and the rodded B
configuration of the extended C5G7 benchmark suite. Results with heterogeneous pins, as in the C5G7 benchmark, indicate that cancellation of error between the
angular and spatial representation of the transverse leakages may be a factor. To test this, an alternative C5G7 problem has been formulated using homogenized pin
cells to reduce the errors introduced by assuming the axial transverse leakage is spatially flat. In both the Takeda and C5G7 problems with homogeneous pins,
excellent agreement is observed at a fraction of the runtime and with significant reductions in memory footprint.
Monte Carlo Code Poster Session
Tuesday, April 21, 2015
5:30 PM
Plantation Lobby
Organized by Dr. Christopher Perfetti (ORNL) and Dr. Jean-Christophe Trama (CEA)
This poster session will provide a forum for Monte Carlo code development teams to showcase their recent code developments
and discuss their newest code features. This session will feature an introductory keynote speech by Dr. John Wagner, Director of
the Reactor and Nuclear Systems Division at Oak Ridge National Laboratory. All conference attendees are welcome to attend.
Brief descriptions of the codes that are participating in this session are given below, as well as the layout for the posters. This
layout was determined randomly using, of course, the Monte Carlo method.
1 – MC21
David Griesheimer
Bechtel Marine Propulsion Corporation
MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional
models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21
has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code supports many in-line
reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy
neutron/nucleus interaction physics over the range from 1.0×10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic
and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy
interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering,
pair production, and photoelectric interactions. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create
spatial cells from first- and second-order surfaces. Models can also be built as hierarchical collections of previously defined spatial cells, with interior detail provided by
grids and template overlays. Results are collected by a generalized tally capability, which allows users to edit integral flux and reaction rate information. Results can
be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each tally.
Keywords: MC21, Monte Carlo, Reactor Calculations, Feedback, High-Performance Computing
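As an illustration of the point-sense test such a constructive solid geometry system rests on (a generic sketch, not MC21 internals), a point is classified against a second-order surface by the sign of the surface function f:

    // Generic CSG sense test against an axis-aligned quadric surface
    // f(x,y,z) = Ax^2 + By^2 + Cz^2 + Dx + Ey + Fz + G.
    #include <cstdio>

    struct Quadric { double A, B, C, D, E, F, G; };

    int sense(const Quadric& q, double x, double y, double z) {
        double f = q.A*x*x + q.B*y*y + q.C*z*z + q.D*x + q.E*y + q.F*z + q.G;
        return (f > 0.0) - (f < 0.0);  // +1 positive side, -1 negative, 0 on surface
    }

    int main() {
        Quadric sphere{1, 1, 1, 0, 0, 0, -1.0};  // unit sphere x^2+y^2+z^2-1=0
        std::printf("origin: %d\n", sense(sphere, 0, 0, 0));   // -1 (inside)
        std::printf("(2,0,0): %d\n", sense(sphere, 2, 0, 0));  // +1 (outside)
        return 0;
    }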
2 – Monte Carlo Application Toolkit (MCATK)
Jeremy Sweezy, Steve Nolen, Travis Trahan
Los Alamos National Laboratory
The Monte Carlo Application ToolKit (MCATK) is a modern C++ component-based software library for Monte Carlo particle transport that has been in development at
Los Alamos National Laboratory (LANL) since 2008. It is designed to provide new component-based functionality for existing software as well as provide the building
blocks for specialized applications. Over the last year a number of new capabilities have been developed, including probability of initiation (POI), multi-temperature cross sections, surface source read and write, and 3-D computational solid body geometry.
Keywords: Monte Carlo Particle Transport, Probability of Extinction, Computational Solid Body Geometry, MCATK
3 – SCALE Code System
Bradley Rearden, Douglas Peplow, Christopher Perfetti
Oak Ridge National Laboratory
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for
criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, industry, and research institutions
around the world have used SCALE for nuclear safety analysis and design. SCALE provides a plug-and-play framework that includes three deterministic and three
Monte Carlo radiation transport solvers that are selected based on the desired solution. SCALE includes the latest nuclear data libraries for continuous-energy and
multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling,
visualization, and convenient access to desired results. SCALE 6.2 provides several new capabilities and significant improvements in many existing features,
especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity/uncertainty analysis, as well as improved
fidelity in nuclear data libraries.
Keywords: SCALE, Eigenvalue, Radiation Shielding, Depletion, Sensitivity/Uncertainty Analysis
4 – Serpent
Jaakko Leppanen, Ville Valtavirta
VTT Technical Research Centre of Finland
The recent development in the Serpent 2 Monte Carlo code is described. The work is focused on two major topics: 1) spatial homogenization and group constant
generation for deterministic reactor simulator codes, and 2) coupled multi-physics applications involving neutronics, thermal hydraulics and fuel behavior modeling.
Entirely new applications for Serpent include fusion neutronics and radiation shielding, which are briefly introduced.
Keywords: Serpent, Monte Carlo, Spatial Homogenization, Multi-physics
5 – MCNP
Avneet Sood, Forrest Brown, Michael Rising
Los Alamos National Laboratory
The latest version of MCNP (version 6.1.1) was released by RSICC in September 2014. This beta release followed the production release of MCNP 6.1 in June 2013.
There have been nearly 8000 copies of MCNP6 distributed both domestically and internationally to users from academia, industry, and the US government.
MCNP6.1.1 has all of the features of previous versions but makes advances in charged particle and light ion transport including correlated source emissions.
Significant improvements in our capabilities for transport and variance reduction on unstructured meshes have also been included. MCNP6.1.1 is significantly faster
than MCNP6.1. We will review the current features of MCNP and discuss future directions.
Keywords: Monte Carlo, Radiation, Particle Transport, MCNP
6 – Shift
Thomas Evans, Gregory Davidson, Tara Pandya
Oak Ridge National Laboratory
This poster presents the massively parallel Monte Carlo radiation transport package Shift, developed at Oak Ridge National Laboratory, and it gives the capabilities,
implementation, and parallel performance of this code package. This code package is designed to scale well on high performance architectures. Scaling results
demonstrate very good strong and weak scaling as applied to LWR analysis problems. Also, benchmark results from various reactor problems show that Shift results
compare well to other contemporary Monte Carlo codes and experimental results.
Keywords: Monte Carlo, Radiation Transport, Massively Parallel
7 – ADVANTG
Scott Mosher, Seth Johnson, Ahmad Ibrahim
Oak Ridge National Laboratory
ADVANTG is a software package for generating variance reduction parameters for fixed-source, continuous-energy neutron and photon Monte Carlo transport
simulations using MCNP5. ADVANTG automates the process of generating three-dimensional (3-D) space- and energy-dependent weight-window bounds and
consistent biased source distributions based on approximate multigroup transport solutions that are efficiently generated by the Denovo 3-D, parallel discrete
ordinates package. The code implements the Consistent Adjoint Driven Importance Sampling (CADIS) method for accelerating individual tallies and the Forward-Weighted CADIS method for obtaining relatively uniform uncertainties across tallies over multiple regions and/or energy bins, including mesh tallies. Variance
reduction parameters are output in a format directly usable by unmodified versions of MCNP. ADVANTG can also be used as a front-end for Denovo and is capable of
driving parallel SN calculations.
Keywords: Variance Reduction, MCNP, Denovo, Fixed-Source Transport
8 – MVP
Yasunobu Nagaya, Keisuke Okumura, Takamasa Mori
Japan Atomic Energy Agency
The general-purpose Monte Carlo code MVP has been developed for continuous-energy neutron and photon transport calculations since the late 1980s at the Japan Atomic
Energy Agency. The MVP code is designed for nuclear reactor applications such as reactor core design/analysis, criticality safety and reactor shielding. The code has
been widely in domestic use since the first release in 1994 and the second release in 2005. Modifications and enhancements have been made with advanced Monte
Carlo methodology for reactor physics applications. Featured capabilities of version 3 are the perturbation calculation for the k-effective value, treatment of delayed
neutrons, group constant generation, an exact resonance elastic scattering model, and reactor kinetics parameter calculation. The perturbation calculation is based on the
correlated sampling and differential operator sampling methods. The impact of the perturbed fission-source distribution can also be taken into account. Delayed
neutrons can be explicitly treated in eigenvalue and time-dependent fixed-source problems. The group constants can be generated with the newly implemented tally
capability of group-to-group scattering reaction rates. The isotropic diffusion coefficient can also be calculated with the average cosine of the scattering angle. An
exact resonance elastic scattering model based on the weight correction method can improve the calculation accuracy of the Doppler reactivity worth. The reactor
kinetics parameters of the effective delayed neutron fraction and the generation time can be calculated with the differential operator sampling method. The above-mentioned capabilities are integrated into the code, and MVP version 3 is planned to be released domestically in the near future.
Keywords: MVP, Monte Carlo, Neutron/Photon Transport, Reactor Physics/Design
9 – Meteor
Keith Searson, Fabrice Fleurot
Sellafield Ltd
Meteor is a new criticality code developed by Sellafield Ltd., which supports fast direct tracking through CAD models, including those with NURBS surfaces, without
relying on model simplifications or faceting. Meteor is currently in the testing phase and this poster presents the current k-effective and speed comparisons against the
MONK criticality code. The tests show very good statistical agreement between Meteor and MONK’s k-effective values. Meteor also shows higher calculation speed,
being on average about 2.5 times faster on the MONK validation set.
With the CAD models currently tested, the speeds are either comparable or not significantly slower (1.5 times slower for the model presented here) than CSG models.
This last result is encouraging, as traditionally direct CAD tracking is believed to be orders of magnitude slower.
Keywords: Criticality, Meteor, CAD, OiNC2, Tracking
10 – Light and Individual Computer-Oriented Neutron Transport Code based on Monte Carlo Method (LIONMC)
Song Hyun Kim, Do Hyun Kim, Sangjin Lee
Hanyang University and Institute for Basic Science
In this presentation, the current status of the development of a Monte Carlo simulation code for reactor analyses at Hanyang University is described. The MC code
has been developed to offer some specific features for analyzing reactor core characteristics with user-friendly functions. The functions are a spherical particle modeling
function in stochastic media, an automatic decision function for active/inactive cycles, on-the-fly sampling-based sensitivity and uncertainty analyses, a fission matrix-based
MC simulation module, automatic application of variance reduction techniques, a volume calculation function for cells, and others. The code is being
developed in the C++ programming language, and it is planned that a test version of the code will be distributed at the end of this year.
Keywords: Monte Carlo Transport, Nuclear Reactor, User-Friendly, Automatic Decision
11 – TRIPOLI
Francois-Xavier Hugot
CEA
TRIPOLI is the generic name of a family of Monte Carlo radiation transport codes dedicated to radiation protection and shielding, core physics with depletion, criticality
safety and nuclear instrumentation analyses. It has been continuously developed at CEA since the mid-1960s, first at Fontenay-aux-Roses and then at Saclay. TRIPOLI-4, the
fourth generation of the family, is the cornerstone of the CEA Radiation Transport Software Suite, which also includes the APOLLO codes, deterministic solvers
dedicated to reactor physics analyses (at both lattice- and core-level), the depletion code MENDEL, the photon point-kernel code NARMER, and CONRAD and
GALILEE for nuclear evaluation and data processing. TRIPOLI-4 is the reference industrial code for CEA (labs and reactors), EDF (58 PWRs), and branches of
AREVA. It is also the reference code of the CRISTAL Criticality Safety package developed with IRSN and AREVA.
Keywords: TRIPOLI, Monte Carlo, CEA, Neutron, Photon
12 – ARCHER
Tianyu Liu, Noah Wolfe, X. George Xu
Rensselaer Polytechnic Institute (RPI)
ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments) is a Monte Carlo simulation code for coupled photon-electron transport
that runs on CPUs, GPUs and MICs. The central part of the code is a photon-electron transport kernel. To accommodate different needs, several application-specific
modules are being developed, including CT dosimetry, radiotherapy dosimetry, radiation shielding design and nuclear medicine dosimetry modules. Each
module is uniquely designed and optimized for its application. For example, the nuclear medicine dosimetry module can simulate radioactive decay and account for
biokinetic factors in order to accurately quantify the internal dose for a patient. ARCHER also has a simplified neutron transport module for one-group criticality calculations,
used to deepen our understanding of GPU/MIC performance tuning. In addition, several powerful utility modules are being developed, for instance, to
programmatically evaluate the energy consumption of the code, to allow different computing devices (CPU/GPU/MIC) to work concurrently, etc. The capability of ARCHER
has been continuously expanded.
Keywords: ARCHER, Monte Carlo, GPU, MIC, Xeon Phi
13 – PHITS
Tatsuhiko Ogawa
JAEA and Partner Organizations: CEA, Chalmers University, JAXA, KEK, Kyushu University, RIKEN, RIST
The Particle and Heavy Ion Transport code System (PHITS) is a general-purpose Monte Carlo particle transport simulation code developed under the collaboration of several
institutes in Japan and Europe. The Japan Atomic Energy Agency (JAEA) is responsible for managing the entire project. PHITS can deal with the transport of nearly
all particles, including neutrons, protons, heavy ions, photons, and electrons, over wide energy ranges using various nuclear reaction models and data libraries. PHITS
has several important features, such as an event-generator mode for low-energy neutron interaction, beam transport functions, a function for calculating the
displacement per atom (DPA), and a microdosimetric tally function. Due to these features, it has been widely used for various applications. For example, PHITS was
extensively used in the design of the shielding, target, and neutron beam lines for the J-PARC project. Calculation of the dose and dose equivalents in human bodies
irradiated by various particles was carried out using PHITS in order to determine radiological protection needs and medical physics issues. The event-generator mode
is useful for estimating the soft error rates of semiconductor devices. The microdosimetric function was used in the development of a new computational model for
calculating the relative biological effectiveness (RBE)-weighted dose for charged particle therapy. This PHITS package is distributed to many countries via the
Research Organization for Information Science and Technology, the Data Bank of the Organization for Economic Co-operation and Development's Nuclear Energy
Agency, and the Radiation Safety Information Computational Center.
Keywords: General-purpose Transport Simulation Code, DPA, Microdosimetry, Event Generators
14 – McCARD
Hyung Jin Shim, Chang Hyo Kim, Ho Jin Park
Seoul National University
McCARD is a Monte Carlo (MC) neutron-photon transport simulation code designed exclusively for neutronics analyses of various nuclear reactor and fuel systems.
McCARD estimates neutronics design parameters such as effective multiplication factor, neutron flux and current, fission power, etc. by using continuous-energy cross
section libraries and detailed geometrical data of the system. Since its predecessor MCNAP was first introduced in 1999 as an MC burnup analysis tool with an
ORIGEN2-type fuel depletion equation solver, it has evolved into a versatile MC tool capable of performing the whole-core neutronics calculations, the reactor
fuel burnup analysis, the few group diffusion theory constant generation, sensitivity and uncertainty (S/U) analysis, and uncertainty propagation analysis. It has some
special features such as the anterior convergence diagnostics, real variance estimation, neutronics analysis with temperature feedback, B1 theory-augmented few
group constants generation, kinetics parameter generation and MC S/U analysis based on the use of adjoint flux. In the course of its evolution, a wide range of nuclear
systems such as SMART, PMR-200, KALIMER-600, fusion blankets, subcritical systems like HYPER, YALINA, and IPEN/MB-01 have been subjected to the
neutronics analyses. The R&D efforts to meet both functional and non-functional requirements for these analyses have played crucial roles in developing McCARD
into its current status.
Keywords: McCARD, Whole Core Transport Calculation, Few Group Constant Generation, Uncertainty Propagation Analysis, Sensitivity/Uncertainty Analysis
15 – FinMCool
Ryan McClarren, Jacob Landman, Alex Long
Texas A&M University
FinMCool is a Monte Carlo code for high-energy density physics radiative transfer that is based on the Fleck and Cummings implicit Monte Carlo method. The current
capabilities of the code include Cartesian and cylindrical geometries, domain-replicated parallelism, and several variance reduction techniques. The code's design and
philosophy make it a useful testbed for new methods development and we have been actively developing new weight windows, implicit capture, and gradient
estimation techniques in the code.
Keywords: Monte Carlo Methods, Radiative Transfer, High-Energy Density Physics, Variance Reduction
16 – Reactor Monte Carlo Code (RMC)
Jingang Liang, Yishu Qiu, Kan Wang
Reactor Engineering Analysis Laboratory (REAL) of Tsinghua University
The code RMC (Reactor Monte Carlo Code) has been developed by the Reactor Engineering Analysis Lab (REAL) at the Department of Engineering Physics of Tsinghua
University. The current version is RMC 3.1.0. New features and enhancements which have been developed and implemented in RMC since the end of 2013
(SNA+MC 2013 in Paris) are summarized and introduced in this special Monte Carlo code session. The main new features are as follows:
(1) The random number generator's period has been extended to 2^126 by implementing 128-bit integer multiplication based on the linear congruential algorithm (see the sketch following this entry).
(2) On-the-fly Doppler broadening methods are explored in two feasible ways, i.e., pre-Doppler broadening before transport calculations and stochastic sampling
Doppler broadening based on the Maxwell-Boltzmann distribution for target nucleus agitation.
(3) Three approaches including Random Lattice Method, Chord Length Sampling and explicit modeling with mesh acceleration are implemented in RMC for stochastic
medium simulations.
(4) Photon transport is added and photon-neutron coupling transport calculations are achieved for broader applications of RMC.
(5) The iterated fission probability (IFP) method and the Wielandt method are employed to give RMC sensitivity and uncertainty analysis capabilities.
(6) RMC is approaching 3D full core burnup calculations with combined tally and depletion data and domain decompositions. The number of burnup regions can be
up to several million.
(7) A kinetics simulation capability has been implemented in RMC based on the predictor-corrector quasi-static method, in addition to the direct simulation method used
previously.
Keywords: RNG, On-the-fly Doppler Broadening, Stochastic Medium, Sensitivity/Uncertainty Analysis, Full Core Burnup
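A minimal sketch of item (1) above: a multiplicative linear congruential generator carried out modulo 2^128 attains the maximal period 2^126 for an odd seed and a multiplier congruent to 3 or 5 modulo 8. The multiplier shown is illustrative, not RMC's actual constant:

    // Multiplicative LCG modulo 2^128; wraparound of the 128-bit product
    // supplies the modulus. Requires GCC/Clang's unsigned __int128.
    #include <cstdio>
    #include <cstdint>

    using u128 = unsigned __int128;

    struct Lcg128 {
        u128 state;
        explicit Lcg128(u128 seed) : state(seed | 1) {}  // odd seeds lie on the maximal cycle
        uint64_t next() {
            // Illustrative multiplier, congruent to 5 (mod 8).
            const u128 a = (u128(0x2d99787926d46932ULL) << 64) | 0xa4c1f32680f70c55ULL;
            state *= a;
            return uint64_t(state >> 64);  // return the high 64 bits
        }
    };

    int main() {
        Lcg128 rng(12345);
        for (int i = 0; i < 4; ++i)
            std::printf("%016llx\n", (unsigned long long)rng.next());
        return 0;
    }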
17 – Geant4
Makoto Asai, Marc Verderi, Andrea Dotti, Dennis Wright
The Geant4 Collaboration. Partner Organizations: SLAC (USA), IN2P3/LLR (France), CERN (Switzerland), KEK (Japan), IN2P3/CENBG (France), INFN/LNS (Italy), CEA (France), CIEMAT (Spain),
LLNL (USA), TRIUMF (Canada)
Geant4 is a general purpose Monte Carlo simulation toolkit for elementary particles and nuclides passing through and interacting with matter. Geant4 covers all
relevant physics processes, including electromagnetic and hadronic physics for energies spanning from the eV to the TeV scale, as well as decay and optical processes. The
transport of low energy neutrons down to thermal energies is also handled. The software can also simulate remnants of hadronic interactions, including atomic de-excitation,
and provides extensions to low energies down to the DNA scale for biological modeling. Geant4 offers many types of geometrical descriptions to describe
the most complicated and realistic geometries. Geant4 also offers several variance reduction options, scorers, visualization and graphical user interfaces. Its areas of
application include high energy, nuclear and accelerator physics, studies in medical and space science, shielding and radiation protection, and newly arising material
science. The recent major release (Geant4 version 10.0 released in December 2013) delivered event-level parallelism via multithreading. It has already demonstrated
excellent scalability up to hundreds of threads with good memory footprint reduction on various computing architectures including Xeon, Xeon Phi and AMD. For
example, all LHC experiments have started their projects to migrate their simulation codes to multithread with Geant4 version 10.
Keywords: Geant4, Radiation transport, Toolkit, C++, Multithread
18 – FDS Booth / SuperMC: Super Monte Carlo Simulation Program for Nuclear and Radiation Process
Jing Song, Tao He, Bin Wu
Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences - FDS Team
SuperMC is a general purpose, intelligent and multi-functional program for the design and safety analysis of nuclear systems. It is designed to perform the
comprehensive neutronics calculation, taking the radiation transport as the core and including the depletion, radiation source term/dose/biohazard, material activation
and transmutation, etc. It supports the multi-physics coupling calculation including thermo-hydraulics, structural mechanics, biology, chemistry, etc. The main technical
features are hybrid MC-deterministic methods and the adoption of advanced information technologies. The main usability features are automatic modeling of geometry
and physics, visualization and virtual simulation, and a cloud computing service. The latest version of SuperMC can accomplish the transport calculation of neutrons
and gamma rays, and can be applied to criticality and shielding design of reactors, medical physics analysis, etc. SuperMC has been verified against more than 2000
benchmark models and experiments. The handbook of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the Shielding Integral Benchmark
Archive Database (SINBAD) were used to verify the correctness of SuperMC. Fusion reactor (ITER benchmark model, FDS-II), fast reactor (BN600, IAEA-ADS),
PWR (BEAVRS, HM, TCA) and International Reactor Physics Experiment Evaluation Project (IRPhEP) handbook cases were employed for validating the
comprehensive capability for reactor applications. The benchmarking results have been compared with MCNP5, demonstrating higher accuracy and calculation
efficiency of SuperMC, and also significant enhancement of work efficiency due to its functions of automatic modeling and visualized analysis. SuperMC has been
applied in the nuclear design and analysis of ITER and the China Lead-based Reactor (CLEAR).
Keywords: Monte Carlo, Particle Transport, Fusion, Fission
19 – NTS
Changyuan Liu
International Business Nuclear Energy Software Development Corporation (IBNESD)
NTS (Neutron Transport System) aims for the demonstration of new concepts for next-generation Monte Carlo reactor simulation codes. For geometry, NTS is
integrated with a self-developed CAD system to allow automatic volume calculations and to avoid manual tally definitions. To achieve these goals, NTS adopts a
balance between the complexity of geometry and ease of use. Currently it supports all geometries that are extrusions in the z-direction of all 2D shapes formed
by circles and lines. Exact topological relations are calculated with a precision down to 1E-16. As a result, NTS prevents particle losses. A demonstration website
(ibnesd.com) has been developed to demonstrate topological information calculated for some simple 2D shapes. For cross sections, NTS is capable of processing
ENDF text files directly. It is designed with the goal of reducing dependence on the NJOY system and the ACE library, using a self-developed cross section database
management system and an experimental fast Doppler broadening algorithm. Cross sections at any temperature are generated at the beginning of execution. The processing
of the neutron data of all 422 nuclides in the ENDF/B-VII.1 library is under development. The results of the processing of a few nuclides including U-235 will be
presented.
Keywords: Monte Carlo, Doppler Broadening, CAD, ibnesd.com
20 – OpenMC
Matthew Ellis, Jon Walsh, Benoit Forget
Computational Reactor Physics Group at the Massachusetts Institute of Technology
OpenMC was developed by the Computational Reactor Physics Group (CRPG) at the Massachusetts Institute of Technology as a tool for nuclear reactor simulation
on high-performance computing platforms. Given that many legacy codes do not scale well on existing and future parallel computer architectures, OpenMC was
developed from scratch with a focus on high-performance scalable algorithms as well as modern software design practices. OpenMC is ideal for developing, testing,
and optimizing numerical methods that increase simulation accuracy and reduce memory and computational requirements. The windowed multipole Doppler
broadening method has been developed and implemented in OpenMC, providing cross sections on-the-fly at any temperature in the resolved resonance region with
performance similar to single temperature ACE file lookup. Additionally, reductions in nuclear data memory requirements are achieved with high-fidelity, on-the-fly
methods for calculating Doppler-broadened unresolved resonance region cross sections, and similar methods are being developed for thermal scattering data (S(α,
β)). Spatial domain and data decomposition algorithms are being investigated to address per-node memory limitations that are often encountered due to the large
number of particles, tallies, and materials needed for a full-core Monte Carlo simulation. Finally, the inclusion of multiphysics feedback in Monte Carlo simulations has
been investigated in OpenMC using a low-order CMFD operator and alternatively the Multiphysics Object-Oriented Simulation Environment (MOOSE). In this latter
investigation Functional Expansion Tallies are used to minimize the data transfer between multiphysics applications while maintaining a high level of accuracy when
mapping to an unstructured finite element mesh.
Keywords: Monte Carlo, OpenMC, Windowed Multipole
Hybrid Monte Carlo/Deterministic Transport
Wednesday, April 22, 2015
8:30 AM
Hermitage C
Chair: Dr. Emily R. Shemon
54
Development, Verification and Test Application of a Hybrid CASMO-5/SERPENT Depletion Scheme
O. Leray, M. Pecchia, A. Vasiliev, H. Ferroukhi and A. Pautz (1), H.Perrier (2)
1) Laboratory of Reactor Physics and System Behavior, Paul Scherrer Institute, Villigen, Switzerland, 2) Thermofluids Division Department of Mechanical Engineering Imperial College London Exhibition
Road, London, SW7, United Kingdom
A hybrid deterministic/stochastic depletion scheme based on the Casmo-5 and Serpent v2 codes was developed at PSI. The goals of this scheme are to provide
penalty factors on nuclide composition for burnup credit and also to enhance the PSI deterministic code validation methodology by assessing and breaking down
computational biases on the main neutron parameters. The two biases under consideration are the methodology to solve the transport equation and the burnup
algorithm. This study deals with k-inf and nuclide composition biases for a simple PWR lattice. The hybrid scheme is first compared to Casmo-5 and Serpent v2 results.
The overall good agreement between the codes concerning k-inf, total composition and reaction rates allows the hybrid scheme to be used to assess biases. Then,
comparisons of k-inf results between the hybrid scheme, Casmo-5 and Serpent show the significance of the biases induced by the burnup algorithm and the transport
methodology during depletion. Finally, the scheme is validated using PIE data and again compared to Serpent and Casmo-5 on nuclide composition.
63
BWR Full Core Analysis with SERPENT/SIMULATE-3 Hybrid Stochastic/Deterministic Code Sequence
M. Hursin, L. Rossinelli, H. Ferroukhi and A. Pautz
Paul Scherrer Institut, Nukleare Energie und Sicherheit, Villigen, Switzerland
The objective of the present work is to evaluate the use of a Monte Carlo (MC) code as a tool to generate nuclear data libraries for core simulators. The goal is not to
replace the usual deterministic lattice calculations but to provide them with an audit tool. A boiling water reactor (BWR) assembly is modeled with the deterministic
lattice physics code CASMO-5 and with the MC code SERPENT, for a simplified set of history and branch cases. The results obtained from CASMO-5 and SERPENT
are compared at the assembly level in terms of k-inf and macroscopic cross sections; the differences are generally found to be less than a percent, except for the
diffusion coefficients. The cost of performing lattice calculations with SERPENT is two orders of magnitude higher than with CASMO-5. Subsequently, nuclear data
libraries are generated for SIMULATE-3 from CASMO-5 and SERPENT results, and full core cycle calculations are performed. K-eff and power distributions are
compared. The differences in terms of k-eff between SERPENT/SIMULATE-3 and CASMO-5/SIMULATE-3 are within 200 pcm at low burnup, increasing to 400 pcm at
higher exposure. The agreement in terms of relative power fraction is within 4% over the reactor cycle.
147
A High-Order Low-Order Algorithm with Exponentially-Convergent Monte Carlo for Thermal Radiative Transfer
Simon R. Bolding and Jim E. Morel (1), Mathew A. Cleveland (2)
1) Department of Nuclear Engineering, Texas A&M University, College Station, TX, 2) Los Alamos National Laboratory, Los Alamos, NM
We have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on spatial
and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2
equations. The LO solver is fully implicit in time and efficiently resolves the non-linear temperature dependence at each time step. The HO solver utilizes
exponentially-convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source, pure absorber transport problem. This
global solution is used to compute consistency terms that require the HO and LO solutions to converge towards the same solution. The use of ECMC allows for the
efficient reduction of statistical noise in the MC solution, reducing inaccuracies introduced through the LO consistency terms. We compare results with an implicit
Monte Carlo (IMC) code for one-dimensional, gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this algorithm.
189
Acceleration of Shutdown Dose Rate Monte Carlo Calculations Using the Multi-Step CADIS Hybrid Method
Ahmad M. Ibrahim, Douglas E. Peplow, and Robert E. Grove
Oak Ridge National Laboratory, Oak Ridge, TN
Shutdown dose rate (SDDR) analysis requires: 1) a neutron transport calculation to estimate space- and energy-dependent neutron fluxes, 2) an activation calculation
to compute the distribution of radionuclide inventories and the associated photon sources, and 3) a photon transport calculation to estimate the final SDDR. In some
applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for immensely large systems that involve massive amounts of shielding materials.
However, these simulations are impractical because the accurate calculation of space- and energy-dependent neutron fluxes in these systems is difficult with the MC
method even when global variance reduction techniques are used. This paper describes the Multi-Step CADIS (MS-CADIS) hybrid Monte Carlo/deterministic
methodology that uses the Consistent Adjoint Driven Importance Sampling (CADIS) technique but focuses on multi-step shielding calculations. MS-CADIS speeds up
the SDDR neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. Using a simplified example, preliminary
results showed that the use of MS-CADIS enhanced the efficiency of the SDDR neutron MC calculation by a factor of 550 compared to standard global variance
reduction techniques, and that the efficiency enhancement compared to analog Monte Carlo is higher than a factor of 10,000.
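For background, the single-step CADIS quantities on which MS-CADIS builds are simple functions of the adjoint flux: the response estimate R = sum_i q_i*phi_adj_i, the biased source q_i*phi_adj_i/R, and the weight-window targets R/phi_adj_i. A minimal sketch on a hypothetical mesh:

    // Single-step CADIS: biased source and weight-window targets from an
    // adjoint flux. The mesh arrays below are hypothetical inputs.
    #include <vector>
    #include <cstdio>

    int main() {
        std::vector<double> phi_adj = {0.02, 0.15, 1.3, 9.8};  // adjoint flux per cell
        std::vector<double> q       = {1.0,  0.5,  0.0, 0.0};  // source strength per cell

        double R = 0.0;  // response estimate R = sum_i q_i * phi_adj_i
        for (size_t i = 0; i < q.size(); ++i) R += q[i] * phi_adj[i];

        for (size_t i = 0; i < q.size(); ++i) {
            double q_biased = q[i] * phi_adj[i] / R;                   // sums to 1
            double w_target = phi_adj[i] > 0.0 ? R / phi_adj[i] : 0.0; // weight target
            std::printf("cell %zu: q_biased = %.4f, w_target = %.4f\n",
                        i, q_biased, w_target);
        }
        return 0;
    }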
Improved Multigroup Cross Section Generation
Wednesday, April 22, 2015
8:30 AM
Hermitage D
Chair: Dr. Rachel N. Slaybaugh
177
Simple Benchmark for Evaluating Self-Shielding Models
Nathan A. Gibson, Kord Smith, and Benoit Forget
Massachusetts Institute of Technology, Cambridge, MA
Accounting for self-shielding effects is paramount to accurate generation of multigroup cross sections for use in deterministic reactor physics neutronics calculations.
Historically, equivalence in dilution and subgroup techniques have been the preeminent means of accounting for these effects, but recent work has proposed new
solutions, including the Embedded Self-Shielding Method (ESSM). This paper presents a very simple benchmark problem to compare these and future self-shielding
methods. The benchmark is perhaps the simplest problem in which both energy and spatial self-shielding effects are important, a two-region problem with a lumped
resonant material. A single resonance in a single energy group is considered. Scattering is approximated using the narrow resonance approximation, decoupling each
energy value and allowing an easily-computed reference solution to be obtained. Equivalence in dilution using two-term rational expansions and the subgroup method
were both found to give very accurate solutions on this benchmark, with errors less than 1% in nearly all cases. One-term rational expansions and ESSM showed
much larger errors.
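To make the easily computed reference solution concrete: under the narrow resonance approximation the flux is proportional to 1/(σt(E) + σ0), so an effective cross section follows from direct quadrature. A generic infinite-medium sketch (not the paper's two-region benchmark; the Lorentzian resonance parameters are hypothetical):

    // Narrow-resonance flux weighting: phi(E) ~ 1/(sig_t(E) + sig0), with
    // sig_eff = integral(sig*phi) / integral(phi) by midpoint quadrature.
    #include <cstdio>

    int main() {
        const double E0 = 6.67, gamma = 0.05;  // resonance energy and width (eV)
        const double sig_peak = 2.0e4;         // peak resonance cross section (b)
        const double sig0 = 50.0;              // background (dilution) cross section (b)

        double num = 0.0, den = 0.0;
        const int n = 100000;
        const double Elo = 6.0, Ehi = 7.3, dE = (Ehi - Elo) / n;
        for (int i = 0; i < n; ++i) {
            double E = Elo + (i + 0.5) * dE;
            double x = 2.0 * (E - E0) / gamma;
            double sig = sig_peak / (1.0 + x * x);  // Lorentzian resonance shape
            double phi = 1.0 / (sig + sig0);        // NR-approximation flux
            num += sig * phi * dE;
            den += phi * dE;
        }
        std::printf("effective cross section = %.2f b\n", num / den);
        return 0;
    }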
187
An Asymptotic Scaling Factor for Multigroup Cross Sections
Thomas G. Saller, Edward W. Larsen, and Thomas Downar
Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI
The construction of multigroup cross sections involves first determining a suitable neutron energy spectrum, and then flux-weighting each cross section over each
energy bin using the specified spectrum. If the spectrum is obtained from an infinite medium calculation, then the resulting multigroup cross sections preserve both the
(multigroup) infinite medium neutron spectrum and eigenvalue. Such multigroup cross sections can be modified by a multiplicative scaling factor and still preserve the
infinite medium spectrum and eigenvalue. In this paper, we derive a formula for the scaling factor that makes the modified multigroup cross sections satisfy an
additional property of the transport equation, the equilibrium diffusion approximation. Also, we demonstrate by numerical simulations that the resulting scaled
multigroup cross sections yield more accurate results for multigroup eigenvalue problems in finite media.
239
Multi-Group Geometrical Correction for Coupled Monte Carlo Codes: Multi-Regional Thermal System
D. Kotlyar and E. Shwageraus (1), E. Fridman(2)
(1) Department of Engineering, University of Cambridge, Cambridge, United Kingdom, (2) Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
This paper focuses on generating accurate 1-g cross section values that are necessary for evaluation of nuclide densities as a function of burnup for coupled Monte
Carlo codes. The proposed method is an alternative to the conventional direct reaction rate tally approach, which requires extensive computational efforts. The
method presented here is based on the multi-group (MG) approach, in which pre-generated MG sets are collapsed with MC calculated flux. In our previous studies we
showed that generating accurate 1-g cross sections requires their tabulation against the background cross-section (σ0) to account for the self-shielding effect.
However, in previous studies, the model that was used to calculate σ0 was simplified by fixing Bell and Dancoff factors. This work demonstrates that 1-g values
calculated under the previous simplified model may not agree with the tallied values. Therefore, the original background cross section model was extended by
implicitly accounting for the Dancoff and Bell factors. The method developed here reconstructs the correct value of σ0 by utilizing statistical data generated within the
MC transport calculation by default. The method does not carry any additional computational burden and it is universally applicable to the analysis of thermal as well
as fast reactor systems.
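For context, collapsing a pre-generated MG set with an MC-calculated flux is a flux-weighted average, sig_1g = sum_g(sig_g*phi_g)/sum_g(phi_g). A minimal sketch with hypothetical group data:

    // Collapse a multigroup cross-section set to one group with a tallied flux.
    #include <vector>
    #include <cstdio>

    int main() {
        std::vector<double> sig_g = {1.2, 8.5, 45.0, 120.0};   // MG cross sections (b)
        std::vector<double> phi_g = {0.50, 0.30, 0.15, 0.05};  // tallied group fluxes

        double num = 0.0, den = 0.0;
        for (size_t g = 0; g < sig_g.size(); ++g) {
            num += sig_g[g] * phi_g[g];
            den += phi_g[g];
        }
        std::printf("one-group cross section = %.3f b\n", num / den);
        return 0;
    }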
253
Development of a New 47-Group Library for the CASL Neutronics Simulators
Kang Seog Kim, Mark L. Williams, Dorothea Wiarda, and Andrew T. Godfrey
Oak Ridge National Laboratory, Oak Ridge, Tennessee
The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole core transport code being
developed for the CASL toolset, Virtual Environment for Reactor Analysis (VERA). MPACT is under development for neutronics and thermal-hydraulics coupled
simulation for pressurized light water reactors. Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole
core solver with a 1-D / 2-D synthesis method. Oak Ridge National Laboratory (ORNL) AMPX/SCALE code packages have been significantly improved to support
various intermediate resonance self-shielding approximations such as subgroup and embedded self-shielding methods. New 47-group AMPX and MPACT libraries,
which are based on ENDF/B-VII.0, have been generated for the CASL neutronics module MPACT. The MPACT group structure comes from the HELIOS library. This
new 47-group MPACT library includes all nuclear data required for static and transient core simulations. This study discusses a detailed procedure to generate 47-group AMPX and MPACT libraries and benchmark results for VERA progression problems.
Monte Carlo Criticality Calculations with Thermal-Hydraulic Feedback
Wednesday, April 22, 2015
8:30 AM
Hermitage A-B
Chair: Dr. Maria N. Avramova
67
Numerical Methods in Coupled Monte Carlo and Thermal-Hydraulic Calculations
Daniel F. Gill, David P. Griesheimer, and David L. Aumiller
Bettis Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, West Mifflin PA
Large-scale reactor calculations with Monte Carlo, including nonlinear feedback effects, have become a reality in the course of the last decade. In particular,
implementations of coupled Monte Carlo and thermal-hydraulics calculations have been developed separately by many groups. Numerous Monte Carlo codes have been
coupled to a variety of thermal-hydraulics codes (system level, subchannel, and CFD). In this work we review the numerical methods which have been used to solve
the coupled Monte Carlo thermal-hydraulics problem with a particular focus on the formulation of the nonlinear problem, convergence criteria, and relaxation schemes
used to ensure stability of the iterative process. We use a simple PWR pin-cell problem to numerically investigate the stability of commonly used schemes and what
problem parameters influence the stability, or lack thereof. We also examine the role that the running strategy used in the Monte Carlo calculation plays in the
convergence of the coupled calculation.
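The relaxation schemes reviewed here share a common skeleton: a damped fixed-point (Picard) iteration between the two solvers. The sketch below shows that skeleton only, with placeholder solver callables and an assumed constant under-relaxation factor; it is not the paper's formulation.

    import numpy as np

    def coupled_picard(mc_solve, th_solve, power0, alpha=0.5,
                       tol=1e-4, max_iter=50):
        # mc_solve(th_state) -> power distribution from a Monte Carlo run
        # th_solve(power)    -> thermal-hydraulic state for a given power
        # alpha              -> under-relaxation factor in (0, 1]
        power = np.asarray(power0, dtype=float)
        for it in range(1, max_iter + 1):
            th_state = th_solve(power)      # feedback fields from current power
            power_new = mc_solve(th_state)  # transport solve with updated fields
            # Under-relaxation damps the oscillations that can destabilize
            # the iteration, at the cost of slower convergence.
            power_next = (1.0 - alpha) * power + alpha * power_new
            if np.linalg.norm(power_next - power) < tol * np.linalg.norm(power):
                return power_next, it
            power = power_next
        return power, max_iter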
120
Development and Testing of a Coupled MCNP6/CTF Code
A. Bennett, K. Ivanov, and M. Avramova
Department of Mechanical and Nuclear Engineering, Pennsylvania State University
There has been a recent trend towards Monte Carlo based multi-physics codes to obtain high-accuracy reactor core solutions. These high-accuracy solutions can be used as reference solutions to validate deterministic codes. To obtain such a solution, a high-fidelity coupled code was created. The coupling joins a Monte Carlo code and a thermal-hydraulic subchannel code. The use of a Monte Carlo code allows exact geometry modeling, as well as the use of continuous-energy cross sections. Coupling with a thermal-hydraulic code allows the feedback effects to be accurately modeled. The coupling is done with MCNP6, which is a general-purpose Monte Carlo transport code, and CTF, which is a subchannel code. The coupling was performed using an internal coupling method for each pin and axial level. On-The-Fly cross sections were used to decrease the complexity of the coupling and to decrease the memory requirement. The coupled MCNP6/CTF code was tested on a 3x3 PWR mini-assembly test problem that included a guide tube, as well as a single BWR fuel pin test problem with a strongly bottom-peaked axial power profile. The results of these test problems were compared with those of similar coupled Monte Carlo/thermal-hydraulic subchannel codes, and there was good agreement. This paper describes the foundation of the coupled code and shows that the coupling was implemented accurately.
130
Monte Carlo Full Core Neutronics Analysis with Detailed Consideration of Thermal-Hydraulic Parameters
W. Bernnat, N. Guilliard, J. Lapins (1), A. Aures, I. Pasichnyk, Y. Perin, K. Velkov, W. Zwermann (2)
1) Institut für Kernenergetik und Energiesysteme (IKE), Universität Stuttgart, Stuttgart, Germany, 2) Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) mbH, Garching, Germany
Power reactors are composed of fuel assemblies with lattices of pins or other repeated structures, generally nested in several levels. Such lattices can be modeled in detail even for large cores by Monte Carlo neutronics codes such as MCNP6 using appropriate geometry options. Except for fresh cores at beginning of life, the fuel composition varies due to burnup in the different fuel pins. Additionally, except for zero-power states, the fuel, moderator or coolant temperatures and densities vary according to the power distribution and cooling conditions. Therefore, realistic neutronics analysis requires consideration of the thermal-hydraulic parameters. Depending on the degree of detail of the analysis, a very large number of cells with different nuclide compositions and temperatures must be taken into account for a full core analysis. The assignment of different material specifications to the huge number of cells representing the full core geometry requires considerable effort and may exceed program limits if the standard input procedure is used. Instead of the standard input for material and temperature assignment to cells, an internal assignment is used which overrides uniform input parameters. The temperature dependency of continuous-energy resonance cross sections, probability tables for the unresolved resonance region, and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Alternatively, the temperature dependency of resonance cross sections is represented by a polynomial fit (OTF). The thermal-hydraulic parameters are calculated by the GRS system code ATHLET for liquid coolants. Examples are shown for different applications for LWRs with square and hexagonal lattices and sodium cooled fast reactors (SFR) with hexagonal lattices.
134
Towards the Development of Coupled Monte Carlo / Subchannel Thermal Hydraulic Codes for High-Fidelity Simulation
of LWR Full Cores
V. Sanchez and A. Ivanov (1), J. E. Hoogenboom (2)
1) Karlsruhe Institute of Technology, Germany, 2) Delft Nuclear Consultancy, The Netherlands
Increasing research effort is focused on the development of coupled Monte Carlo / thermal hydraulic solvers to provide high-fidelity simulations of pin clusters, fuel assemblies, fuel assembly clusters and full cores, taking into account local thermal hydraulic feedback effects in the Monte Carlo neutron transport simulations. At KIT, the developmental work is focused on MC-TH coupled solutions for industry-like applications, meaning high-accuracy prediction of local parameters using fast-running and validated solutions. To achieve these goals, breakthrough innovative methods must be implemented in order to simulate full cores with high accuracy in acceptable computing time. For this purpose, KIT is developing novel coupling approaches between the MC code MCNP and the subchannel code SUBCHANFLOW. Those include internal coupling of the codes, on-the-fly thermal hydraulic feedback treatment during the neutron transport calculation, introduction of the collision estimator to tally fission heat deposition, and special treatment of the temperature dependence of the thermal scattering data. A stochastic approximation was implemented to accelerate the convergence of the coupled calculation. This paper will present selected results obtained with the coupled MC/TH codes for large-scale geometries at pin level.
HPC and Algorithms for Advanced Architectures
Wednesday, April 22, 2015
8:30 AM
Two Rivers
Chair: Dr. X. George Xu
112
Performance Model Development and Analysis for the 3-D Method of Characteristics
Brendan Kochunas and Thomas Downar
Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI
In this paper we present a methodology for developing and analyzing a detailed latency-based performance model applied to a parallel algorithm to solve the 3-D Boltzmann transport equation using the method of characteristics. The performance model is verified against experiment and observed to predict the execution time of the algorithm to within 10% of the measured execution times. An analysis of the performance model is then performed to evaluate the algorithm's sensitivity to machine hardware characteristics in both serial and parallel execution. This analysis shows that improvements to network latency would provide minimal benefits with respect to the algorithm, while increasing bandwidth can provide some modest enhancements in parallel performance. The algorithm is found to have a theoretical peak performance of 10% of the machine theoretical peak, while only half of the algorithm's peak is realized. This suggests continued work is needed to improve the performance of the algorithm in serial. The scalability of the algorithm is predicted and observed to be very good, with efficiencies over 90% for O(1e5) processors. The model also predicts good scalability past O(1e6) processors.
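A latency-based model of this kind charges each stage of the parallel algorithm a compute term plus per-message latency and bandwidth terms. The toy function below conveys only the structure of such a model; the functional form and parameter names are generic assumptions, not the paper's calibrated model.

    def predicted_time(n_segments, flops_per_segment, t_flop, n_procs,
                       n_messages, latency, msg_bytes, bandwidth):
        # Compute term: total work divided among processors.
        t_compute = n_segments * flops_per_segment * t_flop / n_procs
        # Communication term: a fixed latency plus a bandwidth-limited
        # transfer time for each message exchanged per iteration.
        t_comm = n_messages * (latency + msg_bytes / bandwidth)
        return t_compute + t_comm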
206
Optimizing the Monte Carlo neutron cross-section construction code, XSBench, to MIC and GPU platforms
Tianyu Liu, Noah Wolfe, Christopher D. Carothers, Wei Ji, and X. George Xu
Rensselaer Polytechnic Institute, Troy, NY
XSBench is a proxy application developed by Argonne National Laboratory (ANL). It is used to study the performance of nuclear macroscopic cross-section data construction, usually the most time-consuming process in Monte Carlo neutron transport simulations. In this paper we report on our experience in optimizing XSBench for Intel multi-core CPUs, Many Integrated Core coprocessors (MICs) and Nvidia Graphics Processing Units (GPUs). The cross-section construction involving the nuclear isotopes modeled in the Hoogenboom-Martin (H-M) large problem is used in our benchmark test. We demonstrate that through several tuning techniques, particularly software-based data prefetching, the performance of XSBench on each platform can be improved considerably compared to the original implementation on the same platform. The performance gain is 1.24x on the Westmere CPU, 1.53x on the Haswell CPU, 2.31x on the Knights Corner (KNC) MIC coprocessor and 5.98x on the Kepler GPU. The comparison across different platforms shows that the high-end Kepler GPU outperforms the high-end Haswell CPU by a factor of 2.18x and outperforms the KNC MIC by 1.38x.
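The kernel XSBench exercises is, schematically, a binary search into each nuclide's energy grid followed by interpolation and a density-weighted sum over the material's nuclides. The sketch below shows that kernel in plain Python (the actual proxy app is written in C and is memory-latency bound); all names are illustrative, and the energy is assumed to lie inside each grid.

    import bisect

    def macro_xs(energy, material, grids, micro_xs):
        # material: list of (nuclide_index, number_density) pairs
        # grids:    per-nuclide sorted energy grids
        # micro_xs: per-nuclide microscopic cross sections on those grids
        total = 0.0
        for nuc, density in material:
            grid = grids[nuc]
            # The binary search dominates the kernel's runtime; prefetching
            # and similar tuning target this latency-bound lookup.
            i = min(max(bisect.bisect_left(grid, energy), 1), len(grid) - 1)
            f = (energy - grid[i - 1]) / (grid[i] - grid[i - 1])
            sigma = (1.0 - f) * micro_xs[nuc][i - 1] + f * micro_xs[nuc][i]
            total += density * sigma
        return total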
218
SimpleMOC - A Performance Abstraction for 3D MOC
Geoffrey Gunow, John Tramm, Benoit Forget, and Kord Smith (1), Tim He (2)
(1) Department of Nuclear Science & Engineering, Massachusetts Institute of Technology, Cambridge, MA, (2) Center for Exascale Simulation of Advanced Reactors, Argonne National Laboratory, Lemont, IL
The method of characteristics (MOC) is a popular method for efficiently solving two-dimensional reactor problems. Extensions to three dimensions have been attempted with mixed success, bringing into question the feasibility of efficient full core three-dimensional (3D) analysis. Although the 3D problem presents many computational difficulties, some simplifications can be made that allow for more efficient computation. In this investigation, we present SimpleMOC, a "mini-app" which mimics the computational performance of a full 3D MOC solver without the full physics, allowing for a more straightforward analysis of the computational challenges. A variety of simplifications are implemented that are intended to increase computational feasibility, including the formation of axially quadratic neutron sources. With the addition of the quadratic approximation to the neutron source, 3D MOC is cast as a CPU-intensive method with the potential for remarkable scalability on next generation computing architectures.
288
Intel Xeon/Xeon Phi Platform Oriented Scalable Monte Carlo Linear Solver
Fan Ye and Christophe Calvin (1), Serge Petiton (2)
(1) CEA/DEN/DANS/DM2S Commissariat à l’Énergie Atomique Saclay, France
(2) Laboratoire d’Informatique Fondamentale de Lille Université de Lille 1, France
Monte Carlo is a stochastic method which relies on a large number of realizations to estimate the desired solution. Its application to solving systems of linear algebraic equations (SLAE) is not new and dates back to the 1950s. However, this method may have significant potential value in parallel computing. Since the Monte Carlo method requires different realizations to be mutually independent, it provides a natural way of implementing a scalable solver. Modern computing facilities often have a distributed layout, and in order to fully utilize the available hardware resources, the solver needs to be asynchronous; the Monte Carlo method is a good candidate for this. Although its convergence rate may be inferior to that of deterministic numerical techniques, it is capable of producing approximate solutions, which can be applied in less demanding areas such as preconditioning. This paper revisits the Monte Carlo stochastic linear solver for sparse matrices, and proposes an efficient implementation targeted at the Intel Xeon/Xeon Phi platform and based on the CSR sparse format. The convergence properties of the method are also studied.
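For readers unfamiliar with the technique, a classical Ulam-von Neumann random-walk estimator for x = Hx + b is sketched below; it converges when the Neumann series sum_k H^k b converges, and each solution component is estimated from independent walks, which is what makes the solver naturally parallel and asynchronous. This dense-matrix sketch is illustrative only, not the paper's CSR-based implementation.

    import numpy as np

    def mc_linear_solve(H, b, n_walks=1000, max_steps=100, seed=0):
        # Estimates x_i = E[ sum_k W_k * b_{s_k} ] over walks s_0 = i,
        # s_1, ... with weights W_k = prod_j H[s_j, s_{j+1}] / P[s_j, s_{j+1}].
        # Assumes every row of |H| has a nonzero sum.
        rng = np.random.default_rng(seed)
        n = len(b)
        P = np.abs(H) / np.abs(H).sum(axis=1, keepdims=True)  # transition probs
        x = np.zeros(n)
        for i in range(n):
            acc = 0.0
            for _ in range(n_walks):
                state, weight = i, 1.0
                acc += weight * b[state]          # k = 0 term of the series
                for _ in range(max_steps):
                    nxt = rng.choice(n, p=P[state])
                    weight *= H[state, nxt] / P[state, nxt]
                    state = nxt
                    acc += weight * b[state]      # k-th term contribution
            x[i] = acc / n_walks
        return x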
Computational Medical Physics
Wednesday, April 22, 2015
8:30 AM
Belmont
Chair: Dr. Alexander E. Maslowski
14
AcurosCTS: A Scatter Prediction Algorithm for Cone-Beam Tomography
Alexander Maslowski, Mingshan Sun, Adam Wang, Ian Davis, Todd Wareing, Josh Star-Lack, John McGhee, Greg Failla and Allen Barnett
Varian Medical Systems, Palo Alto, CA
In cone-beam computed tomography (CBCT), a cone beam source interrogates an object to reproduce the electron density inside its volume. Unlike fan-beam and pencil-beam configurations, the broad beam of CBCT can interrogate a complete volume with a single revolution around the object. Unfortunately, this broad beam also produces large scatter profiles and, in some detector regions, a response dominated by scattered photons. In clinical reconstruction methods, these scattered photons add an undesired signal which, if left untreated, leaves artifacts in CBCT reconstructed volumes. We introduce AcurosCTS, a software-based method to remove the scatter signal in the detector by predicting the scatter profile inside the object. The AcurosCTS algorithm is organized in three steps: a ray-trace from the beam to the object, a scatter source calculation inside the object, and finally a ray-trace from the object to the detector pixels. In step 2, AcurosCTS solves the Boltzmann equation deterministically to predict the object's scattering source. To this end, we applied a Linear Discontinuous (LD) discretization in space, a Discrete-Ordinates discretization in angle and a multi-group description in energy, which we combine with a semi-analytical detector response of the once-collided photons to maximize accuracy. Although some work remains to deploy AcurosCTS in the clinic, preliminary results show good agreement against MCNP with clinical run-times (< 3 s). These run-times were achieved by implementing AcurosCTS on GPU architectures.
194
Development of a whole-body tetrahedral mesh human phantom for radiation dose calculations using new MCNP 6.1
geometrical features
Hui Lin, Kristofer Zieb, Yiming Gao, Wei Ji, Peter F. Caracappa and X. George Xu
Rensselaer Polytechnic Institute, Troy, New York
In this study, we developed a whole-body phantom model based on adaptive tetrahedral meshing techniques by converting a set of segmented images of the Visible Human Project. The newly constructed phantom, called VIP-Man_mesh, was implemented in MCNP6 to calculate organ doses from external beams of photons and electrons with energies less than 10 MeV in the anterior-posterior (AP) irradiation direction, as well as to measure computation speed. The dose values were then compared with those for the previously developed voxel phantom, called VIP-Man_voxel. It was found that the organ doses of the tetrahedral mesh phantom and the voxelized phantom are in good agreement for the photon exposures considered in this study. The computational speed with the tetrahedral mesh phantom is about half that with the voxelized phantom. Although only a limited set of radiation types and irradiation geometries was tested, the performance is encouraging. It is concluded that the tetrahedral mesh is a promising geometric representation for human phantoms that can be accurately and effectively implemented in radiation transport codes such as MCNP6. Key Words: Human phantom, tetrahedral mesh, voxel, MCNP6
83
Acceleration of Geant4-DNA Physics Models Performance Using Graphics Processing Unit
Shogo Okada, Koichi Murakami, Katsuya Amako, Takashi Sasaki (1), Sébastien Incerti, Mathieu Karamitros (2), Nick Henderson, Margot
Gerritsen (3), Makoto Asai, Andrea Dotti (4)
1) High Energy Accelerator Research Organization, KEK, Tsukuba, Ibaraki, Japan, 2) Centre d'Etudes Nucléaires de Bordeaux Gradignan, CENBG, 33175 Gradignan, France, 3) Institute for
Computational & Mathematical Engineering, Stanford University, Stanford, CA, USA, 4) SLAC National Accelerator Laboratory, Menlo Park, CA
The Geant4-DNA extension of the general-purpose Monte Carlo Geant4 toolkit has been developed for the simulation of particle-matter physical interactions down to very low energies in liquid water, the main component of biological matter. Simulation at that energy scale requires significant computing time since all physical interactions are simulated using a discrete approach. This work presents the implementation of the physics processes of the Geant4-DNA extension on GPU architecture. An impressive performance gain is observed while maintaining the same physics accuracy.
154
Absorbed dose estimation for computed tomography by method of characteristics deterministic simulation
Edward T. Norris and Xin Liu (1), Dean B. Jones and Kenneth E. Watkins (2)
(1) Missouri University of Science and Technology, Mining and Nuclear Engineering, Rolla, MO (2) TransWare Enterprise Inc., Sycamore, IL
Organ dose estimation in CT scanning is very important. Monte Carlo methods are considered the gold standard in patient dose estimation, but their computation time is formidable for routine clinical calculations. A more efficient approach to estimate the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan is modeled with the deterministic code TRANSFX, which solves the linear Boltzmann equation using the method of characteristics. The CT scanning model includes 16 X-ray sources, beam collimators, flat filters, and bowtie filters. The phantom is the standard 32 cm CTDI phantom. A Monte Carlo simulation was performed to benchmark the TRANSFX simulations, and comparisons of simulation results were made between TRANSFX and Monte Carlo methods. The deterministic simulation results are in good agreement with the Monte Carlo simulation. It has been found that the method of characteristics underestimates the flux near the periphery of the phantom (i.e., the high-dose region). The simulation results show that the deterministic method can be used to estimate the absorbed dose in the CTDI phantom. The accuracy of the method of characteristics is close to that of a Monte Carlo simulation in the low-dose region. The benefit of the method of characteristics is its potentially faster computation speed. Further optimization of this method for routine clinical CT dose estimation is expected to improve its accuracy and speed.
Next Generation Parallelism for Monte Carlo
Wednesday, April 22, 2015
10:40 AM
Hermitage C
Chairs: Mr. Jean-Christophe P. Trama, Dr. Christophe Calvin
252
Parallel computing with Particle and Heavy Ion Transport code System (PHITS)
T. Furuta, T. Sato and T. Ogawa (1), K. Niita (2), K. L. Ishikawa (3), S. Noda, S. Takagi, T. Maeyama, N. Fukunishi, K. Fukasaku and R. Himeno (4)
(1) Japan Atomic Energy Agency, Tokai, Ibaraki, 319-1195, Japan, (2) Research Organization for Information Science and Technology, Tokai, Ibaraki, 319-1106, Japan, (3) The University of Tokyo, Bunkyo-ku, Tokyo, 113-8656, Japan, (4) RIKEN, Wako, Saitama, 351-0198, Japan
The Particle and Heavy Ion Transport code System, PHITS, is a general-purpose Monte Carlo code which has been used by many users in various fields of research and development. Two parallel computing functions are available in PHITS to reduce its computation time. One is distributed-memory parallelization using the message passing interface (MPI) protocol, and the other is shared-memory parallelization using open multi-processing (OpenMP) directives. Each function has advantages and disadvantages, and the performance depends on simulation details such as geometry, materials, choice of physics models, and so on. By adopting both MPI and OpenMP parallelization in PHITS, the parallel computing can be flexibly adjusted to suit users' needs. On supercomputer systems, so-called hybrid parallelization using both functions can also be performed, with inter-node MPI parallelization and intra-node OpenMP parallelization. Both parallelization functions are explained, and the performance of PHITS was tested with some applications on a typical workstation. The performance of the hybrid parallelization of PHITS on a supercomputer was also tested using the K computer at RIKEN. Good strong-scaling parallelization efficiency (96.2%) was confirmed up to 2,048 nodes (with 8 intra-node cores each).
265
Advancements in Monte Carlo Capabilities for SCALE 6.2
B. T. Rearden, K. B. Bekar, C. Celik, C. M. Perfetti, and S. W. D. Hart
Reactor and Nuclear Systems Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for
criticality safety, reactor physics, radiation shielding, and sensitivity/uncertainty analysis. For more than 30 years, regulators, industry, and research institutions around
the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte
Carlo radiation transport solvers that are selected based on the desired solution. SCALE includes the latest nuclear data libraries for continuous-energy and
multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling,
visualization, and convenient access to desired results. SCALE 6.2 provides several new capabilities and significant improvements in many existing features,
especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity/uncertainty analysis, as well as improved
fidelity in nuclear data libraries. An overview of advancements in SCALE continuous-energy eigenvalue capabilities is provided, with emphasis on new features and parallel capabilities for SCALE 6.2.
152
Hierarchical Geometry Tree-based Method for Scoring Massive Tallies in Monte Carlo Particle Transport Calculation
Shu Zhang, Jing Song, Bin Wu, Lijuan Hao, Liqin Hu
Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, China
Monte Carlo codes that use a linear search to determine which tally bins need to be scored in each tracking step suffer severe performance penalties when tallying a large number of quantities. This paper proposes a new tally method based on a hierarchical geometry tree. The method determines where to store the tally contribution by tracking down the tree. For the same reactor model, the average active-cycle execution time when tallying 6 million fuel pin segments increased by only about 4% compared with inactive cycles. This method has been implemented in the CAD-based Monte Carlo code SuperMC.
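Schematically, scoring against a hierarchical geometry tree replaces a linear scan over all tally bins with a descent whose cost is proportional to the tree depth. The toy structure below illustrates only the idea; SuperMC's actual data structures are not described in this abstract, and all names here are assumptions.

    class TallyNode:
        # Interior nodes route a tracking point to one child;
        # leaf nodes hold the index of a tally bin.
        def __init__(self, children=None, locate=None, bin_index=None):
            self.children = children or []  # child TallyNode objects
            self.locate = locate            # function: point -> child index
            self.bin_index = bin_index      # set only on leaf nodes

    def score(root, point, contribution, tallies):
        node = root
        while node.bin_index is None:       # descend until a leaf is reached
            node = node.children[node.locate(point)]
        tallies[node.bin_index] += contribution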
Improved Multigroup Cross Section Generation
Wednesday, April 22, 2015
10:40 AM
Hermitage D
Chair: Dr. Rachel N. Slaybaugh
273
Generation of a Cross Section Library Applicable to Various Reactor Types
Changho Lee
Argonne National Laboratory, Argonne, IL
A method for generating a cross section library applicable to various reactor types is presented. In this method, an ultrafine group (2158 groups) cross section library
is first prepared using NJOY and MC2-3, which includes absorption, nu-fission, and scattering resonance cross section tables as a function of the background cross
section and temperature. Subsequently, for a specific reactor or reactor type of interest, this base ultrafine group library is condensed to broad-group cross section
libraries using the group condensation optimization algorithm that minimizes the change of cross section and eigenvalue over different compositions and geometry.
Based on equivalence theory, the escape cross sections representing the local heterogeneity effect are calculated by iteratively solving the fixed source problems in
which resonance cross sections are updated during the iteration. Preliminary verification tests indicate that the base ultrafine group cross section library is able to
accurately estimate eigenvalues and cross sections of various compositions from different reactor types including LWR, HTR, and SFR. The broad group cross section libraries condensed from the base ultrafine group cross section library for a specific reactor or reactor type show good agreement in eigenvalue and power distributions with corresponding Monte Carlo solutions.
309
Automatically Optimized Collapsed Neutron Energy Group Structure using Particle Swarm Optimization
Christopher A. Edgar, Ce Yi, and Glenn Sjoden
Nuclear and Radiological Engineering Program, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
The use of multi-group cross sections is widely applied for energy domain discretization when solving the Boltzmann Transport Equation. In order to reduce the computational overhead, fine group cross section libraries are often collapsed to form broad group cross section libraries. During this process, a transport code is used to determine the fine group flux, and then a weighting scheme is applied to collapse the fine group cross sections into pre-defined broad group energy bins. While most cross section collapsing methodologies have been widely researched, the use of pre-determined, user-specified broad group boundaries is common practice and generally requires the user to apply their best judgment and previous experience in selecting these boundaries. This paper discusses the process of applying Particle Swarm Optimization (PSO) to the determination of the broad group boundaries, thus determining an optimal, minimal broad group structure for transport calculations. The effectiveness of the PSO-based approach is demonstrated using a fuel pin model, where the fitness of the optimum broad group structure is determined by the difference between the fine group and broad group eigenvalues. In many cases, this difference was shown to be less than 1 pcm when collapsing from 47 fine groups to between 4 and 7 coarse groups.
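A generic PSO loop over candidate boundary sets conveys the approach: each particle encodes broad-group boundary positions in the fine-group index space, and the supplied fitness function would return, e.g., the fine/broad eigenvalue difference. The sketch below is a textbook PSO, not the authors' exact encoding or parameters.

    import numpy as np

    def pso_boundaries(fitness, n_bounds, n_fine, n_particles=20,
                       n_iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
        # fitness(sorted boundary array) -> value to minimize
        rng = np.random.default_rng(seed)
        pos = rng.uniform(1, n_fine - 1, size=(n_particles, n_bounds))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_f = np.array([fitness(np.sort(p)) for p in pos])
        gbest = pbest[np.argmin(pbest_f)].copy()
        for _ in range(n_iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            # Standard PSO update: inertia + cognitive + social terms.
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 1, n_fine - 1)
            f = np.array([fitness(np.sort(p)) for p in pos])
            better = f < pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            gbest = pbest[np.argmin(pbest_f)].copy()
        return np.sort(gbest), pbest_f.min()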
165
Coupled Fine-Group Three-Dimensional Flux Calculation and Subgroups Method for an FBR Hexagonal Assembly with the APOLLO3® Core Physics Analysis Code
D. Sciannandrone, S. Santandrea, R. Sanchez, and L. Lei-Mao (1), J.F. Vidal, J.M. Palau, and P. Archier (2)
(1) CEA Saclay - DANS/DM2S/SERMA/LTSD Gif-sur-Yvette, France (2) CEA, DEN, DER, SPRC, LEPh, Cadarache, Saint-Paul-lez-Durance cedex, France
The characteristics solver of the core physics analysis code APOLLO3® has recently been extended to compute the solution of the multi-group transport equation in three-dimensional axial geometries. The convergence of the characteristics solver has been improved using a synthetic acceleration with a new three-dimensional DPN (Double PN) transport operator. Further speedup is achieved by parallelizing the algorithms with OpenMP directives. The multi-group cross sections used in the characteristics solver are computed using the subgroup method, which solves the in-group slowing down equation for a fixed external source. In this paper we show the coupling between the three-dimensional characteristics solver and a two-dimensional subgroup method for the calculation of multi-group fluxes and self-shielded cross sections of a Fast Breeder Reactor assembly. The coupling is done following a nodal approach, by integrating the slowing down equation on different axial nodes. In contrast to previous work on this subject, the external source used for the subgroup calculations comes directly from the solution of the three-dimensional characteristics solver.
Monte Carlo Criticality Calculations with Thermal-Hydraulic Feedback
Wednesday, April 22, 2015
10:40 AM
Hermitage A-B
Chair: Dr. Maria N. Avramova
304
Variance Reduction in High Resolution Coupled Monte Carlo - Thermal Hydraulics Calculations
A. Ivanov and V. Sanchez
Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, Germany
In recent years there has been an increasing level of interest in the use of Monte Carlo methods for performing high-accuracy three-dimensional nuclear reactor calculations; as such, they can provide reference solutions for deterministic tools. The Monte Carlo method provides the most accurate solution of the neutron transport problem. The ability to efficiently utilize high performance computer architectures and the precise physics models used by Monte Carlo codes enable the accurate simulation of real reactor problems. In reactor analysis calculations, nuclear and thermal-hydraulic performance is highly dependent on local material temperatures throughout the reactor core. In order to achieve accurate results, this temperature dependence should be included in nuclear calculations for reactor analysis and design. In essence, coupled Monte Carlo - thermal hydraulics calculations involve a series of eigenvalue calculations that take into account the distributions of the density and the temperature within the nuclear reactor core when operating at hot full power. Therefore, these types of calculations experience the problems of the usual eigenvalue calculations when applied to high dominance ratio problems, among others slow convergence of the power iteration method and large tally variances. Since the fission heat deposition is used as a boundary condition, it has to be estimated within acceptable statistical accuracy. To resolve this issue and make the simulation of real reactor core geometries possible, adequate variance reduction techniques have to be implemented in the Monte Carlo codes. This paper presents the methodologies used by the coupled code system involving MCNP and the in-house code SUBCHANFLOW to resolve those issues.
64
Serpent-OpenFOAM Coupling in Transient Mode: Simulation of a Godiva Prompt Critical Burst
Manuele Aufiero (1), Carlo Fiorina (2), Axel Laureau, Pablo Rubiolo (1), and Ville Valtavirta (3)
1) LPSC, Université Grenoble-Alpes, CNRS/IN2P3, Grenoble, France, 2) Nuclear Energy and Safety, Laboratory for Reactor Physics and System Behavior, Paul Scherrer Institut, Villigen, Switzerland, 3) VTT
Technical Research Centre of Finland, Tietotie 3 Espoo, FI-02044 VTT, Finland
This work presents the internal coupling of a Monte Carlo code and a CFD code for transient reactor simulations. Routines from the C++ finite-volume OpenFOAM multi-physics toolkit have been merged with the Serpent Monte Carlo code and linked at compilation time at the source code level. The internal coupling between the two codes makes it possible to perform transient Monte Carlo neutronics simulations with thermal-hydraulics and thermo-mechanics feedbacks within a single tool, without requiring input/output data transfer to external codes. A Godiva super-prompt-critical burst has been selected to test the numerical behaviour of the Serpent-OpenFOAM coupling in the presence of a moving mesh. Experimental data are compared to the results of the coupled neutronics/thermal-mechanics simulation.
224
Preliminary Coupling of the Monte Carlo code OpenMC and the Multiphysics Object Oriented Simulation Environment
(MOOSE) for Analyzing Doppler Feedback in Monte Carlo Simulations
Matthew Ellis, Benoit Forget, and Kord Smith (1), Derek Gaston(2)
(1) Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA, (2) Idaho National Laboratory, Idaho Falls, ID
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems.
One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The
research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation
Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics
feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two dimensional 17x17 PWR fuel
assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion
Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
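Functional Expansion Tallies represent a tallied distribution by the coefficients of an orthogonal (here Legendre) expansion that can be evaluated anywhere, which makes them convenient for transferring pin powers to unstructured finite-element meshes. The sketch below shows the score/reconstruct pair for a one-dimensional Legendre FET, up to the usual per-history normalization; all names are illustrative, not OpenMC's implementation.

    import numpy as np
    from numpy.polynomial import legendre

    def fet_coefficients(positions, scores, order):
        # positions: event locations mapped to [-1, 1]
        # scores:    per-event contributions (e.g., fission energy deposition)
        # a_n = sum_i w_i * P_n(x_i), up to per-history normalization
        a = np.zeros(order + 1)
        for n in range(order + 1):
            c = np.zeros(order + 1)
            c[n] = 1.0
            a[n] = np.sum(scores * legendre.legval(positions, c))
        return a

    def fet_reconstruct(a, x):
        # f(x) ~ sum_n (2n+1)/2 * a_n * P_n(x)
        n = np.arange(len(a))
        return legendre.legval(x, (2.0 * n + 1.0) / 2.0 * a)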
Advanced Angular Discretizations for the Transport Equation
Wednesday, April 22, 2015
Chair: Dr. Cory Ahrens
92
10:40 AM
Two Rivers
Implicit Filtered PN Method in Cylindrical Coordinates for Thermal Radiation Transport
Vincent M. Laboure and Ryan G. McClarren (1), Cory D. Hauck (2)
1) Department of Nuclear Engineering, Texas A&M University, College Station, TX, 2) Computational Mathematics Group, Computer Science and Mathematics Division, Oak Ridge National Laboratory,
Oak Ridge, TN
In this paper, we present an implicit time-integration method for solving the time-dependent thermal radiation transport equations in cylindrical coordinates. We use a so-called filtered spherical harmonics (FPN) expansion. The filtering approach, introduced by McClarren and Hauck, efficiently attenuates the oscillations and negativity in the solution by smoothing out its steep variations in angle. Implicit time-integration makes it possible to simulate particles travelling at the speed of light with a high CFL number, but requires inverting a large linear system at every time step. Previous work by the authors has shown that the filter not only gives better solutions, compared to reference solutions such as Implicit Monte Carlo (IMC), but also helps the solver converge faster. The purpose of the present work is to extend this method to cylindrical problems by first deriving the streaming operator expression. After implementing the method, we compare the results with IMC calculations on a standard problem known as the Crooked Pipe test problem. We find that our method indeed gives much more satisfying results than unfiltered calculations. Even better results can be obtained by allowing the filtering to be space-dependent.
125
Spherical Harmonics (PN) Methods in the Sceptre Radiation Transport Code
Clif Drumm
Sandia National Laboratories, Albuquerque, NM
The SCEPTRE radiation transport code includes several methods for handling the angular dependence of the Boltzmann transport equation, including discrete ordinates (SN), spherical harmonics (PN) and angular finite elements. This paper presents three of the PN methods available in SCEPTRE: a first-order method using discontinuous spatial finite elements, a second-order method using continuous spatial finite elements, and a least-squares method, also using continuous spatial finite elements. For the least-squares method, the effect of scaled weighting on the accuracy of the solution for diffusive systems is investigated. Vacuum (inflow) boundary conditions and discontinuous element interfaces are handled by partitioning angular boundary integrals into upwind and downwind components. The angular integrations are accomplished using quadrature integration, and the use of standard angular quadrature is compared with the use of Galerkin quadrature. It is shown that, by using Galerkin quadrature, the PN solver may yield results identical to those of the SN solver. The methods are applied to several test problems, including a diffusive test, problems with isolated sources and voids, and electron emission from a thin wire in a void. The results are compared with converged deterministic results and Monte Carlo results.
199
Discrete-Ordinates Quadratures Based on Linear and Quadratic Discontinuous Finite Elements Over Spherical
Quadrilaterals
Cheuk Y. Lau and Marvin L. Adams
Texas A&M University, College Station, TX
We present LDFE/QDFE-SQ discrete-ordinates quadratures based on linear and quadratic discontinuous finite elements (LDFE/QDFE) over spherical quadrilaterals (SQ) on the unit sphere. The LDFE-SQ quadratures are an extension of the Jarrell-Adams LDFE-ST quadratures, which use spherical triangles (ST). The use of SQ instead of ST produces more uniform quadrature ordinate distributions, reducing local integration variability. The QDFE-SQ quadratures demonstrate that higher-order (i.e., quadratic) basis functions can be used within the discontinuous finite-element based quadrature methodology. The LDFE-SQ (resp. QDFE-SQ) quadratures place four (resp. nine) ordinates in each SQ. The weight of each ordinate is the integral of its basis function over the SQ surface. The LDFE/QDFE-SQ quadratures exactly integrate all 2nd-order spherical harmonics and all higher orders tested (up to 6th-order) with 4th-order accuracy. The LDFE/QDFE-SQ quadratures also integrate the scalar flux for a simple one-cell problem with 4th-order accuracy, significantly better than Level Symmetric (1.5-order) and Gauss-Chebyshev (2nd-order) quadratures and on par with the Quadruple Range quadrature (4th-order). The LDFE/QDFE-SQ error convergence becomes more complicated for the Kobayashi benchmark, showing between 2nd- and 3rd-order accuracy at intermediate refinement and rapid convergence at high refinement. Locally refined LDFE-SQ quadratures show that far fewer ordinates are needed for error reduction when refinement is confined to the required cone of angles. The LDFE/QDFE-SQ quadratures are well suited for use in adaptive discrete-ordinates methods since they are locally refinable, have strictly positive weights, and can be generated for large numbers of directions.
Computational Medical Physics
Wednesday, April 22, 2015
10:40 AM
Belmont
Chair: Dr. Alexander E. Maslowski
168
Monte Carlo Validation of Proton Treatment Plans using Geant4 on Xeon Phi
Andrew Green, Hywel Owen (1), Andrea Dotti, Makoto Asai (2), Adam Aitkenhead, and Ranald Mackay (3)
(1) School of Physics and Astronomy, The University of Manchester, Manchester, UK, (2) SLAC National Accelerator Laboratory, Menlo Park CA, USA , (3) The Christie Hospital, The Christie NHS
Foundation Trust, Manchester M20 4BX
When delivering proton therapy, dose accuracy can be aided by the use of Monte Carlo (MC) simulation, but very large numbers of incident particles must be followed to obtain good statistical uncertainty. Validating dose estimates from pencil beam approximations within the treatment field typically requires at least 10 million primaries. The recently released Intel Xeon Phi coprocessor platform offers the possibility of achieving good computational throughput whilst allowing a common codebase with other "traditional" CPU platforms. Here we report on the adaptation of Geant4 to allow efficient MC proton therapy validation on a Xeon Phi system. The recent multithreaded version of this code lends itself to the Xeon Phi, but changes were made to optimise memory usage, allowing large numbers of primaries to be followed in a complex patient geometry on a single card; this results in much smaller RAM requirements for a given simulation and makes validation simulations possible on a single card.
171
Monte Carlo calculations of secondary neutron doses in adult male patients during carbon ion radiotherapy
Hongdong Liu, Zhi Chen, XG Xu (1), Qiang Li, Tingyan Fu (2)
(1) School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China (2) Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, China
Patients undergoing carbon ion radiotherapy are exposed to secondary neutrons that cause harmful biological effects such as the development of secondary cancer. Previous studies of the secondary neutrons and fragments produced by carbon ions interacting with matter have not evaluated the organ doses absorbed by patients due to the secondary neutrons. In this paper, we report a study to estimate the organ doses from the secondary neutrons using Monte Carlo calculations and anatomically realistic voxel phantoms. The MCNPX code was used to simulate the transport of 230 MeV/u carbon ions through the Ridge Filter (RF) and Multi-Leaf Collimator (MLC). Patient models were based on the RPI Adult Male phantom. A similar model without the degradation and control of the RF and MLC was also run to isolate their effect on the production of secondary neutrons. The energy spectra of secondary neutrons and their equivalent doses in different organs were obtained. It was found that more secondary neutrons are produced when treating with the RF and MLC, especially neutrons with energy below 200 MeV. In addition, a comparison of the equivalent dose between vertical and lateral incidence was conducted to further understand the influence of the RF and MLC. Patients receive a higher neutron dose when treated with vertical incidence, especially in organs distal from the target. Because of the external neutrons, the doses are higher when the RF and MLC are added to the treatment.
257
Radiation Protection and Dosimetry Assessment of Digital Breast Tomosynthesis (DBT) Using Monte Carlo Simulations
and Modeling
Mariana Baptista, Salvatore Di Maria, Piménio Ferreira and Pedro Vaz
Instituto Superior Técnico, Centro de Ciências e Tecnologias Nucleares, Campus Tecnológico e Nuclear, Bobadela LRS
Digital Breast Tomosynthesis (DBT) is an emerging breast imaging technique. It produces a 3D image from a series of views, reducing the problem of overlapping structures in 2D mammography imaging. Recent studies show that DBT presents superior image quality and better tumor visibility, indicating a better sensitivity than digital mammography (DM). Some authors consider that DBT can be an alternative to DM in breast cancer screening, particularly for women with dense breasts. In mammography, as in other radiodiagnostic examinations, the relationship between the image quality and the dose to the patient must be carefully assessed in order to perform the optimization of the protection, one of the pillars of the system of Radiological Protection. During a DBT examination, the X-ray tube performs an angular rotation around the patient's breast. When using Monte Carlo (MC) simulation programs to model and simulate the behavior of DBT equipment, the continuous motion of the X-ray tube must be decomposed into a set of stationary situations, each corresponding to a discrete position of the tube. In order to compute radiometric quantities (such as photon fluence) and dosimetric quantities (such as the absorbed dose in the breast), the quantities computed at each discrete position of the X-ray tube must be summed. In this work the Monte Carlo computer programs MCNPX v2.7.0 and PENELOPE version 2008 were used to model a fusion acquisition system (MAMMOMAT Inspiration, Siemens) that allows the generation of both Cranio-Caudal (CC) images in DM mode and 2D projections during a standard DBT examination. Moreover, a comparison between DBT and DM is presented using the signal-to-noise ratio (SNR) to evaluate the image quality and the mean glandular dose (MGD) to quantify the absorbed dose to the patient. The effect of the glandular composition on DBT dose optimization was studied. Radiological Protection of the patient and Dosimetry issues are discussed.
Monte Carlo Methods
Wednesday, April 22, 2015
1:30 PM
Hermitage C
Chairs: Dr. Ahmad M. Ibrahim, Dr. Seth Johnson
191
Monte Carlo Performance Analysis for Varying Cross Section Parameter Regimes
Ronald O. Rahaman and Andrew R. Siegel (1), Paul K. Romano (2)
(1) Argonne National Laboratory, Theory and Computing Sciences, Argonne, IL, (2) Bechtel Marine Propulsion Corporation, Knolls Atomic Power Laboratory, Schenectady, NY
Identifying key performance bottlenecks and their underlying causes is critical to designing more efficient Monte Carlo transport codes, particularly on next-generation node architectures. For many application codes, this process is more complicated than it might appear: problem setups with different parameter regimes can dramatically shift the most computationally intensive operations. For Monte Carlo reactor physics calculations, the number of nuclides, material regions, and tally regions, for example, can all significantly affect conclusions about application performance. In this paper, we present a detailed single-node performance analysis of the OpenMC code, shedding light on how performance bottlenecks vary as a function of the key defining parameters of the model. We focus on parameters affecting cross section data, including the number of nuclides and memory requirements. These analyses are expected to serve as a useful baseline for comparing optimization strategies within the community and as a guide to better understanding how to efficiently utilize current and next-generation node architectures.
142
Implementation of a Least Squares Temperature Fit of the Thermal Scattering Law in MCNP6
Andrew T. Pavlou and Wei Ji (1), Forrest B. Brown (2)
1) Rensselaer Polytechnic Institute, Department of Mechanical, Aerospace and Nuclear Engineering, Troy, NY, 2) Los Alamos National Laboratory, Monte Carlo Codes Group, XCP-3, X Computational
Physics Division, Los Alamos, NM
The accuracy of a computer simulation of neutron transport is limited by the available nuclear cross sections, which depend on many factors including the velocity of the neutron and the system temperature. The thermal energy range (below about 4 eV) requires many more scattering cross section datasets than higher energies because of additional complications arising from the comparable energies of the neutron and the target material. As a result, the memory burden for low-energy neutron scattering can become large. On-the-fly methods have been used to reduce the storage of cross section data by finding a temperature dependence. However, the low-energy quantum, chemical and crystalline effects that can be ignored at higher energies make the temperature dependence of the double differential scattering cross section in the thermal energy range difficult to determine. A new approach has been developed by fitting the temperature dependence of the energy and squared momentum transfer cumulative distribution functions using a least squares approach. The coefficients of these fits are used to directly sample the outgoing energy and angle at any temperature. This method has been implemented in the continuous-energy Monte Carlo code MCNP6 and tested for the moderator material carbon bound in graphite. The secondary energy spectra and angular distributions were observed to be in good agreement when tested with Red Cullen's 'broomstick' benchmark. This new method requires only about 21.7 MB of total data storage, orders of magnitude less than the current method, which requires about 25 MB of data per temperature.
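In outline, the fitting side of such a method regresses each tabulated CDF abscissa against temperature, and the sampling side evaluates the fits at the target temperature and inverts the CDF with a random number. The sketch below shows that outline with an assumed polynomial basis; it is illustrative only, not the MCNP6 implementation.

    import numpy as np

    def fit_cdf_vs_temperature(temps, cdf_tables, order=3):
        # cdf_tables[k, j]: outgoing-energy value at probability level j,
        #                   tabulated at temperature temps[k]
        # Returns one set of polynomial coefficients per probability level.
        return [np.polyfit(temps, cdf_tables[:, j], order)
                for j in range(cdf_tables.shape[1])]

    def sample_outgoing_energy(coeffs, prob_levels, T, rng):
        # Evaluate the fitted CDF abscissae at temperature T, then invert
        # the CDF by interpolating a random number on the probability grid.
        energies = np.array([np.polyval(c, T) for c in coeffs])
        return np.interp(rng.random(), prob_levels, energies)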
220
Direct, On-The-Fly Calculation of Unresolved Resonance Region Cross Sections in Monte Carlo Simulations
Jonathan A. Walsh, Benoit Forget, and Kord S. Smith (1), Brian C. Kiedrowski and Forrest B. Brown (2)
(1) Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA, (2) XCP-3, Monte Carlo Codes, Los Alamos National Laboratory, Los Alamos, NM
The theory, implementation, and testing of a method for on-the-fly unresolved resonance region cross section calculations in continuous-energy Monte Carlo neutron
transport codes are presented. With this method, each time that a cross section value is needed within the simulation, a realization of unresolved resonance
parameters is generated about the desired energy and temperature-dependent single-level Breit-Wigner resonance cross sections are computed directly via use of the
analytical ψ − χ Doppler integrals. Results indicate that, in room-temperature simulations of a system that is known to be highly sensitive to the effects of resonance
structure in unresolved region cross sections, the on-the-fly treatment produces results that are in excellent agreement with those produced with the well-established
probability table method. Additionally, similar agreement is observed between results obtained from the on-the-fly and probability table methods for another
intermediate spectrum system at temperatures of 293.6 K and 2500 K. With relatively tight statistical uncertainties at the ∼ 10 pcm level, all on-the-fly and probability
table keff eigenvalues agree to within 2σ. Also, we use the on-the-fly approach to show that accounting for the resonance structure of competitive reaction cross
sections can have non-negligible effects for intermediate/fast spectrum systems. Biases of up to 90 pcm are observed. Finally, the consequences of the on-the-fly
method with respect to simulation runtime and memory requirements are briefly discussed.
225
Implicit Monte Carlo Adaptations for Tetrahedral Meshes with Node-Based Unknowns
Alex R. Long and Ryan G. McClarren (1), Jacob I. Waltz and John G. Wohlbier (2)
(1) Texas A&M University, College Station, Texas, (2) Los Alamos National Laboratory, Los Alamos, New Mexico
The Implicit Monte Carlo method is often coupled to hydrodynamics codes that use structured meshes, with unknowns defined at the center of mesh cells. Several
modern hydrodynamic algorithms use unknowns defined at the vertices of unstructured tetrahedral meshes, specifically the CHICOMA code being developed at Los
Alamos National Laboratory. Modifications were made to the IMC method that avoid complications associated with this mesh geometry and yield accurate results. The
algorithm shifts particle tracking and emission energy from the dual-volume surrounding the node to the tetrahedra that compose the mesh. The way energy is moved
from the dual to the tetrahedra is handled with emission upwinding. Tilting is also modified to allow for use on tetrahedra where a linear representation of temperatures
is available. The algorithm qualitatively matches standard IMC implementations that have structured orthogonal meshes with unknowns defined at cell-centers.
Reactor Physics
Wednesday, April 22, 2015
1:30 PM
Hermitage D
Chairs: Prof. Imre Pazsit, Dr. Shane Stimpson
143
The Finite Element with Discontiguous Support Multigroup Method: Theory
Andrew T. Till, Marvin L. Adams, and Jim E. Morel
Texas A&M University, College Station, TX
The standard multigroup (MG) method for energy discretization of the transport equation can be sensitive to approximations in the weighting spectrum chosen for
cross-section averaging. As a result, MG often inaccurately treats important phenomena such as self-shielding variations across a fuel pin. From a finite-element
viewpoint, MG uses a single fixed basis function (the pre-selected spectrum) within each group, with no mechanism to adapt to local solution behavior. In this work, we
introduce the Finite-Element-with-Discontiguous-Support Multigroup (FEDS-MG) method, a generalization of the previously introduced Petrov-Galerkin Finite-Element
Multigroup (PG-FEMG) method, itself a generalization of the MG method. Like PG-FEMG, in FEDS-MG, the only approximation is that the angular flux is a linear
combination of basis functions. The coefficients in this combination are the unknowns. A basis function is non-zero only in the discontiguous set of energy intervals
associated with its energy element. Discontiguous energy elements are generalizations of bands introduced in PG-FEMG and are determined by minimizing a norm of
the difference between sample spectra and our finite-element space. In this paper, we present the theory of the FEDS-MG method, including the definition of the
discontiguous energy mesh, definition of the finite element space, derivation of the FEDS-MG transport equation and cross sections, definition of the minimization
problem, and derivation of a usable form of the minimization problem that can be solved to determine the energy mesh. A companion paper presents results.
144
The Finite Element with Discontiguous Support Multigroup Method: Application
Andrew T. Till, Marvin L. Adams, and Jim E. Morel
Texas A&M University, College Station, TX
The Finite-Element-with-Discontiguous-Support Multigroup (FEDS-MG) method, outlined and derived in a companion paper, is a novel energy discretization technique for deterministic particle transport that overcomes many of the challenges associated with the typical Multigroup (MG) method, such as dependence on fixed self-shielding of the cross sections within a coarse group. Much as the MG method requires a group structure, the FEDS-MG method relies on solving a minimization problem to determine an energy mesh made up of discontiguous energy elements. We may generate this mesh without requiring reference solution information, and we show convergence in energy as energy elements are added for one-dimensional pin-cell problems. Convergence holds for several definitions of basis-function-weighted cross sections. We use our pin-cell calculations to inform our implementation of the FEDS-MG method on an energy-generalized version of the C5G7 problem, which we call the C5G∞ problem.
158
Fourier Convergence Analysis of Two-Node Coarse-Mesh Finite Difference Method for Two-Group Neutron Diffusion
Eigenvalue Problem
Yongjin Jeong, Jinsu Park and Deokjung Lee (1), Hyun Chul Lee (2)
(1) School of Mechanical and Nuclear Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea (2) Korea Atomic Energy Research Institute, Daejeon, Korea
The convergence rate of the nonlinear two-node coarse-mesh finite difference (CMFD2N) method is analytically derived for one-dimensional two-group solutions of the eigenvalue diffusion problem in an infinite homogeneous medium. The two-node analytic nodal method (ANM2N) is used to calculate the current correction factors (CCF) for CMFD2N. It has so far been difficult to apply Fourier analysis directly to CMFD2N with the ANM2N kernel for the eigenvalue problem, because there is one more nested loop than in the fixed source problem. In this paper, using the numerical characteristics of the convergence rate of CMFD2N with the ANM2N kernel, a new way of applying Fourier analysis to the algorithm is suggested. Generally, the dominance ratio increases with problem size, and the dominance ratio mainly governs the convergence rate. The numerical convergence analysis shows that the convergence of CMFD2N with the ANM2N kernel has no dependence on problem size but a strong dependence on mesh size. It is therefore possible to generalize from a small number of nodes to a large number of nodes, since only the relationship between the convergence rate and the mesh size is of interest. The analytic convergence rate obtained by Fourier analysis and the numerical convergence rate are compared with each other. It has been shown numerically that CMFD2N with the ANM2N kernel is unconditionally stable for a practical infinite homogeneous medium, and the analytic convergence analysis explains why the algorithm has this stability.
164
The stability of boiling water reactors as a catastrophe phenomenon
I. Pázsit, V. Dykin (1), H. Konno(2), and T. Kozlowski(3)
(1) Department of Applied Physics, Division of Nuclear Engineering, Chalmers University of Technology, Sweden (2) Faculty of Information and Systems, Department of Risk Engineering, University of
Tsukuba, Tsukuba, Japan (3) Department of Nuclear Plasma and Radiological Engineering, University of Illinois, Urbana, IL
The hypothesis is proposed that the stability parameter (decay ratio) of a complex, many-variable non-linear system might obey a cusp catastrophe. The incentive for this surmise comes from indications in real measurements that in certain cases the decay ratio appears to behave discontinuously and might show a hysteresis as a function of the control parameters, reactor power and coolant flow. Such observations can be explained by the phenomenological catastrophe model suggested in this article. Since a cusp-type behaviour implies that the decay ratio is many-valued in a certain region of the power-flow map, a mechanism is suggested whereby a Hopf bifurcation with multiplicative noise can lead to such behaviour.
Sensitivity and Uncertainty Analysis
Wednesday, April 22, 2015
1:30 PM
Hermitage A-B
Chairs: Dr Christopher M. Perfetti, Dr. Brian C. Kiedrowski
49
Quantifying Nuclear Data Uncertainty in nTRACER Simulation Results with the XSUSA Method
Matías Zilly, Kiril Velkov, and Winfried Zwermann (1), Yeon Sang Jung and Han Gyu Joo (2)
1) Department of Core Behavior, GRS, Garching, Germany, 2) Department of Nuclear Engineering, Seoul National University, Seoul, Korea
For the first time, by applying the GRS method XSUSA, the impact of nuclear data uncertainty on nTRACER direct whole-core simulation results is quantified. Using a
set of varied nuclear data we statistically analyze k-eff and pin power distributions of the C5G7 unrodded configuration. The comparison with Monte Carlo simulations
yields excellent agreement and serves as a validation of the method. For the examined setup we find a standard deviation of 554*10^-5 in k-effective and a maximum uncertainty of 2.5% for the pin power, with the main contributions to the uncertainty stemming from the (n,gamma) cross section of U-238 and nu-bar of Pu-239.
50
Uncertainty and Sensitivity Analysis in Criticality Calculations with Perturbation Theory and Sampling
Friederike Bostelmann, Frank-Peter Weiß, Alexander Aures, Kiril Velkov, Winfried Zwermann (1), Bradley T. Rearden, Matthew A. Jessee, Mark
L. Williams, Dorothea Wiarda, William A. Wieselquist (2)
1) Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) mbH, Garching, Germany, 2) Oak Ridge National Laboratory, Oak Ridge, TN, USA
The paper presents uncertainty and sensitivity analyses in criticality calculations with respect to nuclear data, performed with the SCALE module TSUNAMI, which is based on general perturbation theory, and with the tools SAMPLER (to be available with the next SCALE release) and XSUSA, which are based on a random sampling approach. The investigation mainly uses one-dimensional systems, for which computation times are low; nevertheless, results are also given for critical assemblies in three-dimensional representation. Results from the three tools are compared for a variety of output quantities, namely multiplication factors, reactivity effects, and one-group cross sections. The arrangements under consideration cover a wide variety of fuel and coolant materials with very different neutron spectra. For all systems and all output quantities considered, excellent agreement is obtained with all the tools. The sampling-based results are practically identical; deviations between the TSUNAMI and the sampling-based results are generally very low, too, with rare exceptions where the deviations are slightly larger than the 95% confidence interval of the sampling methods. In total, the results from the different methods appear to be completely consistent.
51
Sampling-Based Nuclear Data Uncertainty Analysis in Criticality and Depletion Calculations
Friederike Bostelmann, Winfried Zwermann, Bernard Krzykacz-Hausmann, Lucia Gallner, Alexander Aures, Kiril Velkov
Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) mbH, Garching, Germany
Uncertainty and sensitivity analyses with respect to nuclear data are performed with criticality calculations for pin cells and a critical experiment, and with depletion
calculations for a boiling water reactor fuel assembly. For this, the sampling-based tool XSUSA is employed together with the criticality and depletion sequences from
the SCALE 6.1 code system. In the criticality calculations, uncertainties for multiplication factors, one-group cross sections, and the radial fission rate distribution are
determined. When possible, comparisons are performed with results from the perturbation theory based TSUNAMI code; the agreement is excellent. In the depletion
calculation, the uncertainty analysis refers to the multiplication factor and nuclide inventories. Special emphasis is put on performing group sensitivity analyses for the
calculated results; thus, in addition to the output uncertainties, the main contributors to these uncertainties are evaluated by determining squared multiple correlation
coefficients as importance indicators. For the critical assembly calculations, the impact of nuclear data uncertainties on the uncertainties of pin powers is small. For nuclide concentrations in depleted fuel assemblies, the uncertainties can be significant. The main contributions to the actinide inventory uncertainties are due to neutron cross section uncertainties, whereas for fission products, neutron cross section uncertainties or fission yield uncertainties can be dominant.
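The squared multiple correlation coefficient used here as an importance indicator can be illustrated in a few lines of Python. The sketch below uses invented input groups and a toy linear response, not the GRS data or codes; the indicator is simply the R^2 of a least-squares regression of the sampled output on one group of sampled inputs.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    x_fis = rng.normal(size=(n, 3))   # hypothetical sampled fission-data group
    x_cap = rng.normal(size=(n, 2))   # hypothetical sampled capture-data group
    y = 2.0 * x_cap[:, 0] + 0.3 * x_fis[:, 1] + 0.1 * rng.normal(size=n)

    def r_squared(group, y):
        """Fraction of output variance explained by one input group."""
        a = np.column_stack([np.ones(len(y)), group])
        coef, *_ = np.linalg.lstsq(a, y, rcond=None)
        return 1.0 - (y - a @ coef).var() / y.var()

    print("R^2, fission group:", round(r_squared(x_fis, y), 3))
    print("R^2, capture group:", round(r_squared(x_cap, y), 3))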
76
On Variable Selection and Effective Estimations of Interactive and Quadratic Sensitivity Coefficients: A Collection of
Regularized Regression Techniques
Weixiong Zheng and Ryan G. McClarren
Department of Nuclear Engineering, Texas A&M University, College Station, TX
In this paper, we present effective regularized regressions for variable selection and estimation of sensitivity coefficients, up to second order, of the k-eff of a TRIGA fuel pin model. Twenty-three parameters are considered; with 253 interactive and 23 quadratic terms, there are 299 parameters in total. Parameters were sampled via a Latin hypercube sampling design. We explored variable selection among the 299 parameters with seven types of regularized methods at different sample sizes and found that several methods, e.g., Bayesian lasso and Bayesian ridge, outperform the commonly used lasso when compared with the reference. We also compared these methods, including lasso, with linear regression for sensitivity estimation using 299 realizations. The results showed the effectiveness of several regularized methods, e.g., Bayesian ridge, in estimating high-order coefficients relative to the reference from forward simulations. When the sample size is reduced, the results show that some methods can still estimate interactions acceptably well.
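The flavor of this variable-selection exercise can be reproduced with generic scikit-learn estimators. The sketch below is an assumption-laden toy (23 uniform inputs, a made-up k-eff-like response, arbitrary regularization strengths), not the authors' TRIGA model, but it builds the same 299-term degree-2 design and compares lasso with Bayesian ridge on a known interaction term.

    import numpy as np
    from sklearn.linear_model import BayesianRidge, Lasso
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    n_samples, n_params = 200, 23
    x = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))

    # Degree-2 design: 23 linear + 253 interaction + 23 quadratic = 299 terms.
    poly = PolynomialFeatures(degree=2, include_bias=False)
    x2 = poly.fit_transform(x)

    # Toy k-eff-like response with one interaction and one quadratic term.
    y = (1.0 + 0.05 * x[:, 0] - 0.02 * x[:, 1] * x[:, 2]
         + 0.01 * x[:, 3] ** 2 + 1e-4 * rng.normal(size=n_samples))

    names = list(poly.get_feature_names_out())
    target = names.index("x1 x2")            # the known interaction term
    for model in (Lasso(alpha=1e-5, max_iter=200000), BayesianRidge()):
        coef = model.fit(x2, y).coef_
        kept = int(np.sum(np.abs(coef) > 1e-4))
        print(f"{type(model).__name__:13s} terms kept (>1e-4): {kept:3d}   "
              f"x1*x2 estimate: {coef[target]:+.4f}  (true -0.0200)")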
Multiphysics and Transient Analysis
Wednesday, April 22, 2015
1:30 PM
Two Rivers
Chairs: Dr. John A. Turner, Dr. David P. Griesheimer
31
Crud and Boron Layer Modeling Requirements using MOC Neutron Transport
Daniel J. Walter and Annalisa Manera
Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, Michigan
The method of characteristics (MOC) neutron transport modeling requirements of very thin CRUD and burnable absorber layers on pressurized water reactor (PWR)
fuel rods are investigated. Ray tracing parameters, including spacing size and number of angles, and mesh refinement studies are performed for 2-D assembly
simulations using the DeCART code. It is found that the presence of a 10 μm thick Integral Fuel Burnable Absorber (IFBA) layer within the lattice model requires a ray
spacing that is approximately five times smaller than modeling a lattice without IFBA. The modeling of a CRUD layer necessitates less stringent requirements, due to
the fact that the boron-10 concentration in a CRUD layer is about one order of magnitude lower than what is typically found in IFBA layers. The effects of radial and
azimuthal mesh refinement, including homogenization strategies of the CRUD layer with the coolant, are investigated. It is found that the azimuthal mesh refinement
including smearing of the CRUD layer is very dependent on the azimuthal distribution of the CRUD layer and significantly impacts the predicted multiplication factor.
281
Design of a High Fidelity Core Simulator for Analysis of Pellet-Clad Interaction
R. Pawlowski (1), K. Clarno (2), R. Montgomery (3), R. Salko, T. Evans, J. Turner (2), and D. Gaston (4)
(1) Sandia National Laboratories (2) Oak Ridge National Laboratory (3) Pacific Northwest National Laboratory (4) Idaho National Laboratory
The Tiamat code is being developed by CASL (Consortium for Advanced Simulation of LWRs) as an integrated tool for predicting pellet-clad interaction and improving
the high-fidelity core simulator. Tiamat is a large-scale parallel code that couples the multi-dimensional Bison-CASL fuel performance code on every fuel rod with
the COBRA-TF (CTF) sub-channel thermal-hydraulics code and either the Insilico or MPACT neutronics codes. Tiamat solves a transient problem where each time
step is subcycled using Picard iteration to converge the fully-coupled nonlinear system. This report discusses the solution algorithms and software design of the
simulator. Results are shown for a five-assembly cross and compared against a separate core simulator developed by CASL. Tiamat has demonstrated that it can
compute quantities of interest for analyzing pellet-clad interaction.
274
Evaluation of Accident Tolerant FeCrAl Coating for PWR Cladding under Normal Operating Conditions with Coupled
Neutron Transport and Fuel Performance
Michael Rose and Thomas J. Downar (1), Xu Wu and Tomasz Kozlowski (2)
(1) Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48105 (2) Department of Nuclear, Plasma, and Radiological Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801
Following the Fukushima Daiichi nuclear disaster in 2011, the emphasis for nuclear fuel R&D activities has shifted to enhancing the accident tolerance of light water
reactor fuels. In a previous study, several accident tolerant fuel (ATF) designs were evaluated under normal operating conditions for pressurized water reactors with
SERPENT and BISON. One of the more promising ATF designs was a fuel rod with Zircaloy cladding and a FeCrAl coating; the current study presents Redwing
results for this design. Redwing couples MPACT and BISON in order to perform coupled neutron transport and fuel performance simulations. In both the previous and
current studies, a short fuel rod model, 10 pellets in length, was depleted for about 3 years. In the current study, the reactivity as a function of time for the same model
was obtained from Redwing; these results show that the FeCrAl coating incurs a significant, but manageable, reactivity penalty. Several important fuel performance
parameters were obtained from Redwing and compared to the BISON standalone results from the previous study: fuel/cladding gap width, fission gas released to the
plenum, plenum pressure, and other parameters not shown in this paper. The fuel performance parameters show several explainable differences between Redwing
and BISON standalone, and some parameters suggest improvements that Redwing makes over BISON standalone. Work is underway to develop a full-length model
of an ATF rod for both BISON standalone and Redwing.
59
High Fidelity Modeling of Pellet-Clad Interaction Using the CASL Virtual Environment for Reactor Applications
K.T. Clarno (1), R.P. Pawlowski (2), R.O. Montgomery (3), T.M. Evans, B.S. Collins (1), B. Kochunas (4), D. Gaston (5), and J.A. Turner (1)
1) Oak Ridge National Laboratory, 2) Sandia National Laboratories, 3) Pacific Northwest National Laboratory, 4) University of Michigan–Ann Arbor, 5) Idaho National Laboratory
The Tiamat code is being developed by CASL (Consortium for Advanced Simulation of Light Water Reactors) as an integrated tool for predicting pellet-clad interaction
and improving the high-fidelity core simulator. Tiamat integrates the advanced core simulator capabilities of CASL, VERA-CS, with the multi-dimensional Bison-CASL
fuel performance code. VERA-CS provides the coupling of the COBRA-TF sub-channel thermal hydraulics and fuel heat transfer capability with either the Insilico or
MPACT neutronics solvers. This report discusses the two neutronics components of VERA and provides a parametric study of the performance of Tiamat using both
neutronics codes and a comparison with the VERA-CS version of both. It is demonstrated that Tiamat is robustly capable of modeling pellet-clad interaction, and some differences in results due to the inclusion of a rigorous fuel performance model rather than simple pin heat transfer are highlighted.
Radiation Transport and Shielding Methods
Wednesday, April 22, 2015
1:30 PM
Belmont
Chair: Dr. Jim E. Morel
193
Development and Verification of Neutron-Photon coupling transport and parallelization in RMC code
Xiao Fan, Guohui Zhang (1), Jingang Liang, Kan Wang (2)
(1) State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing, P.R. China, (2) Department of Engineering Physics, Tsinghua University, Beijing, P.R. China
The Monte Carlo particle transport code RMC, developed by Tsinghua University, is currently undergoing a significant update through the development of neutron-photon coupled transport and its parallelization. High-performance parallel calculations of neutron, photon, and coupled neutron-photon transport are implemented in both eigenvalue and fixed-source calculation modes. The results obtained by RMC in neutron-photon coupled transport mode are in good agreement with those calculated by MCNP. Through the development and verification of a photonuclear reaction capability, two-way coupled neutron-photon transport was realized.
272
Impact of Inflow Transport Approximation on Reactor Analysis
Sooyoung Choi, Deokjung Lee (1), Kord Smith (2)
(1) Ulsan National Institute of Science and Technology (UNIST), Ulsan, Korea (2) Department of Nuclear Science and Engineering, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139,
United States
Methodologies for transport approximations are investigated. The methods include the consistent PN approximation, the outflow transport approximation, and the inflow transport approximation. The detailed derivation of each approximation is described. The three transport approximations are implemented in the lattice physics code STREAM and tested on several verification problems in order to investigate the effect and accuracy of each transport approximation. The verification shows that the consistent PN approximation and the outflow transport approximation cause significant errors in the eigenvalue and the power distribution for high-leakage problems. The inflow transport approximation shows the most accurate and consistent results for the verification problems.
140
How to Use the Receiver Operating Characteristic Tally Option in MCNP6
Garrett E. McMath, Trevor A. Wilcox, and Gregg W. McKinney
Los Alamos National Laboratory
A receiver operating characteristic (ROC) curve capability has been added to the radiation transport code MCNP6. ROC curves are widely used as signal detection
metrics to quantify the sensitivity of detector systems to false positive signals. ROC curves show the probability of detection of a true signal versus the probability of a
false alarm. The most common use in radiation detection is the detectability of a source in the presence of background radiation. The ROC feature was first
implemented in MCNPX 2.7.D and has been carried over with improvements to MCNP6. MCNP6 can now produce ROC curves by utilizing newly implemented tally
treatments and background source options. A ROC curve is produced in the output file along with a data table of the bin-wise probability distribution function values
and cumulative distribution function values used to produce the curve. This paper provides two examples of how to use the ROC feature in MCNP6. Key Words: ROC, receiver, operating, characteristic, MCNP
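The construction of an ROC curve from count distributions is straightforward to sketch. The following Python fragment post-processes synthetic Poisson counts (arbitrary rates; it does not invoke the MCNP6 tally treatments described in the paper) by sweeping a detection threshold and recording the detection and false-alarm probabilities.

    import numpy as np

    rng = np.random.default_rng(42)
    n_trials = 100_000
    background = rng.poisson(lam=50.0, size=n_trials)    # background only
    signal = rng.poisson(lam=65.0, size=n_trials)        # source + background

    thresholds = np.arange(0, 151)
    p_fa = np.array([(background >= t).mean() for t in thresholds])
    p_d = np.array([(signal >= t).mean() for t in thresholds])

    for t in (55, 65, 75):                   # a few operating points
        print(f"threshold {t}: P_FA = {p_fa[t]:.3f}  P_D = {p_d[t]:.3f}")
    # Plotting p_d against p_fa over all thresholds traces out the ROC curve;
    # the area under it summarizes source detectability above background.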
161
Topographic Effects on Ambient Dose Equivalent Rates from Radiocesium Fallout
Alex Malins, Masahiko Okumura, and Masahiko Machida (1), Kimiaki Saito(2)
(1) Center for Computational Science & e-Systems, Japan Atomic Energy Agency, Chiba, Japan (2) Fukushima Environmental Safety Center, Japan Atomic Energy Agency, Chiyoda-ku, Tokyo, Japan
Land topography can affect air radiation dose rates by locating radiation sources closer to, or further from, detector locations when compared to perfectly flat terrain.
Hills and slopes can also shield against the propagation of gamma rays. To understand the possible magnitude of topographic effects on air dose rates, this study
presents calculations for ambient dose equivalent rates at a range of heights above the ground for varying land topographies. The geometries considered were angled
ground at the intersection of two planar surfaces, which is a model for slopes neighboring flat land, and a simple conical geometry, representing settings from hilltops
to valley bottoms. In each case the radiation source was radioactive cesium fallout, and the slope angle was varied systematically to determine the effect of
topography on the air dose rate. Under the assumption of homogeneous fallout across the land surface, and for these geometries and detector locations, the dose
rates at high altitudes are more strongly affected by the underlying land topography than those close to ground level. At a height of 300 m, uneven topographies can lead to a 50% change in air dose rates compared with uniformly flat ground. In practice, however, the effect will more often than not be smaller than this, and heterogeneity in the source distribution is likely to be a more significant factor in determining local air dose rates.
Monte Carlo Methods
Wednesday, April 22, 2015
3:40 PM
Hermitage C
Chair: Dr. Ahmad M. Ibrahim
226
Monte Carlo Application Toolkit (MCATK): Advances for 2015
Jeremy Sweezy, Steve Nolen, Terry Adams, Travis Trahan, and Lori Pritchett-Sheats
Los Alamos National Laboratory, Los Alamos, New Mexico
The Monte Carlo Application ToolKit (MCATK) is a modern Monte Carlo particle transport software library developed at Los Alamos National Laboratory. It is designed
to provide new component-based functionality for existing software as well as provide the building blocks for specialized applications. We will describe the latest
capabilities developed in MCATK, including probability of initiation (POI), multi-temperature cross-sections, surface source read and write, and 3-D computational solid
body geometry.
229
Using fractional cascading to accelerate cross section lookups in Monte Carlo neutron transport calculations
Amanda L. Lund, Andrew R. Siegel (1), Benoit Forget, Colin Josey (2), and Paul K. Romano(3)
(1) Mathematics and Computer Science Division, Argonne National Laboratory, Lemont, IL, (2) Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA,
(3) Bechtel Marine Propulsion Corporation, Knolls Atomic Power Laboratory, Schenectady, NY
We describe and test a technique for carrying out energy grid searches in continuous-energy Monte Carlo (MC) neutron transport calculations that represents an
optimal compromise between grid search performance and memory footprint. The method, based on the fractional cascading technique and referred to as the
cascade grid, is tested within the OpenMC Monte Carlo code, and performance results comparing the method with existing approaches are presented for the
Hoogenboom-Martin reactor benchmark. The cascade grid achieves significant speedups in calculation rate with negligible initialization overhead while increasing the memory footprint by no more than a factor of two.
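The two existing approaches between which the cascade grid interpolates can be sketched directly; fractional cascading then bounds the extra memory by augmenting each grid with only a fraction of its neighbor's points. The toy construction below is our own, not the OpenMC data structures: it verifies that a per-nuclide binary search and a unionized grid with a pointer table return the same interpolation intervals, while making the k-fold memory cost of the union table explicit.

    import numpy as np

    rng = np.random.default_rng(7)
    grids = [np.sort(rng.random(n)) for n in (1000, 3000, 500)]  # toy nuclide grids

    def per_nuclide_lookup(e):
        """Baseline 1: one binary search per nuclide grid -- minimal memory,
        but O(log n) work for every nuclide at every collision."""
        return [int(np.searchsorted(g, e, side="right")) - 1 for g in grids]

    # Baseline 2: unionized grid -- a single binary search, then O(1) table
    # lookups, at the cost of a len(union) x n_nuclides index table.
    union = np.unique(np.concatenate(grids))
    table = np.column_stack([np.searchsorted(g, union, side="right") - 1
                             for g in grids])

    def unionized_lookup(e):
        row = max(int(np.searchsorted(union, e, side="right")) - 1, 0)
        return table[row].tolist()

    for e in rng.uniform(0.01, 0.99, size=5):
        assert per_nuclide_lookup(e) == unionized_lookup(e)
    print("both lookups agree; union table stores", table.size,
          "indices for", sum(len(g) for g in grids), "grid points")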
240
An interim strategy for the treatment of temperature-dependent nuclear data in Monte Carlo particle transport codes
Paul K. Romano, Timothy H. Trumbull, and Thomas M. Sutton
Bechtel Marine Propulsion Corporation, Knolls Atomic Power Laboratory, Schenectady, NY
The present work gives an overview of methods for treating the temperature dependence of nuclear data in Monte Carlo neutral particle transport
simulations as implemented in the MC21 Monte Carlo code and NDEX data processing system. These methods are based on storing cross sections at discrete
temperatures and using interpolation, resulting in higher memory requirements than more recent on-the-fly methods. However, they offer a near-term solution for
carrying out high fidelity coupled neutronic-thermal hydraulic simulations until more experience is gained with on-the-fly methods. Methods discussed are categorized
by the pertinent energy range: thermal cross sections, epithermal cross sections, free-atom fast cross sections, and unresolved resonance range probability table
data. A new method for approximating the Doppler broadened neutron and photon energy release given data at a single temperature is also developed. Finally,
comparisons of both differential and integral data show that the temperature interpolation methods used in NDEX/MC21 produce accurate results in the models
considered.
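The storage-plus-interpolation idea can be sketched generically. The fragment below interpolates toy cross-section tables linearly in sqrt(T) between bracketing stored temperatures; the interpolation variable, grids, and tables are all assumptions for illustration, not the NDEX/MC21 scheme documented in the paper.

    import numpy as np

    temps = np.array([293.6, 600.0, 900.0, 1200.0])   # stored temperatures (K)
    energy = np.logspace(-5, 1, 200)                   # toy energy grid (eV)
    # Toy tables, one row per stored temperature (a broadened 1/v shape).
    xs_tables = np.array([1.0 + 0.1 * np.sqrt(t / 293.6) / np.sqrt(energy)
                          for t in temps])

    def xs_at(temperature):
        """Linear interpolation in sqrt(T) between bracketing stored tables."""
        s, st = np.sqrt(temperature), np.sqrt(temps)
        i = int(np.clip(np.searchsorted(st, s) - 1, 0, len(temps) - 2))
        f = (s - st[i]) / (st[i + 1] - st[i])
        return (1.0 - f) * xs_tables[i] + f * xs_tables[i + 1]

    print("sigma at 750 K, first three points:", xs_at(750.0)[:3])

The memory cost scales with the number of stored temperature grids, which is the trade against on-the-fly Doppler broadening noted in the abstract.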
Reactor Physics
Wednesday, April 22, 2015
3:40 PM
Hermitage D
Chairs: Dr. Alain Hebert, Dr. Germina Ilas
105
Preparation of Pin-By-Pin Nuclear Libraries with Superhomogenization Method for nTRACER and DORT Core Calculations
P. Mala (1,2), S. Canepa, H. Ferroukhi (1), A. Pautz (1,2)
1) Paul Scherrer Institute, Villigen, Switzerland, 2) Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland
This paper presents a study on the behavior of superhomogenization (SPH) factors and on the effect they have on the pin power prediction and on the multiplication factor for the pin-by-pin codes nTRACER and DORT. An SPH module was developed specifically to work with these codes. Improvements in the multiplication factor and in the pin power prediction are obtained for single assemblies and, consequently, for 3x3 cluster calculations that also use the single-assembly cross-section libraries. The largest improvement can be seen for the power of fuel rods with burnable absorbers, for which the absorption reaction rates are generally overestimated when simple flux-volume-weighted homogenized cross sections are used. In addition, sensitivity studies on the SPH factors were performed in terms of fuel temperature, boron concentration, and burnup. The dependences on these variables have been found to be small, and they can be approximated by linear or low-order polynomials. Finally, a preliminary study on different approaches to the preparation of the pin-cell cross sections has been performed. Since the computation time and memory requirements of the pin-by-pin calculations are considerable, the possibility of grouping similar pin cells together, and its impact on calculation time and pin power prediction accuracy, was also studied.
106
The Library Approximation Method for the future EDF chain - ANDROMEDE
A. Calloo, H. Leroyer (1), X. Shang (2)
1) EDF R&D/SINETICS, Clamart, France, 2) Department of Engineering Physics, Tsinghua University, Beijing, China
The goal of this paper is to define and apply a methodology to reduce the number of libraries required for core calculations to account for technological uncertainties or cooling. Given that the enrichments of UOX assemblies are specified up to a 0.05% tolerance around a nominal enrichment, the number of assemblies to be computed may be potentially high. Similarly, MOX assemblies from a given batch can be loaded at different dates in the core, and today assemblies are computed for every six-month period between the first loading and the subsequent ones (cooling of these assemblies leads to the formation of americium, which impacts the reactivity). This methodology has been achieved by using the library approximation method, which is based on the microscopic depletion solver in the new EDF core code, COCAGNE. The library approximation method is used to treat enrichment heterogeneities for UOX assemblies or for MOX assemblies which are loaded at different times. This article presents the methodology which has been developed and the associated validation. The cases studied include assemblies in infinite lattice and 3x3 clusters. The results are very satisfactory for use in the future neutronics calculation chain, ANDROMEDE. The gain of this method is a significant reduction in calculation times, as fewer libraries are computed while the precision is maintained at a good level.
201
NJOY Based Multigroup Cross Section Generation for Fast Reactor Analysis
Chang Hyun Lim and Han Gyu Joo (1), Won Sik Yang (2)
(1) Seoul National University, Department of Nuclear Engineering, Gwanak-gu, Seoul, Korea, (2) Purdue University, School of Nuclear Engineering, West Lafayette, IN, USA
The methods and performance of a fast reactor multigroup cross section generation code, EXUS-F, are presented; the code is capable of directly processing the ENDF data files. The NJOY modules are called within the code to process the resonance data contained in the ENDF/B file. The functions to generate self-shielded ultrafine-group XSs, fission spectrum matrices, and scattering transfer matrices directly from the ENDF files are realized. The extended transport approximation is used in the zero-dimensional (0D) calculation to obtain higher-order moment spectra, and the MOC method with the higher-order scattering source is applied in EXUS-F for one-dimensional (1D) calculations. The verification results for homogenized problems, obtained by comparison with McCARD Monte Carlo results, show that the cross section preparation and the 0D ultrafine-group spectrum calculation work properly. Significant differences were observed in the 1D verification results, but they were greatly reduced by introducing the escape XS.
Sensitivity and Uncertainty Analysis
Wednesday, April 22, 2015
3:40 PM
Hermitage A-B
Chairs: Dr. Christopher M. Perfetti, Dr. Brian C. Kiedrowski
156
Construction Of A Response Surface For A Reactor-Like Problem With Realistic Cross Section Uncertainties
Don E. Bruss, Ryan G. McClarren, Marvin L. Adams, and Jim E. Morel
Department of Nuclear Engineering, Texas A&M University, College Station, TX
In this paper we construct a response surface for a simple problem with a high-dimensional input space common to nuclear reactor analysis. Response surfaces are
challenging to build for these problems due to the extreme dimensionality of the uncertain input space. Deterministic neutron transport calculations commonly
discretize the continuous energy variable into energy groups across which material properties can be averaged. This "multigroup" approximation yields thousands of
cross sections each with an associated uncertainty. In this analysis a thirty energy-group discretization is chosen yielding 1,440 uncertain input parameters. The
variances and covariances between the cross sections are determined with the cross section preparation tool NJOY. A problem of interest is chosen and an equation
derived for a quantity of interest (QoI). An adjoint equation to determine sensitivity coefficients for the QoI with respect to individual cross sections is derived and the
sensitivity coefficients are used to screen the high dimensional input space. A response surface is constructed for the QoI using data from the solutions of the forward
and adjoint problems. The problem of interest chosen for this report serves as a surrogate model for the kinds of problems for which this methodology will prove to be
most useful. The problem examined in this paper requires the same high-dimensional cross section data as the largest-scale deterministic nuclear reactor
calculations, but can be solved in a matter of seconds on a personal computer. We successfully construct a response surface to quantify the uncertainty in the QoI
and serve as a predictive model for this problem to demonstrate that this method is applicable to more challenging physical problems that require significant resources
to solve.
160
Monte Carlo Sensitivity and Uncertainty Analysis With Continuous-Energy Nuclear Covariance Data
Dong Hyuk Lee, Hyung Jin Shim and Chang Hyo Kim
Seoul National University, Seoul 151-744, Korea
The conventional nuclear data sensitivity and uncertainty (S/U) analysis has been conducted using the multi-group covariance data. In order to directly utilize the
continuous-energy covariance data given in the evaluated nuclear data files for the Monte Carlo (MC) S/U analysis, we present an uncertainty quantification
formulation based on the multi-group relative covariance tally in the continuous-energy MC calculations. The proposed method, accompanied by the adjoint estimation in the MC Wielandt calculations for the sensitivity calculations, has been implemented in a Seoul National University MC code, McCARD. Isotope-wise and
reaction-type-wise k-uncertainties of Godiva calculated by the proposed method are compared with those by the conventional MC S/U analysis with the multi-group
covariance data.
172
A Model for Fission Yield Uncertainty Propagation based on the URANIE platform
N. Terranova and M. Sumini (1), P. Archier, O. Serot, D. Bernard, and C. De Saint Jean (2)
(1) Industrial Engineering Department (DIN) University of Bologna, Bologna, Italy (2) CEA, DEN, DER, SPRC, Paul-lez-Durance, France
In the present work, we show how fission yield uncertainties can be propagated in a burn-up calculation. The first part of the work is dedicated to the generation of fission yield covariances in CONRAD (COde for Nuclear Reaction Analysis and Data Assimilation) to be used in the neutronic code APOLLO2. Fission yield covariance files are in fact unavailable in present nuclear databases such as JEFF-3.2 and ENDF/B-VII. To propagate such uncertainties, we adopted a statistical method which has a solid theoretical basis and a relatively simple implementation. Fission yields have therefore been treated as random variables to be sampled from a multivariate normal input parameter distribution, taking correlations into account. Subsequently, a statistically representative number of calculations are carried out with the different sampled input data. An output multivariate probability distribution for core characteristics is then inferred. Random variable sampling and statistical post-processing have been performed using URANIE, a sensitivity and uncertainty analysis platform based on ROOT. This methodology is applied to a simplified geometry, leaving extensions to more complicated layouts for future work.
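The sampling step of such a method is compact to illustrate. The sketch below draws correlated fission-yield samples from a multivariate normal distribution (toy means, uncertainties, and correlations, with a stand-in linear response in place of the APOLLO2/URANIE chain) and infers output statistics.

    import numpy as np

    rng = np.random.default_rng(3)
    mean_yields = np.array([0.060, 0.030, 0.012])   # hypothetical yields
    corr = np.array([[1.0, 0.5, -0.2],
                     [0.5, 1.0, 0.1],
                     [-0.2, 0.1, 1.0]])
    rel_std = np.array([0.02, 0.04, 0.05])          # relative 1-sigma
    std = rel_std * mean_yields
    cov = corr * np.outer(std, std)

    samples = rng.multivariate_normal(mean_yields, cov, size=1000)

    # Stand-in for the burn-up code: any scalar core characteristic computed
    # from each sampled yield set; here, a toy linear response.
    response = samples @ np.array([10.0, -4.0, 2.5])
    print("output mean = %.4f, std = %.4f" % (response.mean(), response.std()))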
Multiphysics and Transient Analysis
Wednesday, April 22, 2015
3:40 PM
Two Rivers
Chairs: Dr. John A. Turner, Dr. David P. Griesheimer
103
Transient Methods For Pin-Resolved Whole Core Transport Using The 2D-1D Methodology In MPACT
Ang Zhu, Yunlin Xu, Aaron Graham, Mitchell Young and Thomas Downar (1), Liangzhi Cao (2)
1) Department of Nuclear Engineering & Radiological Science, University of Michigan, Ann Arbor, USA, 2) School of Nuclear Science and Technology, Xi'an Jiaotong University
This paper presents the development and preliminary validation of the transient transport capability within the framework of the pin resolved, 2D-1D method in the core
neutronics code MPACT. A description of the transient methodology developed in MPACT is first provided and then two alternative transient CMFD acceleration
techniques are described, a one group (1G) and a multigroup (MG) CMFD. Results show that the MG CMFD is more effective for practical transient problems. The
NEM nodal transient method is then presented as the 1D axial solver for the 2D-1D method in MPACT. Numerical results are then presented for the 2D TWIGL and
3D SPERT benchmarks. The TWIGL results from MPACT are shown to agree well with the DeCART transport code and other reference solutions. Preliminary results
are then shown for the SPERT III test 86 case and the MPACT result is shown to be in reasonable agreement with the experimental data.
176
COBRA-TF Parallelization and Application to PWR Reactor Core Subchannel DNB Analysis
Vefa Kucukboyaci and Yixing Sung (1), Robert Salko (2)
(1) Westinghouse Electric Company, Cranberry Woods, PA (2) Oak Ridge National Laboratory, Oak Ridge, TN
COBRA-TF (Coolant Boiling in Rod Arrays – Two Fluid) or CTF is a transient subchannel code, selected to be the reactor core thermal hydraulic (T/H) simulation tool
in the multi-physics code, Virtual Environment for Reactor Applications Core Simulator (VERA-CS), under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL) sponsored by the US Department of Energy. CTF has been improved and parallelized by CASL as part of its multi-physics software package to help the nuclear industry address operational and safety challenge problems, such as departure from nucleate boiling (DNB). In this paper, CTF’s performance and capability are evaluated by modeling and analyzing full core 3D models of a 3-loop and a 4-loop pressurized water reactor (PWR) with all fuel
assemblies modeled in subchannels. Calculations have been performed for DNB ratio (DNBR) predictions in complete loss-of-flow, low-flow main steam line break,
and rod ejection at full power transients. Those applications demonstrate CTF’s capabilities for modeling the entire reactor core in subchannels and simulating
challenging PWR transients in preparation for coupled multi-physics analysis using the VERA-CS neutronic and T/H code system.
107
Assessment of Thermal Hydraulic Feedbacks Effects during Modeling of a RIA using Direct Whole Core Transport
Solution
M. Hursin (1), H. G. Joo (2), T. J. Downar (1)
1) Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI, 2) Seoul National University, Seoul, Korea
In the present work, a transient initiated by the ejection of a control rod in a 2D mini core has been modeled with the MOC-based transport solver of the DeCART code. Thermal hydraulic feedbacks are considered and obtained through the simplified internal solver of DeCART. Various DeCART options to incorporate the thermal hydraulic feedbacks during the transient are assessed in terms of global parameters (maximum power and energy deposited in the fuel) and local parameters (pin power). It is found that generating thermal hydraulic feedbacks at the assembly level by a representative fuel rod, or at the fuel rod level for every fuel rod, leads to only small discrepancies, 0.15% RMS in terms of pin power prediction, with no impact in terms of maximum power or deposited energy (the parameters with safety implications for safety analysis during a RIA). However, the incorporation of local thermal hydraulic feedbacks on the cross sections, and more precisely taking into account the temperature profile within a fuel rod during the self-shielding calculation, is found to have a much larger impact, both globally (7% in terms of maximum core average power and deposited energy) and locally (1.5% RMS in terms of pin power).
Radiation Detection
Wednesday, April 22, 2015
3:40 PM
Belmont
Chair: Dr. Eva E. Sunny
266
Modernization Strategies for SCALE 6.2
B. T. Rearden, R. A. Lefebvre, J. P. Lefebvre, K. T. Clarno, M. A. Williams, L. M. Petrie, U. Mertyurek, B. R. Langley, and A. B. Thompson
Reactor and Nuclear Systems Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for
criticality safety, reactor physics, radiation shielding, and sensitivity/uncertainty analysis. For more than 30 years, regulators, industry, and research institutions around
the world have used SCALE for nuclear safety analysis and design. However, the underlying architecture of SCALE is based on a 40-year-old design with dozens of
independent functional modules and control programs, primarily implemented in the Fortran programming language, with extensive use of customized intermediate
files to control the logical flow of the analysis. Data are passed between individual computational codes using custom binary files that are read from and written to the
hard disk. The SCALE modernization plan provides a progression towards SCALE 7, which will provide an object-oriented, parallel-enabled software infrastructure
with state-of-the-art methods implemented as reusable components. This paper provides a brief overview of the goals of SCALE modernization and details some
modernized features available with SCALE 6.2.
1
A Monte Carlo Model of Elemental Analysis Using a Natural Gamma-Ray Spectroscopy Tool
Qiong Zhang, Freddy Mendez, John Longo, Alberto Mezzatesta, Maxim Vasilyev, Steve Bliven (1), Artur Safin (2)
1) Baker Hughes Incorporated, Houston, TX, 2) Department of Applied Mathematics, University of Texas at Dallas
An innovative computational approach for obtaining elemental standards for isotopes was developed with the potential to replace experimental measurements by
Monte Carlo simulations. Elemental standards are essential for spectral unfolding in formation evaluation applications commonly used for nuclear well logging tools.
Typically, elemental standards are obtained by standardized measurements. However, such experiments are expensive, subject to constraints (e.g., impurity of the test formations), and, because of their time-consuming nature, lack the flexibility to address different logging environments. In contrast, computer-based Monte Carlo simulations provide an accurate and much more flexible approach to obtaining elemental standards for formation evaluation. Given that Monte Carlo modelling provides a
much lower cost and more dimensions of flexibility, the processing of nuclear tool logging data can be enhanced to a new level by employing Monte Carlo modelled
standards. In the scope of this paper, a natural gamma ray spectroscopy tool is selected as an example and the procedure of obtaining elemental standards through
Monte Carlo modelling is demonstrated. A case study is presented from actual well logging data to compare two sets of elemental standards and plans for future work
are discussed.
10
An Analytical Model to Evaluate Gamma-Ray Attenuation Effects in Cased Hole Logging Environment
Qiong Zhang, Freddy Mendez, Alberto Mezzatesta, Steve Bliven (1), Artur Safin (2)
1) Baker Hughes Incorporated, Houston, TX, 2) Department of Applied Mathematics, University of Texas at Dallas
Spectral gamma ray tools provide insights into the mineral composition of formations and such data can be used to distinguish important features of the formation
around the wellbore. However, it is a challenge in cased-hole scenarios to anticipate the attenuation effects of casing within various conditions of density and
thickness in an efficient manner. In this paper, an innovative analytical model is presented for cased-hole environment spectrum analysis and, specifically, to
numerically determine the attenuation effects of gamma ray transmission using various materials. This model is highly efficient and widely applicable for analyzing the
attenuation of gamma rays in a great variety of cased-hole scenarios. The model computes the cased-hole spectrum using the counts of photons that scatter no more
than once in the casing. Taking into account the geometry and the composition of the casing, the model mathematically combines the Lambert attenuation law with the Klein-Nishina differential cross-section to derive an analytical solution.
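The two named ingredients are standard formulas and can be written down directly. The sketch below implements the Klein-Nishina differential cross section and Beer-Lambert attenuation of the uncollided flux (textbook expressions with illustrative numbers; the paper's geometric single-scatter integral over the casing is not reproduced).

    import numpy as np

    R_E = 2.8179403262e-13          # classical electron radius (cm)

    def klein_nishina(alpha, theta):
        """Klein-Nishina differential cross section d(sigma)/d(Omega)
        (cm^2/sr) for incident photon energy alpha = E / (m_e c^2)."""
        ratio = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))   # E'/E
        return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio
                                          - np.sin(theta)**2)

    def lambert_transmission(mu, thickness, cos_incidence=1.0):
        """Beer-Lambert attenuation of the uncollided flux through casing of
        linear attenuation coefficient mu (1/cm) and given thickness (cm)."""
        return np.exp(-mu * thickness / cos_incidence)

    alpha = 1.0 / 0.511             # 1 MeV photon in electron rest-mass units
    print("KN at 30 deg:", klein_nishina(alpha, np.radians(30.0)), "cm^2/sr")
    print("transmission, 1 cm steel (mu ~ 0.47 /cm):",
          lambert_transmission(0.47, 1.0))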
Monte Carlo Methods
Thursday, April 23, 2015
8:30 AM
Hermitage C
Chairs: Dr. Christopher M. Perfetti, Dr. Mathew A. Cleveland
285
Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates
Christopher M. Perfetti and Bradley T. Rearden
Oak Ridge National Laboratory Reactor and Nuclear Systems Division Oak Ridge, TN, USA
This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte
Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies, and their potential for identifying
undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the
potential to accurately predict the behavior of undersampling biases in the responses examined in this study.
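One widely used Markov chain convergence diagnostic of the general kind studied here is the Gelman-Rubin potential scale reduction factor; the abstract does not say whether it is among the paper's five metrics, so the sketch below is purely illustrative. Applied to replicated tally sequences, values near one indicate convergence, while a drifting tally inflates the statistic.

    import numpy as np

    def gelman_rubin(chains):
        """chains: array (m, n) of m independent sequences of tally estimates."""
        m, n = chains.shape
        b = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
        w = chains.var(axis=1, ddof=1).mean()      # within-chain variance
        var_plus = (n - 1) / n * w + b / n
        return np.sqrt(var_plus / w)               # ~1 when well converged

    rng = np.random.default_rng(0)
    stationary = rng.normal(1.0, 0.01, size=(4, 200))
    drifting = stationary + np.linspace(0.0, 0.05, 200)   # unconverged trend
    print("R-hat, stationary tallies:", round(gelman_rubin(stationary), 3))
    print("R-hat, drifting tallies:  ", round(gelman_rubin(drifting), 3))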
251
Momentum Deposition in Curvilinear Coordinates
M.A. Cleveland, A.B. Wollaber, R.B. Lowrie, K.G. Thompson, and G.M. Rockefeller
Los Alamos National Laboratory, Los Alamos, NM
The momentum imparted into a material by thermal radiation deposition is an important physical process in astrophysics and inertial confinement fusion (ICF)
simulations. Momentum deposition in curvilinear coordinates cannot be evaluated analytically for Monte Carlo simulations that use continuous energy deposition
models. In this work we present a new method of evaluating momentum deposition that relies on the combination of a time-averaged approximation and a numerical
integration scheme. We compare this approach to previous momentum deposition methods for a simple numerical and an analytic Marshak wave benchmark.
261
Correlated Sampling Monte Carlo for Critical Boron Search
Aaron M. Bevill and William R. Martin
University of Michigan Department of Nuclear Engineering and Radiological Sciences, Ann Arbor, MI
This paper compares two methods for critical boron search using eigenvalue Monte Carlo. The first method interpolates independent eigenvalue calculations with
generalized least squares to infer the critical boron concentration. The second method uses a modified interpolation scheme to infer the critical boron concentration
from one correlated sampling calculation. The correlated sampling method is accurate to at least 5 ppm and typically 10× to 20× faster.
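The first method is essentially a weighted fit. The sketch below (toy eigenvalue results and uncertainties, not the paper's data) interpolates reactivity against boron concentration by inverse-variance weighted linear least squares and solves for the zero crossing.

    import numpy as np

    ppm = np.array([900.0, 1100.0, 1300.0, 1500.0])
    keff = np.array([1.00912, 1.00331, 0.99761, 0.99208])  # hypothetical runs
    sigma = np.full(4, 15e-5)                               # 1-sigma MC noise

    rho = (keff - 1.0) / keff                # reactivity
    a = np.column_stack([np.ones_like(ppm), ppm])
    w = 1.0 / sigma**2
    # Weighted least squares: solve (A^T W A) c = A^T W rho.
    coef = np.linalg.solve(a.T @ (w[:, None] * a), a.T @ (w * rho))
    critical_ppm = -coef[0] / coef[1]        # where rho(ppm) = 0
    print(f"estimated critical boron: {critical_ppm:.0f} ppm")

The correlated sampling variant replaces the independent runs with a single calculation whose perturbed tallies supply the interpolation points, which is the source of the quoted speedup.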
286
Tracking to Nonaligned and to Distant Tori
Kenneth A. Van Riper
White Rock Science, Los Alamos, NM
Tracking algorithms that find the distance to a surface from a point in a specified direction are used both in Monte Carlo transport codes and in graphics programs,
such as Sabrina, that make pictures by ray tracing. Torus surfaces in MCNP are restricted to aligned tori with the major axis parallel to a coordinate axis. Because
Sabrina is based on the MCNP tracking code, this restriction was originally present in Sabrina. We show the modifications to Sabrina that permit tracking to
nonaligned tori. The position and direction vectors are transformed by a matrix based on the vector of the nonaligned torus’ major axis. Ray traced pictures of a torus
far from the viewpoint show artifacts caused by numerical imprecision in the quartic solver used to calculate the intersection of a ray with a torus. Tracking first to a
sphere enclosing the torus and then to the torus removes the artifacts.
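The underlying surface equation yields the standard quartic in the ray parameter. The sketch below solves it with a numerical root finder for an axis-aligned torus (for a nonaligned torus one would first rotate the ray into the torus frame, as described above); the far-away origin in the example is exactly the regime where re-basing the ray on an enclosing sphere tames the numerical imprecision.

    import numpy as np

    def ray_torus_distance(origin, direction, big_r, small_r):
        """Distance to a torus about the z axis with major radius big_r and
        minor radius small_r, or None if the ray misses."""
        p = np.asarray(origin, float)
        d = np.asarray(direction, float)
        d /= np.linalg.norm(d)
        # (|p + t d|^2 + R^2 - r^2)^2 = 4 R^2 ((px + t dx)^2 + (py + t dy)^2)
        b = 2.0 * p.dot(d)
        c = p.dot(p) + big_r**2 - small_r**2
        coeffs = [
            1.0,
            2.0 * b,
            b * b + 2.0 * c - 4.0 * big_r**2 * (d[0]**2 + d[1]**2),
            2.0 * b * c - 8.0 * big_r**2 * (p[0]*d[0] + p[1]*d[1]),
            c * c - 4.0 * big_r**2 * (p[0]**2 + p[1]**2),
        ]
        roots = np.roots(coeffs)
        hits = [t.real for t in roots if abs(t.imag) < 1e-9 and t.real > 1e-9]
        return min(hits) if hits else None

    # Ray along +x toward a torus (R = 4, r = 1); first crossing at x = -5.
    print(ray_torus_distance([-10.0, 0.0, 0.0], [1.0, 0.0, 0.0], 4.0, 1.0))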
Deterministic Transport Methods
Thursday, April 23, 2015
8:30 AM
Hermitage D
Chairs: Dr. Tara M. Pandya, Dr. Teresa S. Bailey
77
Reconstruction of Neutronic Macroscopic Cross-Sections Using Tucker Decomposition
Thi Hieu Luu, Matthieu Guillo, Pierre Guérin (1), Yvon Maday (2)
1) Department of SINETICS, EDF/R&D, Clamart, France; 2) Sorbonne Universités, UPMC Univ Paris 06, Paris, France; 3) Institut Universitaire de France and 4) Division of Applied Mathematics, Brown
University, Providence, RI, USA
EDF/R&D is developing a new core code named COCAGNE. It requires as inputs the values of various cross-sections that depend on many feedback parameters. These are first computed using a lattice code (such as APOLLO2) for some values of the feedback parameters and stored in a library. When required by COCAGNE, the cross-sections are estimated from the library by some interpolation process. Using a multilinear interpolation method leads to a problem called the "curse of dimensionality", which means that the data necessary for the interpolation increase exponentially with the number of dimensions. We propose here a new method based on Tucker decomposition. First, a Karhunen-Loève-type technique known as higher-order singular value decomposition (HOSVD) is used to determine, for each parameter, a one-dimensional basis suited to the considered cross-section. Then, the cross-sections are approximated as a low-rank tensor product of these bases, expressed in the Tucker format. Using this technique, we avoid the "curse of dimensionality" problem. We show that with this method, even though many improvements are still possible, the accuracy results are promising, while the number of lattice calculation points decreases and the storage efficiency increases. Key Words: cross-section reconstruction, Karhunen-Loève decomposition, higher-order singular value decomposition (HOSVD), Tucker decomposition, low-rank approximation
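The HOSVD-plus-Tucker construction is a few lines of dense linear algebra. The sketch below (generic NumPy on a smooth toy three-parameter table, not the COCAGNE/APOLLO2 tooling) extracts one basis per parameter from the SVD of each unfolding, projects onto them to obtain the core tensor, and compares storage against the full table.

    import numpy as np

    bu = np.linspace(0.0, 60.0, 20)        # toy burnup grid
    tf = np.linspace(500.0, 1200.0, 15)    # toy fuel temperature grid
    cb = np.linspace(0.0, 2000.0, 12)      # toy boron grid
    # Smooth toy cross-section table Sigma(bu, tf, cb).
    x = ((1.0 + 0.01 * bu)[:, None, None]
         * np.sqrt(tf / 600.0)[None, :, None]
         * (1.0 + 1e-4 * cb)[None, None, :])

    def unfold(t, mode):
        return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

    # One orthonormal basis per parameter from the SVD of each unfolding.
    ranks = (2, 2, 2)
    bases = [np.linalg.svd(unfold(x, m), full_matrices=False)[0][:, :r]
             for m, r in enumerate(ranks)]

    # Core tensor: project every mode onto its basis (Tucker format).
    core = np.einsum('ijk,ia,jb,kc->abc', x, *bases)
    x_hat = np.einsum('abc,ia,jb,kc->ijk', core, *bases)

    storage = core.size + sum(u.size for u in bases)
    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    print("stored values: %d instead of %d" % (storage, x.size))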
85
Relaxation Schemes for the M1 Model with Space-Dependent Flux: Application to Radiotherapy Dose Calculation
Teddy Pichard and Martin Frank (1), Denise Aregba-Driollet, Stéphane Brull, and Bruno Dubroca (2)
1) Mathematics division, Center for Computational Engineering Science, Rheinisch-Westfälische Technische Hochschule, Aachen, Germany, 2) Institut de Mathématiques de Bordeaux, Université de
Bordeaux, Talence, France
Because of stability constraints, most numerical schemes applied to hyperbolic systems of equations turn out to be costly when the flux term is multiplied by some very large scalar. This problem emerges with the M1 system of equations in the field of radiotherapy when considering heterogeneous media with very disparate densities. Additionally, the flux term of the M1 system is non-linear, and in order for the model to be well-posed the numerical solution needs to fulfill conditions called realizability. In this paper, we propose a numerical method that overcomes the stability constraint and preserves the realizability property. For this purpose, we relax the M1 system to obtain a linear flux term. Then we extend the stencil of the difference quotient to obtain stability. The scheme is applied to a radiotherapy dose calculation example.
213
Evaluation of PWR Simulation Benchmarks Identified in U. S. NRC Regulatory Guide 1.190 Using the TRANSFX Nuclear
Analysis Software
B. P. Richardson
TransWare Enterprises Inc., Sycamore, IL
Evaluations have been performed for two of the experimental benchmarks identified in Nuclear Regulatory Commission Regulatory Guide 1.190 using the TRANSFX
Nuclear Analysis Software. TRANSFX uses a deterministic, three-dimensional, multigroup nuclear particle transport theory code (TRANSRAD) that performs neutron
and gamma flux calculations. TRANSFX couples the nuclear transport method with a general geometry modeling capability to provide a flexible and accurate tool for
determining fluxes for any light water reactor design. TRANSFX supports the method of characteristics solution technique, a three-dimensional ray-tracing method
based on combinatorial geometry, a fixed source iterative solution with anisotropic scattering, thermal-group upscattering treatments, and a nuclear cross-section data
library based upon the ENDF/B-VI data file. The benchmarks evaluated include the VENUS-3 benchmark and the H. B. Robinson-2 Pressure Vessel Benchmark.
These benchmarks are intended to qualify a methodology for performing reactor pressure vessel fast fluence calculations. The overall comparison to measurements results in a calculated-to-measured ratio of 1.03 with a standard deviation of 0.05. This is within the uncertainty associated with the measured values, and well within
the 20% uncertainty allowed by Reg. Guide 1.190, demonstrating that the TRANSFX Software is capable of performing neutron transport calculations for evaluating
RPV neutron fluence.
162
Property Analysis on the Spatially Dependent Resonance Self-shielding Method
Hongbo Zhang, Chuntao Tang, Bo Yang, Weiyan Yang and Guangwen Bi
Shanghai Nuclear Engineering Research and Design Institute, Shanghai, China
Some basic properties of an existing spatially dependent resonance self-shielding method, the Spatially Dependent Dancoff Method (SDDM), are studied based on an implementation of the algorithm. A number of test cases are designed considering several factors that influence the effective resonance cross-sections, including heavy metal enrichment, temperature, spatial heterogeneity, fuel type, and so on. Some technical details of the algorithm, such as the resonance interaction iteration, the removal cross-section modification, and plutonium thermal resonance, are also discussed. The effects are quantified and some analyses of the algorithm's properties are attempted, which could serve as a reference for later work on engineering calculations and algorithm improvements.
Reactor Physics
Thursday, April 23, 2015
8:30 AM
Hermitage A-B
Chairs: Dr. Ugur Mertyurek, Dr. Germina Ilas
62
Analysis of Temperature Effects on Reactivity of Light Water Reactor Fuel Assemblies by Using MVP-2 Adopting an Exact Resonance Elastic Scattering Model
Toru Yamamoto and Tomohiro Sakai
Division of Research for Reactor System Safety, Regulatory Standard and Research Department, Secretariat of Nuclear Regulation Authority, Minato-ku, Tokyo, Japan
The Japan Atomic Energy Agency has installed a resonance elastic scattering model coupled with the thermal motion of target nuclei (the exact model) in the continuous-energy Monte Carlo code MVP-2 and has also prepared the libraries of elastic scattering cross sections at 0 K for 235U, 238U, 238Pu, 239Pu, 240Pu, 241Pu, 242Pu and 241Am, which are necessary to apply the exact model. We applied the code and the libraries to analyze the neutron multiplication factors (kinf) and Doppler reactivity of UO2 and MOX fuel assemblies for light water reactors and compared them with those obtained by using the conventional asymptotic slowing down model (the asymptotic model). The base condition of the assemblies was a hot operating condition with an in-channel void fraction of 40% and a fuel pellet temperature of 520 degrees C (793 K) for the BWR fuel assemblies, and a hot zero power condition with a fuel pellet temperature of 284 degrees C (557 K) for the PWR fuel assemblies. From the base condition, only the fuel pellet temperature was raised to 1500 degrees C (1773 K). The calculated results showed that the difference in kinf between the exact and the asymptotic models was -220 to -410 pcm (1 pcm = 0.00001 dk) at the 1500 degrees C condition, and the exact model made the Doppler reactivity between the base and the 1500 degrees C conditions more negative by 7 to 10%. The effect of the exact model for all the heavy isotopes other than 238U was to make the Doppler reactivity less negative by a few %.
91
A Semi-Heterogeneous Method For On-The-Fly Environmental Correction of Nodal Equivalence Parameters
R. H. Prinsloo, O. M. Zamonsky, D. I. Tomasevic, B. Erasmus, S. A. Groenewald
Necsa, Pretoria, Gauteng, South Africa
An embedded calculational scheme for full core nodal diffusion calculations is proposed, with the particular aim of correcting nodal equivalence parameters for the so-called environmental effect. Generally, approximations such as spatial homogenization, energy condensation, and the diffusion approximation have been handled via available forms of equivalence parameter generation, but the residual environmental error resulting from the transport solution's approximate boundary conditions remains an area of concern. In this work a semi-heterogeneous embedded scheme is proposed as a potential remedy, as opposed to utilizing fully heterogeneous embedded transport solutions. In this approach, it is shown that a simplified embedded solution is still capable of providing on-the-fly environmental corrections to both cross-sections and discontinuity factors. This semi-heterogeneous representation differs from the original heterogeneous transport problem with regard to the level of spatial heterogeneity, the energy representation, and the order of the solution operator. The scheme is tested on representative benchmarks which exhibit environmental errors; while improving the accuracy of the solution, it largely retains the performance advantage of traditional nodal methods.
197
Utilization of LR-0 Reactor in MSR Research
Evžen Losa, Michal Košťál, Vojtěch Rypar, Martin Schulz, Bohumil Jánský, Evžen Novák
Research Centre Rez, Husinec-Řež, Czech Republic
The study of reactivity effects which are in many cases negligible is a difficult task. For example, in the planned frame of the temperature reactivity coefficient measurement in the FLIBE salt, the introduced reactivity can be as low as 10-20 pcm. With regard to these requirements, experiments with special core configurations with high sensitivity to low introduced reactivity changes were performed at the LR-0 reactor. The core configurations proved to be highly sensitive to small reactivity changes, and the calculations showed some discrepancies in the physical description of the compound containing fluorine. The results also imply that this kind of core configuration is applicable to the planned precise measurement of the MSR FLIBE salt temperature reactivity coefficients. Another important task is the study of neutron spectra in the inserted materials. The neutron spectra were measured after the layer of the FLINA salt and graphite, and these measurements were compared with calculations using different nuclear data libraries. The results show notable variations between the various library results and experiment. Those discrepancies are most probably caused by 19F, whose description shows the most notable variations among the studied elements. This result might be important not only for MSR research but also for criticality safety issues in the fuel fabrication process, where fluorine is used.
114
Temperature Limited Transient Calculations For The Transient Reactor Test Facility (TREAT) Using MCNP and the
Point Kinetics Code TREKIN
Dimitrios C. Kontogeorgakos, Heather M. Connaway, Keith L. Derstine, and Arthur E. Wright (1), Sean R. Morrell (2)
1) Argonne National Laboratory, Argonne, IL, 2) Idaho National Laboratory, Idaho Falls, ID
The Transient Reactor Test Facility (TREAT) is an experimental reactor located at Idaho National Laboratory. TREAT was built to conduct transient reactor tests
simulating accident conditions, and to test fast reactor and light water reactor fuel as well as other special purpose fuels. TREAT operated from 1959 until 1994 when
it was placed on non-operational standby. Recently, the US Department of Energy made the decision to pursue the resumption of transient testing utilizing TREAT.
Efforts are also underway by DOE to convert the reactor from its current highly-enriched uranium (HEU) fuel to low enriched uranium (LEU) fuel. Experiment planning
requires transient simulation capabilities to ensure safe operations. Transient simulations are performed with the point kinetics code TREKIN, which was specifically
designed for TREAT and was extensively used in planning experiments during reactor operations. TREKIN uses neutronics input data (temperature reactivity feedback, prompt generation lifetime, and effective delayed neutron fraction) that, for this study, were produced with the Monte Carlo code MCNP. This paper
describes the methodology used to perform transient calculations as well as its validation by comparison with temperature-limited transient experiments. The
application of the method to analyze the feasibility of the core conversion from HEU to LEU fuel is also discussed and preliminary results for the current LEU fuel
design concept are presented.
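The point-kinetics-with-feedback machinery that such analyses rest on can be sketched generically. The fragment below integrates one-delayed-group point kinetics with an adiabatic temperature feedback (made-up constants, not TREKIN's models or the MCNP-generated TREAT parameters) and exhibits the self-limiting excursion characteristic of temperature-limited transients.

    import numpy as np
    from scipy.integrate import solve_ivp

    BETA, LAMBDA_GEN, DECAY = 0.007, 5e-5, 0.08  # beta, Lambda (s), lambda (1/s)
    ALPHA_T = -1e-5      # temperature reactivity coefficient (dk/k per K)
    HEAT = 0.5           # adiabatic heating, K per (power unit * s)
    RHO_0 = 0.009        # inserted step reactivity (> beta: prompt-critical)

    def rhs(t, y):
        n, c, temp = y
        rho = RHO_0 + ALPHA_T * (temp - 300.0)
        dn = (rho - BETA) / LAMBDA_GEN * n + DECAY * c
        dc = BETA / LAMBDA_GEN * n - DECAY * c
        dtemp = HEAT * n
        return [dn, dc, dtemp]

    y0 = [1.0, BETA / (LAMBDA_GEN * DECAY), 300.0]   # equilibrium precursors
    sol = solve_ivp(rhs, (0.0, 5.0), y0, method='LSODA', rtol=1e-8, atol=1e-10)
    print(f"peak power ~ {sol.y[0].max():.3g} (relative), "
          f"final T ~ {sol.y[2, -1]:.0f} K")
    # The negative temperature coefficient terminates the excursion, which is
    # the self-limiting behaviour exploited in temperature-limited tests.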
Multiphysics and Transient Analysis
Thursday, April 23, 2015
8:30 AM
Two Rivers
Chairs: Dr. John A. Turner, Dr. David P. Griesheimer
123
Modelling of Stationary Fluctuations in Nuclear Reactor Cores in the Frequency Domain
C. Demazière, V. Dykin, A. Hernández-Solís and V. Boman
Chalmers University of Technology, Department of Applied Physics, Division of Nuclear Engineering, Gothenburg, Sweden
This paper presents the development of a numerical tool to simulate the effect of stationary fluctuations in Light Water Reactor cores. The originating fluctuations are
defined for the variables describing the boundary conditions of the system, i.e. inlet velocity, inlet enthalpy, and outlet pressure. The tool calculates the three-dimensional space-frequency distributions within the core of the corresponding fluctuations in neutron flux, coolant density, coolant velocity, coolant enthalpy, and fuel
temperature. The tool is thus based on the simultaneous modelling of neutron transport, fluid dynamics, and heat transfer in a truly integrated and fully coupled
manner. The modelling of neutron transport relies on the two-group diffusion approximation and a spatial discretization based on finite differences. The modelling of
fluid dynamics is performed using the homogeneous equilibrium model complemented with pre-computed static slip ratio. Heat conduction in the fuel pins is also
accounted for, and the heat transfer between the fuel pins and the coolant is also modelled using a pre-computed distribution of the heat transfer coefficient. The
spatial discretization of the fluid dynamic and heat transfer problems is carried out using finite volumes. The tool is currently entirely Matlab based with input data
provided by an external static core simulator. The paper also presents the results of dynamic simulations performed for a typical pressurized water reactor and for a
typical boiling water reactor, as illustrations of the capabilities of the tool.
145
Solving the Time-Dependent Neutron Diffusion Equation Using Moving Meshes
A. Vidal-Ferrandiz, R. Fayez, and G. Verdú (1), D. Ginestar (2)
1) Instituto de Seguridad Industrial: Radiofísica y Medioambiental, Universitat Politècnica de València, València, Spain, 2) Instituto de Matemática Multidisciplinar, Universitat Politècnica de València,
València, Spain
To simulate the behaviour of a nuclear power reactor it is necessary to integrate the time-dependent neutron diffusion equation inside the reactor core. In particular, we consider here VVER-type reactors, for which the neutron diffusion equation is discretized on hexagonal meshes. The spatial discretization of this equation is done using a finite element method that permits h-p refinements for different geometries. Transients involving movement of the control rod banks suffer from the rod-cusping effect: an unphysical behaviour of the reactor keff, or of the average power, that appears in the calculation results when the volume-weighted method is used to interpolate the cross sections of a partially rodded node. Previous studies have usually approached the problem using a fixed-mesh scheme with averaged material properties, and many techniques exist for the treatment of the rod-cusping problem. The present
work proposes a moving mesh scheme, with spatial meshes that change with the movement of the control rods, avoiding the need for equivalent material cross sections in the partially inserted cells. The performance of the moving mesh scheme is tested by studying two different three-dimensional benchmark problems.
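The volume-weighted interpolation blamed for the cusping is simply a linear blend of rodded and unrodded node cross sections; the sketch below (illustrative, not the authors' code) shows the approximation that a moving mesh avoids:

def volume_weighted_xs(xs_rodded, xs_unrodded, insertion_fraction):
    # Classic volume weighting of a partially rodded node's cross section.
    # It ignores the intra-node flux depression below the rod tip, so keff
    # acquires a cusp each time the tip crosses a node boundary; a mesh
    # that moves with the rod tip avoids this interpolation entirely.
    f = insertion_fraction
    return f * xs_rodded + (1.0 - f) * xs_unrodded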
70
Efficient, Three-Temperature, Thermal Radiative Transfer Solution via a High-Order, Low-Order Method
H. Park, A. B. Wollaber, R. M. Rauenzahn, and D. A. Knoll
Los Alamos National Laboratory, Los Alamos, NM
We discuss an efficient solution method for coupled three-temperature thermal radiative transfer problems. Our approach is based on a previously developed moment-based scale-bridging algorithm. In this setting, the coupling terms between the electrons, ions, and radiation can be treated in the low-order system with an efficient nonlinear solver. We also show that an effective preconditioner can easily be created using a combination of operator splitting and linearization.
104
The Implementation and Analysis of the MOC and CMFD Adjoint Capabilities in the 2D-1D Code MPACT
Ang Zhu, Yunlin Xu, Thomas Saller and Thomas Downar
Department of Nuclear Engineering & Radiological Science, University of Michigan, Ann Arbor, MI, USA
The objective of the work presented in this paper was to develop both the Method of Characteristics (MOC) and Coarse Mesh Finite Difference (CMFD) adjoint
capabilities within the framework of the pin resolved, 2D-1D method in the core transport neutronics code MPACT. The first section of this paper provides a description
of the algorithms developed in MPACT to solve the adjoint flux for both the MOC and CMFD equations. The computational complexity and efficiency of the MOC
based and CMFD based adjoint flux calculations are then compared, and numerical results are presented. The MOC and CMFD adjoint flux solutions are compared
for a simple pin cell case, which shows good agreement in the non-resonance energy region. This suggests that the use of the CMFD-based adjoint flux is sufficiently accurate for cases in which pin resolution is not important, such as core reactivity edits. A SPERT transient rod assembly case is then used to compare MPACT spatial
transient results to the Exact Point Kinetic Equation (EPKE) results using the CMFD adjoint flux. Good agreement is observed between the EPKE and the MPACT
spatial transient results which provides confidence in CMFD adjoint flux calculation for practical core reactivity edits in MPACT.
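Conceptually, the CMFD adjoint reduces to power iteration on the transposed coarse-mesh operators. A small dense-matrix sketch of that idea (an assumption-laden illustration, not the MPACT implementation):

import numpy as np

def cmfd_adjoint_flux(M, F, tol=1e-8, max_iter=500):
    # Power iteration on the transposed CMFD operators: solves
    # M^T phi* = (1/k) F^T phi*, the adjoint of M phi = (1/k) F phi,
    # where M is the coarse-mesh loss matrix and F the fission matrix.
    phi = np.ones(M.shape[0])
    k = 1.0
    for _ in range(max_iter):
        phi_new = np.linalg.solve(M.T, F.T @ phi / k)
        k_new = k * (F.T @ phi_new).sum() / (F.T @ phi).sum()
        if abs(k_new - k) < tol:
            return k_new, phi_new / phi_new.max()
        phi, k = phi_new, k_new
    return k, phi / phi.max()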
Advanced Solvers in Nuclear Technology
Thursday, April 23, 2015
8:30 AM
Belmont
Chair: Dr. Thomas M. Evans
211
Analysis of an Adaptive Time Step Scheme for the Transient Diffusion Equation
Justin M. Pounders and Joseph Boffie
Nuclear Engineering Program, Department of Chemical Engineering University of Massachusetts Lowell, Lowell, MA
The stability and accuracy of a new adaptive time step selection scheme are investigated for the transient diffusion equation. This adaptive time step scheme is based
on the commonly-implemented backward difference discretization of the diffusion equation and recommends optimal time steps based on constraints applied to
estimates of the local truncation error. Methods are derived for both error estimation and error control, each of which potentially impacts the stability of the scheme and
the global accuracy of the solution. Asymptotic stability and convergence of the recommended time steps are investigated theoretically and demonstrated numerically
to identify optimal realizations of the new method. This adaptive time stepping scheme requires no solution evaluations or operator inversions beyond those already
performed in the adaptation-free solution and requires no modifications to the numerical solution algorithm. As such, this adaptivity scheme can be easily implemented
in virtually any reactor physics simulation code based on a backward difference discretization of transient neutronics.
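A hedged sketch of such a controller for backward Euler, where the local truncation error is estimated from divided differences of solutions already computed (the controller constants and estimator details below are this sketch's assumptions, not the authors' exact formulas):

import numpy as np

def bdf1_adaptive(apply_step, y0, t_end, dt0, tol):
    # apply_step(y, t, dt) must return the backward-Euler solution at t + dt.
    # The local truncation error of each step is estimated from a second
    # divided difference of solutions already in hand, so no extra solves
    # or operator inversions are needed.
    ts, ys = [0.0], [np.asarray(y0, dtype=float)]
    t, dt = 0.0, dt0
    while t < t_end - 1e-14:
        dt = min(dt, t_end - t)
        y_new = apply_step(ys[-1], t, dt)
        if len(ys) >= 2:
            dt_prev = ts[-1] - ts[-2]
            # Second divided difference ~ y''; BDF1 local error ~ (dt^2 / 2) * y''.
            ydd = 2.0 * ((y_new - ys[-1]) / dt
                         - (ys[-1] - ys[-2]) / dt_prev) / (dt + dt_prev)
            err = 0.5 * dt * dt * np.linalg.norm(ydd) / max(np.linalg.norm(y_new), 1e-30)
            if err > tol:
                dt *= 0.5              # reject the step and retry
                continue
            t += dt
            ts.append(t); ys.append(y_new)
            # Standard controller: aim for the error tolerance on the next step.
            dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-30)) ** 0.5))
        else:
            t += dt
            ts.append(t); ys.append(y_new)
    return np.array(ts), np.array(ys)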
242
Rayleigh Quotient Iteration with a Multigrid in Energy Preconditioner for Massively Parallel Neutron Transport
R.N. Slaybaugh(1), T.M. Evans, G.G. Davidson (2), and P.P.H. Wilson(3)
(1) Department of Nuclear Engineering, University of California, Berkeley, CA, (2) Radiation Transport Group, Oak Ridge National Laboratory, Oak Ridge, TN, (3) Department of Nuclear Engineering and
Engineering Physics, University of Wisconsin, Madison, WI
Three complementary methods that use leadership-class computers fully and effectively have been implemented in the code Denovo to accelerate neutral particle transport calculations: a multigroup block (MG) Krylov solver, a Rayleigh quotient iteration (RQI) eigenvalue solver, and a multigrid in energy preconditioner. The multigroup Krylov solver converges more quickly than Gauss-Seidel and enables energy decomposition such that Denovo can scale to hundreds
of thousands of cores. The new multigrid in energy preconditioner reduces iteration count for many problem types and takes advantage of the new energy
decomposition such that it can scale efficiently. These two tools are useful on their own, but together they enable the RQI eigenvalue solver to work. Each individual
method has been described before, but this is the first time they have been demonstrated to work together effectively. RQI should converge in fewer iterations than
power iteration (PI) for large and challenging problems. RQI creates shifted systems that would not be tractable without the MG Krylov solver. It also creates ill-conditioned systems that cannot be converged without the multigrid in energy preconditioner. Using these methods together, RQI converged in fewer iterations and in less
time than all PI calculations for a full pressurized water reactor core. It also scaled reasonably well out to 275,968 cores.
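For reference, the core of RQI is a shift-and-invert step with the shift updated to the current Rayleigh quotient; a dense-matrix sketch follows (the transport case is a generalized eigenproblem solved matrix-free, which this illustration does not attempt):

import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-12, max_iter=50):
    # Shift-and-invert with the shift set to the current Rayleigh quotient.
    # Converges (cubically for symmetric A) to the eigenpair nearest the
    # starting Rayleigh quotient; near convergence the shifted system is
    # nearly singular, which is why a strong Krylov solver and
    # preconditioner are needed at production scale.
    x = x0 / np.linalg.norm(x0)
    rho = x @ A @ x
    for _ in range(max_iter):
        try:
            y = np.linalg.solve(A - rho * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:
            break                      # exactly singular shift: converged
        x = y / np.linalg.norm(y)
        rho_new = x @ A @ x
        if abs(rho_new - rho) < tol * max(1.0, abs(rho_new)):
            return rho_new, x
        rho = rho_new
    return rho, x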
244
A Modified Moving Least Square Algorithm for Solution Transfer on a Spacer Grid Surface
Stuart R. Slattery, Steven P. Hamilton, and Thomas M. Evans
Oak Ridge National Laboratory, Oak Ridge, TN
Solution transfer operators constructed from a moving least square basis have recently been researched for multiphysics simulations of pressurized water reactors.
Useful for solution transfer between surfaces and volumes, moving least square schemes are attractive because they do not require a computational mesh but instead
use a point cloud representation of the discretized domain. In addition, they are often able to achieve a very accurate and conservative solution reconstruction and are
readily parallelized to scale on leadership-class computing facilities. When studying the moving least square technique, we discovered that when the algorithm was
applied to solution transfers on a spacer grid surface, numerical instabilities in the singular value decomposition algorithm used to generate the moving least square
basis resulted in large errors after a few solution transfer iterations, rendering the method unusable in many situations. In this work we assess these instabilities and
show that using a truncated singular value decomposition and simply augmenting the threshold in the algorithm does not alleviate the instabilities without introducing
additional error into the solution transfer and adding an additional free parameter to the algorithm. We then modify the moving least square algorithm to ensure that
the terms of the spatial polynomials used to construct the least square problem are linearly independent, resulting in a full rank linear system which can be used to
calculate an optimal truncation threshold. We show that the new algorithm significantly improves the stability of repeated solution transfers on the spacer grid surface.
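The truncated-SVD least-squares solve that the paper shows to be insufficient on its own can be sketched as follows (illustrative only):

import numpy as np

def truncated_svd_lstsq(P, b, rtol=1e-8):
    # Least-squares solve via a truncated SVD: singular values below
    # rtol * s_max are discarded.  The paper shows that tuning rtol alone
    # cannot stabilize the spacer-grid transfer without adding error,
    # motivating its full-rank reformulation of the basis instead.
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    keep = s > rtol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])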
280
Analysis of Anderson Acceleration on a Simplified Neutronics/Thermal Hydraulics System
A. Toth, C. T. Kelley (1), S. Slattery, S. Hamilton, K. Clarno (2), and R. Pawlowski (3)
(1) Department of Mathematics, North Carolina State University Raleigh, NC 27695, (2) Oak Ridge National Laboratory Oak Ridge, TN 37831, (3) Sandia National Laboratory Albuquerque, NM 87185
A standard method for solving coupled multiphysics problems in light water reactors is Picard iteration, which sequentially alternates between solving single physics
applications. This solution approach is appealing due to simplicity of implementation and the ability to leverage existing software packages to accurately solve single
physics applications. However, there are several drawbacks in the convergence behavior of this method; namely slow convergence and the necessity of heuristically
chosen damping factors to achieve convergence in many cases. Anderson acceleration is a method that has been seen to be more robust and fast converging than
Picard iteration for many problems, without significantly higher cost per iteration or complexity of implementation, though its effectiveness in the context of multiphysics
coupling is not well explored. In this work, we develop a one-dimensional model simulating the coupling between the neutron distribution and fuel and coolant
properties in a single fuel pin. We show that this model generally captures the convergence issues noted in Picard iterations which couple high-fidelity physics codes.
We then use this model to gauge potential improvements with regard to rate of convergence and robustness from utilizing Anderson acceleration as an alternative to
Picard iteration.
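A compact sketch of undamped Anderson acceleration for a fixed-point map g (setting the memory depth m to zero recovers plain Picard iteration; damping and regularization refinements are omitted):

import numpy as np

def anderson(g, x0, m=3, tol=1e-10, max_iter=100):
    # Undamped Anderson acceleration of the fixed-point iteration x <- g(x).
    x = np.asarray(x0, dtype=float)
    G, F = [], []                      # histories of g(x) values and residuals
    for _ in range(max_iter):
        fx = g(x) - x                  # fixed-point residual
        if np.linalg.norm(fx) < tol:
            return x
        G.append(x + fx); F.append(fx)
        if len(F) > m + 1:
            G.pop(0); F.pop(0)
        if len(F) == 1:
            x = G[0]                   # first step is plain Picard
            continue
        dF = np.array([F[i + 1] - F[i] for i in range(len(F) - 1)]).T
        dG = np.array([G[i + 1] - G[i] for i in range(len(G) - 1)]).T
        gamma, *_ = np.linalg.lstsq(dF, fx, rcond=None)
        x = G[-1] - dG @ gamma         # accelerated update
    return x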
Monte Carlo Methods
Thursday, April 23, 2015
10:40 AM
Hermitage C
Chair: Dr. Thomas M. Sutton
157
Uncertainty Underprediction in Coupled Time-Dependent Monte Carlo Simulations with Serpent 2
Ville Valtavirta
VTT Technical Research Centre of Finland, Espoo, Finland
This paper studies the possibility of using the traditional batch-based estimate for statistical uncertainty in uncoupled and coupled transient calculations with the Monte
Carlo code Serpent 2. Such a study is needed due to a fundamental difference in the way neutrons are divided into batches between the criticality source calculations
and these time-dependent calculations, as well as to new batch-to-batch correlations arising from the coupled solution. The uncertainty estimate given by Serpent 2 is compared to the true uncertainty calculated from several independent simulations to obtain the uncertainty underprediction factor. The uncoupled transients were calculated for the Flattop (fast spectrum) and STACY-30 (thermal spectrum) experiments. The results show good agreement between the batch-wise uncertainty estimate and the true uncertainty. The case chosen for the coupled transient is a very short prompt supercritical power peak in a PWR pin cell, disregarding the effect of delayed neutrons. The results show that too small a population size leads to underestimation of the uncertainty by the traditional batch-wise estimate throughout the whole transient, most likely due to undersampling effects. An interesting effect is observed in the time frame of strong coupling between the fission power and temperature solutions: whereas the batch-wise uncertainty prediction is very accurate before this period of strong coupling, it actually overestimates the uncertainty in the fission power during this period. Finally, the relation between the relative standard deviation in fission power and the fuel behavior
solution was studied during the coupled transient. The main results from this part indicate, as one could expect, that the relative standard deviation in the neutronics
solution was not transmitted into the fuel behavior solution in a simple manner, but through the corresponding physics governing the fuel behavior.
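The underprediction factor studied here can be computed as the ratio of the replica-based ("true") standard deviation to the single-run batch-wise estimate, e.g.:

import numpy as np

def underprediction_factor(batch_means, replica_means):
    # Ratio of the 'true' standard deviation of the mean, estimated from
    # independent replica simulations, to the batch-wise estimate from a
    # single run; values above 1 indicate underprediction.
    n = len(batch_means)
    sigma_batch = np.std(batch_means, ddof=1) / np.sqrt(n)
    sigma_true = np.std(replica_means, ddof=1)
    return sigma_true / sigma_batch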
159
Adjoint Sensitivity and Uncertainty Analysis in Monte Carlo Wielandt Calculations
Sung Hoon Choi and Hyung Jin Shim
Department of Nuclear Engineering, Seoul National University, Seoul, Korea
The Monte Carlo (MC) adjoint-weighted perturbation (AWP) method for nuclear data sensitivity and uncertainty (S/U) analysis requires an enormous amount of memory to store history-wise tallied outputs during convergence of the adjoint flux. Because of this memory problem, the number of histories per cycle in conventional MC S/U analysis cannot exceed a certain limit. In order to reduce the memory consumption to a negligible amount, we present a sensitivity estimation method for MC Wielandt eigenvalue calculations. The new method has been implemented in a Seoul National University MC code, McCARD, and its effectiveness is demonstrated via S/U analyses for Godiva.
276
Stabilization Technique of Modified Power Iteration Method for Monte Carlo Simulation of Neutron Transport
Eigenvalue Problem
ZHANG Peng, Hyunsuk Lee and Deokjung Lee
Ulsan National Institute of Science and Technology, Ulsan 689-798, Republic of Korea
The power iteration method is widely used in nuclear criticality calculations to obtain the dominant eigenvalue and the corresponding eigenfunction, with both deterministic and Monte Carlo methods. In recent years a modified power iteration method has been proposed to obtain the first two eigenvalues and eigenfunctions; it accelerates the convergence of the first eigenpair by subtracting the second mode from the first. Deterministic tests have shown that the convergence rate of the first eigenfunction then becomes |k3|/k1 instead of |k2|/k1. However, the Monte Carlo implementation of the modified power iteration method presents some difficulties. One is that both positive and negative weights must be maintained, and some weight cancellation scheme must be applied, to obtain the second eigenfunction. Another is that in some cases the Monte Carlo implementation may collapse due to a stability problem. We attribute this stability problem to statistical noise combined with the high dominance ratio of the system, and we propose techniques to deal with it.
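A deflation-flavored, symmetric-matrix sketch of the two-mode idea (the published scheme determines the mode-separation combination differently, and its Monte Carlo form additionally needs signed weights and cancellation):

import numpy as np

def two_mode_power_iteration(A, tol=1e-10, max_iter=5000):
    # Deflation-style sketch (symmetric A assumed): iterate two vectors and
    # strip each mode's contamination from the other every step, so the
    # fundamental mode converges at a rate ~|k3|/k1 rather than |k2|/k1.
    rng = np.random.default_rng(1)
    x1 = rng.random(A.shape[0]); x1 /= np.linalg.norm(x1)
    x2 = rng.random(A.shape[0]) - 0.5; x2 /= np.linalg.norm(x2)
    k1_old = 0.0
    for _ in range(max_iter):
        y1, y2 = A @ x1, A @ x2
        y2 -= (x1 @ y2) * x1          # strip the fundamental out of mode 2
        y1 -= (x2 @ y1) * x2          # strip mode 2 out of the fundamental
        k1, k2 = np.linalg.norm(y1), np.linalg.norm(y2)
        x1, x2 = y1 / k1, y2 / k2
        if abs(k1 - k1_old) < tol:
            break
        k1_old = k1
    return k1, k2, x1, x2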
Accelerators and Subcritical Systems
Thursday, April 23, 2015
10:40 AM
Hermitage D
Chair: Dr. Alberto Talamo
46
Benefits of Multi-Level Rebalancing for Close-to-Critical Subcritical System Computations
René van Geemert
AREVA GmbH, Erlangen, Germany
A recent development in AREVA’s core simulation code ARTEMIS™ has been the harmonisation of subcritical system numerics with the pre-existing multi-level
rebalancing concept in the embedded flux solver module. Within this framework, particularly favorable properties have been achieved for computations featuring very
small negative reactivities. Such systems are close to critical and hence numerically not far from singular, which means that their unaccelerated iterative solution would require an enormous number of steps to reach sufficient convergence. However, one can delegate these slightly subcritical iterations to computationally cheaper multi-hierarchical rebalancing levels. By doing this, the convergence behavior (and thereby the computational effort) becomes favorably invariant: the total number of required multi-level cycles is practically independent of the negative reactivity level. This enables a very fast iterative solution of the subcritical system equations, with run times that are practically independent of the subcritical system’s proximity to criticality. This paper provides an extensive analysis and verification of the
approach.
65
Reflector Effects on the Kinetic Response in Subcritical Systems
S. Dulla, M. Nervo, P. Ravetto (1), P. Saracco (2), G. Lomonaco (3), and M. Carta (4)
1) Politecnico di Torino, Torino, Italy, 2) INFN – National Institute for Nuclear Physics, Genova, Italy, 3) GeNERG-DIME/TEC, University of Genoa, Genova, Italy, 4) ENEA C.R. CASACCIA, S. Maria di Galeria, Italy
The interpretation of kinetic experiments for source-driven systems is often based on the relationship between the prompt time response and the multiplicativity of the
system. The presence of a reflector may modify the physical features of such response. For systems in which the neutron histories are dominated by the presence of
the reflector, it becomes difficult to determine the subcriticality value through the observation of the flux response. In this work the characteristics of this phenomenon
are analysed, starting from a simple diffusion model that makes it easy to gain physical insight. The modification of the time constant is interpreted in terms of
modifications to the effective mean prompt neutron generation time, which is increased by the presence of the reflector. Afterwards, more realistic evaluations are
carried out by using Monte Carlo simulations of pulsed experiments in source-driven systems. The results confirm the conclusions that may be drawn from the
analysis of the simple cases.
234
Evaluation of the Pool Critical Assembly Benchmark Using the TRANSFX Nuclear Analysis Software
V. G. Smith and B. P. Richardson
TransWare Enterprises Inc., Sycamore, IL
The Pool Critical Assembly (PCA) Benchmark identified in U. S. Nuclear Regulatory Commission Regulatory Guide 1.190 has been evaluated using the TRANSFX
Nuclear Analysis Software. The PCA benchmark is an experimental benchmark based on measurements taken from a material test reactor facility with a core
composed of curved-plate fuel elements. This benchmark is intended to qualify methodologies to perform reactor pressure vessel fast neutron fluence calculations.
TRANSFX uses a deterministic, three-dimensional, multi-group nuclear particle transport theory code (TRANSRAD) that performs neutron and gamma flux
calculations. TRANSFX couples the nuclear transport method with a general geometry modeling capability to provide a flexible and accurate tool for determining
fluxes for any light water reactor design. TRANSFX supports the method of characteristics solution technique, a three-dimensional ray-tracing method based on
combinatorial geometry, a fixed source iterative solution with anisotropic scattering, thermal-group upscattering treatments, and an ENDF/B-VI based nuclear cross-section library. The results of evaluating this benchmark using homogenized and discrete fuel element models with the TRANSRAD code are presented in this paper. The TRANSRAD homogenized fuel element model’s calculated-to-measured ratio is 1.01 with a standard deviation of 0.06. The TRANSRAD discretized fuel element model’s calculated-to-measured ratio is 1.01 with a standard deviation of 0.05. Both of these comparisons are within the uncertainty associated with the measured
values, and well within the 20% uncertainty allowed by Reg. Guide 1.190, demonstrating that the TRANSFX Software is capable of accurately determining RPV
neutron fluence utilizing either homogenized or discrete fuel geometries.
Reactor Physics
Thursday, April 23, 2015
10:40 AM
Hermitage A-B
Chairs: Dr. Aarno Isotalo, Dr. Shane Stimpson
93
Interior Penalty Discontinuous Galerkin Method for a Homogenized Diffusion Equation in Reactor Simulations
S. González-Pintor (1), A. Vidal-Ferràndiz (2), D. Ginestar (3), C. Demazière (1), M. Asadzadeh (4), and G. Verdú (2)
1) Division of Nuclear Engineering, Department of Applied Physics, Chalmers University of Technology, Gothenburg, Sweden, 2) Instituto de Seguridad Industrial, Radiofísica y Medioambiental,
Universitat Politècnica de València, València, Spain, 3) Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, València, Spain, 4) Department of Mathematics,
Chalmers University of Technology, SE-412 96 Gothenburg, Sweden
Full core reactor simulations are generally based on an (at least) two-scale process: a fine-mesh lattice calculation is performed first and, after a homogenization process, the homogenized data are used for a coarse-mesh full core reactor calculation. The discontinuity factors can be considered part of these homogenized data, and are widely used to minimize the error due to spatial homogenization of the cross sections. Thus, the implementation of discontinuity factors in Finite Element Methods is necessary in order to use these methods for homogenized core calculations. Here we propose a variation of an Interior Penalty Discontinuous Galerkin Finite Element method that allows forcing the discontinuity of the neutron flux prescribed by the discontinuity factors. The proposed method is tested by solving different one-dimensional benchmark problems, showing that the discontinuity factor technique can be successfully introduced in the Interior Penalty
Discontinuous Galerkin Finite Element Method.
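One way to summarize the modification (a sketch, not necessarily the authors' exact bilinear form) is to replace the usual interior-penalty jump by a discontinuity-factor-weighted jump, so that the converged solution satisfies the interface condition f^+ φ^+ = f^- φ^-:

% Sketch of a modified interior-penalty bilinear form: the standard jump
% is replaced everywhere by the discontinuity-factor weighted jump.
a_h(\phi, v) = \sum_K \int_K D\,\nabla\phi\cdot\nabla v\,dx
             - \sum_e \int_e \{\!\{D\,\partial_n\phi\}\!\}\,[\![v]\!]_f\,ds
             - \sum_e \int_e [\![\phi]\!]_f\,\{\!\{D\,\partial_n v\}\!\}\,ds
             + \sum_e \int_e \frac{\sigma}{h_e}\,[\![\phi]\!]_f\,[\![v]\!]_f\,ds,
\qquad [\![\phi]\!]_f := f^+\phi^+ - f^-\phi^-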
15
Neutronic Feasibility Study for Conceptual BWR Mo Fuel Cladding Assembly Designs
Ren-Tai Chiang (1), Bo-Ching Cheng (2)
1) Energy Engineering Service, San Jose, CA, 2) Electric Power Research Institute 3420 Hillview Avenue, Palo Alto, CA
Conceptual natural-Mo and depleted-Mo fuel cladding assemblies are studied to evaluate their benefits and/or penalties for BWR fuel utilization relative to the corresponding conventional Zr fuel cladding assembly. A Gd-rod-reduced and/or enrichment-increased fuel assembly design with natural-Mo cladding appears more attractive than the depleted-Mo design, because the natural-Mo assembly's reactivity is much improved by reducing the number of Gd fuel rods and/or increasing the lattice enrichment, and because the cost of depleting Mo is very high. It should be possible to design a 24-month fuel cycle core using natural-Mo-clad assemblies with fewer Gd rods, or with slightly higher lattice-averaged enrichment and the same number of Gd rods, since the neutron multiplication factor of either design can be close to that of the corresponding Zr-clad assembly. Although a 24-month fuel cycle core could also be designed with depleted-Mo-clad assemblies, they are less desirable because of the high depletion cost. The 10-mil Mo cladding design that preserves the Zr cladding inner diameter appears to be the better alternative, since the additional fuel in the design that preserves the Zr cladding outer diameter incurs additional fuel cost without significantly improving assembly reactivity. Similar results are obtained for analogous conceptual PWR Mo fuel cladding assembly designs.
47
Implementation of CRAM Depletion Solver with External Feed and Improved Accuracy into ORIGEN
A. E. Isotalo (1,2), W. A. Wieselquist (1)
1) Oak Ridge National Laboratory, Oak Ridge, TN, USA, 2) Aalto University, AALTO, Finland
The ORIGEN module of SCALE has been updated with a Chebyshev Rational Approximation Method (CRAM) depletion solver. To allow the new solver to model
external feed, a method has been developed for including in CRAM a source term with polynomial time dependence. It is also shown that while the accuracy of CRAM
is only weakly affected by step lengths, it improves greatly over subsequent steps with equal lengths and coefficient matrices. An internal substepping method that
allows this behavior to be efficiently exploited to improve the accuracy of CRAM is presented. In addition to being able to handle time-dependent feed rates and adjoint
calculations, the new CRAM solver is generally faster and, even without substeps, more accurate than the original depletion solver of ORIGEN.
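For orientation, a CRAM step applies the partial-fraction form of the rational approximation, one sparse LU solve per pole; a sketch assuming the published coefficient sets are supplied by the caller:

import numpy as np
from scipy.sparse import csc_matrix, identity
from scipy.sparse.linalg import splu

def cram_step(A, n0, dt, alpha0, alpha, theta):
    # One depletion step via the partial-fraction form of CRAM:
    #   n(dt) ~= alpha0 * n0 + 2 * Re sum_k alpha_k (A dt - theta_k I)^{-1} n0
    # alpha0, alpha, theta are the published CRAM coefficients (for example
    # the order-16 set), supplied by the caller; they are not reproduced here.
    At = csc_matrix(A, dtype=complex) * dt
    I = identity(At.shape[0], dtype=complex, format="csc")
    n = alpha0 * n0.astype(complex)
    for a_k, th_k in zip(alpha, theta):
        n += 2.0 * a_k * splu((At - th_k * I).tocsc()).solve(n0.astype(complex))
    return n.real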
Computational Geometries
Thursday, April 23, 2015
10:40 AM
Two Rivers
Chair: Mr. Rajeev Jain
135
OpenCG: A Combinatorial Geometry Modeling Tool for Data Processing and Code Verification
William Boyd, Benoit Forget, and Kord Smith
Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, Cambridge, MA
Combinatorial Geometry (CG) is one formulation for computational geometric models that is commonly used in many neutron transport simulation codes. The use of
CG is advantageous since it permits an accurate yet concise representation of complex reactor models with a nominal memory footprint. OpenCG is a Python
package for combinatorial geometry models being developed at the Massachusetts Institute of Technology. The goal for OpenCG is to provide an easy-to-use,
physics agnostic library to build geometry models of nuclear reactor cores. OpenCG is a free, open source library with an easy-to-use Python interface to provide
nuclear engineers a single, powerful framework for modeling complex reactor geometries. Compatibility modules for commonly used nuclear reactor physics codes,
such as OpenMC, OpenMOC, and Serpent, are being concurrently developed for rapid and easy exportation of an OpenCG model directly into the relevant input file
format for each code of interest. The present work describes OpenCG and some of the novel and useful algorithms included with the software package.
307
Simplifying Workflow for Reactor Assembly and Full-Core Modeling
Rajeev Jain and Vijay Mahadevan (1), Robert O’Bara (2)
(1) Argonne National Laboratory Argonne, IL 60439 (2) Kitware Inc., Albany, NY
The evolution of efficient scalable solvers and multi-physics codes in the past decade has considerably increased computational needs for high-fidelity simulations of nuclear reactors through the resolution of more heterogeneity in the physical models. In this paper, we present the Reactor Geometry (and mesh) Generator (RGG) toolkit, which is part of the MeshKit library developed at Argonne National Laboratory (ANL). Extensions of the RGG toolkit have been used to
create PWR, ABTR, VHTR, MONJU and several other types of reactor geometry and mesh models for consumption in state-of-art physics solvers. RGG uses a lattice
hierarchy-based approach to create these reactor core models and has been designed to scale even to large models with up to 1 billion hexahedral elements. Details
on the RGG methodology and description of the parallel aspects of the tool are provided. Several model full core reactor problems (PWR, ABTR) are also presented
along with openly accessible mesh files for reproducibility. A GUI for RGG tools (AssyGen and CoreGen) is available currently in the open-source domain for all
popular operating systems and can significantly simplify the complexity in mesh generation for nuclear reactors.
184
Theoretical Analysis of Track Generation in 3D Method of Characteristics
Samuel Shaner, Geoffrey Gunow, Benoit Forget, and Kord Smith
Department of Nuclear Science & Engineering, Massachusetts Institute of Technology, Cambridge, MA
Generating the tracks to use in a 3D Method of Characteristics (MOC) simulation is not a trivial task. The method used to generate tracks has significant implications for the memory and compute requirements of a problem, and current track generation methods have shortcomings. In this study, we provide a detailed description
and analysis of the current state-of-the-art method for generating tracks for direct 3D MOC, the Modular Ray Tracing (MRT) method. Additionally, a new global method
for generating tracks is presented that is generalizable to many geometries, domain decomposition schemes, and quadrature sets. The main difference between the
global and modular track generation approaches is that the global approach does not require any knowledge of the underlying geometry discretization and is therefore
more flexible in domain decomposing the geometry. Some considerations with memory requirements and general applicability that we and others have found are
discussed.
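As background, the classic 2D cyclic laydown that modular schemes extend to 3D rounds the number of boundary crossings to integers and then corrects the angle and spacing so tracks tile the domain exactly; a sketch of that textbook step (not the authors' new global method):

import numpy as np

def cyclic_track_layout(width, height, n_azim, spacing):
    # For each requested azimuthal angle in the first quadrant, round the
    # number of x- and y-boundary crossings to integers, then correct the
    # angle and spacing so tracks tile the rectangle exactly and link up
    # cyclically at the boundaries.  Other quadrants follow by reflection.
    layouts = []
    for i in range(n_azim // 4):
        phi = np.pi / n_azim * (0.5 + i)                   # desired angle
        nx = int(width * abs(np.sin(phi)) / spacing) + 1   # x-axis crossings
        ny = int(height * abs(np.cos(phi)) / spacing) + 1  # y-axis crossings
        phi_eff = np.arctan2(height * nx, width * ny)      # corrected angle
        s_eff = (width / nx) * np.sin(phi_eff)             # corrected spacing
        layouts.append((phi_eff, s_eff, nx, ny))
    return layouts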
Materials Science and Nuclear Data
Thursday, April 23, 2015
10:40 AM
Belmont
Chairs: Dr. Jeffrey J. Powers, Dr. Markus H.A. Piro
203
VPSC Implementation in Bison-CASL Code for Modeling Large Deformation Problems
Wenfeng Liu(1), Robert Montgomery(2), Carlos Tomé and Chris Stanek(3), Jason Hales(4)
(1) ANATECH Corp., San Diego, CA 92121, (2) Pacific Northwest National Laboratory, (3) Los Alamos National Laboratory, (4) Idaho National Laboratory
Fuel clad ballooning is an important issue since it can adversely impact the core coolable geometry in a Loss of Coolant Accident (LOCA). This paper describes a
multi-scale modeling approach to compute the large-strain deformation behavior of zirconium alloy cladding during LOCA conditions. A Visco-Plastic Self-Consistent
(VPSC) material model based on averaging the deformation in single crystals with different orientations to represent polycrystalline material behavior is used as the
constitutive law for modeling the deformation of α-phase zirconium. The VPSC material model is implemented in the finite element fuel code BISON-CASL to model the clad deformation. A benchmark test case was prepared and compared to the standalone VPSC code to verify the implementation of the material model. A test case
representing the geometry of a segment of cladding tube under constant rod internal pressure and imposed temperature ramp has been tested using BISON-CASL.
Results have demonstrated the modeling of large localized deformation (clad ballooning) due to temperature variation in a postulated LOCA condition.
127
Research on Resonance Parameters Evaluation based on R-Matrix Limited Formula and Code Development
Jiankai Yu, Wanlin Li, Ganglin Yu and Kan Wang
Department of Engineering Physics, Tsinghua University, Beijing, China
Resonance parameters in the resolved resonance range (RRR) are among the most important nuclear data for core design and physics analysis of thorium-fueled nuclear applications, whose simulation accuracy depends on the accuracy and uncertainty of those parameters. To meet the demand in China for evaluated nuclear data, especially RRR resonance parameters for the nuclides of the thorium-uranium cycle, the REAL team in the Department of Engineering Physics at Tsinghua University has investigated the Levenberg-Marquardt nonlinear least-squares fitting method. A module named RRPE has been developed to evaluate resonance parameters in the R-Matrix Limited format and has been embedded in RXSP, a nuclear reactor cross-section processing code also developed by the REAL team. Through evaluation of the RRR resonance parameters of Th-232, the accuracy and efficiency of RRPE are validated; the module can be used to produce evaluated resonance parameters for nuclear data processing and, in turn, for high-fidelity Monte Carlo simulation of thorium-fueled applications.
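The fitting strategy can be illustrated with SciPy's Levenberg-Marquardt driver on a single-level Breit-Wigner stand-in for the full R-Matrix Limited formalism (illustrative parameterization, not the RRPE code):

import numpy as np
from scipy.optimize import least_squares

def slbw_capture(E, E0, Gn, Gg, sigma0):
    # Single-level Breit-Wigner capture shape: a Lorentzian stand-in for
    # the full R-Matrix Limited formalism that RRPE actually implements.
    Gt = Gn + Gg
    return sigma0 * (Gn * Gg / Gt**2) / (1.0 + ((E - E0) / (Gt / 2.0))**2)

def fit_resonance(E, sigma_meas, p0):
    # Levenberg-Marquardt fit of (E0, Gn, Gg, sigma0) to measured data.
    return least_squares(lambda p: slbw_capture(E, *p) - sigma_meas,
                         p0, method="lm")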
255
Implementation of the Generalized Interaction Data Interface (GIDI) in the Mercury Monte Carlo Code
M. S. McKinley and B. R. Beck
Lawrence Livermore National Laboratory, Livermore, CA
Nuclear data formats that were defined 50 years ago are showing their age. The proposed replacement, Generalized Nuclear Data (GND), along with LLNL’s interface
library, Generalized Interaction Data Interface (GIDI), will replace the current nuclear data and collision package for LLNL’s Mercury Monte Carlo radiation transport
code. GND and GIDI are designed to support any projectile/target combination and more data types than the current formats and libraries. The initial implementation
offers improved accuracy with respect to the evaluated data. Additional features need to be enabled before GIDI can fully replace the current collision package in
Mercury.
MC2015 Program Outline
April 19 - 23, 2015
SUNDAY, April 19
Registration open 1:00 to 5:00 PM at the Conference Registration Desk
Workshops, 8:00 AM - Noon and 1:00 - 5:00 PM:
WS: Geant4 version 10 (Hermitage A-B)
WS: Computational and Mathematical Challenges in Particle Therapy (Belle Meade)
WS: NESTLE 3D Nodal Core Simulator (Evergreen)
WS: Mathematical techniques of neutron fluctuations with applications for reactivity measurements and in…
WS: CAD-based Monte Carlo and Ray Tracing for Radiation Transport (Hermitage C)
WS: New Features in SCALE6.2 (Two Rivers)
WS: Multi-stage, response function radiation transport methods for solving complex problems in real-time
WS: MCNP6
6:00 - 8:00 PM: SUNDAY NIGHT RECEPTION @ Hermitage A-B

MONDAY, April 20
Registration open 7:00 AM to 3:00 PM at the Conference Registration Desk
7:30 - 8:30 AM: Continental Breakfast at Tulip/Grove
8:30 - 11:45 AM: PLENARY SESSION
11:55 AM - 1:30 PM: Lunch at Tulip/Grove
1:30 - 3:10 PM: Parallel sessions - Reactor Physics; Monte Carlo Methods; Deterministic Transport Methods; M&S for Fusion Energy Systems; Transport in Stochastic Media
3:10 - 3:40 PM: Coffee Break
3:40 - 4:45 PM: Parallel sessions - Reactor Physics; Monte Carlo Methods; Deterministic Transport Methods; M&S for Fusion Energy Systems; Mathematical Methods in Nuclear Nonproliferation and Safeguards Applications
5:30 - 7:30 PM: TECHNICAL POSTER SESSION @ Plantation Lobby
7:30 - 9:30 PM: GELBARD SCHOLARSHIP FUNDRAISING EVENT @ Tulip/Grove

TUESDAY, April 21
Registration open 7:00 AM to 3:00 PM at the Conference Registration Desk
7:30 - 8:30 AM: Continental Breakfast at Tulip/Grove
8:30 - 10:10 AM: Parallel sessions - Next Generation Sn Mesh Sweeps; Monte Carlo Methods; Reactor Physics; Computational Thermal Hydraulics and Fluid Dynamics; Mathematical Methods in Nuclear Nonproliferation and Safeguards Applications
10:10 - 10:40 AM: Coffee Break
10:40 - 11:55 AM: Parallel sessions - Next Generation Sn Mesh Sweeps; Monte Carlo Methods; Response Methods for Particle Transport Modeling and Simulation; Computational Methods using HPC; Mathematical Methods in Nuclear Nonproliferation and Safeguards Applications
11:55 AM - 1:30 PM: Lunch at Tulip/Grove
1:30 - 3:10 PM: Parallel sessions - Deterministic Transport Methods; Next Generation Parallelism for Monte Carlo; Validation, Verification, and UQ; Monte Carlo with CAD and Complex Geometries; Whole-Core Modeling and Simulation
3:10 - 3:40 PM: Coffee Break
3:40 - 4:45 PM: Parallel sessions - Deterministic Transport Methods; Next Generation Parallelism for Monte Carlo; Validation, Verification, and UQ; Monte Carlo with CAD and Complex Geometries; Theoretical Topics in Neutron Transport Theory
5:30 - 7:30 PM: MONTE CARLO CODE POSTER SESSION @ Plantation Lobby

WEDNESDAY, April 22
Registration open 7:00 AM to 3:00 PM at the Conference Registration Desk
7:30 - 8:30 AM: Continental Breakfast at Tulip/Grove
8:30 - 10:10 AM: Parallel sessions - Monte Carlo Criticality Calculations with Thermal-Hydraulic Feedback; Hybrid Monte Carlo/Deterministic Transport; Improved Multigroup Cross Section Generation; HPC and Algorithms for Advanced Architectures; Computational Medical Physics
10:10 - 10:40 AM: Coffee Break
10:40 - 11:55 AM: Parallel sessions - Monte Carlo Criticality Calculations with Thermal-Hydraulic Feedback; Next Generation Parallelism for Monte Carlo; Improved Multigroup Cross Section Generation; Advanced Angular Discretizations for the Transport Equation; Computational Medical Physics
11:55 AM - 1:30 PM: Lunch at Tulip/Grove
1:30 - 3:10 PM: Parallel sessions - Sensitivity and Uncertainty Analysis; Monte Carlo Methods; Reactor Physics; Multiphysics and Transient Analysis; Radiation Transport and Shielding Methods
3:10 - 3:40 PM: Coffee Break
3:40 - 4:45 PM: Parallel sessions - Sensitivity and Uncertainty Analysis; Monte Carlo Methods; Reactor Physics; Multiphysics and Transient Analysis; Radiation Detection
6:00 - 10:00 PM: BANQUET @ Tulip/Grove

THURSDAY, April 23
Registration open 7:00 AM to 9:00 AM at the Conference Registration Desk
7:30 - 8:30 AM: Continental Breakfast at Tulip/Grove
8:30 - 10:10 AM: Parallel sessions - Reactor Physics (Hermitage A-B); Monte Carlo Methods (Hermitage C); Deterministic Transport Methods (Hermitage D); Multiphysics and Transient Analysis (Two Rivers); Advanced Solvers in Nuclear Technology (Belmont)
10:10 - 10:40 AM: Coffee Break
10:40 - 11:55 AM: Parallel sessions - Reactor Physics (Hermitage A-B); Monte Carlo Methods (Hermitage C); Accelerators and Subcritical Systems (Hermitage D); Computational Geometries (Two Rivers); Materials Science and Nuclear Data (Belmont)
11:55 AM - 1:30 PM: Lunch at Tulip/Grove
1:30 - 5:30 PM: Workshops:
WS: ADVANTG Tutorial: Automated Variance Reduction for MCNP (Belmont & Ante)
WS: Differential Equations of Reactor Physics and Neutron Transport Theory (Belle Meade)
WS: PyNE: The Nuclear Engineering Toolkit (Two Rivers)
WS: Attila4MC - CAD integration, automated deterministic variance reduction, and GUI setup for MCNP (Oaklands)
WS: SCALE6.2 Developers Workshop (Evergreen)