New mathematical model aids Big Bang supercomputer research
- Date: January 6, 2010
- Source: Southern Methodist University
- Summary: Astrophysicists using supercomputers to simulate the Big Bang have a new mathematical tool to model the early universe. Researchers have built a computer model of the "Dark Ages." The model -- successfully tested on two supercomputers -- tightly couples the physical processes present during cosmic reionization. The resulting simulations of various scenarios are highly accurate, numerically stable and computationally scalable to the largest supercomputers.
Scientists have made many discoveries about the origins of our 13-billion-year-old universe. But many scientific mysteries remain. What exactly happened during the Big Bang, when rapidly evolving physical processes set the stage for gases to form stars, planets and galaxies? Now astrophysicists using supercomputers to simulate the Big Bang have a new mathematical tool to unravel those mysteries, says Daniel R. Reynolds, assistant professor of mathematics at SMU.
Reynolds collaborated with astrophysicists at the University of California at San Diego as part of a National Science Foundation project to simulate cosmic reionization, the time from 380,000 years to 400 million years after the universe was born.
Together the scientists built a computer model of events during the "Dark Ages" when the first stars emitted radiation that altered the surrounding matter, enabling light to pass through. The team tested its model on two of the largest existing NSF supercomputers, "Ranger" at the University of Texas at Austin and "Kraken" at the University of Tennessee.
The new mathematical model tightly couples a myriad of physical processes present during cosmic reionization, such as gas motion, radiation transport, chemical kinetics and gravitational acceleration due to star clustering and dark matter dynamics, Reynolds says.
The key characteristic that differentiates the model from competing work is the very tight coupling the researchers enforce between the different physical processes.
"By forcing the computational methods to tightly bind these processes together, our new model allows us to generate simulations that are highly accurate, numerically stable and computationally scalable to the largest supercomputers available," Reynolds says.
They presented their research at a Texas Cosmology Network Meeting at UT in late October. Reynolds' mathematical research also was published as "Self-Consistent Solution of Cosmological Radiation-Hydrodynamics and Chemical Ionization" in the October issue of the "Journal of Computational Physics."
Simulation models typically consist of a complex bundle of mathematical equations representing physical processes. The equations are integrated to reflect the interaction of those processes, and only supercomputers can solve them all simultaneously. Scientific intuition and creativity come into play in developing the base model, choosing the equations and parameters that best capture the physics, Reynolds says. Variables can then be altered to describe different scenarios that might have occurred. The objective is a simulation whose results most closely resemble telescope observations, predicting a universe that looks like the one we actually see. If that happens, scientists have identified the set of physical processes at work as the newborn universe evolved from one instant to the next.
Physical processes include the heating of various gases, gravity, the conservation of mass, momentum and energy, the expansion of the universe, the transport of radiation, and the chemical ionization of species such as hydrogen and helium, the primary elements present at the beginning of the universe. An additional equation running in the background models the dynamics of dark matter -- the majority of the matter in the universe -- which gives rise to gravity and is credited with helping the universe form stars, planets and galaxies.
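As a rough illustration of how just two of these processes couple, the following stand-alone sketch evolves the ionized fraction and temperature of a single parcel of hydrogen gas under photoionization, recombination, heating and cooling. The density, photoionization rate, heating energy and cooling law are generic textbook-style approximations chosen for illustration; they are not the paper's model, which also includes hydrodynamics, radiation transport and dark matter.

```python
# A hedged toy model of coupled chemical ionization and gas heating for a
# single parcel of hydrogen. All rate values below are assumptions chosen
# for illustration, not taken from the paper.
from scipy.integrate import solve_ivp

n_H    = 1e-3     # hydrogen number density [cm^-3], assumed
Gamma  = 1e-12    # photoionization rate [s^-1], assumed
E_heat = 6.3e-12  # mean heating energy per ionization [erg], ~4 eV, assumed
k_B    = 1.38e-16 # Boltzmann constant [erg/K]

def alpha_B(T):
    # Approximate case-B recombination coefficient [cm^3 s^-1].
    return 2.59e-13 * (T / 1e4) ** -0.7

def rhs(t, y):
    x, T = y
    rec  = alpha_B(T) * n_H * x**2        # recombination rate per H atom
    dx   = Gamma * (1.0 - x) - rec        # ionization balance
    heat = Gamma * (1.0 - x) * E_heat     # photoheating [erg/s per H atom]
    cool = rec * 0.75 * k_B * T           # crude recombination cooling, assumed
    # Crude energy-to-temperature conversion, ignoring the dx/dt term.
    dT = (heat - cool) * 2.0 / (3.0 * k_B * (1.0 + x))
    return [dx, dT]

# Start nearly neutral and cold; integrate for ~30 million years.
sol = solve_ivp(rhs, [0.0, 1e15], [1e-4, 1e2], method="BDF", rtol=1e-8)
print(sol.y[0, -1], sol.y[1, -1])  # final ionized fraction and temperature
```

Even this two-equation toy is stiff -- the ionization balance equilibrates far faster than the gas heats -- which is why an implicit (BDF) integrator is used.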
"Supercomputers are so big, they hold so much data, you can build models that work with many processes at one time," Reynolds says. "A lot of these processes behave nonlinearly. When they are put together, they inhibit each other, feed off each other, so you end up with many different processes when they are put together."
A direct consequence of the tight coupling the researchers enforce is that the resulting system of equations is much more complex than the systems other models must solve, Reynolds says.
"This paper describes both how we form the coupled model, as well as the mathematical methods that enable us to solve the systems of equations that result. These include methods that accurately track the different time scales of each process, which often occur at rates that vary by orders of magnitude," he says. "However, perhaps the most important contribution of this paper is our description of how we pose the complex interaction of different models as a nonlinear problem with potentially billions of equations and unknowns, and solve that problem using new algorithms designed for next-generation supercomputers. We conclude by demonstrating that the new model lives up to the ideal, providing an approach that allows high accuracy, stability and scalability on a suite of difficult test problems."
Only recently have mathematical algorithms been invented to solve basic problems -- like the diffusion of heat -- using resources as large as those available on modern supercomputers, Reynolds says. Simple analytical solutions to many problems from mathematical physics have existed for hundreds of years. However, those analytical solutions only work when scientists simplify the problem in one way or another. For example, he says, they may approximate the shape of a planet as a sphere instead of an ellipsoid, assume that ocean water is incompressible, which only works for very shallow water, or assume the Earth is homogeneous instead of composed of widely differing layers of rock.
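For readers curious what a numerical treatment of even the simple "diffusion of heat" problem looks like, here is a standard textbook scheme, unrelated to the paper's specific algorithms: one implicit (backward Euler) step of the 1-D heat equation reduces to a sparse, tridiagonal linear solve.

```python
# A generic textbook scheme for 1-D heat diffusion u_t = k * u_xx,
# advanced with implicit (backward Euler) steps; not code from the paper.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N, k, dt = 200, 1.0, 1e-4
dx = 1.0 / (N - 1)
r = k * dt / dx**2

# Each step solves (I - r * L) u_new = u_old, with L the standard
# second-difference operator and zero boundary temperatures implied.
main = (1.0 + 2.0 * r) * np.ones(N)
off = -r * np.ones(N - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")

u = np.sin(np.pi * np.linspace(0.0, 1.0, N))  # initial temperature profile
for _ in range(100):                          # advance 100 implicit steps
    u = spla.spsolve(A, u)
print(u.max())  # the peak temperature decays smoothly, as expected
```

The implicit formulation stays stable no matter how large the time step, at the price of solving a linear system each step; that trade-off is the seed of the much harder coupled nonlinear solves described above.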
"Scientists have been able to approximate a great many physical processes in such idealized situations. But the true frontier nowadays is to let go of these simplifying approximations and treat the problems as they really are, by modeling all of the geometric structure and the in-homogeneity," Reynolds says. "To do that, you need to solve harder equations with lots of data, which is ideally suited to using supercomputers. The numerical methods that can allow us to use larger and larger computers have only just come out. The problems are getting more challenging and harder to solve, but the numerical methods are reaching greater capability, so you can really start moving them forward. These new computers make everything a new frontier."
Besides Reynolds, other researchers were John C. Hayes, Lawrence Livermore National Laboratory, Livermore, Calif.; Pascal Paschos, Center for Astrophysics and Space Sciences, University of California at San Diego, La Jolla, Calif.; and Michael L. Norman, Center for Astrophysics and Space Sciences, and physics department, the University of California at San Diego, La Jolla.
Story Source:
Materials provided by Southern Methodist University. Original written by Margaret Allen. Note: Content may be edited for style and length.