Researchers Achieve One Teraflop Performance With Supercomputer Simulation Of Magnetism
- Date:
- November 10, 1998
- Source:
- Lawrence Berkeley National Laboratory
- Summary:
- A team of scientists from two national laboratories reached a supercomputing milestone this weekend, getting their simulation of metallic magnetism to run at 1.002 Teraflops -- more than one trillion calculations per second.
BERKELEY, CA -- A team of scientists from two national laboratories reached a supercomputing milestone this weekend, getting their simulation of metallic magnetism to run at 1.002 Teraflops -- more than one trillion calculations per second.
The achievement, reached using a 1,480-processor Cray T3E supercomputer at the manufacturer's facility in Minnesota, caps an already remarkable scaling up of the code to run on increasingly powerful massively parallel supercomputers. Over the summer, the team of scientists at Oak Ridge National Laboratory working with the National Energy Research Scientific Computing Center (NERSC) at the Lawrence Berkeley National Laboratory performed a 1,024-atom first-principles simulation of metallic magnetism in iron which ran at 657 Gigaflops (billions of calculations per second) on a 1,024-processor Cray/SGI T3E supercomputer.
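Taken at face value, those aggregate figures translate into an average of several hundred million calculations per second per processor. A back-of-the-envelope check, using only the numbers quoted above (illustrative arithmetic, not part of the team's benchmark methodology):

```python
# Per-processor averages implied by the aggregate speeds quoted above.
record_run = 1.002e12 / 1480   # 1.002 Teraflops on 1,480 processors
summer_run = 657e9 / 1024      # 657 Gigaflops on 1,024 processors

print(f"Record run: ~{record_run / 1e6:.0f} Mflops per processor")
print(f"Summer run: ~{summer_run / 1e6:.0f} Mflops per processor")
```

Both runs work out to roughly 640-680 Mflops per processor, which is why the larger processor count translated almost directly into the higher aggregate speed.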
This success made them finalists for the Gordon Bell Prize, awarded annually to honor the best achievement in high-performance computing. The team, which also includes collaborators at the Pittsburgh Supercomputing Center and the University of Bristol (UK), was nominated for its parallel computer simulation of metallic magnetism.
Funded as one of the U.S. Department of Energy's Grand Challenges, the group developed the computer code to provide a better microscopic understanding of metallic magnetism, which has applications in fields ranging from computer data storage to power generation and utilization.

Presented at SC98, the annual conference on high-performance computing and networking, the Gordon Bell Prize recognizes the best accomplishment in high-performance computing. The Oak Ridge-NERSC group was nominated in the category for highest computer speed using a real-world application. The winner of this year's prize will be announced during the conference on Thursday, Nov. 12, in Orlando, Fla.
Although parallel supercomputers are the world’s fastest computers -- capable of performing hundreds of billions of calculations per second -- realizing their potential often requires writing complex computer codes as well as reformulating the scientific approach to problems so that the codes scale up efficiently on these types of machines.
In developing this code for parallel computers, the researchers were forced to rethink their formulation of the basic physical phenomena. The code was originally developed with Intel Paragon machines at ORNL's Center for Computational Science (CCS) in mind and has exhibited linear scaling up to 1,024 processors on an Intel XP/S-150.
"One of the goals of this project is to address criticalmaterials problems on the microstructural scale to betterunderstand the properties of real materials. A major focus ofour research is to establish the relationship between technicalmagnetic properties and microstructure based on fundamentalphysical principles," said Malcolm Stocks, a scientist in OakRidge’s Metals and Ceramics Division and leader of the project."The capability to design magnetic materials with specific andwell-defined properties is an essential component of the nation’stechnological future."
In May and June of this year, the research team ran successively larger calculations on a series of bigger and more powerful Cray supercomputers. After the simulation code attained a speed of 276 Gflops on the Cray T3E-900 512-processor supercomputer at NERSC, the group arranged for use of an even faster T3E-1200 at Cray Research Inc. and achieved 329 Gflops. They were then given dedicated time on a T3E-600 1,024-processor machine at the NASA Goddard Space Flight Center which allowed them to perform crucial code development work and testing before the final run at 657 Gflops on a T3E-1200 1,024-processor machine at a U.S. government site.
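As a rough cross-check on how efficiently the code used the hardware, the quoted speeds can be compared with the machines' nominal peak rates. The figures below assume the usual per-processor peaks for these T3E models (900 Mflops for the T3E-900 and 1,200 Mflops for the T3E-1200); the 329 Gflops run is left out because its processor count is not stated above. This is a sketch for orientation only, not the team's own analysis:

```python
# Rough fraction-of-peak estimates from the speeds quoted in the article.
# Per-processor peak rates are assumed nominal values for each Cray T3E model.
runs = [
    ("T3E-900, 512 processors (NERSC)",              276e9,  512,  900e6),
    ("T3E-1200, 1,024 processors (657 Gflops run)",  657e9, 1024, 1200e6),
]

for name, achieved, procs, peak_per_proc in runs:
    fraction = achieved / (procs * peak_per_proc)
    print(f"{name}: about {fraction:.0%} of nominal peak")
```

If the assumed peaks are right, the code sustained roughly half to 60 percent of nominal peak even at a thousand processors, consistent with the near-linear scaling described above.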
"These increases in the performance levels demonstrate both thepower and the capabilities of parallel computers -- a code can bescaled up so that it not only runs faster but allows us to studylarger systems and new phenomena that cannot be studied onsmaller machines," said Andrew Canning, a physicist in NERSC’sScientific Computing Group who worked with the Oak Ridge team onthis project.
The Gordon Bell Award work was part of a larger Department of Energy Grand Challenge Project on Materials, Methods, Microstructure and Magnetism between ORNL, Ames Laboratory (Iowa), Brookhaven National Laboratory, NERSC and the Center for Computational Science and the Computer Science and Mathematics Divisions at ORNL.
"As the Department of Energy’s national facility forcomputational science, we see this achievement by the GrandChallenge team as a major breakthrough in high-performancecomputing," said NERSC Division Director Horst Simon. "Unlikeother recently published records, this is a real applicationrunning on an operational production machine and delivering realscientific results. NERSC is proud to have been a partner in thiseffort."
NERSC (http://www.nersc.gov) provides high performance computing services to DOE’s Energy Research programs at national laboratories, universities, and industry. Berkeley Lab (http://www.lbl.gov) conducts unclassified research and is managed by the University of California.

SCIENTIFIC BACKGROUND
Developing a microscopic understanding of metallic magnets has proven to be an abiding scientific challenge. This originates in the itinerant nature of the electrons that give rise to the magnetic moment, which are the same electrons that give rise to metallic cohesion (bonding). It is this dual behavior of the electrons that precludes the use of simple (Heisenberg) models.
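For context, the "simple (Heisenberg) model" mentioned here treats each atom as carrying a rigid spin of fixed length coupled to its neighbors; a standard textbook form, shown for illustration only with assumed notation, is

```latex
% Heisenberg exchange Hamiltonian (standard textbook form, for illustration):
% S_i is the spin (local moment) on atom i; J_ij is the exchange coupling.
H = -\sum_{i \neq j} J_{ij}\, \mathbf{S}_i \cdot \mathbf{S}_j
```

In an itinerant magnet the electrons that carry the moment also provide the bonding, so the moment magnitudes are not fixed input parameters of this kind, which is why such models fall short.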
The performance runs were carried out during the development of a new theory of non-equilibrium states in magnets. The new constrained local moment (CLM) theory places a recent proposal for first-principles Spin Dynamics (SD) from a group at Ames Laboratory on firm theoretical foundations. In SD, non-equilibrium 'local moments' (for example, in magnets above the Curie temperature, or in the presence of an external field) evolve from one time step to the next according to a classical equation of motion. As originally formulated, however, SD had a fundamental problem: the instantaneous magnetization states being evolved were not properly defined within the Local Spin Density Approximation to Density Functional Theory (LSDA), the framework of most modern quantum simulations of materials. (Interestingly, this year's Nobel Prize in Chemistry was awarded to Professor Walter Kohn for originating Density Functional Theory.)
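For orientation, the classical equation of motion mentioned above can be written in a generic precession (Landau-Lifshitz-like) form; this is purely an illustration of what such an equation looks like and is not necessarily the exact form used in the SD work:

```latex
% Schematic, undamped precession equation for the direction e_i of local moment i.
% gamma is a gyromagnetic factor; B_i^eff is the effective field obtained from the
% first-principles energy E of the current moment configuration {e_j}.
% Illustrative form only; the exact equations used in the SD work may differ.
\frac{d\mathbf{e}_i}{dt} = -\gamma\,\mathbf{e}_i \times \mathbf{B}_i^{\mathrm{eff}},
\qquad
\mathbf{B}_i^{\mathrm{eff}} = -\frac{\partial E[\{\mathbf{e}_j\}]}{\partial \mathbf{e}_i}
```

Evaluating the effective field requires the instantaneous moment configuration to be a well-defined state of the underlying electronic-structure theory, which is exactly the gap the CLM theory closes.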
The CLM theory properly formulates SD within constrained density functional theory. Local constraining fields are introduced, the purpose of which is to force the local moments to point in the directions required at a particular time step of SD. A general algorithm for finding the constraining fields has been developed. The existence of CLM states has been demonstrated by performing calculations for large (up to 1,024-atom) unit cell disordered local moment models of iron above its Curie temperature. In this model the magnetic moments associated with individual Fe atoms are constrained to point in a set of orientations that are chosen using a random number generator. This state can be thought of as being prototypical of the state of magnetic order at a particular step in a finite temperature SD simulation of paramagnetic Fe.
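The article does not say how the random orientations were drawn; a minimal sketch of one common choice, sampling the 1,024 moment directions uniformly on the unit sphere with NumPy (the function name and sampling scheme are assumptions made for illustration), might look like this:

```python
import numpy as np

def random_moment_directions(n_atoms: int, seed: int = 0) -> np.ndarray:
    """Draw n_atoms unit vectors distributed uniformly over the sphere.

    Illustrative only: the article says the Fe moment orientations were chosen
    with a random number generator but does not specify the sampling scheme.
    """
    rng = np.random.default_rng(seed)
    z = rng.uniform(-1.0, 1.0, n_atoms)           # cos(theta); uniform gives a uniform sphere
    phi = rng.uniform(0.0, 2.0 * np.pi, n_atoms)  # azimuthal angle
    s = np.sqrt(1.0 - z**2)
    return np.column_stack((s * np.cos(phi), s * np.sin(phi), z))

# A 1,024-atom disordered local moment configuration, as in the calculations above.
moments = random_moment_directions(1024)
print(moments.shape, np.linalg.norm(moments, axis=1)[:3].round(6))
```

Each such random configuration plays the role of a snapshot of paramagnetic Fe; the constraining fields of the CLM theory are what hold the moments along these prescribed directions during the electronic-structure calculation.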
These calculations represent significant progress towards the goal of full implementation of SD and a first-principles theory of the finite temperature and non-equilibrium properties of magnetic materials.
The work was performed by: Balazs Ujfalussy, Xindong Wang, Xiaoguang Zhang, Donald M. C. Nicholson, William A. Shelton and G. Malcolm Stocks, Oak Ridge National Laboratory; Andrew Canning, NERSC, Lawrence Berkeley National Laboratory; Yang Wang, Pittsburgh Supercomputing Center; and B. L. Gyorffy, H. H. Wills Physics Laboratory, UK.
Story Source:
Materials provided by Lawrence Berkeley National Laboratory. Note: Content may be edited for style and length.