A. Background

There are two standard methodologies for research in the sciences: experiment and theory. For some complex physical systems, such as a nuclear explosion, neither of these standard methodologies is adequate for answering important questions. Weather prediction and the global warming phenomenon are other good examples. The behavior of an exploding nuclear warhead is particularly difficult to predict or measure with precision. The extreme physical conditions (temperatures, pressures, densities) produced in a nuclear explosion, as well as the very brief time over which it occurs (on the order of one-millionth of a second), make accurate, detailed physical measurements of a nuclear explosion difficult and expensive.[15] These extreme physical conditions, as well as the interplay of diverse, complex phenomena, likewise make a fundamental, theoretical understanding of the nuclear explosive process difficult.

Since their inception, computer calculations have bridged the gap between partial experimental measurements and incomplete theoretical understanding to provide the working predictive capability required to design and produce generation after generation of U.S. nuclear weapons. Los Alamos National Laboratory's D. B. Henderson wrote in 1986, ". . . computation is the center of the [nuclear weapons] program, the nexus where everything is joined."[16]

A modern thermonuclear weapon is typically on the order of a meter in length and consists of two stages. These are structurally distinct and produce different effects, e.g., temperatures, pressures, and densities; rates of fission and fusion reactions; the numbers of photons and neutrons produced and their energies. During the explosion energy in the form of gamma rays and neutrons flows from the first stage (the "primary") to compress and ignite the second stage (the "secondary"), which typically produces on the order of 95 percent or more of the total weapon yield. The energy produced by a nuclear explosion is generally distributed among four forms: initial radiation, blast and shock, thermal radiation, and residual radiation. Some nuclear weapons have been designed to partition the energy among these forms in various ways depending on the desired effect.

The neutron bomb, for example, was designed to balance the fraction of total explosion energy released in the form of energetic neutrons against that converted into airblast and shock waves, or released as thermal radiation (light and heat), such that the lethal radius of the weapon's initial radiation for unshielded human beings exceeds the radius of its airblast, groundshock, and thermal effects on buildings.[17] Likewise, nuclear devices intended for 'Peaceful Nuclear Explosions' were designed to minimize the residual radiation produced for a given shock or blast effect. These generalities speak to the complexity of the explosion process, to the difficulty of measuring all or even a significant fraction of the quantities of interest to nuclear weapon designers (for example, during an underground test), and to the difficulty of modeling the performance of nuclear weapons.

Contemporary science encourages specialization, but the field of nuclear weapons design necessitates integrating numerous areas of science (and engineering), making the task of the nuclear weapon designer particularly challenging. Nuclear weapons codes provide a necessary integration of the diverse science and engineering relevant to nuclear weapons.

A [nuclear weapons] code is a compendium of lots of stuff, mostly physics. I look at it as books on a shelf: textbooks on chemical kinetics, detonations, fluid mechanics (we usually call them hydro codes because of this textbook), equations of state, radiation processes (emission, absorption, transport, and opacity), heat transfer by means other than radiation, and particle transport. Because we are building a nuclear device, nuclear processes and cross sections are needed. Finally, we include electricity, magnetism, and plasma physics. We collect all these things together with numerical analysis and some ad hoc rules -- and that makes a code. It is a big collection, this whole library of stuff . . .[18]

To function professionally, nuclear weapon designers must establish expertise in using their computer codes. Knowing how to use a code is functionally different from knowing the underlying physics and mathematics:

. . .people tend to think in terms of codes. They do this because the codes are the only integrated compendium of this complex set of stuff. In pursuing applications [i.e., designing weapons to meet specific military requirements] the confusion is an advantage. Designers, if you listen to them, really know their codes, and that means they can do their job very well. But that the two are confused [i.e., that the codes and the underlying physics and mathematics are confused] is a handicap sometimes, at least, if we don't understand the distinction or fail to remember it. The distinction certainly has important implications when we think about qualitatively new applications, bringing in new physics methods, or adapting the codes to new computers.[19]

Nuclear weapons codes establish linkages between experimental measurements (explosive tests and laboratory experiments), the state of knowledge about the basic physical processes occurring in a nuclear explosion, and the specific device design parameters. Thus the computer codes define the boundaries of nuclear weapon design work -- they are the designer's principal tool and object of study in the course of filling military orders for new weapons, or for sharpening one's design skills; and computer capabilities define the boundaries of code development.[20]

Historically, improvements in weapon design codes permitted nuclear tests to be better designed, diagnosed, and analyzed. Conversely, as the increased flow of data from improved diagnostics was understood and incorporated in the codes, the number of tests required to certify a nuclear weapon design for entry into the U.S. arsenal declined. In fact, "improvements" to nuclear weapons have historically been dependent on and enabled by increases in computer resources, as Henderson discussed in 1986:

. . . each qualitative jump in the [computer] technology has explicit dividends. We have certain new features in our [nuclear weapon] products because we had new classes of computers with which to pursue designs or with which to understand the physics. We have classified lists of specific improvements that we have achieved with the Class VI machines, the Cray machines, which simply would not be there or be available to the military today, if we had not had these machines over the last few years. Physics issues are understood better and the design envelope is correspondingly enlarged. In a similar way, we have rather specific lists of dividends that we can already anticipate from the Class VII machines.[21]

Even understanding which measurements to make during a test has required the guidance of the computer codes:

[a nuclear weapon] test -- each is really a whole set of exquisite experiments and measurements -- are also dependent upon numerical simulation, not only for design, but also for interpretation.[22]

With nuclear explosive testing no longer permitted, the primary roles of computer simulation and nuclear weapon experiments have been largely reversed. Weapon experiments are now often conducted primarily to improve the codes by improving the range and accuracy of the data used in them, and to validate their results; and with advanced diagnostic techniques, experiments are used to take measurements that were heretofore impossible.

Proton radiography of high explosive detonations offers an example of how such advanced experiments are used to validate warhead design codes. For most nuclear weapons in the U.S. arsenal, the fission chain reaction of the primary stage is initiated (a supercritical mass is formed) when its plutonium pit is imploded to greater than its normal metallic density by the explosive force of the surrounding spherical shell of high explosives. The high explosive must be cast and initiated in a manner that produces spherically convergent detonation waves.

The fission yield of the primary stage of the thermonuclear weapon depends strongly on the symmetry and force of compression experienced by the plutonium pit. Prior to the July 1945 Trinity nuclear weapon test, members of the Manhattan Project had worked out experimental techniques -- using a hemispherical array of "shorting pins" -- to measure the symmetry of the implosion process to the level of sophistication required for this first generation of U.S. nuclear weapons, designed and built without using computational devices more sophisticated than hand calculators. Subsequently, pin shots were augmented by photographic techniques utilizing X-rays.

The implosion process occurs over such a brief period of time, and the detonation wave travels at such a high speed, that it is not possible with current technology to take a sequence of X-ray radiographic images of the imploding pit that shows the full progression of the implosion. Thus computer models of the primary stage have been calibrated to the individual image or images obtained through X-ray radiography, extrapolated to the unobserved portion of the implosion process. A goal of the Los Alamos proton radiography program is to produce a greater number of images of this spherical implosion over the very brief period during which it occurs by using protons from an accelerator, rather than X-rays. The burst of protons can be made short enough that the image of the detonation wave in the high explosive is not blurred by the motion (i.e., a high-speed nuclear weapons movie camera). "The idea," explains Christopher Morris, the project's chief scientist, "is to make movies of the burning explosive, then to use those pictures to check the accuracy of supercomputer models."[23]

Finally, it should be noted that the rationale for the Comprehensive Test Ban Treaty had its roots in the fact that a nuclear weapon test was usually the single most important experiment in the development of a nuclear weapon -- and thus rightly viewed as a critical, enabling activity for the arms race among the superpowers and the further proliferation of nuclear weapons. It is less generally appreciated how important computers and computer codes are to nuclear weapons development. As noted by Henderson in 1986:

. . . each increase in computer power (and accompanying software) has resulted in improvement in our physics understanding, in broadening our useful design envelope, and in qualitatively improved products delivered to the military. These last improvements have been in military values -- yield, weight, size -- and in more social values -- yield selections, weapon security, and weapon safety.[24]

These quotations imply that an equivalent curb on the arms race could have been achieved, and in theory could still be achieved, by placing limitations on weapons computing. Clearly, however, verifying a comprehensive ban on nuclear explosive tests is far simpler than attempting to ban the use of computers for weapon simulations. This conclusion, though, raises the question of the future of supercomputing and arms control. An order of magnitude increase in computing power during the 1960s reduced nuclear testing requirements by a large factor. What computing power would be sufficient to eliminate the need for nuclear testing while continuing the traditional research and development missions of the U.S. nuclear weapons program? The United States has postulated an answer to this question that is specifically embodied in the Accelerated Strategic Computing Initiative.

B. The Accelerated Strategic Computing Initiative (ASCI)

The functions that ASCI and associated nuclear above-ground experimental programs are designed to perform in U.S. nuclear weapons research and stockpile management are described in considerable detail in the so-called "Green Book," the DOE's comprehensive classified "Stockpile Stewardship and Management Plan."[25] Large portions of the February 1996 version of this document were declassified and released to NRDC pursuant to a 1997 lawsuit by a coalition of some 39 national and local organizations seeking to enforce DOE compliance with the National Environmental Policy Act. An August 1997 NRDC Report, End Run: The U.S. Government's Plan for Designing Nuclear Weapons and Simulating Nuclear Explosions under the Comprehensive Test Ban Treaty, reproduces and analyzes extensive extracts from the "Green Book" bearing on U.S. plans for sustaining and enhancing U.S. nuclear weapons and design capabilities under a CTBT, and contrasts the resulting posture with the restrictive purposes of the treaty as set forth in its Protocol.

DOE seeks to radically enhance its nuclear weapons computational capabilities under Stockpile Stewardship through ASCI. "ASCI will accelerate the development of simulation capabilities . . . far beyond what might have been achieved in the absence of a focused initiative."[26] In the ten-year ASCI program, the DOE intends to produce codes:

  1. to calculate the explosion of a nuclear weapon in three instead of one or two dimensions;

  2. to calculate with finer resolution in time and space (e.g., calculating at every one millionth of a meter instead of every millimeter along the length of a nuclear warhead);

  3. to calculate the behavior of the full weapons system rather than individual parts or a set of parts; and

  4. to calculate with a fuller physics description rather than relying on adjustable parameters fixed by the results of nuclear tests.

Substantial ASCI work concerns acquiring the computers and other hardware required to run these new weapons codes and developing the operating systems and other software to facilitate their use.

Each of the four ASCI weapons simulation goals enumerated above would represent a substantial improvement in the U.S. capability to simulate nuclear explosions. Taken together, they represent capabilities termed "virtual prototyping" and "virtual testing." "Virtual prototyping" refers to the computer-based 3-D realization ("solid modeling") of all the components in a production-ready configuration of a nuclear weapon, whereas "virtual testing" refers to the simulation of their complete military performance in the "stockpile-to-target" sequence and nuclear explosion.

A June 11, 1997 newspaper article described such a simulation performed on the new Intel/Sandia supercomputer constructed for the ASCI program -- a simulation of the impact of a nuclear ballistic missile.[27] The calculation "tested whether the triggering components inside the warhead would survive the missile impact and would perform correctly in the microsecond between the time the weapon would be detonated" or whether it "would be a dud." With the increased speed and memory of the new ASCI supercomputer, Sandia scientists were able to "reveal more details . . . than any previous simulation." The effects of the warhead impact were broken down by individual component and, in time, by millionths of a second. Other variables in the missile simulation included the velocity and angle of warhead impact. "Never before has the computing capability been available to model . . . the entire region of interest in the weapon." The simulation computed the ballistic missile impact over 100 microseconds (one ten-thousandth of a second), which required five days to execute. The spatial resolution was a millimeter.

Part of the motivation for simulating nuclear weapons in three dimensions rather than two is said to originate from concerns over the "aging" of nuclear weapons. Like most items of technology, nuclear weapons were designed and engineered for finite lifetimes. During the Cold War the United States replaced nuclear weapons in its stockpile at a pace such that DOE now claims that its database on weapons does not extend much past a twenty-year lifetime. Two points of concern often cited are the slow radioactive decay of the plutonium and the faster chemical decay of the high explosives in the primary.[28]

Nuclear weapon designers have been able to take advantage of the fact that nuclear warhead casings are typically cylindrically symmetric, with the primary and secondary components spherically or cylindrically symmetric, thus allowing warhead performance calculations to be conducted in one or two dimensions. The aging process may perturb whatever symmetries have been built into the weapons, and may compromise the validity of calculated warhead performance assessments under a CTBT to the extent that these calculations depend on those symmetries:

. . . today we have models of weapons behavior but no confident basis for anticipating changes in performance or failure modes over long periods of time due to material aging, contamination, or imperfections. These models, and the existing computer codes based on them for describing the development of an explosion, generally contain several empirical factors and simplifications (to 2-dimensional or 1-dimensional approximations) (emphasis added).[29]

Of course, this formulation of the problem raises the question of whether current weapons can be maintained over some "less-than-long" period of time without the need to "anticipate" (i.e., predict) changes in nuclear performance that may ensue from more advanced states of warhead deterioration.

The Department of Energy also claims that re-manufacturing old weapons to original specifications is troublesome because manufacturing techniques and materials change with time. Part of the ASCI effort is focused on coping with such "enforced" changes to the arsenal in the absence of nuclear explosive testing. An alternative, and more conservative, approach would be to assure the future U.S. capability to remanufacture existing, test-certified weapon types by using experienced nuclear weapon design teams now to specify acceptable materials, processes, and tolerances for future remanufacture. Aging effects on weapons would then need only be detected through enhanced surveillance of the arsenal, which would trigger a component (or system) remanufacture based on a zero-tolerance policy for defects in the nuclear explosive package that are not already known to be insignificant based on previous test experience, or whose effects cannot be confidently ascertained through non-nuclear component testing.

Another, and in our view more significant, motivation for the ASCI goal of moving to three-dimensional calculations is that some of the physical processes involved in a nuclear explosion are inherently three dimensional, and thus predicting the evolution of these phenomena is important if new or significantly modified weapon designs are to be deployed without testing under a CTBT. Turbulence -- the chaotic, swirling fluid flow ubiquitous in nature -- is fundamental to the performance of nuclear weapons and a common area of study across the five Academic Strategic Alliances Program Centers. Limited, two-dimensional descriptions of turbulence completely miss key phenomena:

Computational fluid dynamics plays a central role in numerical simulations arising in many LLNL and DOE mission areas, such as weapons design, weapons effects, fusion, and energy production. Such computations have typically assumed physical symmetry to reduce the dimensionality of the computation. However, there is an increasing need to perform calculations in three spatial dimensions. A common feature of three-dimensional (3D) flows is the appearance of small-scale, localized phenomena; one routinely encounters features such as jets, vortex rings, and mixing layers, whose characteristic length scales are small relative to the global dimensions of the problem, but whose dynamics play an important role in the overall physics.[30]

Computer simulation of the turbulent fluid flow of plutonium, uranium, lithium, deuterium, tritium and other bomb constituents in highly ionized states requires not only an extension to three dimensions, but a greater spatial resolution in the calculation to capture the essential physical processes and thus achieve a "virtual testing" capability.

In the 1994 JASON[31] study of Stockpile Stewardship, grid sizes of 100 points in two spatial dimensions were cited as typical of current nuclear weapon codes:

Hydrodynamic problems in general, and the detonation of chemical high explosives in particular, require the solution of local partial differential equations (PDEs). One typically solves such problems by partitioning physical space into a discrete grid of cells or finite elements and approximating the physical state with a few variables per cell . . . Adequate spatial resolution typically requires ~ 100 cells in each dimension (although modelers would probably make good use of more cells if more powerful computers were available).[32]

Elsewhere in their 1994 report, the JASONs quote a figure of 10 (double-precision) variables per cell (such variables for each bomb constituent would likely include macroscopic thermodynamic variables such as pressure, temperature, and density; hydrodynamic variables such as fluid momentum; or microscopic variables such as ionization states) and 1,000 floating-point operations (flops) per cell per time step as typical of detonation codes. On the subject of grid spatial resolution, the JASON committee stated:

Due to limitations on computation power and bomb-modeling, the "state of the art" is still relatively primitive in the ability to model such phenomena in three dimensions, as opposed to reducing the analyses to two -- or even one -- dimension by geometric averaging. For such analyses there is a need for higher spatial resolution than presently achieved -- i.e., grid sizes of the order of mils as opposed to millimeters -- in order to model effects of interest.

If the calculation is expanded from 100 grid points in each of two dimensions to three dimensions with a mil [= 1/1000th of an inch = 0.0254 millimeter] rather than a millimeter grid, the total number of cells increases by a factor of 6 million -- from [100^2 =] 10,000 to [(100 x (1/0.0254))^3 = 6.1x10^10, or] 60 billion -- even if the complexity of the underlying physics is held constant at 1,000 flops per cell per time step.
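The bracketed arithmetic above can be checked directly. The sketch below (Python used purely as a calculator; the cell counts, variables per cell, and flop rate are the report's cited JASON figures, not actual weapons-code parameters) also extends it to the per-time-step memory and flop costs implied by the figures quoted above.

```python
# Back-of-envelope check of the JASON grid-scaling estimate.
# All input figures are the report's numbers; nothing here is measured.
MM_PER_MIL = 0.0254              # 1 mil = 1/1000 inch = 0.0254 mm
VARS_PER_CELL = 10               # double-precision variables per cell
FLOPS_PER_CELL_STEP = 1_000      # flops per cell per time step
BYTES_PER_DOUBLE = 8

cells_2d = 100 ** 2                   # coarse 2-D grid: 100 cells/dimension
cells_3d = (100 / MM_PER_MIL) ** 3    # 3-D grid refined from 1 mm to 1 mil

print(f"coarse 2-D cells: {cells_2d:,}")                 # 10,000
print(f"fine 3-D cells:   {cells_3d:.1e}")               # ~6.1e10 (~60 billion)
print(f"growth factor:    {cells_3d / cells_2d:.1e}")    # ~6.1e6

# Per-time-step cost of the fine grid at constant physics complexity:
mem_bytes = cells_3d * VARS_PER_CELL * BYTES_PER_DOUBLE
flops_per_step = cells_3d * FLOPS_PER_CELL_STEP
print(f"state memory:     {mem_bytes / 1e12:.1f} TB")    # ~4.9 TB
print(f"flops per step:   {flops_per_step:.1e}")         # ~6.1e13
```

At roughly 5 terabytes of state and 6x10^13 flops per time step, the mil-resolution 3-D grid makes plain why a platform in the 100 teraflop class became the stated target.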

Dona Crawford, Director of Sandia National Laboratory's Distributed Information Systems Division and chair of the organizing committee for the 1997 supercomputing conference (SC97), stated: "You couldn't conceive of this [ASCI program] 10 years ago because we didn't have the computers -- and we're still a long way from simulating reality. We can only approximate reality, not duplicate it."[33] The ASCI goal is to have a 100 teraflop machine in place by 2003, whereas the peak computer speeds achieved in the near term will be 1-5 teraflops. (See Box 2.1: ASCI Hardware Acquisitions.) The 2003 goal represents a 1000-fold increase in application code speed from computer hardware improvements with respect to 1996 capabilities.

Box 2.1: ASCI Hardware Acquisitions

Contracts have been awarded to Intel Corporation, Silicon Graphics Corporation (and its subsidiary, Cray), and IBM Corporation to construct the next generation of supercomputers based on parallel-processing architecture. Intel announced in late 1996 that it had broken the 1 teraflop (1 trillion calculations per second) barrier with a partially assembled machine now fully in place at Sandia National Laboratory (the average personal computer can perform one million calculations per second). This achievement represents a substantial increase in computational speed from that available to the nuclear weapons labs during the era of nuclear testing. The ultimate goal of ASCI is to reach 100 teraflops by about 2004. Some parameters of the current suite of ASCI supercomputers are given below.

ASCI Mountain Blue is not yet fully constructed, as Los Alamos' director testified to Congress in March 1997:

A major landmark for ASCI occurred in November when Los Alamos signed a contract with Silicon Graphics Inc./Cray Research Inc. for the ASCI Mountain Blue computer. This computer system, scheduled for delivery at the end of 1998, will have a peak speed of 3 TFLOPS (3x10^12 floating-point operations per second), and will deliver 1 TFLOPS sustained on a hydrodynamics code. The contract calls for early delivery of a 100 GFLOPS (100x10^9 floating-point operations per second) initial system, followed by a more powerful system in late 1997. Half of the initial system was delivered on December 22, 1996, well ahead of schedule. By Christmas Eve, one of the machines was already running a large hydrodynamics code. The remaining four machines were delivered in mid-February, bringing the initial system to its full complement of 256 processors.[34]

It is possible to view the up-to-the-minute status of ASCI Open Blue Mountain (presumably the unclassified component) via the Internet at http://www.lanl.gov/projects/asci/bluemtn/.

The procurement of these three initial ASCI supercomputers differs from the past practices of the national laboratories. ASCI Red, Mountain Blue, and Pacific Blue are amalgams of thousands of microprocessors, themselves produced largely for the commercial market. By contrast, past supercomputer acquisitions for the nuclear weapons program were characterized by one-of-a-kind machines, with the only vendor options being Cray and Thinking Machines, Inc. The principal reason for this change in practice is that, between 1986 and the present, microprocessors rapidly caught up with traditional supercomputers in performance and cost.
Code Name | Lab | Vendor | Contract Award | Number of Processors | Speed
ASCI Red | Sandia | Intel | $45 million | 9,200 | 1.8 teraflops peak
ASCI Mountain Blue | LANL | Silicon Graphics Inc./Cray Research Inc. | $110.5 million (including another, 1-teraflop machine) | 3,072 | 3 teraflops peak; 1 teraflop sustained on a hydro code
ASCI Pacific Blue | LLNL | IBM | $93 million | 512 | 3-5 teraflops peak

Supercomputers will no doubt be the most visible metric of progress towards fulfilling DOE's nuclear weapons simulation goals; however, other hardware developments -- including high-speed data storage and retrieval and high-speed, secure communication links between the national laboratories -- are required under the ASCI Program Plan. But the most crucial area of ASCI effort is in software development. The workload of writing the next generation of nuclear weapons software (with the simulation goals discussed above) is compounded by the problem of effectively using a relatively new kind of computer architecture: massively parallel processing. (See Box 2.2: Massively Parallel Processing.)

ASCI projects a 100,000-fold increase in computing performance over the next decade, of which a tenfold increase is to come from taking full advantage of parallel processing capabilities.[35] The JASONs have gone further to claim that half of the increase in computational speed which the ASCI program intends to deliver by 2010 is anticipated to occur because of developments in the software, not the hardware:

Increased computer power has historically been matched (or even overmatched) by improvements in algorithms and models. Thus computers 100 times more powerful seem to elicit algorithms another 100 times more powerful, for a gain of 10,000.[36]

With a 100,000-fold increase in computer performance, the 6-million-times more spatially resolved calculation outlined by JASON above would take only 6 times longer than the coarser calculation did in 1994, allowing for an additional ten-fold increase in computer speed since 1994, the date of the JASON report. Finer gridding throughout the entire problem, however, is not necessary in most weapons applications. The technique of adaptive mesh refinement (AMR) examines the calculation as it progresses to determine where finer grids are required (usually at steep gradients in one or several variables: at a detonation wave front, for example). A figure of one billion cells in an ASCI calculation -- consistent with the 100 teraflop ASCI goal and the statement that pre-ASCI codes typically contained 10,000 cells in two dimensions -- was given in a vu-graph presentation at the 5-6 December 1996 Academic Strategic Alliances Program Pre-proposal Conference held in Dallas, Texas.[37]
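The runtime claim above reduces to one line of arithmetic, sketched here (Python as a calculator only; the growth and speedup factors are the report's figures, and the extra factor of ten since 1994 is the assumption stated in the text):

```python
# Relative runtime of the mil-resolution 3-D calculation vs. the 1994
# coarse 2-D baseline, assuming work scales with cell count at fixed
# flops per cell per time step.
work_growth = 6.1e10 / 1e4     # fine 3-D grid cells vs. coarse 2-D grid cells
speedup = 100_000 * 10         # ASCI decade goal x assumed post-1994 gain

print(f"relative runtime: {work_growth / speedup:.1f}x")   # ~6x
```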

Box 2.2: Massively Parallel Processing

The concept of massively parallel processing (MPP) has been studied at least at the conceptual level for several decades, but only since the late 1980s have computers been constructed that effectively exploit this architecture. Consider a stack of 20 papers, each sheet containing a lengthy calculation such as adding a long list of numbers. An individual could work alone, summing each sheet separately until finished. Alternatively, if 19 equally capable co-workers were available, each could be given one sheet and the work could be accomplished in 1/20th of the time as everyone labored in parallel. This simple analogy illustrates the concept of massively parallel processing, and an idea known as Amdahl's Law. Since the work cannot be subdivided into more than 20 parts in this example, bringing in more than 20 co-workers would not increase the rate at which the job could get done.
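The 20-sheet analogy corresponds to the usual textbook statement of Amdahl's Law. The sketch below is a generic formulation, not anything specific to ASCI codes; the 5 percent serial fraction used in the loop is an arbitrary illustrative assumption:

```python
# Amdahl's Law: ideal speedup for a job with an unparallelizable
# (serial) fraction of the total work.
def amdahl_speedup(serial_fraction, n_workers):
    """Speedup when the parallel portion is split among n_workers."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# Perfectly divisible 20-sheet job: 20 workers give a 20x speedup.
print(amdahl_speedup(0.0, 20))    # 20.0

# A 5% serial portion caps the gain no matter how many workers join.
for n in (20, 200, 2000):
    print(n, round(amdahl_speedup(0.05, n), 1))
```

The loop shows that even a small serial fraction limits speedup to about 1/serial_fraction (here 20x) regardless of processor count -- the software challenge behind the "scalability" requirement discussed below.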

Two concepts are central in writing software for massively parallel processing: scalability and portability. An MPP computer program is scalable if it can easily run on a parallel architecture with a different number of microprocessors, and thereby efficiently exploit the greater speed of a machine with more processors. All Accelerated Strategic Computing Initiative software is intended to be scalable to the successive generations of MPP supercomputers. Portability, a more general concept, refers to the capability of a program to run on more than one computer system. The ASCI procurement strategy thus far has been to acquire MPP supercomputers from three vendors, so clearly ensuring that nuclear weapons software is portable across these systems will be a requirement.

Not all problems that computer programs are written to solve -- and not all computer programs -- can exploit the capabilities of massively parallel processing. It is the judgment of the JASONs that the U.S. nuclear weapons program can transfer its computation base to parallel architectures:

Of the several types of physical computations relevant to nuclear weapons and Stewardship, most are well suited to some degree of parallelism with fairly obvious adaptations and extensions of present algorithms.[38]

We should emphasize, however, that the consistency noted above assumes the 100 teraflop goal can be met, and it ignores increases in the number of calculations per cell resulting from additional complexity of the physics calculations. The ASCI goal of reaching 100 teraflops by 2003, or even by 2010, is an example of what the nuclear weapon laboratories proudly regard as technologically "aggressive" or "visionary" thinking:

We are already thinking ahead toward the generation-after-next system, a capability able to sustain operations of one hundred teraflops to one petaflop (a million billion floating point operations per second).

At these scales we are no longer talking about a "computer" in the traditional sense. A one-hundred teraflop facility will require on the order of 20 megawatts of electrical power for its ten thousand processors. Heat dissipation requirements may call for cooling towers similar to those at other major research and industrial facilities. The associated information workstations and visualization laboratories will require inventions undreamed of today. And, to meet program requirements, all of this must occur within the next ten years.

. . . How will we understand the results of such calculations? Will we construct "virtual laboratories" that project the results as holographic images we can walk around in and query? Will we conduct a voice conversation with the machine to ask it what is happening while we are looking at the images (emphasis added)?[39]

A petaflop (1000 teraflops) machine is estimated to cost $50 billion today,[40] whereas the 1-5 teraflop platforms are now coming in at about $50 to $100 million. Therefore, a 100 teraflop machine constructed today would cost an estimated $5 billion (or roughly 5 times the projected total ASCI program budget). Clearly one significant, though unstated, assumption in the ASCI Program Plan is that the cost of the microprocessors and other hardware components for the 100 teraflop machine will have substantially fallen in the next decade so that the 100 teraflop acquisition will be feasible from a budgetary perspective.
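The cost extrapolation in the paragraph above can be checked with simple arithmetic, assuming (as the report implicitly does) that cost scales roughly linearly with peak speed:

```python
petaflop_cost = 50e9                 # $50 billion for 1000 teraflops (Bailey, 1997)
cost_per_teraflop = petaflop_cost / 1000
cost_100_teraflops = 100 * cost_per_teraflop
assert cost_100_teraflops == 5e9     # ~$5 billion, as stated above

# The 1-5 teraflop platforms at $50-100 million imply roughly $20-50
# million per teraflop -- the same order of magnitude, so the linear
# extrapolation is internally consistent with the quoted platform prices.
```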

Even if the ASCI computational speed goals are achieved in the next decade, one must also factor in the fact that "[t]oday's large weapon codes typically take approximately 5 years to reach the point where they can be used with confidence by our weapon designers."[41] The added complexity of the physics calculations, beyond the roughly 1000 flops per cell per time step in today's detonation codes, will further extend the period during which significant trade-offs will have to be made between dimensionality, resolution, and a fuller physics description in a given application code.
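To make that trade-off concrete, the total work of a time-stepped simulation is roughly cells x time steps x flops per cell per step. A hypothetical sketch (the grid size and step count below are illustrative, not figures from the report):

```python
def wall_clock_seconds(cells, steps, flops_per_cell_step, machine_flops):
    """Idealized run time, ignoring communication and memory overheads."""
    return cells * steps * flops_per_cell_step / machine_flops

# A 1000^3 three-dimensional grid, 100,000 time steps, and the report's
# ~1000 flops per cell per step, on a (future) 100 teraflop machine:
t = wall_clock_seconds(1000**3, 100_000, 1000, 100e12)
assert round(t) == 1000  # about 17 minutes

# Doubling the per-cell physics doubles the run time, while doubling the
# linear resolution in 3-D multiplies the cell count by 8 (and usually
# the step count as well), so richer physics and finer grids compete
# directly for the same machine cycles.
```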

C. A Lab-University Symbiosis for Simulation?

DOE Defense Programs officials hope that ASCI will foster a substantial symbiotic relationship between the nuclear weapons program and the universities. On the one hand:

The ASCI academic alliance program will help accelerate the preeminence of American universities in large-scale simulation, a methodology of rapidly increasing importance and enormous promise for leading-edge science. Moreover, we anticipate a ripple effect, as the DOE ASCI corporate and university partnerships enable the American science and technology community to understand complex physical systems.[42]

In return the weapons program hopes the computer code development at the universities and new insights into the physics and engineering will enable the laboratories to engage in "virtual testing." An example of this symbiotic relationship is found in the treatment of opacities. The opacity of a given path length of material is the degree to which light of a given frequency is attenuated -- absorbed -- as it passes through the material. The opacity of a substance depends on its thermodynamic state variables (pressure, temperature, density) and through these on the material's ionization state.
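The attenuation relation behind this definition can be sketched with the Beer-Lambert law, in which the transmitted fraction of light falls exponentially with the optical depth (opacity x density x path length). The values below are purely illustrative, not material data:

```python
import math

def transmitted_fraction(kappa, rho, x):
    """Fraction of incident light surviving a path of length x (cm)
    through material of density rho (g/cm^3) and opacity kappa (cm^2/g)."""
    return math.exp(-kappa * rho * x)

# A higher opacity means stronger absorption over the same path length:
assert transmitted_fraction(2.0, 1.0, 1.0) < transmitted_fraction(1.0, 1.0, 1.0)
```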

Opacities determine how radiation is transported within the bomb volume as the explosion progresses, and they also determine to a large extent whether the bomb constituents and explosion products are in local thermodynamic equilibrium. Opacities, particularly for elements lighter than iron, are important quantities for a host of astrophysical applications. Measuring and calculating opacities for other than very light materials or low temperatures is difficult, but working values have always been necessary inputs to nuclear weapon codes in order to describe the transport of radiation and the partition of energy between particles (ions and electrons) and radiation.

Over the past several decades, opacities used in stellar models had been calculated (mostly at LANL) using a set of approximations, especially with respect to electronic structure. Many aspects of the life cycle of stars were correctly described by the models using the Los Alamos opacity tables[43], but puzzles remained.

Recently, the situation has been altered with the introduction of new calculations [at LLNL] with the OPAL opacity code that show much higher opacities than the Los Alamos results. These new opacities have led to the resolution of several long-standing puzzles in stellar pulsation properties of Cepheids, δ Scuti, and β Cephei stars, and have made a favorable impact on solar models, the critical mass limit for pulsationally stable stars, convective core overshoot, and lithium depletion in the Hyades.[44]

These new Livermore opacities, indicating a substantially greater absorption of light for material at high temperatures (~100 eV), had a significant impact on astrophysics, explaining several long-standing disagreements between theory and data.

One might expect similar, immediate improvements in simulating nuclear weapon performance as a consequence of improved understanding of opacity data derived through university research sponsored by ASCI. However, this will not be the case in every instance. The recent OPAL opacity results demonstrate how the ASCI program might actually erode the predictive capability of nuclear weapon performance codes. When nuclear weapons designers put the "correct" LLNL opacities into the weapons codes, the outputs disagreed with past underground test data.

This problem was also encountered when recently measured, more accurate and comprehensive equation-of-state data were fed into the weapons codes. Such a seemingly anomalous result stems from the fact that current nuclear weapons codes contain on the order of 1000 parameters whose values have been fixed by constraining the codes to reproduce nuclear test measurements. The four ASCI "virtual testing" goals represent a significant departure from this framework. It is probable that discoveries will be made in this process, but the most likely end result will be a host of questions that can only be answered with further underground tests.

Most would agree that, today, we do not have the ability to introduce any such changes [in the design of some components of nuclear weapons] without proof-testing. It will require considerable computational analyses of both primaries and secondaries in order to develop even a limited capability for redesign of warheads without proof-testing -- short of returning to very primitive devices as in the first-generation weapons stockpile.[45]

This was the conclusion of the JASONs in 1994.


15. A discussion of 1980's advances in obtaining quality data from nuclear tests is provided in "Imaging in the Nuclear Test Program," Energy and Technology Review, Lawrence Livermore National Laboratory, October 1988, p. 30.

16. Henderson, D. B., "Computation: The Nexus of Nuclear Weapon Development," in N. Metropolis, D. H. Sharp, W. J. Worlton, & K. R. Ames, eds., Frontiers of Supercomputing (University of California Press, 1986) p. 141.

17. Designers managed to satisfy this criterion only at low yields, working in the range of 0.5 to perhaps 2 kilotons.

18. Henderson, D. B., "Computation: The Nexus of Nuclear Weapon Development," p. 142.

19. Ibid.

20. In addition, weapons codes are intended to be amenable to the practical needs of designers—having convenient, understandable input and output formats; accommodating the specific military design issues; and facilitating collaboration within teams of weapons workers. These and other human/task/computer interface issues define the requirements for the so-called "Problem Solving Environment." For the benefit of Academic Strategic Alliances Program participants, the DOE has stated that the Problem Solving Environment "encompasses the development of advanced computing environments on the desktops of engineers and decision-makers, to include solid geometry models, meshing tools, visualization, network information throughput and protocols." ASCI Academic Strategic Alliances Pre-Proposal Conference Breakout Session: Problem Solving Environment, http://www.llnl.gov/asci-alliances/sn/comp-phy.html ("Last Modified December 26, 1995"). The university research contracted through the Academic Strategic Alliances Program ranges from refining particular aspects of the physical models used in nuclear weapon design codes to the broader issues of interfaces between codes, advanced computational techniques, and high speed information networks that are encompassed by the term Problem Solving Environments.

21. Henderson, "Computation: The Nexus of Nuclear Weapon Development," p. 145.

22. Ibid., pp. 141-142.

23. W. Wayt Gibbs, "Boom," Scientific American, November 1997, p. 48.

24. Henderson, D. B., "Computation: The Nexus of Nuclear Weapon Development," pp. 150-151.

25. DOE, "Stockpile Stewardship and Management Plan," DOE Office of Defense Programs, February 29, 1996.

26. "Accelerated Strategic Computing Initiative: Program Plan," U. S. Department of Energy, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Sandia National Laboratory, September, 1996, p. v.

27. Lawrence Spohn, "Sandia Computer Boosts Detail in Simulated Nuclear-Bomb Impacts," Albuquerque Tribune, June 11, 1997, p. A9.

28. As a historical anecdote, at least some forms of high explosives emit an odor upon chemical deterioration. If a nuclear weapon smelled like rotten eggs, it was one indication that the high explosive lenses required replacing. A litany of warhead "aging" concerns was given by the JASONs in 1994: "These [aging] effects include formation of heavy-metal hydrides; the effects of He from Pu a-decay, such as swelling and embrittlement; cracks, voids, porosity, and gaps in both heavy metals (from the above effect and others) and in high explosives; stress and failure modes in welded parts; surface bonding and texture formation; and many others. All these are materials-science issues, and hence materials science assumes a particular importance for stockpile surveillance." JASON, "Science Based Stockpile Stewardship," The Mitre Corporation, JSR-94-345, November 1994, p. 57.

29. JASON, "Science Based Stockpile Stewardship," pp. 87-88.

30. "Adaptive Methods for Fluid Dynamics in Three Dimensions," Energy and Technology Review, Lawrence Livermore National Laboratory, July/August 1990, p. 97.

31. The JASONs are a group of senior, non-governmental scientists who offer advice to the U.S. government under a contract with the MITRE Corporation.

32. JASON, "Science Based Stockpile Stewardship," pp. 96-97.

33. "Supercomputing and more star at Sandia-chaired SC97 conference in Silicon Valley: Research-and-trade convention culminates in two years of planning with Dona Crawford," Sandia Lab News, 21 November 1997, p. 3.

34. Siegfried S. Hecker, Director, Los Alamos National Laboratory, Testimony before the Senate Armed Services Committee, Strategic Forces FY98 Budget, Defense & Energy Weapons Program, 19 March 1997.

35. "Accelerated Strategic Computing Initiative: Program Plan," pp. 10 and 14.

36. JASON, "Science Based Stockpile Stewardship," pp. 90-91.

37. This vu-graph is accessible on the world wide web at http://www.llnl.gov/asci-alliances/slide/adams/adams02.html.

38. JASON, "Science Based Stockpile Stewardship," p. 96.

39. Stephen Younger, "View from the Program Office: Stephen Younger, Program Director, Nuclear Weapons Technology," Weapons Insider, Los Alamos National Laboratory, Volume 4, Issue 8, November/December 1997, pp. 1-2, 4.

40. Bailey, David A., "Onward to petaflops computing; includes related article on petaflops research questions," Communications of the ACM, Vol. 40, No. 6, June 1997, p. 90.

41. "Accelerated Strategic Computing Initiative: Program Plan," p. 16.

42. "DOE, National Labs Select Five Universities to Bolster Large-Scale Computer Simulation Effort," Lawrence Livermore National Laboratory Press Release, 31 July 1997.

43. A. Weiss, J. J. Keady, and N. H. Magee, "A Collection of Los Alamos Opacity Tables for All Temperatures," Atomic Data Nucl. Data 45, 209-238 (1990).

44. R. W. Lee, C. A. Iglesias, L. B. Da Silva, "Verification of OPAL Opacity Code Predictions for Conditions of Astrophysical Interest," Inertial Confinement Fusion Quarterly Report, January-March 1993, Volume 3, Number 2, p. 77.

45. JASON, "Science Based Stockpile Stewardship," pp. 89-90.


last revised 1/22/1998
