The Story of ITER

Aanand Joshi
November 18, 2023

Submitted as coursework for PH240, Stanford University, Fall 2022

Introduction

Fig. 1: A uranium-235 fission reaction. (Source: Wikimedia Commons)

Nuclear fusion is the engine at the core of our Sun and of every other star in the universe, powering them for up to billions of years. It is the process of combining two atomic nuclei into a single, more massive nucleus, which releases significant energy. Harnessing the radiation released by the Sun on Earth in the form of solar energy is one feat. A far more difficult feat is recreating the heat and pressure required to achieve nuclear fusion in a controlled environment, let alone harnessing the released energy as electricity. It is tantamount to constructing a manmade, miniature stellar core and turning it into a power plant.

How Does Fusion Differ From Fission?

Whereas fission reactions involve shooting neutrons at a heavy nucleus to split it and generate energy, fusion reactions combine two light nuclei into a final nucleus that is slightly less massive than the two initial components combined. Since the laws of physics tell us that mass-energy is conserved, the mass that appears to be lost in the reaction is actually converted into energy and emitted as a byproduct, either as radiation or as kinetic energy imparted to the reaction products. [1]

Fig. 2: A deuterium-tritium fusion reaction. (Source: Wikimedia Commons)

Fission reactions release a considerable amount of energy, but fusion reactions release considerably more per unit mass. A common reaction used in a fission reactor is the splitting of U-235 by a neutron (see Fig. 1). One such reaction releases about 200 MeV of energy. By comparison, deuterium-tritium (D-T) fusion (fusion between H-2 and H-3), the main reaction being considered for energy production, releases 17.6 MeV (see Fig. 2). However, the fission reactants total 235 + 1 = 236 amu, whereas the D-T reactants total only 2 + 3 = 5 amu (treating electron mass as negligible). So, the energy yield per unit mass is 200/236 = 0.847 MeV/amu for fission, compared to 17.6/5 = 3.52 MeV/amu for D-T fusion. Therefore, fusion produces 3.52/0.847 = 4.15 times more energy than fission per unit mass. [2,3]
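These figures are easy to verify with a few lines of arithmetic. The sketch below reproduces the per-mass comparison and, as an extra cross-check not in the text above, recovers the 17.6 MeV figure from the D-T mass defect; the isotope masses used are standard tabulated values, supplied here as an assumption:

```python
# Rough energy-per-mass comparison of U-235 fission vs D-T fusion,
# using the round numbers quoted in the text.
MEV_PER_AMU = 931.494  # energy equivalent of 1 amu, in MeV

# Fission: n + U-235 -> fission products, ~200 MeV released
fission_yield = 200.0 / (235 + 1)   # MeV per amu of reactants

# Fusion: D + T -> He-4 + n, 17.6 MeV released
fusion_yield = 17.6 / (2 + 3)       # MeV per amu of reactants

print(f"fission: {fission_yield:.3f} MeV/amu")         # ~0.847
print(f"fusion:  {fusion_yield:.2f} MeV/amu")          # 3.52
print(f"ratio:   {fusion_yield / fission_yield:.2f}")  # ~4.15

# Cross-check the 17.6 MeV figure from the nuclear mass defect
# (standard atomic masses in amu, electron masses neglected).
m_D, m_T, m_He4, m_n = 2.014102, 3.016049, 4.002602, 1.008665
dm = (m_D + m_T) - (m_He4 + m_n)
print(f"D-T mass defect energy: {dm * MEV_PER_AMU:.1f} MeV")  # ~17.6
```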

However, as alluded to earlier, the energy cost of maintaining the conditions necessary for fusion is very high compared to fission. Unlike fission, fusion requires overcoming the electrostatic repulsion between the deuterium and tritium nuclei in order to force them together. This requires temperatures of over 10⁸ °C, heating the D-T gas mixture into a plasma, like the core of a star. [2] Only very recently, in December 2022, was an experiment at the National Ignition Facility (NIF) of the Lawrence Livermore National Laboratory (LLNL) able to produce net energy from fusion for the first time, producing 3.15 MJ from 2.05 MJ supplied. In other words, the ratio of fusion energy produced to energy supplied, known as the gain factor Q, had a value of 1.54. [2,4]
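The gain factor is simply the quotient of the two energies quoted above; a minimal sketch:

```python
# Fusion gain factor Q = fusion energy out / driver energy in,
# using the December 2022 NIF shot figures quoted in the text.
E_out_MJ = 3.15  # fusion energy produced by the target
E_in_MJ = 2.05   # laser energy delivered to the target
Q = E_out_MJ / E_in_MJ
print(f"Q = {Q:.2f}")  # ~1.54, the first laboratory shot with Q > 1
```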

The Tokamak and ITER

The NIF achieved this breakthrough with its inertial confinement fusion setup. The method of inertial confinement uses high power lasers fired at a fuel capsule to generate a plasma and induce fusion. While this method was first theorized in the 1960s by physicist John Nuckolls along with other scientists at the LLNL, fusion research began in the 1920s. [4,5] Though there are a variety of designs for fusion reactors, the most-studied and oldest design is the tokamak. It is a Soviet-designed, toroidal (donut-shaped) reactor that uses powerful magnets lining the toroid to confine the plasma away from the walls and keep it thermally insulated. [3]

The tokamak initiates a burning plasma with the following steps: First, a low-density gas is injected into the vacuum chamber inside the toroid after the magnetic field coils have been energized. Then, an increasing current is sent through the central solenoid (see Fig. 3), which drives a current in the gas by electromagnetic induction, with the solenoid acting as the primary winding of a transformer and the gas as the secondary. This current heats the gas into a plasma and generates a poloidal magnetic field, which combines with the field produced by the external coils, resulting in a helical field that confines the plasma. As the temperature increases, the electrical resistivity of the plasma quickly drops. Because, by Faraday's law, the induced plasma current depends on a continually increasing current in the central solenoid, induction alone can only heat the plasma to around one third of the temperature required to sustain fusion. Additional power is required for further heating, which must be delivered to the plasma through collisions with injected beams of neutral atoms and through microwaves. If deuterium and tritium are used in the reaction, fusion begins once the temperature reaches approximately 10⁸ °C. At this point, the He-4 nuclei produced in D-T fusion heat the plasma further through collisions. [2]
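The drop in resistivity is one reason ohmic heating saturates: a hot plasma is a very good conductor, so I²R heating fades just when more heating is needed. The sketch below illustrates the scaling using the standard Spitzer resistivity estimate, which is an assumption added here (the formula and its coefficients do not appear in the text):

```python
# Why induction heating saturates: plasma resistivity falls steeply
# with temperature. Standard Spitzer estimate (an assumption here):
#   eta ~ 5.2e-5 * Z * ln(Lambda) / T_e^(3/2)  ohm-m, T_e in eV.
def spitzer_resistivity(T_e_eV, Z=1.0, coulomb_log=15.0):
    """Approximate parallel Spitzer resistivity in ohm-metres."""
    return 5.2e-5 * Z * coulomb_log / T_e_eV ** 1.5

for T in (10.0, 100.0, 1000.0, 10000.0):  # 10 eV up to 10 keV
    print(f"T_e = {T:>7.0f} eV -> eta ~ {spitzer_resistivity(T):.1e} ohm-m")

# Around ~1 keV the plasma is already about as conductive as copper
# (~1.7e-8 ohm-m), so ohmic heating becomes ineffective well below
# fusion temperatures (~10 keV, i.e. on the order of 1e8 degrees C).
```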

Fig. 3: The design of a tokamak. [8] (Courtesy of the DOE)

The design for the tokamak was first proposed by I. E. Tamm and A. D. Sakharov in 1950. The first tokamak, the T-1, was operational by 1958 at the Kurchatov Institute in the USSR. Through the 1960s and early 1970s, new tokamaks were built, iteratively improving upon the T-1 and allowing for the study of plasma confinement under different parameters. [6] There was relatively little excitement or faith in fusion until the late 1960s, when tokamaks began showing drastic improvements in plasma confinement capabilities. [1] This involved experimentally disproving a previously accepted scaling known as Bohm's law, which predicted that plasma confinement would degrade as plasma temperature increased. Measured plasma confinement times exceeded the Bohm prediction by an order of magnitude, demonstrating that magnetic confinement had significant potential. As a result, the tokamak was established as the standard design for experiments on magnetic plasma confinement. [6]

Fusion programs around the world began to develop, encouraged by the success of the tokamak design in various experiments. The International Atomic Energy Agency (IAEA) responded to a desire among members of the largest national fusion programs for a collective effort to develop fusion energy technology. After several IAEA meetings in 1987 in Vienna, a proposal for the International Thermonuclear Experimental Reactor (ITER) was developed by members of the four largest national fusion programs in the world. The conceptual design for the tokamak was completed between 1988 and 1990, and from 1992 to 2001 the engineering design was finalized. Today, the project has grown to include participation by the US, EU, India, China, Japan, South Korea, and Russia. ITER seeks to dwarf the recent NIF experiment and generate 10 times the power supplied to initiate the reaction. It is currently being constructed in southern France, with an estimated first plasma set for December 2025. [7,8]

The Goals and Scope of ITER

Rather than specifically aiming to produce electricity from fusion, ITER aims to achieve groundbreaking milestones that will demonstrate the feasibility of fusion power in the future. One of these goals is achieving fusion ignition. This refers to the point at which the heat of the plasma inside the tokamak is self-sustaining, meaning that the plasma's heating comes entirely from the fusion reactions themselves. [2] ITER will dwarf previous experiments in energy production, with 500 MW of output power from 50 MW of input. [2,8] ITER will also pioneer tritium breeding, which means producing tritium from the D-T fusion reaction itself. Because tritium is a radioactive isotope with a half-life of about 12 years, it is much scarcer than non-radioactive deuterium, which is found plentifully in water. Tritium can, however, be bred by reacting lithium with the neutrons emitted in the D-T fusion reaction, producing tritium along with helium-4. This is accomplished with a lithium blanket lining the inner surfaces of the toroid chamber, from which the tritium can afterward be extracted. [2,8]
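The breeding reaction on the lithium blanket is itself exothermic, which can be checked from the mass defect just as for D-T fusion. The sketch below does this for Li-6 + n → He-4 + T; the isotope masses are standard tabulated values supplied here as an assumption, as the text does not give them:

```python
# Energy check on the tritium-breeding reaction mentioned in the text:
#   Li-6 + n -> He-4 + T
# Standard atomic masses in amu (an assumption; not given in the text).
MEV_PER_AMU = 931.494
m_Li6, m_n, m_He4, m_T = 6.015123, 1.008665, 4.002602, 3.016049
dm = (m_Li6 + m_n) - (m_He4 + m_T)   # mass defect of the reaction
print(f"energy released: {dm * MEV_PER_AMU:.2f} MeV")  # ~4.78 MeV
```

A positive mass defect confirms the reaction releases energy, so the blanket both regenerates fuel and contributes heat.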

ITER does not, however, plan to begin with D-T fusion immediately once construction is complete. First, there will be a phase of burning only helium and hydrogen plasmas, in order to uncover any technical issues before full operations begin. This will also allow confirmation of plasma confinement capabilities in a non-nuclear environment. However, some issues require testing under conditions closer to those of a D-T plasma. For this, the next phase involves deuterium plasma. Deuterium plasma will also produce small amounts of tritium, so before the full D-T setup is complete, smaller-scale fusion power production may be demonstrated during this phase. This phase also allows for some testing under D-T-like conditions, such as radiation shielding performance. Finally, with the D-T plasma, the burn length and power output will be gradually increased through subsequent tests until the operational goal of over 400 seconds of burn time with 500 MW output and Q = 10 is reached. Continuous, steady-state operation (i.e., a constant current in the plasma that does not depend on induction) will also be tested, with a goal of Q > 5. [8]

The greatest challenge is likely to be plasma confinement for extended periods of time. Sustaining a high-temperature plasma is difficult on its own, as many factors can cause it to become unstable. Additionally, as explained earlier, induction can only heat the plasma so far, after which other means of driving current through the plasma are needed. Sustaining a well-confined plasma at the ITER tokamak's unprecedented scale may prove difficult. It is also imperative to build an inner surface resistant to damage from high-energy neutron collisions. The lithium blanket that will be installed later in ITER's D-T phase is likewise uncharted territory experimentally, and methods for extracting tritium from the blanket will be tested for the first time. Moreover, the 14 MeV neutron fluxes in ITER will be far higher than any fusion blanket has been tested under. [2]

Conclusion

Fusion reactors offer an opportunity to recreate the powerhouses of stars on Earth. Though net energy gain from fusion was only recently demonstrated, in the Livermore experiment of December 2022, ITER promises to significantly bolster the prospect of fusion energy production. Though it will not itself power the electricity grid, it will offer substantial insight into design considerations for DEMO, the first planned electricity-generating fusion power plant. [2]

© Aanand Joshi. The author warrants that the work is the author's own and that Stanford University provided no input other than typesetting and referencing guidelines. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author.

References

[1] J. Clark and G. MacKerron, "Great Expectations: A Review of Nuclear Fusion Research," Energy Policy 17, 49 (1989).

[2] C. L. Smith and S. Cowley, "The Path to Fusion Power," Philos. Trans. R. Soc. A 368, 1091 (2010).

[3] V. I. Kopeikin, L. A. Mikaelyan, and V. V. Sinev, "Reactor as a Source of Antineutrinos: Thermal Fission Energy," Phys. At. Nuclei 67, 1892 (2004).

[4] E. Cartlidge, "Physicists Plot Laser Fusion Path," Phys. World 36, 8 (2023).

[5] J. Chabolla, "International Thermonuclear Experimental Reactor," Physics 241, Stanford University, Winter 2017.

[6] V. P. Smirnov, "Tokamak Foundation in USSR/Russia 1950-1990," Nucl. Fusion 50, 014003 (2009).

[7] C. P. Barranca, "ITER Collaboration," Physics 241, Stanford University, Winter 2022.

[8] "Summary of the ITER Final Design Report," International Atomic Energy Agency, November 2001.