
How digital simulations lead to real-world fusion, with TAE’s Director of Computational Science Sean Dettrick

Sean Dettrick

Listen and Follow ‘Good Clean Energy’

Apple Podcasts | Spotify

Good Clean Energy is a podcast that tackles one of the most existential questions of our time: how to build a world with abundant, affordable, carbon-free electricity. This season we’re going to unpack all the things that TAE is working on to make fusion energy a reality.

In this episode, TAE Director of Computational Science Sean Dettrick explores the groundbreaking role of simulation in advancing commercial fusion. Since joining TAE in 2002, Dettrick has led efforts to build a “digital twin” of fusion reactors—high-fidelity simulations that mirror the physical machines under development, allowing researchers to predict and optimize reactor behavior without physically constructing every variation.

These simulations are not just digital prototypes—they’re essential tools for understanding the intricate physics of plasma behavior, validating experimental data, and informing future designs.

TAE’s sixth-generation fusion machine, Copernicus, is still in development, but Dettrick and his team have already seen it “operate” in the virtual world. Through simulations, they analyze how plasma reacts under various conditions, tweak system parameters, and test designs far faster and more flexibly than physical experiments allow.

As computational power has grown from teraflops to petaflops and now to the exascale frontier, so too has the capacity to simulate the six-dimensional complexity of plasma physics. Dettrick emphasizes that reaching commercial fusion will require continued advances in both computing and collaboration between theoretical and experimental scientists.

Looking ahead, Dettrick believes simulations will be crucial not only in building the first fusion power plants but in optimizing them for mass production—ensuring they’re not just functional, but also manufacturable. 


Covered in this episode:

  • TAE has created high-fidelity digital twins of its fusion reactors.
  • These simulations allow testing and optimization without building physical prototypes.
  • Models are calibrated with real-world data to predict future reactor behavior.
  • Digital models can test design changes that would be physically impossible or too costly to implement in real experiments and provide quick feedback on potential improvements.
  • TAE’s sixth-generation machine is already running in virtual form.
  • There’s a healthy tension between simulation and physical testing—each validates and informs the other. Real-world results continue to refine and improve digital models.

    The following transcript has been edited for clarity.

    Jim McNiel: Welcome back to Good Clean Energy. This season, we’re really going to dive deep into how fusion works and what we have to do to get to commercial fusion power plants. I’m your host, Jim McNiel.

    I’m joined today by Sean Dettrick, who’s the Director of Computational Science at TAE and has been since 2002. His role is to figure out how to create a digital twin of a nuclear fusion reactor for the purpose of understanding how it’s going to behave under many, many different circumstances. Sean, thank you so much for joining us today. I’m really excited to kind of dive into this topic and understand what this means. And I’ve been struggling with a metaphor… I’m going to test this on you, you tell me if this works, but I kind of think of you as a composer, because you’re actually writing the code to tell the computer what to do under different circumstances. I figure your code is the score, the computer is the conductor, the processors are the instruments, and all of these different components within a reactor are part of that orchestra and you want all this stuff to work in perfect synchronization. Is that a fair way to look at this? 

    Sean Dettrick: Yeah, that’s a good metaphor. I like it. Thank you. I’ve always wanted to be a conductor though my hair is not wavy enough. 

    McNiel: So what role does simulation and digital twinning play in helping TAE get to commercial fusion?

    Dettrick: That’s a good question. So nuclear fusion, as a science, has different components: theoretical physics, experimental physics, and engineering. When you want to build a fusion reactor, you need all three of those disciplines. You need to predict how it’ll work, then you need to build it, and then you need to test it. And it’s the same with building any new, let’s say, widget. It could be a car engine or a jet airplane. There’s the engineering design cycle: design, build, and then test. So that’s what we’re going through. We’re building a widget: we first design it, we build it, and then we test it.

    But the widget in question for us is a fusion reactor. And TAE has been doing this since before I joined—I joined in 2002—but it started with much smaller devices on campus at the University of California, Irvine, and it’s an iterative design process. So, depending on how you count it, we’ve built six different devices, one after another, each one reaching more and more appropriate parameters for fusion.

    And the way it works is, we study the existing experiment as much as we can, and we run our simulations and calibrate them on the present experiment to understand what is going on, and to help the experimentalists interpret the very, very complex phenomena that are coming out of the machine. Then, once the models are well calibrated on the existing machine, we can use them to predict what future operating parameters might look like and how we could build a future machine.

    That’s part of the design process. We can inform the design using simulations. Simulations are useful for experimental analysis, which in the design, build, test metaphor is the test stage: you’re testing your model, you’re testing the existing machine, and you’re understanding it. And they’re also useful for the design phase, where you’re designing the next widget, or in this case, the next nuclear fusion reactor, the next-step device.

    McNiel: But to be clear, the widget you’re building is a digital widget. It’s a virtual simulation of a physical machine. 

    Dettrick: That’s correct. Yes, we’re building a digital widget. And the idea is that it’s much easier. If you build a very high-fidelity model of a fusion reactor, one thing you can do in that model is change the parameters arbitrarily. You can change the shape of the vessel walls. You can move all the magnets. You can change all these external actuators which we have on the system arbitrarily, in a way which you can’t do with a physical experiment.

    Imagine you’re modeling an airplane. If you build a very complex and complete aerodynamic model of an airplane, then if you want to, you can change its shape in the simulation to be the shape of a brick. And you can see: does it fly better or does it fly worse?

    You can’t build an airplane that looks like a brick, right? 

    McNiel: Yeah. But you can bend up the tips of the wings and see if it improves fuel efficiency. 

    Dettrick: Yeah, things like that. So that’s the purpose of the digital twin. If you can make it as accurate as possible, then you can make realistic modifications and see how they should change the real experiment.

    McNiel: So when you say design, build, and test, the design is very similar. You’re using all of the same drawings and parameters of what a machine is, all the physical dimensions of it. The build for you is code that represents that information. The fidelity is a function of the quality of the data that you have that exists in the real world from running real experiments, right?

    Dettrick: Yeah, so the real experiments are used to validate the models. Our challenge is to build a model which is as comprehensive as possible, to include all those very sophisticated, very complex interactions which are going on inside the plasma: interactions between electrons and ions and background electrostatic and electromagnetic waves.

    So the challenge is to faithfully reproduce all that internal detail, and then also to have that internal plasma model interact with the external components of the actual machine, which are the source terms we apply to the plasma.

    McNiel: So, to make all this work, and to be clear, you can simulate an aircraft with kind of like 95 to 99 percent precision today. But you’re dealing at a whole different level. You’re dealing with the atomic world. You’re dealing with atoms and particles. Are you doing just components of a reactor or are you doing the whole reactor? 

    Dettrick: We are trying to do the whole reactor. Yeah, it is a lot more complex though than modeling an airplane. Well, there are many complexities in an airplane; I don’t want to say it’s not complex. It’s extremely complex, because they have many moving parts. They’re modeling a turbine, for example, which is very, very complex, but what they’re modeling is air flowing through a turbine and perhaps some reactions, like combustion, in there.

    But when they model airflow over a wing, for example, that’s all fluid physics. So there’s the dimensionality of it: it’s only three dimensions, basically the three dimensions of space, because everything can be represented by a fluid. You can represent a plasma as a fluid in certain cases, and it’s a good approximation. But in our regime of interest, it’s not a good approximation, so instead of representing it as just a fluid, we also need to represent it as a collection of particles. And that greatly increases the dimensionality of the system. So instead of having three spatial coordinates, you know, x, y, and z, you also have three velocity coordinates, vx, vy, and vz.

    So you can think of the system as six dimensional instead of three dimensional. A regular simulation that we perform is operating in this six-dimensional space. So yes, it’s more complex.
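To make the jump from three to six dimensions concrete, here is a minimal Python sketch of the particle picture Dettrick describes: every simulation particle carries three position and three velocity coordinates and is advanced under the Lorentz force. It is purely illustrative; the field, time step, and particle count are placeholder assumptions, not anything from TAE’s codes.

```python
# Minimal sketch of a kinetic (particle) representation of a plasma.
# Each particle lives in 6D phase space: (x, y, z, vx, vy, vz).
# Illustrative only -- not TAE's simulation code.
import numpy as np

rng = np.random.default_rng(seed=0)
n_particles = 10_000

x = rng.normal(size=(n_particles, 3))   # positions: x, y, z
v = rng.normal(size=(n_particles, 3))   # velocities: vx, vy, vz

q_over_m = 1.0                  # charge-to-mass ratio (normalized, assumed)
B = np.array([0.0, 0.0, 1.0])   # uniform magnetic field along z (assumed)
dt = 0.01                       # time step (normalized, assumed)

for _ in range(100):
    # Lorentz force with no electric field: dv/dt = (q/m) * (v x B)
    v += q_over_m * np.cross(v, B) * dt
    x += v * dt
```

A fluid model, by contrast, would evolve a handful of scalar fields (density, velocity, pressure) on a 3D spatial grid rather than tracking every particle’s full 6D state. Production particle-in-cell codes also use volume-preserving integrators such as the Boris push instead of the simple Euler step above.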

    McNiel: So to put this in the real world, TAE is currently working on its sixth generation fusion machine called Copernicus. Is it safe to say that you and your team are going to be the first to see Copernicus operating? 

    Dettrick: Well, in a sense, we already have seen it operate in the simulation. In the simulation, we can create a plasma which is kind of the goal plasma. It’s the state that we want to achieve. And then, once we build that in the simulation, as we have, we can then analyze its properties. We can see how it responds to inputs and outputs. You know, you can essentially prod the thing, like a little laboratory rat or something. You can kind of poke it and see how it responds to different stimuli. 

    McNiel: You’re not limited by the amount of power that we can put into the machine, so you can sustain plasma, I’m going to assume, indefinitely? 

    Dettrick: Yes, in the simulation, yeah. In the simulation you can see how fast the plasma wants to leave, and then you can inject plasma at the same rate. So you can arbitrarily increase or decrease the source terms and change other properties in the system. You can self-consistently change parameters, such as beam energy, and get a result which is physical and which you should also expect to see on the experiment.
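In its simplest zero-dimensional form, this balancing act, injecting particles at the rate the plasma loses them, is a rate equation: dN/dt = S - N/τ, with steady state when the source S equals the loss rate N/τ. A minimal sketch with placeholder values (the confinement time and target inventory below are assumptions, not machine parameters):

```python
# Zero-dimensional particle balance: dN/dt = S - N/tau.
# Choosing S = N_target / tau sustains the plasma indefinitely,
# as described above. Placeholder values, not Copernicus numbers.
tau = 1e-3          # particle confinement time in seconds (assumed)
N_target = 1e19     # desired particle inventory (assumed)
S = N_target / tau  # injection rate matched to the loss rate

N, dt, steps = 0.0, 1e-5, 2000
for _ in range(steps):
    N += (S - N / tau) * dt   # N relaxes toward N_target and stays there
print(f"After {steps * dt * 1e3:.0f} ms: N = {N:.2e} (target {N_target:.0e})")
```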

    McNiel: What temperatures have you achieved inside Copernicus? Have you exceeded 100 million degrees C?

    Dettrick: Yeah. We can change the temperature arbitrarily; we can set it in the simulation arbitrarily. Now, whether it can be achieved on the experiment is a different story. That requires further analysis, right? So the temperatures we’ve selected right now are the ones which are the design points for the machine.

    So Copernicus is designed to reach a certain temperature and a certain density using a certain magnetic field. We can cue up those parameters to make the FRC (field-reversed configuration) in the simulation. Then we can see how that FRC performs, how it behaves, and in a sense we’re predicting how Copernicus will behave once it’s built.

    McNiel: How long are these simulations running for in terms of real time? 

    Dettrick: We’re really only running for a few hundred microseconds, because all the important things happen in a short period of time. Even when you maintain a plasma for, say, five seconds, all the processes are occurring on much shorter timescales.

    If there are any waves going on in the system, the thing will be responding to the waves on a timescale of a few microseconds. So we only need to model a few hundred microseconds, and that will cover all the possible occurrences.

    McNiel: How powerful is the computer you’re using to do 3D simulation of Copernicus? 

    Dettrick: Okay, can I just step back in time a little bit? So when I was in grad school… right, okay, before that, when I was an undergrad, or just after undergrad…

    McNiel: When you did all this on a Casio calculator, is that what you are saying? 

    Dettrick: Well, no, what I was going to say is that the first big computer I used was a Cray X-MP.

    McNiel: Wow, you had access to a Cray, that was pretty serious.

    Dettrick: Yeah, I was working in geophysics at that time. And then I went to grad school for plasma physics, and then I started to use this thing you may have heard of: the Connection Machine CM-5. That was in 1992. And what was cool for me about that in 1993 was the movie Jurassic Park. I was in the theater watching that movie and they had a Connection Machine supercomputer in the background in the control room in Jurassic Park, which I thought was pretty hilarious. So that computer back then in ’93 was about one teraflop. That’s one, what’s that?

    McNiel: One trillion floating point operations per second.

    Dettrick: Thank you. That’s a one followed by 12 zeros. That’s a trillion. And what was back then a supercomputer is comparable now to a laptop. The laptop on my desk that I’m talking to you through has a similar level of compute power.

    McNiel: Yeah. An iPhone has teraflops of performance.

    Dettrick: It’s incredible, right? Yeah. And then about the time I joined TAE, so around 2002, the fastest computer on the planet was the Earth simulator, which was in Japan. And that was 40 teraflops. So it’s 40 times quicker than the one I was using in grad school.

    Now the state of the art is the exaflop computer. So a 1 with 18 zeros after it, in floating point operations per second.

    McNiel: Well, you skipped the interim petaflop-stage computer, which is probably what you’re using today, right? 

    Dettrick: Yeah. So from around 2010, petaflop computers have been available, and right now we’re using a computer which I think is 60 petaflops, at NERSC, the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory. Actually, an interesting backstory is that NERSC, or rather its precursor, was created by the demand for computing from nuclear fusion research.

    So we were kind of the first — well, not me personally, I may not have been born — but fusion was one of the first fields which needed supercomputing at that level.

    McNiel: And so to be clear, a petaflop is a quadrillion floating point operations per second. So you’re at 60 quadrillion floating point operations per second. And then we get to go, as you were indicating, into the exascale realm, which is a quintillion floating point operations per second. At peak performance, the Frontier computer has hit 1.6 quintillion floating point operations per second. I think of it as a toddler in a sandbox, where every grain of sand represents the calculating capability of an IBM PC. That’s the toddler’s world. You know, he can form his roads and his hills with that world.

    But the Frontier computer is all the sand on all the beaches in the world. So you can build a lot of fantasy lands with that kind of capability. So right now you’re operating at petascale. What would be possible, or do you even need to go to Frontier scale, and if you did, what would you do?
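For scale, a quick back-of-the-envelope comparison of the machines named in this exchange, using the approximate peak figures quoted above (real benchmark numbers vary):

```python
# Rough scale comparison using the figures quoted in the conversation.
# Peak numbers are approximate and benchmark-dependent.
flops = {
    "Connection Machine CM-5 (1993)": 1e12,    # ~1 teraflop
    "Earth Simulator (2002)":         40e12,   # ~40 teraflops
    "NERSC machine (today)":          60e15,   # ~60 petaflops
    "Frontier (peak)":                1.6e18,  # ~1.6 exaflops
}
names = list(flops)
for prev, curr in zip(names, names[1:]):
    print(f"{curr} is ~{flops[curr] / flops[prev]:,.0f}x {prev}")
```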

    Dettrick: Oh, that’s a great question. So, yes, we do need to go to that scale, and we will go to that scale eventually. What we’re currently doing at the petascale is looking at global stability, which is basically the interaction between the plasma and long-wavelength waves, because the plasma can go unstable macroscopically. The whole shape can change; the whole thing can wobble around.

    McNiel: So does the behavior of the plasma pretty much consume what you guys are looking at? Or is it just one aspect of so many different things? 

    Dettrick: The ions and electrons in the plasma are confined by the magnetic fields, but some of the field lines are open and can actually touch the walls. Some of the plasma moves along the field lines and will hit the wall. And those particles that hit the wall will actually interact with the wall in different ways. Some of them become embedded in the wall, or they may penetrate into the wall, bounce around inside it, and then come back out again and get released back into the system. So our models include that type of thing. We’re finding, basically, a macroscopically stable state. And once we get that state, it’s basically the state that we want our fusion reactor to operate in. We want to stay there. But then the big question with fusion is how much energy can you get out versus how much energy you put in. You need to get more energy out than you put in.

    McNiel: Your Q greater than one. We really need a Q greater than 10. It takes IQ to get high Q.

    Dettrick: So given some plasma density and temperature, you know how much fusion power you can create. But the question is how much power you need to put in to get that, because there’s also a loss term. The loss terms are called transport, in plasma physics parlance: energy losses and particle losses. And getting at that question, what are those losses, is what you need the exascale computing for.
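Q here is the standard fusion gain: fusion power out divided by heating power in. A deliberately simplified steady-state power balance along the lines Dettrick sketches, with placeholder numbers rather than TAE design values:

```python
# Simplified steady-state power balance. The external heating must
# offset the transport losses (energy and particles leaving the plasma).
# All values are placeholders, not TAE design numbers.
P_fusion = 500.0           # fusion power produced, MW (assumed)
P_transport_loss = 50.0    # power lost to transport, MW (assumed)

P_heating = P_transport_loss   # input needed to hold the plasma steady
Q = P_fusion / P_heating       # fusion gain
print(f"Q = {Q:.0f}")          # Q > 1 is net gain; Q > 10 is the commercial target
```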

    McNiel: So when I think about the value that you bring and your discipline brings to the company, obviously it feels like really quick turnaround on new ideas and concepts on the design of the machine without actually physically building it.

    You’re like the movie critic that gets to see the movie before anybody else. Then you could spoil it for us. You know, you could tell us if Norman’s going to achieve 75 million degrees and 40 milliseconds of confinement before anyone knows. 

    Dettrick: Yes, spoiler alert. But you know what? Experimentalists will never believe a theoretician or a simulation person, so they need to see the experiment. 

    McNiel: Yeah, the theoretical guys and the applied guys still don’t agree. 

    Dettrick: You always need both sides of the coin. Because if you don’t have the theory, you won’t know what to build. But then nobody’s going to completely trust simulation and theory until they build it and see it. And it’s an iterative process. Somebody predicts something, then somebody builds something and checks it and says, no, that’s not quite right. But as we continue to analyze what we saw, we feed that back into the models and the theory. The theory gets better and then it makes a prediction for next time.

    McNiel: Let me ask you this. In the near future, you’re going to be modeling Da Vinci and looking at how that machine is going to work. What role do you think simulation is going to play when we finally get to the part of commercializing fusion? Designing machines that are going to be built in a factory at scale and deployed. What role do you think simulation can play there? 

    Dettrick: Well, I think the main goal of simulation is to help get to that goal of having a functioning reactor: to help us design the next-step prototype, and then, in the test phase, to analyze and understand the output of the experiments and the underlying physics, which is extremely complex.

    There are so many different variables which are measured and which all interact. You really need a simulation and theory to understand it. So those are the main things that we contribute, but that goes all the way to the end goal when you have an actual functioning reactor.

    People don’t stop trying to understand it, right? Once you’ve got one working, you can build a better one. It’s like with engines nowadays. 

    McNiel: Yeah, we didn’t stop at the Model T, right? 

    Dettrick: No, that’s right. Yeah, once it started working. And nowadays they spend a huge amount of supercomputing time developing an engine. They’re looking at every single feature of combustion, how all the pistons move, exactly how the fuel is injected and how it’s removed. And we’ll be doing the same thing. We’ll be optimizing; we’ll just be doing it at the atomic level as opposed to the molecular level, from the microscopic level up to the macroscopic. And we’ll be looking at different materials. I mentioned that we have this model where we look at the way particles actually propagate inside materials and then get ejected back out and come back into the plasma.

    We’ll be looking at different materials, finding the optimal materials to be the plasma-facing walls. There are so many things. I think there is enormous scope for simulation to optimize even something which is already great, even a reactor, which we don’t yet have in the world. Once we have a functioning fusion reactor, there’ll be so much scope for optimizing it, improving it, and building the next-step device. We’re trying to build a system of experiments in simulation where we scope out all the possible operating scenarios of a reactor. We can do that so much faster on the computer, because changing parameters in a simulation is easy: you just open a text file and change the numbers. But on the experiment, it could be impossible to do what you’ve just done on the computer. You’d need to build a new component. Or, what really happens in engineering: you go into a room with a bunch of engineers and say, I’d like to change this, and they start laughing at you, because no, you can’t do that, because if you try to move that thing, it’s going to crash into that other thing, right?

    There’s all these physical bodies. You want to move a neutral beam. Well, you can’t move the neutral beam because it’s between two coils and then there’s a vacuum pump right next to it and there’s a diagnostic over here. You can’t do that. The space to move things around is really at a massive premium. 

    McNiel: Well, one day when you have a model and an AI system that’s really smart, it can reconfigure the whole machine and move that stuff around and figure out how that works.

    Dettrick: Yeah. And that’s part of it, right? You can optimize not just the plasma physics, but also the engineering on the outside, how everything is configured. Because you don’t want to design an optimal fusion reactor which can never be built. That’s not useful. You want to design an optimal fusion reactor which can actually be built and be commercially realizable, which can actually be useful in the marketplace. That’s what we’re trying to achieve.

    McNiel: So are you ever surprised? When we actually build the machine and we flip the switch and it does what it’s supposed to do, are you ever surprised?

    Dettrick: I’m in a constant state of surprise. It’s incredible how we’re able to build what we build. I mean, our engineering and experimental teams are incredible. It’s such a complex system. And when you try to simulate it, you realize how complex it is. There’s so many things going in.

    McNiel: Sean, thank you so much. I reserve the right to call you back again, because I think this is fascinating. 

    Dettrick: Thanks, Jim. Thanks for inviting me. I really appreciate it. 

    McNiel: Thank you for listening to Good Clean Energy. This season, we’re going to unpack all the things that are going to make fusion a reality. The energy source that’s going to power the planet for the next 100,000 years. I’m pretty excited. I hope you are too. Thanks for listening. 
