How Deepfakes Could Lead to Doomsday
America’s Nuclear Warning Systems Aren’t Ready for AI
Erin D. Dumbacher
December 29, 2025
Nuclear missiles at a military parade in Beijing, September 2025
Tingshu Wang / Reuters
ERIN D. DUMBACHER is Stanton Nuclear Security Senior Fellow at the Council on Foreign Relations.
Since the dawn of the nuclear age, policymakers and strategists have tried to prevent a country from deploying nuclear weapons by mistake. But the potential for accidents remains as high as it was during the Cold War. In 1983, a Soviet early warning system erroneously indicated that a U.S. nuclear strike on the Soviet Union was underway; such a warning could have triggered a catastrophic Soviet counterattack. That fate was avoided only because the on-duty supervisor, Stanislav Petrov, determined that the alarm was false. Had he not, Soviet leadership would have had reason to fire the world’s most destructive weapons at the United States.
The rapid proliferation of artificial intelligence has exacerbated threats to nuclear stability. One fear is that a nuclear weapons state might delegate the decision to use nuclear weapons to machines. The United States, however, has introduced safeguards to ensure that humans continue to make the final call over whether to launch a strike. According to the 2022 National Defense Strategy, a human will remain “in the loop” for any decisions to use, or stop using, a nuclear weapon. And U.S. President Joe Biden and Chinese leader Xi Jinping agreed in twin statements that “there should be human control over the decision to use nuclear weapons.”
Yet AI poses another insidious risk to nuclear security. It makes it easier to create and spread deepfakes—convincingly altered videos, images, or audio that are used to generate false information about people or events. And these techniques are becoming ever more sophisticated. A few weeks after Russia’s 2022 invasion of Ukraine, a widely shared deepfake showed Ukrainian President Volodymyr Zelensky telling Ukrainians to set down their weapons; in 2023, a deepfake led people to falsely believe that Russian President Vladimir Putin interrupted state television to declare a full-scale mobilization. In a more extreme scenario, a deepfake could convince the leader of a nuclear weapons state that a first strike from an adversary was underway, or an AI-supported intelligence platform could raise false alarms of an adversary’s mobilization or even a dirty bomb attack.
The Trump administration wants to harness AI for national security. In July, it released an action plan calling for AI to be used “aggressively” across the Department of Defense. In December, the department unveiled GenAI.mil, a platform with AI tools for employees. But as the administration embeds AI in national security infrastructure, it will be crucial for policymakers and systems designers to be careful about the role machines play in the early phases of nuclear decision-making. Until engineers can prevent problems inherent to AI, such as hallucinations, in which large language models predict inaccurate patterns or facts, and spoofing, in which systems are fooled by deliberately falsified inputs, the U.S. government must ensure that humans continue to control nuclear early warning systems. Other nuclear weapons states should do the same.
CASCADING CRISES
Today, President Donald Trump encounters deepfakes on his phone; he sometimes reposts them on social media, as do many of his close advisers. As the lines between real and fake information blur, there is a growing possibility that such deepfakes could infect high-stakes national security decisions, including on nuclear weapons.
If misinformation can deceive the U.S. president for even a few minutes, it could spell disaster for the world. According to U.S. law, a president does not need to confer with anyone to order the use of nuclear weapons for either a retaliatory attack or a first strike. U.S. military officials stand at the ready to deploy the planes, submarines, and ground-based missiles that carry nuclear warheads. A U.S. intercontinental ballistic missile can reach its target within a half hour—and once such a missile is launched, no one can recall it.
Deepfakes could help create pretexts for war.
Both U.S. and Russian nuclear forces are prepared to “launch on warning,” meaning that they can be deployed as soon as enemy missiles are detected heading their way. That leaves just minutes for a leader to evaluate whether an adversary’s nuclear attack has begun. (Under current U.S. policy, the president has the option to delay a decision until after an adversary’s nuclear weapon strikes the United States.) If the U.S. early warning system detects a threat to the United States, U.S. officials will try to verify the attack using both classified and unclassified sources. They might look at satellite data for activity at known military facilities, monitor recent statements from foreign leaders, and check social media and foreign news sources for context and on-the-ground accounts. Military officers, civil servants, and political appointees must then decide which information to communicate up the chain and how to present it.
AI-driven misinformation could spur cascading crises. If AI systems are used to interpret early warning data, they could hallucinate an attack that isn’t real—putting U.S. officials in a similar position to the one Petrov was in four decades ago. Because the internal logic of AI systems is opaque, humans are often left in the dark as to why AI came to a particular conclusion. Research shows that people with an average level of familiarity with AI tend to defer to machine outputs rather than checking for bias or false positives, even when it comes to national security. Without extensive training, tools, and operating processes that account for AI’s weaknesses, advisers to White House decision-makers might default to assuming—or at least to entertaining—the possibility that AI-generated content is accurate.
Deepfakes that are transmitted on open-source media are nearly as dangerous. After watching a deepfake video, an American leader might, for example, misinterpret Russian missile tests as the beginning of offensive strikes or mistake Chinese live-fire exercises for an attack on U.S. allies. Deepfakes could help create pretexts for war, gin up public support for a conflict, or sow confusion.
A CRITICAL EYE
In July, the Trump administration released an AI action plan that called for aggressive deployment of AI tools across the Department of Defense, the world’s largest bureaucracy. AI has proved useful in making parts of the military more efficient. Machine learning makes it easier to schedule maintenance of navy destroyers. AI technology embedded in autonomous munitions, such as drones, can allow soldiers to stand back from the frontlines. And AI translation tools help intelligence officers parse data on foreign countries. AI could even be helpful in other standard intelligence collection tasks, such as spotting differences between pictures of bombers parked at airfields from one day to the next.
Implementing AI across military systems does not need to be all or nothing. There are areas that should be off-limits for AI, including nuclear early warning systems and command and control, in which the risks of hallucination and spoofing outweigh the benefits that AI-powered software could bring. The best AI systems are built on cross-checked and comprehensive datasets. Nuclear early warning systems lack both because there have not been any nuclear attacks since the ones on Hiroshima and Nagasaki. Any AI nuclear detection system would likely have to train on existing missile test and space tracking data plus synthetic data. Engineers would need to program defenses against hallucinations or inaccurate confidence assessments—significant technical hurdles.
It may be tempting to replace checks from highly trained staff with AI tools or to use AI to fuse various data sources to speed up analysis, but removing critical human eyes can lead to errors, bias, and misunderstandings. Just as the Department of Defense requires meaningful human control of autonomous drones, it should also require that each element of nuclear early warning and intelligence technology meet an even higher standard. AI data integration tools should not replace human operators who report on incoming ballistic missiles. Efforts to confirm early warning of a nuclear launch from satellite or radar data should remain only partially automated. And participants in critical national security conference calls should consider only verified and unaltered data.
In July 2025, the Department of Defense requested funds from Congress to add novel technologies to nuclear command, control, and communications. The U.S. government would be best served by limiting AI and automation integration to cybersecurity, business processes and analytics, and simple tasks, such as ensuring backup power turns on when needed.
A VINTAGE STRATEGY
Today, the danger of nuclear war is greater than it has been in decades. Russia has threatened to use nuclear weapons in Ukraine, China is rapidly expanding its arsenal, North Korea can now strike the United States with intercontinental ballistic missiles, and policies preventing proliferation are wavering. Against this backdrop, it is even more important to ensure that humans, not machines trained on poor or incomplete data, are judging the actions, intent, and aims of an adversary.
Intelligence agencies need to get better at tracking the provenance of AI-derived information and standardize how they relay to policymakers when data is augmented or synthetic. For example, when the National Geospatial-Intelligence Agency uses AI to generate intelligence, it adds a disclosure to the report if the content is machine-generated. Intelligence analysts, policymakers, and their staffs should be trained to bring additional skepticism and fact-checking to content that is not immediately verifiable, just as many businesses are now vigilant against cyber spear phishing. And intelligence agencies need the trust of policymakers, who might be more inclined to believe what their own eyes and devices tell them—true or false—than what an intelligence assessment renders.
Experts and technologists should keep working to find ways to label and slow fraudulent information, images, and videos flowing through social media, which can influence policymakers. But given the difficulty of policing open-source information, it is all the more important for classified information to be accurate.
AI can already deceive leaders into seeing an attack that isn’t there.
The Trump administration’s updates to U.S. nuclear posture in the National Defense Strategy ought to guard against the likely and unwieldy AI information risks to nuclear weapons by reaffirming that a machine will never make a nuclear launch decision without human control. As a first step, all nuclear weapons states should agree that only humans will make nuclear use decisions. Then they should improve channels for crisis communications. A hotline for dialogue exists between Washington and Moscow but not between Washington and Beijing.
U.S. nuclear policy and posture have changed little since the 1980s, when leaders worried the Soviet Union would attack out of the blue. Policymakers then could not have wrapped their heads around how much misinformation would be delivered to the personal devices of the people in charge of nuclear weapons today. Both the legislative and executive branches should reevaluate nuclear weapons posture policies built for the Cold War. Policymakers might, for example, require future presidents to confer with congressional leaders before launching a nuclear first strike or require a period of time for intelligence professionals to validate the information on which the decision is based. Because the United States has capable second-strike options, accuracy should take precedence over speed.
AI already has the potential to deceive key decision-makers and members of the nuclear chain of command into seeing an attack that isn’t there. In the past, only authentic dialogue and diplomacy averted misunderstandings among nuclear-armed states. Policies and practices should protect against the pernicious information risks that could ultimately lead to doomsday.