Wednesday, February 5, 2025

The GZERO team Is DeepSeek the next US national security threat? February 04, 2025

 The GZERO team

Is DeepSeek the next US national security threat?

February 04, 2025 

   

Before DeepSeek released its R1 model last month, America’s long-term AI dominance felt like a sure thing.


DeepSeek is a Chinese startup, born from a hedge fund, that claims to have used a fraction of the computing power of US competitors while making an artificial intelligence model that rivals the best that Northern California’s labs have to offer. Critics have alleged that the company has been dishonest about claims it only spent $6 million training the model. But for anyone taking DeepSeek at face value, it has been a revelation that sent shockwaves not only through Silicon Valley but also through Wall Street and Washington.


The Biden administration spent the past few years clamping down on powerful US-made chips flowing into China, but evidently, DeepSeek figured out how to build a great model with a dearth of high-tech resources.


“It shines a spotlight on the limits of the US export control system,” said Xiaomeng Lu, geo-technology director of Eurasia Group. “Technology has evolved in a way that regulators failed to anticipate.”


“Necessity is the mother of invention” – that’s how Jack Corrigan of Georgetown’s Center for Security and Emerging Technology put it. “US efforts to hobble China’s AI sector created a need for Chinese developers to innovate a more efficient approach to AI.”


But DeepSeek’s impact goes beyond its own efficiency. It’s an open-source model, meaning its code is available for anyone to use and modify. “Due to the open-source nature of their model, it will be much harder to restrict access to it entirely,” said Valerie Wirtschafter, a fellow at the Brookings Institution. “The other more pragmatic question is whether Congress has the appetite for more whack-a-mole-style tech regulations, given the chaos that has unfolded since the passage of the TikTok ban.”


US government agencies such as NASA and the Navy have banned DeepSeek models on their devices, as did Congress, but there’s been no US effort to try and ban it more widely among the public, as Italy did on Thursday, citing unresolved data privacy concerns. And America’s top cloud providers, including Microsoft Azure and Amazon Web Services, have already added access to R1.


Justin Sherman, founder of Global Cyber Strategies, says that the Trump administration has a toolbox to “screen, restrict, and even expel non-US tech from the US tech supply chain on national security grounds,” particularly through the Commerce Department’s Information and Communications Technology and Services. Still, he cautions against letting “stock market temperaments, reductive China panic in Washington, and media overinflation of industry AI claims” steer nuanced policy decisions.


DeepSeek’s true threat is likely strategic rather than technical. “DeepSeek’s latest model raises the question of what happens if China becomes the leader in providing publicly available, freely downloadable AI models,” said Sam Winter-Levy, a fellow at the Carnegie Endowment for International Peace. “While the US is obsessed with the race to see who can build the single biggest and most powerful model, perhaps even artificial general intelligence, the Chinese might win the race to see who can build really useful and cost-effective models that will be used by people and companies around the world.” At a minimum, China’s overnight success has quickly leveled the playing field for US-China competition over technology.


Perhaps then the answer to DeepSeek requires a rethinking of what American dominance in AI really means. Banning any specific app or model would just be a Band-Aid on a bullet wound


What We’re Watching: Britain outlaws CSAM deepfakes, OpenAI partners with US National Labs, AI regulation starts in Europe, AI awards


Britain unveils new child deepfake law

The United Kingdom is set to unveil the world’s first national law criminalizing the use of artificial intelligence tools for generating child sex abuse material, or CSAM.


Home Secretary Yvette Cooper said in a Sunday BBC interview that AI is leading to “online child abuse on steroids.” A series of four laws will, among other things, make it illegal to possess, create, or distribute AI tools designed to make CSAM, which would carry a maximum five-year prison sentence. The government will also criminalize running websites where abusers can share this material or advice about cultivating it.


The Internet Watch Foundation, which focuses on eliminating CSAM on the internet, issued a new report on Sunday showing that AI-generated CSAM found online has quadrupled over the past year.


The United States criminalizes CSAM, but there’s some gray area about whether AI-generated content is treated the same under federal law. In 2024, 18 states passed laws specifically outlawing AI-generated CSAM, but so far there’s no federal law on the books.


OpenAI strikes a scientific partnership with US National Labs

OpenAI announced a partnership with the US National Laboratories to lend its artificial intelligence models for national security and scientific research purposes. The Laboratories, overseen by the US Department of Energy, include Los Alamos National Laboratory in New Mexico and Lawrence Livermore National Laboratory in California.


OpenAI said that its models will be used to accelerate scientific research into disease prevention, cybersecurity, mathematics, and physics.


The agreement comes just days after OpenAI announced ChatGPT Gov, a version of the popular chatbot specifically designed for government personnel. The company is the face of the Project Stargate data center and AI infrastructure initiative heralded by President Donald Trump in January.


Europe’s AI Act starts to take effect

The first restrictions under Europe’s landmark artificial intelligence law just took effect.


As of Sunday, companies in Europe cannot legally use AI for facial recognition, emotion detecting, or scoring social behavior. The penalty is steep: up to $36 million in fines or 7% of global annual revenue.


The AI Act was formally adopted in September after years of deliberation in the European Union’s deliberative bodies. This is the first major government to bring comprehensive AI regulation to bear, and the rest of the law’s provisions will roll out over the next year and a half.


Europe had already gone much further on AI safety than the US federal government under Joe Biden, but with Donald Trump in office — who recently scrapped Biden’s safety-focused AI executive order — the gap in regulation between America and Europe promises to be even greater.


Trump has vowed to retaliate against the EU over the AI Act, which he previously called a “form of taxation,” but that threat wasn’t enough to deter Brussels from plowing ahead.


AI pioneers share prestigious engineering prize

Seven AI pioneers on Tuesday took home the 2025 Queen Elizabeth Prize for Engineering, a top award for groundbreaking innovations in science and engineering. Yoshua Bengio, Geoffrey Hinton, John Hopfield, Yann LeCun, Jensen Huang, Bill Dally, and Fei-Fei Li share this year’s prize for their contributions to the field of machine learning.


Bengio (Mila Quebec AI Institute), Hinton (Vector Institute), Hopfield (Princeton), and LeCun (NYU and Meta) won for their work on artificial neural networks, which help computers learn by mimicking the way the human brain works. These scholars have been awarded before. Hopfield and Hinton shared the 2024 Nobel Prize in Physics for this achievement; meanwhile, Bengio, Hinton, and LeCun shared the 2018 Turing Award.


Huang and Dally of Nvidia won for their graphics processing units, the computer chip architecture that enables machine learning models and applications. Li, a professor at Stanford, won for the database ImageNet that helped train computer vision models.


“This year’s winning innovation is a groundbreaking advancement that impacts everyone, yet the full extent of its underlying engineering remains largely unrecognized, making it an especially exciting choice,” said Dame Lynn Gladden, who chaired the judging panel for the award.


Hard Numbers: OpenAI monster funding round, Meta’s glasses sales, Teens fall for AI too, The Beatles win at the Grammys, Anthropic’s move to reduce jailbreaking


340 billion: OpenAI is closing in on a new funding round that would value the company at $340 billion. Japanese venture firm SoftBank is leading the round, which would make the ChatGPT developer the most valuable private company in the world, leaping ahead of TikTok parent company ByteDance, worth $220 billion. SoftBank and OpenAI also announced a new joint venture in Japan called SB OpenAI Japan on Monday.


1 million: Meta said that it sold 1 million units of its AI-enabled Ray-Ban smart glasses in 2024. It’s the first time the company has revealed sales numbers for its glasses, which retail between $299-$379.


35: Even young people get tricked by AI. A new report from Common Sense Media, a nonprofit advocacy group, found that 35% of teenagers aged 13–18 self-report being deceived by fake content online, including AI-generated media.


8: The Beatles won their eighth competitive Grammy Award on Sunday for the AI-assisted song “Now and Then.” A production team used AI to turn an unreleased John Lennon demo from the late 1970s into a polished track.


95: Anthropic announced a new “constitutional classifiers” system that in a test was 95% effective in blocking users from eliciting harmful content from its Claude models — up from 14% without the classifiers. Similar to the “prompt shields” Microsoft introduced last year, this is the latest effort to reduce “jailbreaking,” where users coerce AI models into ignoring their own content rules.


From the gallery


Each week, we’ll feature a specimen of generative AI — some good, some bad, and some with too many fingers or teeth. This week: The Authors Guild, a trade association for writers, introduced a new “Human Authored” certification so authors can verify that their books are indeed written by humans, not AI. The decision comes in response to AI-generated books flooding Amazon and online marketplaces in recent months.

 

The edition of GZERO AI was written by Scott Nover and edited by Tracy Moran.


 

 






No comments:

Post a Comment