The Opportunities and Risks Inherent to Trump’s AI Action Plan

President Donald Trump delivers remarks on artificial intelligence at the "Winning the AI Race" Summit in Washington, D.C., on July 23.
Kent Nishimura/Reuters

President Donald Trump released his new Artificial Intelligence (AI) Action Plan to coincide with his “Winning the AI Race” summit in Washington, D.C., on Wednesday. The twenty-eight-page document contains more than ninety policy recommendations that the administration believes will expand the global sale of U.S. AI technology, speed up the construction of data centers, and reduce “red tape” that has proved an obstacle for the AI industry.

As Trump put it in his keynote address, the plan will help the United States “win at AI, while dismantling regulatory barriers.” 

But can the administration prompt the rapid scaling of AI inside and outside the government without new funding or more planning for risk assessments, bad actors, and other issues? Some experts also believe that other White House policies could derail some of Trump’s proposed AI efforts.

Following the summit and the president’s address, CFR convened seven of its experts to examine the action plan and detail the opportunities and risks they foresee. 

Trump’s Plan Could Open an AI Pandora’s Box

Sebastian Mallaby – Paul A. Volcker Senior Fellow for International Economics

Under President Biden, the U.S. government’s AI policy balanced three objectives: maintaining a lead over China in frontier model development; promoting rapid adoption of AI throughout the economy, thereby boosting growth; and mitigating risks, both from the abuse of AI by bad actors and from AI systems that are misaligned with human interests. Under President Trump, the first two objectives are being promoted at the expense of the third.

The clearest sign coming out of the Trump administration’s new executive orders is the stance on open-source and open-weight foundation models. These sorts of models can be freely downloaded and adapted by anybody, making them popular with academics, entrepreneurs, and anybody else who wants to build AI applications cheaply. The downside is that open models—while fueling the diffusion of AI through the economy—also increase risks. Bad actors might use them for malicious ends, from the mass production of deepfakes to the building of weapons.

The Biden administration ducked the challenge of clamping down on open models, hinting that a future administration might have to revisit the question. The Trump administration has evidently decided to stick with openness. It is accepting the extra risk in exchange for faster AI adoption and a stronger ability to compete with China, which also embraces open models.

An Important First Step That Raises Essential Questions

Jessica Brandt – Senior Fellow for Technology and National Security

The AI Action Plan rightly emphasizes the importance of enabling the private sector to actively defend AI innovations against security risks, especially from malicious cyber actors and insider threats. Investing billions of dollars in AI infrastructure and tightening export controls on advanced chips will do little good if Beijing or other competitors can simply steal the latest trained models, reaping the benefits of U.S. investments while evading restrictions. The plan offers few specifics on how the administration intends to advance that goal, but there are plenty of measures it could consider, including robust threat intelligence sharing mechanisms and incentives for secure development.

The plan also admirably highlights the importance of evaluating national security risks from frontier models, which is an essential step in alleviating them. But it remains unclear what will happen when an evaluation finds that a model has crossed a capability or risk threshold, suggesting it can cause real-world harms at scale. Under their safety frameworks, major frontier firms have committed to pausing deployment absent sufficient mitigations, but there’s no shared understanding of what those mitigations should entail or who decides what qualifies as sufficient. That problem could soon become urgent for the national security policy community, so it’s important to start grappling with it now.

Is This AI Action Plan Sustainable?

Michael C. Horowitz – Senior Fellow for Defense Technology and Innovation

The Trump AI Action Plan is a tale of two impulses within the administration. Large sections of the action plan—including the focus on open-source and open-weight models, many of the pilot projects, promoting AI literacy in the workforce, evaluating national security risks in frontier models, and investing in biosecurity—are positive developments consistent with an ongoing bipartisan approach to U.S. leadership in AI. However, there is also a risk that many of the requested actions will not lead to follow-through, due to a lack of specified funding or tension with other Trump administration priorities, such as cutting science and technology research infrastructure inside and outside of government.

For the Department of Defense, the most important recommendation is to create an AI and Autonomous Systems Virtual Proving Ground, which is essential for the kind of AI systems testing and evaluation necessary to build confidence and scale adoption. With that and the other recommendations for the Pentagon, it will be critical to see whether there is follow-through in resourcing and organizational prioritization.

The document also reflects ongoing tensions that the administration is still working out concerning American AI policy abroad. For example, the administration has a goal of combating Chinese influence in international organizations that shape AI governance. But this will require the administration to fund and support personnel to do that work within international institutions, such as the United Nations and the International Telecommunication Union, which is at odds with its broader posture toward those bodies.

Trump’s AI Plan Jumpstarts Progress but Sputters on Diplomacy

Kat Duffy – Senior Fellow for Digital and Cyberspace Policy

“Winning” any AI race requires fast, coordinated support for AI innovation, infrastructure, and international engagement. The AI Action Plan establishes important, constructive priorities by committing to increased domestic AI production capacity, supporting at least some systematic evaluation of frontier model risks, expanding Americans’ access to computing power by scaling the National Artificial Intelligence Research Resource, speeding up model testing through regional AI Centers of Excellence, and preparing American workers for the coming technological transition. 

It also proposes federal government incentives to improve and expedite America’s “full stack” AI offerings. This could prove a controversial proposal, but it will position American AI companies to compete more effectively against China’s highly subsidized products.

Unfortunately, the AI Action Plan’s international strategy is undermined by dogmatic rhetoric and competing impulses. Even longstanding allies and partners are framed as mere markets to be captured in the interests of global American supremacy. “Leverag[ing] the U.S. position in international diplomatic and standard-setting bodies” will be complicated by recent State Department firings that disproportionately targeted individuals and offices with technological expertise, as well as the United States’ decision to withdraw from leading multilateral institutions like the World Health Organization and UNESCO, both of which seek to shape AI’s evolution. Major markets like India and the European Union are striving to reduce their dependence on any external technology stack, be it Beijing’s or Washington’s, complicating the “Global Alliance” the plan seeks to build.

Ultimately, when it comes to global engagement, the plan sings a zero-sum tune. While such a framing might resonate domestically, it will be discordant abroad.

AI Could Deliver National Security Benefits—and Risk Major Harm

Erin D. Dumbacher – Stanton Nuclear Security Senior Fellow

It is true that AI has the potential to transform warfighting and back-office operations across the U.S. military. Where AI can make processes and information-sharing more effective, experts should test, train, and implement. In weapons, faster data processing and delegation of tasks to AI could build resilience. More uncrewed systems could diversify options for leaders in a crisis. 

But there are also areas where aggressive adoption of AI would not serve national defense or security interests. AI should be off-limits for strategic early warning, qualitative intelligence analysis, and any function where the “black box” challenges and fragility inherent to all AI cannot be overcome or could have catastrophic consequences. AI at the Department of Defense should make Americans safer, not less safe.

Finally, to get ahead of the ways AI could accelerate the development of weapons of mass destruction (chemical, biological, radiological, or nuclear), the administration will need to go beyond partnering with frontier AI firms to evaluate systems. Policies should start by restricting the most likely ways large language models (LLMs) could share dangerous or harmful information.

Trump’s AI Action Plan Is at Odds with Trump Policy

Rush Doshi – C.V. Starr Senior Fellow for Asia Studies and Director of the China Strategy Initiative

The AI Action Plan opens by declaring that “America is in a race to achieve global dominance in artificial intelligence (AI)” with China. It advocates denying adversaries access to U.S. compute, stronger export controls on semiconductor manufacturing equipment, and steps to ensure advanced chips do not end up in China—including location verification features. 

But the plan is in tension with policy. The Trump administration has reversed its ban on sales of a sophisticated American chip (Nvidia’s “H20”) to China, which will power China’s models. Supporters of the reversal say only continued sales will prevent China from replacing Nvidia chips with Huawei ones. Opponents say that risk is overstated, the Huawei chips are not good enough, and that the best way to ensure dependence on American technology is to export the whole AI stack (U.S. chips, U.S. cloud, U.S. data centers, etc.) and not pieces of it that adversaries can use to build their own stack. 

Regardless of where one falls in the debate, the question now is whether this decision is a sign of a more permissive policy or is instead a singular action. The new AI Action Plan, despite its twenty-eight pages and accompanying public comments by President Trump and Vice President JD Vance, does not yet resolve that question.

A Shot Across China’s Digital Bow

Jonathan E. Hillman – Senior Fellow for Geoeconomics

The AI Action Plan takes clear aim at China’s Digital Silk Road, Chinese leader Xi Jinping’s signature initiative for wiring the world and winning the future. One of the plan’s more promising international recommendations is the proposal to establish a program to help export the full U.S. AI technology stack—hardware, software, training, and most importantly, financing—to partners abroad. Despite security concerns, Chinese tech providers, especially Huawei, have continued to succeed in foreign markets due to their ability to provide that full package.

This part of Trump’s AI plan could be even stronger if it streamlines the interagency process and focuses more narrowly on emerging markets. The long list of U.S. agencies mentioned in the recommendation is likely to make an otherwise good idea slow and challenging to implement. Greater geographic focus would help as well. 

It’s emerging markets where Chinese tech companies have been making the deepest inroads, and it is smaller developing economies in particular where the Chinese option is sometimes the only one on the table. Expanding the availability of affordable U.S. technology in emerging markets would be a major win for the United States, commercially and strategically.
