
“It’s just so complicated and difficult to rely on an AI system for something like this, and it runs a massive risk of violating people’s civil rights,” said David Evan Harris, an AI researcher who previously worked on Meta’s Responsible AI team. “I would go so far as to say that with the current AI systems that we have, it is simply a bad idea to use AI to do something like this.”
Musk has said he’s aiming to rapidly cut at least $1 trillion from the federal budget deficit. But in the process, his work with DOGE has caused uncertainty, chaos and frustration across the government, as he’s gutted entire departments and made confusing demands of federal employees.
Several recent media reports citing unnamed sources indicate Musk’s DOGE team is now using AI to help accelerate those cuts.
Experts say this approach reflects the same “cut first, fix later” thinking that Musk brought to his Twitter takeover two years ago, which resulted in thousands of workers losing their jobs, cost cuts that caused technical glitches and lawsuits, and controversial policies that alienated users and undermined the platform’s core ad business. But the consequences of dismantling government agencies, systems and services could be more widespread and severe than slimming down a tech company.
“It’s a bit different when you have a private company,” John Hatton, staff vice president of policy and programs at the National Active and Retired Federal Employees Association, told CNN. “You do that in the federal government, and people may die.”
The moves also come as Musk has tried to establish himself and his startup, xAI, as leaders in the AI industry. It’s not clear whether the company’s technology is being used by DOGE.
Representatives for Musk, DOGE and the US Office of Personnel Management did not respond to requests for comment.
Reports of AI to downsize the government
In early February, members of DOGE fed sensitive Department of Education data into AI software accessed through Microsoft’s cloud service to analyze the agency’s programs and spending, two unnamed people familiar with the group’s actions told the Washington Post.
DOGE staffers have also been developing a custom AI chatbot for the US General Services Administration called GSAi, Wired reported last month, citing two people familiar with the project. One of the unnamed sources said the tool could help “analyze huge swaths of contract and procurement data.”
After the Office of Personnel Management sent an email to federal workers on February 23 asking them to send five bullet points detailing what they “accomplished last week,” DOGE staffers considered using AI to analyze the responses, NBC News reported, citing unnamed sources familiar with the plans. The AI system would evaluate the responses and determine which positions were no longer needed, according to the report, which did not specify what AI tool would be used.
Musk said in an X post that AI would not be “needed” to review the responses and that the emails were “basically a check to see if the employee had a pulse.”
Wired also reported last month that DOGE operatives had edited Department of Defense-developed software known as AutoRIF, or Automated Reduction in Force, that could be used to automatically rank employees for cuts, citing unnamed sources.
Last week, 21 employees at the United States Digital Service (USDS) — the agency that has evolved into DOGE under the Trump administration — said they were resigning in protest. The group did not mention AI specifically, but said “we will not use our skills as technologists to compromise core government systems, jeopardize Americans’ sensitive data, or dismantle critical public services.” The group addressed its letter to White House chief of staff Susie Wiles and shared it online.
White House press secretary Karoline Leavitt responded to the resignations in a statement saying, “anyone who thinks protests, lawsuits, and lawfare will deter President Trump must have been sleeping under a rock for the past several years,” according to a report by the Associated Press.
In an X post, Musk called the USDS employees who resigned “Dem political holdovers who refused to return to the office.”
Part of the problem may be that building an effective and useful AI tool requires a deep understanding of the data being used to train it, which the newly installed DOGE team may lack, according to Amanda Renteria, chief executive of Code for America, a non-profit group that works with governments to build digital tools and increase their technical capabilities.
“You can’t just train (an AI tool) in a system that you don’t know very well,” Renteria told CNN, because the tool’s outputs may not make sense, or the technology could be missing information or context crucial to making the right decision. AI tools can also get things wrong or occasionally make things up – an issue known as “hallucination.” Someone unfamiliar with the data they’re asking the technology to analyze might not catch those mistakes.
“Because government systems are older, oftentimes, you can’t just deploy a new technology on it and expect to get the right results,” she said.

In their letter, the former USDS employees said they were interviewed by people wearing White House visitor badges who “demonstrated limited technical ability,” and accused DOGE of “mishandling sensitive data, and breaking critical systems.”
Among the employees working at DOGE are a handful of men in their early 20s and staffers brought over from Musk’s other companies, CNN and others have reported.
The White House has said Amy Gleason – who has a background in health care and worked at USDS during President Donald Trump’s first term – is the acting administrator of DOGE, although Leavitt has said Musk oversees the group’s efforts.
On Monday, Democracy Forward, a left-leaning non-profit policy research organization focused on the US executive branch, said it had submitted a series of Freedom of Information Act requests as part of an investigation into reported AI use by DOGE and the Trump administration. “The American people deserve to know what is going on – including if and how artificial intelligence is being used to reshape the departments and agencies people rely on daily,” Democracy Forward CEO Skye Perryman said in a statement.
AI concerns
Many of the concerns surrounding DOGE’s reported use of AI are similar to those regarding the technology’s use in other settings, including that the technology can replicate the biases that often exist among humans.
Some AI hiring tools have, for instance, been shown to favor White, male applicants over other candidates. Big tech companies have been accused of discrimination because of how their algorithms have delivered job or housing ads. AI-powered facial recognition technology used by police has led to wrongful arrests. And various AI-generated photo tools have taken heat for producing inaccurate or offensive depictions of different races.
If AI is now being used to determine what roles or projects to eliminate from the government, it could mean cutting crucial staffers or work simply because of what they look like or who they serve, Harris said, adding that women and people of color could be adversely affected.
Take, for example, the idea of using AI to evaluate email responses from federal government employees outlining their weekly accomplishments. Harris said responses from “really talented” federal workers whose first language is not English “may be interpreted by an AI system less favorably than the writing of someone for whom English is a native language.”
“Even if the AI system is not programmed to be biased, it might still favor the idiomatic expressions or the type of language used by certain groups of people over other groups of people,” he said.
While the nature of these concerns isn’t new, the potential fallout from using AI to determine mass government cuts could be more serious than in other settings.
Musk has acknowledged that DOGE may make mistakes and that it has already eliminated crucial efforts, such as Ebola prevention, which he said it would restore. It’s not clear whether or how AI was involved in that decision.
AI does offer efficiency-boosting benefits: it can rapidly parse and analyze huge amounts of information. But if not used carefully, it could also put sensitive government data or people’s personal information at risk, experts say. Without proper protections and limits on who can access the system, data fed to an AI program in one query could unexpectedly surface in responses to separate requests – potentially reaching people who shouldn’t have access to it.
Harris is particularly worried about DOGE’s handling of personnel records, which he described as being among the “most sensitive types of documents in any organization.”
“The idea that this group of people that has not had time to go through a lot of training about how to handle extremely sensitive documents, all of a sudden will not only have access to personnel records from a wide swath of public agencies, but then be able to use those (records) to make rapid firing decisions, is very concerning,” he said.
And Renteria said the consequences of lax data security by the government could be significant.
“If we, as a society, lose the idea that government’s going to take care of your data, at the very least, that really begins to break down people filing taxes, people going to access food assistance,” Renteria said.
But perhaps the most pressing concern, experts say, is the lack of transparency around DOGE’s reported use of AI. What AI tools are being used? How were they vetted? And are humans overseeing and auditing the results? CNN sent these questions to DOGE and did not receive a response.
Julia Stoyanovich, an associate professor of computer science and director of the Center for Responsible AI at New York University, said that for AI to be effective, users must be clear about their goals for the technology and adequately test whether the AI system is meeting those needs.
“I’d be really, really curious to hear the DOGE team articulate how they are measuring performance, how they’re measuring correctness of their outcomes,” she said.