Harvard’s Schneier Gives Lawmakers a Tour Through DOGE-Driven AI Risks
The Trump administration’s DOGE initiative to revamp federal systems and scale back the workforce has involved unvetted uses of artificial intelligence to comb through sensitive data and has exacerbated the government’s cybersecurity vulnerabilities, according to computer scientist, activist and Harvard Kennedy School lecturer Bruce Schneier.
“Data security breaches present significant dangers to everyone in the United States, from private citizens to corporations to government agencies to elected officials,” Schneier testified at a June 5 House Oversight hearing on the “Federal Government in the Age of Artificial Intelligence.”
“Over the past four months, DOGE’s approach to data access has massively exacerbated the risk. DOGE employees have accessed and exfiltrated data from a variety of government agencies in order to, in part, train AI systems,” Schneier said in his prepared testimony.
“Their actions have weakened security within the federal government by bypassing and disabling critical security measures, exporting sensitive data to environments with less security, and consolidating disparate data streams to create a massively attractive target for any adversary,” he said.
During the session, Democrats offered a motion to subpoena DOGE founder Elon Musk, but Republicans blocked the move. Oversight Republicans emphasized beneficial uses of AI in a June 6 release on takeaways from the hearing.
“[E]xpert witnesses stressed the urgent need for the federal government to adopt artificial intelligence responsibly to enhance efficiency, improve public services, and reduce costs for taxpayers,” according to the GOP release.
“In support of President Trump’s commitment to maintaining U.S. dominance in AI, members underscored the House Oversight Committee’s ongoing efforts to eliminate unnecessary obstacles and accelerate responsible AI innovation,” the Republicans said.
“Members concluded that the Biden Administration’s regulation-first approach to AI stifled technological progress and emphasized that the entire federal government needs the tools and authorities now to deploy AI effectively,” the GOP release said.
But Schneier, appearing as the witness for committee Democrats, said the DOGE experience has been rife with AI-related problems and offers a poor model.
“Since January, DOGE has transformed from an agency to a more integrated program across agencies as many DOGE personnel and affiliates have moved into official roles within the government,” he testified. “In this new capacity, DOGE affiliates (who are no longer constrained in their data access by court orders or inter-agency agreements) have become widely embedded across agencies including the Office of Personnel Management, the General Services Administration, Treasury, Health and Human Services, and many more.”
“At these organizations,” he said, “they are overseeing a transformation of data practices that follows a common DOGE approach with four distinguishing features:”
- Data consolidation: Exfiltrating and connecting the massive US databases to create a single pool of data that covers all people in the United States. This has long been a goal among some tech leaders: in fact, Oracle started as a CIA project, and aimed to create a database covering everyone in the US. Toward this end, DOGE affiliates are working to connect databases across many agencies, including highly sensitive data sets like IRS taxpayer returns, which have been kept separate to encourage trust and tax compliance among the public.
- Reduced security protocols: DOGE affiliates have consistently removed access controls and audit logs, created unmonitored copies of data, exposed highly sensitive data to cloud-hosted tools, sought maximally permissive data access waivers, and omitted previously required security protocols for vetting staff.
- AI training and processing: Processing this data with AI tools, which exposes data outside carefully monitored environments.
- Outsourcing: Transferring control over data access to private companies, especially Palantir.
Schneier said, “For example, at IRS, DOGE is attempting to create a single tool that would allow access to all data from IRS systems (consolidation). Public reporting indicates that Palantir employees were working on the projects without a signed contract that would stipulate security measures (reduced security protocols).”
He said, “The plan for the project involved using AI tools controlled by a private entity to manage access to all IRS data (outsourcing). The data consolidation, removal of controls, AI use, and outsourcing to private sector actors may seem to enhance efficiency, but it actually amplifies known dangers of data in the hands of our adversaries.”
He observed that the DOGE experience has enabled two “adversarial use cases,” coercion and preparing the battlefield, that create “unprecedented cybersecurity risks for the American people and government.”
Schneier emphasized, “AI amplifies the dangers of data consolidation and shoddy security practices.”
“Historically,” he said, “one of the major limits on using data is the difficulty of searching through large volumes of data. As AI capabilities and deployment increase, that limitation decreases.”
- Using government data sources to train AI creates a permanent, untraceable record of the data. AI tools can access and return their training data, and can also use it to comb for vulnerabilities in either the system or the data.
- This administration or a future administration could use AI tools combined with data access to create massive surveillance systems that target all Americans.
- AI tools are not ready to take over for humans. No responsible company in the world is turning over its corporate decision-making or customer interactions to AI agents at scale. Using AI agents for higher-level systems creates exponential risks. There have been numerous recent reports of AI systems resisting being shut down, sending secret emails on their own, and so on. Giving AI agents the opportunity to do this on government services puts national security and public well-being at risk.
Schneier argued, “DOGE’s approach has already done irreparable damage to American security. However, the situation can get even worse. We must stanch the flow of data so that at least what our adversaries have taken will soon be outdated.”
He said, “By following the DOGE approach, the current administration has increased both the likelihood and the potential scale of attacks against us and endangered our safety, both individually and collectively. A decisive shift in the administration’s approach to data security can begin to right the ship.”