US government ramps up mass surveillance with help of AI tech, data brokers – and your apps and devices

On a Saturday morning, you head to the hardware store. Your neighbors’ Ring cameras film your walk to the car. Your car’s sensors, cameras and microphones record your speed, how you drive, where you’re going, who’s with you, what you say, and biological metrics such as facial expression, weight and heart rate. Your car may also collect text messages and contacts from your connected smartphone.

Meanwhile, your phone continuously senses and records your communications, info about your health, what apps you’re using, and tracks your location via cell towers, GPS satellites and Wi-Fi and Bluetooth.

As you enter the store, its surveillance cameras identify your face and track your movements through the aisles. If you then use Apple or Google Pay to make your purchase, your phone tracks what you bought and how much you paid.

All this data quickly becomes commercially available, bought and sold by data brokers. Aggregated and analyzed by artificial intelligence, the data reveals detailed, sensitive information about you that can be used to predict and manipulate your behavior, including what you buy, feel, think and do.

Companies unilaterally collect data from most of your activities. This “surveillance capitalism” is often unrelated to the services device manufacturers, apps and stores are providing you. For example, Tinder is planning to use AI to scan your entire camera roll. And despite their promises, “opting out” doesn’t actually stop companies’ data collection.

While companies can manipulate you, they cannot put you in jail. But the U.S. government can, and it now purchases massive quantities of your information from commercial data brokers. The government is able to purchase Americans’ sensitive data because the information it buys is not subject to the same restrictions as information it collects directly.

The federal government is also ramping up its abilities to directly collect data through partnerships with private tech companies. These surveillance tech partnerships are becoming entrenched, domestically and abroad, as advances in AI take surveillance to unprecedented levels.

As a privacy, electronic surveillance and tech law attorney, author and legal educator, I have spent years researching, writing and advising about privacy and legal issues related to surveillance and data use. To understand the issues, it is critical to know how these technologies function, who collects what data about you, how that data can be used against you, and why the laws you might think are protecting your data do not apply or are ignored.

Big money for AI-driven tech and more data

Congressional funding is supercharging huge government investments in surveillance tech and data analytics driven by AI, which automates analysis of very large amounts of data. The massive 2025 tax-and-spending law netted the Department of Homeland Security an unprecedented US$165 billion in yearly funding. Immigration and Customs Enforcement, part of DHS, got about $86 billion.

Documents allegedly hacked from Homeland Security reveal a massive surveillance web that has all Americans in its scope.

DHS is expanding its AI surveillance capabilities with a surge in contracts to private companies. It is reportedly funding companies that provide more AI-automated surveillance in airports; adapters to convert agents’ phones into biometric scanners; and an AI platform that acquires all 911 call center data to build geospatial heat maps to predict incident trends. Predicting incident trends can be a form of predictive policing, which uses data to anticipate where, when and how crime may occur.

DHS has also spent millions on AI-driven software used to detect sentiment and emotion in users’ online posts. Have you been complaining about Immigration and Customs Enforcement policies online? If so, social media companies including Google, Reddit, Discord and Meta, the owner of Facebook and Instagram, may have sent identifying data, such as your name, email address, phone number and activity, to DHS in response to hundreds of DHS subpoenas served on the companies.

Meanwhile, the Trump administration’s national policy framework for artificial intelligence, released on March 20, 2026, urges Congress to use grants and tax incentives to fund “wider deployment of AI tools across American industry” and to allow industry and academia to use federal datasets to train AI.

Using federal datasets this way raises privacy law concerns because they contain a lifetime of sensitive details about you, including biographical, employment and tax information.

Blurring lines and little oversight

In foreign intelligence work, the funding, development and controlled use of certain AI-driven data-gathering tools make sense. The CIA’s new acquisition framework to turbocharge collaboration with the private sector may be legal with proper oversight. But the line between collaborating for lawful national security purposes versus unlawful domestic spying is becoming dangerously blurred or ignored.

For example, the Pentagon has declared a contractor, Anthropic, a national security risk because Anthropic insisted that its powerful agentic AI model, Claude, not be used for mass domestic surveillance of Americans or fully autonomous weapons.

On March 18, 2026, FBI Director Kash Patel confirmed to Congress that the FBI is buying Americans’ data from data brokers, including location histories, to track American citizens.

As the federal government accelerates the use of and investment in AI-driven spy tech, it is mandating less oversight of AI technology. In addition to the national AI policy framework, which discourages state regulation of AI, the president has issued executive orders to accelerate federal government adoption of AI systems, remove barriers posed by state AI regulations and require that the federal government not procure AI models that attempt to adjust for bias. But using advanced AI systems is risky, given reports of AI agents going rogue, exposing sensitive data and becoming a threat, even during routine tasks.

Your data

The surveillance capitalism system requires people to unwittingly participate in a manipulative cycle of group- and self-surveillance. Neighborhood doorbell cameras, Flock license plate readers and hyperlocal social media sites like Nextdoor create a crowdsourced record of all people’s movements in public spaces.

Sensors in phones and wearable devices, such as earbuds and rings, collect ever more sensitive details. These include health data such as your heart rate and heart rate variability, blood oxygen, sweat and stress levels, as well as behavioral patterns, neurological changes and even brain waves. Smartphones can be used to diagnose, assess and treat Parkinson’s disease. Earbuds could be used to monitor brain health.

This data is not protected under HIPAA, which prohibits health care providers and those working with them from disclosing your health information without your permission, because the law considers neither tech companies to be health care providers nor these wearables to be medical devices.

Legal protections

People have little choice when buying devices, using apps or opening accounts but to agree to lengthy terms that include consent for companies to collect and sell their personal data. This “consent” allows their data to end up in the largely unregulated commercial data market.

The government claims it can lawfully purchase this data from data brokers. But in buying your data in bulk on the commercial market, the government is circumventing the Constitution, Supreme Court decisions and federal laws designed to protect your privacy from unwarranted government overreach.

The Fourth Amendment prohibits unreasonable search and seizure by the government. Supreme Court cases require police to get a warrant to search a phone or use cellular or GPS location information to track someone. The Electronic Communications Privacy Act’s Wiretap Act prohibits unauthorized interception of wire, oral and electronic communications.

Despite some efforts, Congress has failed to enact legislation to protect data privacy, to regulate the use of sensitive data by AI systems or to restore the intent of the Electronic Communications Privacy Act. Courts have allowed the broad electronic privacy protections in the federal Wiretap Act to be eviscerated by companies claiming consent.

In my opinion, the way to begin to address these problems is to restore the Wiretap Act and related laws to their intended purposes of protecting Americans’ privacy in communications, and for Congress to follow through on its promises and efforts by passing legislation that secures Americans’ data privacy and protects them from AI harms.

This article is part of a series on data privacy that explores who collects your data, what and how they collect, who sells and buys your data, what they all do with it, and what you can do about it.

The Conversation

Anne Toomey McKenna serves on the Advisory Board to the Institute of Electrical and Electronics Engineers (IEEE)-USA’s Artificial Intelligence Policy Committee (AIPC) and chairs multiple AIPC subcommittees. The AIPC work involves subject matter and education-related interaction with U.S. Senate and House congressional staffers and the Congressional AI Caucus. McKenna has received funding from the National Security Agency for the development of legal educational materials about cyberlaw (a course the government still makes available online to the public) and funding from the National Police Foundation together with the U.S. Department of Justice-COPS division for legal analysis regarding the use of drones in domestic policing.