The Argument

The age of spying

Not your father's privacy wars

Kobe Yank-Jacobs
Mar 20, 2026
Americans have gotten used to a certain baseline level of surveillance. But what happens when that surveillance gets AI integration?

The recent showdown between the Department of Defense and Anthropic hinged in part on one of Anthropic’s conditions for contracting with the Pentagon: that the government would not use Claude for the domestic surveillance of U.S. citizens. According to reporting in The Atlantic, the Pentagon wanted to use Claude to analyze data that could include everything from Americans’ GPS-tracked movements to credit card transactions and Google search queries.

This rare high-profile blowup over digital privacy left me with two questions: How much do people actually care about surveillance? And how different could LLM-driven surveillance be from what we’re already living with?

On the first question: if you ask people directly, they’ll tell you they don’t like surveillance. A 2023 Pew Research Center poll found that 81% of Americans expressed concern about how companies used their data, and 71% said the same about the government. But interestingly, in the same poll, 61% of respondents were skeptical that anything they did about it would make much of a difference.

While most people dislike surveillance, they seem to accept it because they don’t see any alternative.

In a brilliant essay about the “Find my Friends” fad (where people permanently share their locations with friends and family), former OpenAI researcher Zoe Hitzig diagnosed the phenomenon as a “strange form of Stockholm syndrome for the surveillance age.”

After all, sharing your location with a friend is a small step once you’ve shared it with a corporation.

I think it’s time to reconsider our Stockholm-like resignation. Because, in answer to the second question, LLM-driven surveillance is going to be radically different from what we’re accustomed to.

AIs don’t just capture information; they take meaning from it

AI can, of course, capture and summarize incredible amounts of information. That alone makes existing mass data collection more useful to the government. What’s less well understood, however, is AI’s capacity to draw remarkably strong inferences from a small number of data points: There are already studies claiming to diagnose physical and mental health issues from the mere tone of your voice. From just a photo, the latest models can reverse-engineer where the photo was taken.

In her essay on mutual location sharing, Hitzig drew a useful distinction between surveillance and spying.

“Surveillance is about tracking actions — what you do, where you go, what you buy,” she wrote. “Spying, on the other hand, is about gleaning intent through a careful study of what you say, what you think, and what you feel.” Surveillance expands the surface area of what’s legible to the state. Spying is the act of successfully interpreting it.

Until now, the surface area of data collection has largely outpaced the ability to take meaning from it. AI is fundamentally different because it can do the analysis. It can actually spy.

In an essay published a few weeks before his showdown with the Trump administration, Dario Amodei, the CEO of Anthropic, argued that AI would make even broader data collection more desirable since, previously, “it would have been difficult to sort through this volume of information, but with AI it could all be transcribed, interpreted, and triangulated” to create a better picture of each citizen.

Amodei’s point holds even before the government takes up novel methods or broader collection. With AI, the existing post-9/11 dragnet surveillance system gets upgraded simply because the government can parse it better.

But, of course, AI’s power goes beyond mere summarizing.

Before chatbots ever existed, AI’s main talent was pattern recognition. The opening salvo in modern AI progress was a 2012 image recognition technology trained on over 1 million photos. By 2018, that use case had improved to the point that it could find “cardiovascular risk factors not previously thought to be present or quantifiable in retinal images.” That is, it found patterns that no human expert had found before: from an image of an eye alone, it could predict age, gender, smoking status, and adverse cardiac events.

That was 2018. What can it do now?

A recent economics working paper used publicly available LinkedIn and MBA program images to guess the personality traits of almost 100,000 MBA graduates from their headshots alone.
