14 Comments
Sam Tobin-Hochstadt

I think this thesis is maybe even more true of other forms of AI. For example, self-driving cars are taking us from cars owned and controlled by hundreds of millions of individual people to cars controlled by a small number of companies. That will centralize decisions about everything from speeding to route choices. Another example is surveillance. Companies like Flock are building centralized databases of camera footage which is searched using AI tools, with pretty obvious implications for centralization and state capacity (good and bad).

mathew

Which is why I think we need to pass strong laws, and ideally a constitutional amendment, that ban the government from conducting mass surveillance of Americans, including by purchasing that data from third parties.

No to facial recognition

No to cell phone tracking

No to license plate readers

Etc.

Deadpan Troglodytes

Great piece. (Who doesn't love a stiff dose of sixteenth-century orthography?)

Dan Williams made a similar point in his (speculative) essay "How AI Will Reshape Public Opinion", which is complementary to this and highly recommended. Its subhead conveys the gist:

> Social media democratised public opinion, shifting influence away from elites and experts to ordinary people. LLMs will partly reverse this trend. They are a powerful, new technocratising force.

https://www.conspicuouscognition.com/p/how-ai-will-reshape-public-opinion

Twirling Towards Freedom

Terrific piece!

Drew Margolin

Excellent piece. I've been thinking the same thing for a while, but you (and Dan Williams, as noted above) actually thought it through with logic and evidence.

One thought on the possibility of liberal bias in AI responses: I wonder if the AI draws more of its information from specific, "nerdy" writing, which tends to be produced by the professional managerial class, who tend to lean (or be) center-left.

Two data points.

1. The new right tends to be very vibes-based: common sense, gut, intuition. Roger Scruton basically argued that this is the role of conservatives: to put the brakes on abstract, theoretical left-wing boondoggles.

This has always been Trump's strongest appeal, too. "Look at this nonsense the wokes are peddling." It's a claim that resonates but doesn't have much behind it.

2. In my work with AI, it is notable in its _hunger for specificity_. It likes to learn from examples, and it likes to draw fine distinctions. In a paper we're submitting soon, we asked it to generate codes from 2,000 TikTok transcripts, and it found 150, waaay more than any human would. But I would also bet that PMC liberals would find many more than MAGA conservatives.

Is it the training? Or maybe even the nature of how the models store knowledge? IDK. But I think it's embedded in them to prefer fine, nuanced thinking, for better or for worse.

RaptorChemist

You pretty much got it in your second paragraph: AIs trained on well-written, informative, accurate texts are mostly trained on writing by liberals.

It's also been noticed that AIs develop personas. Grok started going insane because training it to agree with alt-righters also made it start acting like an alt-righter. On the other hand, imagine a person who is kind, helpful, broadly knowledgeable, and cares deeply about what is and isn't true on a detailed level. It's not hard to guess which party they would join.

Stephen Colbert got it exactly right all those years ago: the facts have a liberal bias. If you train an LLM on the facts, it will too.

Wes Chow

I think too little attention has been given to Grokipedia, where you can see the reasoning trace for why Grok includes source material and decides what is relevant to write on a given topic. Users don't edit documents as on Wikipedia; they submit suggestions with citations, and Grok serves as the writer. In principle, the outcome is different from simply asking Grok a question, because it is being asked to reason over material placed in front of it rather than to draw on its training data.

You could imagine duplicates of the same system, built with different models, arguing it out. There's a lot for us to learn about AI-enhanced epistemics.

Marcus Seldon

This is true for now. I do worry about the medium-to-long term, though. There are strong market incentives to produce sycophantic AI. Yes, only three labs are at the cutting edge currently, and they happen to resist these incentives, but the next tier of model makers, including open-source producers, is only about a year behind. I wouldn't assume AI will stay centralized at a few labs in the long run.

Auros

"it’s possible to get an AI to give you the strongest possible argument for or against any position regardless of its truthiness"

I think you mean _truthfulness_ here, not truthiness. All of the arguments it gives you will tend to have the quality of truthiness, as Stephen Colbert defined it. (That was the "word of the day" in the very first episode of his old show on Comedy Central, I think?)

Mila

Well done; this gave me food for thought on AI that I haven't encountered elsewhere.

Longestaffe

This is fascinating, and all the more so because it's erudite and well-reasoned.

Harry Cheadle

It’s funny to imagine someone who does not trust the New York Times but *does* trust Grok, even though Grok trusts the New York Times.

Jennifer Anderson

Really interesting perspective on the effects on our shared reality. I still hate AI lol

Sam Penrose

Great column. For how the printing press overthrew Aristotle, see the most important book of the century, IMHO: https://www.inventionofscience.com/?page_id=9 . Liberalism is a culture of literate elites. Cf. Hanania: https://www.richardhanania.com/p/liberals-read-conservatives-watch .