8 Comments
Taymon A. Beal

I think Jerusalem here slightly misunderstands Weisenthal's position, or at least the interpretation of Weisenthal's position that I'm personally most worried about, which is not primarily about writing quality as a proxy for take correctness.

If we had LLMs steelmanning everyone's arguments to find the hardest-to-assail version of them, then yes, that's an asymmetric advantage for whoever's actually right, reduces the value of various forms of sophistry, and ultimately serves to improve the quality of debate for truthseeking purposes.

And maybe in another six months we'll have LLMs that can do that. But today's LLMs are decidedly mediocre at analytic rigor; good enough for some purposes, but not good enough to be of much help to an intelligent reader trying to figure out the right answers to complicated questions about politics and policy. (See, e.g., the final section of https://www.slowboring.com/p/ai-progress-is-giving-me-writers; apologies to the Slow Boring non-subscribers, as it's paywalled.)

What they're very good at is smooth-sounding prose that superficially appears to hold together. My worry is that this makes it harder to spot flaws in an argument; not impossible, but you have to read a lot more carefully and develop fairly specific skills for detecting rhetorical cover-ups. I've been trying to get better at this but so far I'm not that good at it, and consequently find judging LLM-assisted arguments (in the sense of being able to go like "that's wrong, here's specifically why") to be harder than the fully human-generated ones that I used to typically run across.

Granted, there've always been specialist humans who developed the skill of hiding flaws in their arguments behind this kind of rhetorical slipperiness, but they used to mostly only show up in high-profile contexts, because doing it well was a rare skill. Now everybody has a machine in their pocket that can do it on demand, and *that* seems to me very plausibly bad for readers' epistemic health.

Jerusalem Demsas

I respond to this argument in the piece: "But I think the strongest version of Weisenthal’s argument is that people will use AI to make their arguments sound stronger even as they remain vacuous drivel.

This is quite bad! But since everyone will have access to AI making their arguments sound cogent, the real differentiator for the moveable observers will become the arguments themselves."

I also think I'm way more optimistic on AI's analytical rigor than you are.

Matt Lashof-Sullivan

I think the real problem here is that the audience only has so much time and attention. So, it may not even be hard to tell the difference between good and bad arguments *after reading them*. But it's gotten a lot harder to tell the difference between stuff worth engaging with and slop *before* reading it. And at some level people need to do that because of limited time.

I actually think this problem is uniquely hard for opinion journalists to see, because engaging with all the arguments that are out there is an opinion journalist's job. The rest of us have other jobs to do for most of the day, so if bad arguments now require more reading and critical thinking to recognize as bad than they did before, that dramatically raises the time cost of finding the good ones.

Taymon A. Beal

I don't think I understand how "the real differentiator for the moveable observers will become the arguments themselves" is supposed to work in practice.

In a world where most arguments are rhetorically straightforward but of varying quality (which I think the real world used to be more like, though imperfectly), you might have the bad luck to witness an argument between a bad arguer for a correct position and a good arguer for a wrong position, and end up updating the wrong way. Ubiquitous LLMs do reduce that risk, I guess. But if you read a bunch of arguments, then on average you'll get a sense of which points on each side the other side has the hardest time rebutting, and so have a good chance of ending up being right.

If all the arguments are rhetorically smoothed out, then your impression after reading one is much more likely to be "idk, sounds right I guess", and even if you read multiple such arguments with contradictory conclusions, you'll have a much harder time finding the actual points of contradiction, because they'll just be sort of talking past each other. And so you don't learn anything about what you should ultimately believe. At least, this has been my experience reading LLM-generated analysis on topics I'm trying to figure out a correct opinion about.

Bob Eno

I'm very sympathetic to the faith in reason that Ms. Demsas exhibits here (and to the entire spirit of The Argument). But the degree to which her own arguments here hold up depends very much on the audience's commitment to (or interest in) reason. To the degree that these arguments occur online, or in some other debate context with a more or less neutral, unscreened public audience, the ultimate goal is not really to make the superior argument; it's to persuade those who may be persuadable. Reason often comes in second to rhetoric.

"ChatGPT, give me good arguments for X in the debate context Y, and as backup supply ad hominem snark or deflecting segues that will have positive persuasive force for the likely undecided audience members in such a debate context." I think real-life audiences are very little like disinterested debate judges who have set aside their priors and who follow rubrics governed by logical consistency.

Deflecting tactics may have more negative force on silent neutrals than I assume. But in my own experience observing and participating in social media with diverse audiences and "like" functions, I've found that deflecting moves and snark usually generate more likes than rigorous argument does. That isn't the same as measuring the reactions of persuadable readers, but it's certainly not encouraging. (Of course, I'm not sure AI would actually outperform people in using these tactics -- they aren't demanding.)

Aaron Bailey

It’s the excellent “chocolate or vanilla” scene from Thank You For Smoking!

https://youtu.be/xuaHRN7UhRo?si=Kir8DtCCKG_aNyBm

Ben

Having AI dress up arguments that are bad because they are fundamentally flawed doesn't strike me as particularly beneficial. Nor does an arms race in which views already predicated on sound premises must now be articulated to an even higher standard simply to compete with BS. I think vaccines are a representative (and predictable) example: the most compelling arguments the best chatbot in the world can marshal for not vaccinating children will always have adverse consequences in reality, however persuasively written. Meanwhile, those who *don't* want people to pointlessly die of preventable illnesses will have to either invest unnecessary effort in an already stupid debate or outsource their thinking to robots.

Austin L.

Hi Jerusalem (or other readers) -- when will the Monday podcasts start again?