Discussion about this post

Bob Eno

I'm very sympathetic to the faith in reason that Ms. Demsas exhibits here (and to the entire spirit of The Argument). But the degree to which her own arguments will hold up depends very much on the audience's commitment to (or interest in) reason. To the degree that these arguments occur online, or in some other debate context involving a more or less neutral, unscreened public audience, the ultimate goal is not really to make the superior argument; it's to persuade those who may be persuadable. Reason often comes in second to rhetoric.

Imagine the prompt: "ChatGPT, give me good arguments for X in the debate context Y, and as backup supply ad hominem snark or deflecting segues that will have positive persuasive force for the likely undecided audience members in such a debate context." I think real-life audiences are very little like disinterested debate judges who have set aside their priors and who follow rubrics governed by logical consistency.

Deflecting tactics may have a more negative effect on silent neutrals than I assume. But in my own experience observing and participating in social media with diverse audiences and "like" functions, I've found that deflecting moves and snark usually generate more likes than rigorous argument does. That isn't the same as measuring the reactions of persuadable readers, but it's certainly not encouraging. (Of course, I'm not sure AI would actually outperform people at these tactics; they aren't demanding.)

Taymon A. Beal

I think Jerusalem here slightly misunderstands Wiesenthal's position, or at least the interpretation of Wiesenthal's position that I'm personally most worried about, which is not primarily about writing quality as a proxy for take correctness.

If we had LLMs steelmanning everyone's arguments to find the hardest-to-assail version of them, then yes, that's an asymmetric advantage for whoever's actually right, reduces the value of various forms of sophistry, and ultimately serves to improve the quality of debate for truthseeking purposes.

And maybe in another six months we'll have LLMs that can do that. But today's LLMs are decidedly mediocre at analytic rigor: good enough for some purposes, but not good enough to be of much help to an intelligent reader trying to figure out the right answers to complicated questions about politics and policy. (See, e.g., the final section of https://www.slowboring.com/p/ai-progress-is-giving-me-writers; apologies to Slow Boring non-subscribers, as it's paywalled.)

What they're very good at is producing smooth-sounding prose that superficially appears to hold together. My worry is that this makes it harder to spot flaws in an argument: not impossible, but you have to read a lot more carefully and develop fairly specific skills for detecting rhetorical cover-ups. I've been trying to get better at this, but so far I'm not very good at it, and consequently I find judging LLM-assisted arguments (in the sense of being able to say "that's wrong, and here's specifically why") harder than judging the fully human-generated ones I used to run across.

Granted, there've always been specialist humans who developed the skill of hiding flaws in their arguments behind this kind of rhetorical slipperiness, but they used to mostly only show up in high-profile contexts, because doing it well was a rare skill. Now everybody has a machine in their pocket that can do it on demand, and *that* seems to me very plausibly bad for readers' epistemic health.

