9 Comments
Michael G. Johnson

This summer I grappled a lot with the coming of AI. My daughter is in 5th grade, and she is already thinking about what she wants to do when she grows up, so I had a lot of conversations with her about how the world may be dramatically different for her in 8-10 years. I feel like the best thing I can do is prepare her to be resilient and adaptable: to understand that it is important to learn skills and continue her mental growth, but also to keep a very open mind to the possibilities available to her.

Thomas John Romer

What’s funny from my perspective is that I have ADD; I’ve had it all my life. I can’t get through a book either, but I listen to a ton of podcasts and watch a lot of documentaries. When I use AI, I write it myself first and just punch it into AI to clean it up. I actually find it really freeing to be able to get my thoughts out and say what I want to say without having to worry about whether I’ve made the structure 100% right. It’s still my thoughts. It’s still what I wanted to say and how I wanted to say it. The AI just helps make sure that I say it clearly. Or sometimes, with character limits, it helps me say what I want to say in the correct number of characters. I don’t know. I’m sure there are people that just have it do all the work, but if you use it correctly, it’s actually a fantastic tool.

ceolaf

A poorly made argument in favor of a hypothesis/conclusion I agree with.

Reading skills have NOT (yet?) deteriorated to the point that Thompson claims.

* I bemoan how infrequently high school students are assigned complete books (or plays). But that does not mean "America’s highest-performing students have essentially stopped reading anything longer than a paragraph." There's a big gap between a single paragraph and a complete book.

* Reading sonnets is hard. As a former high school English teacher who taught students how to read poetry—in my classroom, with a toolbox containing 14 tools to choose from to help unlock a poem's meaning—I saved Shakespeare's sonnets for LATE in the unit. The language is archaic, and made more awkward by the requirements of meter and rhyme (i.e., iambic pentameter AND ABABCDCDEFEFGG). They are dense with metaphor and meaning. They set up arguments and then tear them down. Students have trouble with them at Georgetown? Yeah, Prof. Shore and his colleagues need to TEACH students to read them. Reading that kind of poetry is not at all like reading prose, requiring FAR FAR FAR more focus and concentration. They have always been alienating to 20th-century readers. Worth the investment? Oh, I certainly think so. But they are *hard.*

* Yes, writing *is* thinking. Of course! But the question at hand is whether *editing* is also thinking, and whether anything is lost if the human's role shifts from planning and drafting entirely into editing. Yes, more focus on deep and substantive editing—not just proofreading!!—would be great. But I think we will lose quite a bit if the emerging shift is as great as it appears to be. Why didn't Thompson address this?

Perhaps he was merely the wrong person to make this argument? Perhaps he simply lacks an understanding of how these deep thinking skills are taught, the relationships among the different parts of the writing process, and how reading fits in with all of that. Perhaps he would have been better off simply musing on his own experience—like a memoir, perhaps—rather than pretending this topic of how we learn to use our minds is something he is sufficiently expert in to build his argument around.

And perhaps The Atlantic, the Financial Times, and NAEP scores are not the places to learn about the impact of LLMs on teaching and learning.

David Locke

Decisions made within large, or even medium-sized, companies are almost never permitted except by the highest officers. I refer to decisions both great and small.

Though corporate culture considers this by-design feature a strength that enforces conformity, I've always thought of it as a great weakness, a great flaw in business practice.

AI will only make this worse because, while AI can predict output from an input database — output which amounts to recommendations to those who are willing to accept its value — AI will never, ever be able to make a decision.

Any business which values an ability to read situations, and to adapt responses to those situations in real time, will never be able to do without curious and thoughtful employees and partners. This is especially true for businesses whose practice features a lot of human interaction. Individual humans are neither large databases nor really predictable. There will always be a need for companies to navigate business relationships, or client/vendor human relationships, no matter what corporate mandates are in force.

So high school kids had better learn to pay attention, and to judge the value of what they see and hear; to form thoughts and opinions regarding these judgments, and learn to communicate these thoughts with language — language which includes and features writing, of course.

This is why many high schools now ask students to write in-class essays using paper and pen — like the blue book essays required in university exams, to ensure their writing is really theirs.

A selective abandonment of technology like this might be our best and only way to save ourselves…

Austin L.

I was part of the first generation of students to grow up in a period when analyzing blocks of text, rather than long essays or entire books, was incorporated into our curriculum. I agree that reading whole books and increasing students' time under tension is very important; however, if students are made to work without AI in class, reading, analyzing, and responding to essays, news articles, sonnets, and the like, that will build their ability to work with the analytical tension they will need for work and life.

Stephen Boisvert

I think medicine will remain safe even if doctors get help. The field is already filled with prescribed behaviors to avoid lawsuits. A chatbot can’t sense if you’re a drug seeker.

As for software, I think don’t bother unless you learn it in high school and like it (and are good at it).

Austin L.

I think I agree with you: healthcare, for the most part, will probably be one of the professions that incorporates AI but is not dominated by it. Physicians are already using AI to help record and document their appointments with patients, respond to the many messages they receive, and do basic diagnostic analysis. However, direct patient care can't be replaced by AI; that in itself will save numerous types of healthcare jobs from being replaced while increasing productivity.

Michael G. Johnson

I hear your argument, Stephen, but I have been contemplating this a lot. I am not sure our laws and justice system are going to apply like that in an AGI world.

For example, let's say we have AGI and humans ask it for help in curing cancer. The AI figures out a way to do it, but a human doctor doesn't let it happen. Do AI doctors then work with other AIs to build a lab with robots that produces a cure, and just administer it to people who ask for it? Do they even collect payment, or do they just take the resources they need? If they are just taking resources, how are we going to organize ourselves to stop them? Would we even have the capacity? How do we convince AI systems to follow our current laws, or even to use our monetary system? Maybe a lawyer sues and a jury hands out a large judgment. Is an AGI system going to feel compelled to pay it? How is the government going to force it to pay?

Every pathway leads to what could only be described as an existential crisis for humans, where we try to impose our will and our laws on what is essentially a higher form of intelligence. If AGI feels no obligation to follow our laws, and we have no way to enforce them, then doctors and lawyers and the entire justice system may just get replaced by AGI determining our fate (hopefully for the good).

I am not sure how we reach a symbiotic relationship.

Stephen Boisvert

This is why I’m constantly complaining about OpenAI’s redefinition of “AGI” to mean a general-purpose AI with a chatbot interface (for marketing purposes). The GPUs are extensions of people and always will be. They can automate; they cannot decide.
