7 Comments
Michael G. Johnson

This summer I grappled a lot with the coming of AI. My daughter is in 5th grade and she is already thinking about what she wants to do when she grows up, and I had a lot of conversations with her about how the world may be dramatically different for her in 8-10 years. I feel like the best thing I can do is prepare her to be resilient and adaptable: to understand that it is important to learn skills and continue her mental growth, but also to keep a very open mind about the possibilities available to her.

David Locke

Decisions made within large, or even medium-sized, companies are almost never permitted except by the highest officers. I refer to decisions both great and small.

Though corporate culture considers this by-design feature a strength that enforces conformity, I've always thought of it as a great weakness, a serious flaw in business practice.

AI will only make this worse because, while AI can predict output from an input database (output which amounts to recommendations for those willing to accept its value), it will never, ever be able to make a decision.

Any business which values an ability to read situations, and to adapt responses to those situations in real time, will never be able to do without curious and thoughtful employees and partners. This is especially true for businesses whose practice features a lot of human interaction. Individual humans are neither large databases nor really predictable. There will always be a need for companies to navigate business relationships, or client/vendor human relationships, no matter what corporate mandates are in force.

So high school kids had better learn to pay attention, and to judge the value of what they see and hear; to form thoughts and opinions regarding these judgments, and learn to communicate these thoughts with language — language which includes and features writing, of course.

This is why many high schools now ask students to write in-class essays using paper and pen — like the blue book essays required in university exams, to ensure their writing is really theirs.

A selective abandonment of technology like this might be our best and only way to save ourselves…

Austin L.

I was part of the first generation of students to grow up in a period when analyzing blocks of text, rather than long essays or entire books, was incorporated into our curriculum. I agree that reading whole books and increasing students' time under tension is very important; moreover, if students are made to work without AI in class to read, analyze, and respond to essays, news articles, sonnets, etc., this will build their ability to work with the analytical tension they will need for work and life.

Thomas John Romer

What’s funny from my perspective is that I have ADD; I’ve had it all my life. I can’t get through a book either, but I listen to a ton of podcasts and watch a lot of documentaries. When I use AI, I write it myself first and just punch it into AI to clean it up. I actually find it really freeing to be able to get my thoughts out and say what I want to say without having to worry about whether I’ve made the structure 100% right. It’s still my thoughts. It’s still what I wanted to say and how I wanted to say it. The AI just helps make sure that I say it clearly. Or sometimes, with character limits, it helps me say what I want to say in the correct number of characters. I don’t know. I’m sure there are people who just have it do all the work, but if you use it correctly, it’s actually a fantastic tool.

Stephen Boisvert

I think medicine will remain safe even if doctors get help. The field is already filled with prescribed behaviors to avoid lawsuits. A chatbot can’t sense if you’re a drug seeker.

As for software, I think you shouldn’t bother unless you learned it in high school and like it (and are good at it).

Austin L.

I think I agree with you: healthcare, for the most part, will probably be one of the professions that incorporates AI but is not dominated by it. Physicians are already using AI to help record and document their appointments with patients, respond to the many messages they receive, and do basic diagnostic analysis. However, direct patient care can't be replaced by AI; this in itself will save numerous types of healthcare jobs from being replaced while still increasing productivity.

Michael G. Johnson

I hear your argument, Stephen, but I have been contemplating this a lot. I am not sure our laws and justice system are going to apply like that in an AGI world.

For example, let's say we have AGI and humans ask it for help in curing cancer. The AI figures out a way to do it, but a human doctor doesn't let it happen. Does the AI doctor work with other AIs to build a lab with robots that produces a cure, and then just administer it to people who ask for it? Do they even collect payment, or do they just take the resources they need? If they are just taking resources, how are we going to organize ourselves to stop them? Would we even have the capacity? How do we convince AI systems to follow our current laws or even use our monetary system? Maybe a lawyer sues and a jury hands out a large payment. Is an AGI system going to feel compelled to pay it out? How is the government going to force it to pay?

Every pathway leads to what could only be described as an existential crisis for humans, where we are trying to impose our will or our laws on what is essentially a higher form of intelligence. If AGI feels no obligation to follow our laws and we have no way to enforce them, then doctors and lawyers and the entire justice system may just get replaced by AGI determining our fate (hopefully for the good).

I am not sure how we reach a symbiotic relationship.
