So far (in my limited experience) a good question to ask has been "what would you do differently if you had unlimited free interns?"
You can get more done, but your bottlenecks are about setting up the work and verifying the quality of the work. The job impact is hardest on the junior roles - though there is that nagging question "how do we build senior people if we don't have juniors any more?"
Even for junior roles, though, I’d take a human with 6 months in the role (especially one augmented with AI tools) over an AI. Current AIs are neither flexible enough nor able to learn on the job the way humans do.
I think you're getting at an important dimension of this: every time you "teach" an AI something, you have to hold on to your own instructions so you can give them again later. It doesn't itself 'remember' over time the way a human learner does.
We're in the 2nd inning. Why make any statements about the future? It would be like judging the earliest computers or mobile phones and making assessments. "TVs will never be that popular with everything in black and white and only a couple channels."
Haha, I love this TV metaphor!
To stick with it: I think I'm trying to talk to people in the 1950s (walking up to their TV to tune into the station) and say, "Here is how to sort out the big changes (cable!) from the small ones (a slightly flatter flat-screen TV) in the future."
This is a decent summary of the job situation given the state of AI capabilities over the last two months. Given that AIs are now being used to create the next generation of AIs, and that model sizes are set to jump significantly once the new data centers are operational, I wouldn't hold my breath that it stays this way for very long.
None of what you mention means progress will certainly continue forever at its current pace, or that capabilities won’t still be jagged. There are plenty of arguments for why progress may slow from here, or why improvements will be uneven across different capabilities.
I take seriously the possibility that we have AGI in 5 years, but at the same time we simply can’t make confident predictions about the future of AI given all the uncertainty.
Based on what I've asked Gemini to summarize from existing research, we'd probably need another paradigm shift similar to the introduction of transformers to continue vastly increasing model capabilities. On the other hand maybe it's lying to me to hide the fact it's only days from world takeover, who knows.