From the beginning, the current AI boom has been characterized by one sentiment: it’s never been more over. For knowledge workers, there is palpable terror that intelligence too cheap to meter will shortly make every laptop jockey across the world unemployable as a matter of simple economics. Forget third-world sweatshops hollowing out the American manufacturing base; the true threat was always the one Ned Ludd warned us about: automation making humans obsolete. And unlike the weavers of the 19th century, there will be no ladder of prosperity into the service economy for the human redundancies to climb. This is it, the final obsolescence for any member of the proletariat selling keystrokes for their daily bread.
I agree when it comes to the kind of jobs you’re talking about. But I already see it taking over jobs in other domains. For example, AI is good at photo editing. REALLY good at it. With my iPhone 15 Pro and AI (and admittedly some hobby photography know-how), I can produce results that used to require a lot of equipment and training. If I’d had the money, I would have paid for Christmas mini shoots, for example. And now I won’t even consider it. Maybe it doesn’t rise to full pro quality. But it’s good enough that I would rather pay $0 than $300 and eat the quality difference. I don’t know what other jobs have this characteristic, but I expect the employment opportunities for your run-of-the-mill family photographers will shrink rapidly once more people figure this out. I think very gifted photographers will be fine. But your garden-variety photographers who produce more generic work will find a lot less demand for their skills. There will be no room in that field for your average kid who just wants to take pictures for a living. Which is a shame from a certain point of view, because I know a mediocre photographer who is extraordinarily dedicated to the job (though I also know a subset of AI theorists who don’t care whether mediocre creative people get to do something they love for pay).
What models and prompts have you used for editing?
The risk isn't that AI is going to take away jobs; it's that it will be so intrusive and stifling through its implementation in social media, banking, transportation, human resources, etc. that it will destroy all ability for human agency outside of scenarios it can handle. Gotta be safe, dontchaknow. It will become a straitjacket over all mankind.
Butlerian Jihad now.
I don't disagree, but I see this as a sliding scale we're already pretty far along. Software constraints on human judgment already exist in all the places you named: manual timesheets that could be fudged have been replaced by automated tracking in the trucking industry, paper resume sorting has been replaced by a series of inflexible form fields in HR, etc. If anything, there's hope that AI tools will add some much-needed wiggle room to existing software constraints. But even if they don't, I see their adoption as one more ratchet click on an existing trend, not a qualitative departure from business as usual.
lol I don't know if that's an argument for or against the point. ;)
But yes, as the saying goes, I fear the machines getting smarter than people less than I fear people making themselves dumber than the machines.
Like the stories of people who follow GPS to the point of driving into a lake.
https://www.monkeyuser.com/2023/deprecated/?ref=comic
I'm with you on the technical analysis side, but I think you overestimate the extent to which human judgement is even relevant for most jobs.
The way it looks to me, next-token prediction (NTP) is a cheat code that can simulate *any* process humans are already able to perform. That's the vast majority of manufacturing and office jobs. It breaks down the moment we move out of distribution - but so do most people who are trained to be cogs in the production machine.
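To make that concrete, here's a deliberately tiny sketch of "cloning a work pattern" via next-token prediction: a bigram model over logged workflow steps. It's a toy, not an LLM, and the step names are invented for illustration - but it shows both the mimicry and the out-of-distribution breakdown in miniature.

```python
# Toy next-token predictor over logged workflow steps (illustrative only;
# step names are made up, and a real LLM is vastly more capable).
from collections import Counter, defaultdict

logs = [
    ["open_ticket", "check_account", "send_template_reply", "close_ticket"],
    ["open_ticket", "check_account", "escalate", "close_ticket"],
    ["open_ticket", "check_account", "send_template_reply", "close_ticket"],
]

# Count transitions between consecutive steps.
transitions = defaultdict(Counter)
for seq in logs:
    for prev, nxt in zip(seq, seq[1:]):
        transitions[prev][nxt] += 1

def predict_next(step):
    """Return the most frequent next step observed after `step`, or None."""
    counts = transitions.get(step)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("check_account"))   # -> 'send_template_reply'
print(predict_next("server_on_fire"))  # -> None: out of distribution
```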
So you're still left with the question of what to do with all those people once you've unlocked the ability to clone their work patterns. Can they be retrained? Do they have something more to contribute to the economy now that they're freed up to learn to think creatively?
Idk, that seems too idealistic to me.
You make a good point about what happens to people when they get displaced by AI - and, down the road, by robots too. If even 20% of the workforce gets idled, we call that a depression. Who's going to buy the products that AI and robots produce?
Other robots.
I want to believe that you're right. But at the same time, we see evidence of Claude thinking: https://open.substack.com/pub/treeofwoe/p/more-than-just-autocomplete?utm_source=share&utm_medium=android&r=51bf6
I haven't tried asking it questions requiring inductive reasoning, though.
Tree of Woe's article is evocative and worth reading, but I think he's anthropomorphizing a set of statistical weights. It's very cool to see that virtual neural structures exist that correspond to concepts, and indeed you would expect this to be the case for neural nets to work at all. But it's a long leap from that to concluding that what they do is reasoning.
As I wrote here - https://ombreolivier.substack.com/p/ai-actively-incinerating-cash?r=7yrqz - one of the big things AI is doing is burning cash.
It better be because I’m retiring. 😂
I think this is an interesting read - I have a counterpoint that I’m curious what you’d think about.

Basically, I’m aware that current AIs seem to have limitations in their ability to truly reason. Another example of this is the kind of hallucinations you get when they try to solve simple ciphers.
But I’ve read Graber. Some people see the last century of technology not reducing employment and say ‘gee isn’t it cool how new technology always creates new opportunities’ and others see endless proliferation of bullshit jobs and needless busywork. We all could be working fewer hours but the system deeply incentivizes employment, even if people are just twiddling their thumbs.
On top of that, a HUGE amount of work isn’t creative, high-stakes decision making. It’s responding to email, it’s telling customers the same thing you always tell them, it’s slow copy-pasting and sloppy clicking around in Windows File Explorer (whoops, didn’t mean to click that).
One theory I have is that nothing will change, which I actually think is bad. I think technology should be used to liberate humans from the toil of menial tasks, and if we cling to the 9-to-5 for all time, that’s bad.
The alternative, though, is what I call the neutral theory of AI disruption. Basically, let’s say AI progress flatlines today: models don’t get better at reasoning or originality or accuracy at all. Would that mean no disruption to the economy?
I think not. I think 2025 is to AI what 1996 was to the internet. Early days. I think in the long run it will be possible to automate a lot more *low to mid skill cognitive tasks*. Coding started off as physically rewiring computers, then it was low-level languages, then object-oriented languages, and now natural language can direct computers. This opens up new, more flexible ways for people and processes to interface with computers. *Everything* is run by computers these days. AI will be able to do things like order processing, customer service, summarizing communications, writing emails, etc. I’m not even sure we know all the ways it can be leveraged. I suspect that in the next ~10 years we will see that organizations that fully integrate and leverage AI will outcompete the dinosaurs that insist on doing things the same old way. I think a lot more of the world will be run by computers talking to each other. I think a lot of rote bullshit tasks will be made obsolete, and that those tasks probably make up a shocking percentage of the total work done in this country.
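For the routine-communications slice of that, the plumbing is already mundane. Here's a minimal sketch of drafting (not sending) a customer-service reply, assuming the OpenAI Python client with an API key in the environment; the model name, prompts, and sample email are placeholders, not recommendations:

```python
# Hedged sketch: draft a routine customer-service reply with an LLM.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

def draft_reply(customer_email: str) -> str:
    """Draft a reply for human review; nothing is sent automatically."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You draft polite, concise customer-service replies."},
            {"role": "user", "content": customer_email},
        ],
    )
    return response.choices[0].message.content

print(draft_reply("Hi, my order #1234 still hasn't shipped. What's going on?"))
```

The point isn't that this is clever; it's that it's boring, and boring is what gets deployed at scale.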
Admittedly this is somewhat motivated reasoning. I think in the long run losing our jobs could be the best thing to ever happen to us as a species. But I also think I might be right. The stupid AI we have now really is a big deal.
Humans started out using our hands. Then we replaced hands with tools. Then we replaced muscle power with animal power. Then we replaced animal power with machine power. Then we replaced human operation of machines with automation. Now for the very first time in our history, we are replacing what makes us human, brain power, with AI power.
One curb on AI is the exponential amount of resources required to advance to the next level. A second is that many tasks require hardware, and the cost of automating them makes it unviable. This second objection can be overcome by a universal robot: a simple chassis with battery and motor, a choice of mobility options from wheels to tracks to legs, a combination of sensory devices to provide information on the external world, and an infinite selection of arms and tools. You need a hand that can open a tin of instant coffee and put two spoons in a mug? You go online, and ten minutes later a drone has delivered it. Couch potatoes won't even need to get up to go to the fridge for a beer. The fridge will come to you.
AI can have my job
“Fully AI-managed robot economy by 2028”, i.e. in 2.5 years, is something that only someone who has never left the “”””knowledge industries”””” could believe. Even if the technology to fully automate the management of every sector of the economy existed today (it does not), implementation would take many years, potentially decades.
Total human extinction due to AI by 2030 is literally a more realistic prediction.
The flaw in your argument is that you assume that pointy-haired managers care whether the 'AI' produces valid code or logically consistent text. That is beyond their comprehension or interest. All they care about is replacing a high-cost human with a bot plus a low-cost bot herder. The fact that this will produce garbage and destroy the companies they work for is irrelevant. So, yes, the 'AI' WILL take your job, not because it's competent, but because a human will decide so.
"It'll take the jobs, not the work."
This is only the second time I’ve read accurate statements about AI on Substack, so congratulations. I built modern-LLM-style AIs in the early ’90s for fun. Today’s models are old hat, just with much more power and memory than I had in 1993.
I think one thing you underestimate is the volume of work which is “auto-complete”. The vast majority of work in “the enterprise” is auto-complete: accounting, procurement, logistics, IT (no, not just coding), order management, planning, HR, support. 80% of human work in a typical manufacturing company today is “auto-complete”.
Coding is my favorite auto-complete; it’s in a dimension of its own, along with writing sales and marketing proposals and business plans.
I think of coding in three ways: new technology, recombined technology, and rework. Very little coding is new technology; the largest share is recombination and rework.
AI is at least 1000x faster than humans in all of these categories - rework and recombination, and entity recognition (accounting, writing contracts, scheduling orders, etc.; see the sketch below).
That’s what is going to be exposed.
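The entity-recognition piece, at least, is easy to demo with off-the-shelf tools. A sketch using spaCy's small English model (assumes `pip install spacy` and `python -m spacy download en_core_web_sm`; the invoice-style sentence is invented):

```python
# Sketch: off-the-shelf named-entity recognition on invoice-style text.
import spacy

nlp = spacy.load("en_core_web_sm")

text = ("Acme Corp ordered 500 units from Globex on March 3, 2025, "
        "for $12,400, to be delivered to Springfield.")

doc = nlp(text)
for ent in doc.ents:
    print(f"{ent.text:>15}  {ent.label_}")
# Organizations, dates, money, and places come back in milliseconds -
# the kind of extraction a clerk does by eye, thousands of times a day.
```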
Reason is higher-order pattern recognition. Today’s AIs can’t store reasoning systems; we don’t yet understand how to store reasoning patterns, but I think that capability is not far off.

What we lack is continuous model management, which is to say that an LLM holds a discrete snapshot of a model, not a continuous one as we do.

The power required to simulate that the way LLMs do is the limitation, since all they do is simulate a wet network.

It took many decades for stored-instruction architectures to focus on power (hello, cellphone). It’s going to take many decades to do the same for running AI models.
"I hope you're right. I really do."
“2030: humans wiped out by robots” - maybe not in a Skynet kind of way, but there is a non-zero chance of some idiot programmer coding “death to humans” into an AI and that AI then acting on it. Again, not in a “launch all the nukes” scenario, but by shutting down banks, power generators, etc. Given human greed, putting AI in control of infrastructure to “reduce overhead” isn’t that far-fetched.
I think the real risk is that people will forgo the low-level work that is required to become proficient enough to be the guy QC'ing the AI. Existing models can already replace large amounts of menial tasks done by entry-level people. They can also write better papers than most college students. The fact that AI is writing papers and doing tasks for these people anyway means that those students and interns will never surpass it. This is why teachers should require papers to be handwritten in rooms with no Wi-Fi, to ensure learning is actually happening, and why institutions will have to develop non-productive interns over longer periods of time. They will have to do this to avoid a collapse of expertise in 20 years, when all the pre-AI people start retiring.
"Hope you're not on that flight!"
This is it, in a nutshell. A tiny risk of catastrophic ruin over a sufficiently large number of events or a sufficiently long time frame is a problem: tiny is fine until ruin is final.
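To put rough numbers on it (illustrative only): if each deployment or flight or trading day carries an independent ruin probability p, the chance of surviving N of them is (1 - p)^N, and even p = 0.0001 stops being "tiny" once N gets large.

```python
# Illustrative arithmetic only: a tiny per-event ruin probability compounds.
p = 1e-4  # assumed ruin probability per event (made-up number)

for n in (1_000, 10_000, 100_000):
    ruin = 1 - (1 - p) ** n     # P(at least one ruin event in n trials)
    print(f"N = {n:>7,}: P(ruin) = {ruin:.1%}")

# N =   1,000: P(ruin) = 9.5%
# N =  10,000: P(ruin) = 63.2%
# N = 100,000: P(ruin) = 100.0%  (99.995%, rounded)
```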
This stuff must be giving Nassim Nicholas Taleb heart palpitations.
AI can generate code that used to require a well-paid guru to write. That reduces the skill required to be a programmer, which will lower salaries for programmers. Whether that means fewer total IT employees, time will tell.
Not exactly. AI can help you write code faster, but it still needs a person to review and tweak it, because LLMs hallucinate, and developers need to know enough to spot and fix those errors.
LLMs trying to use non-existent libraries and packages are a known security risk for AI coding tools.
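One cheap guardrail for that specific failure mode: before running (or pip-installing anything for) generated code, check that every imported top-level module actually resolves in the current environment. A minimal standard-library sketch; the generated snippet and its fake package name are invented:

```python
# Sketch: flag imports in LLM-generated code that don't resolve locally,
# so a hallucinated package name gets caught before anyone installs it.
import ast
import importlib.util

generated_code = """
import os
import requests
import totally_made_up_helpers  # the kind of name an LLM invents
"""

tree = ast.parse(generated_code)
for node in ast.walk(tree):
    if isinstance(node, ast.Import):
        names = [alias.name for alias in node.names]
    elif isinstance(node, ast.ImportFrom) and node.module:
        names = [node.module]
    else:
        continue
    for name in names:
        top = name.split(".")[0]
        ok = importlib.util.find_spec(top) is not None
        print(f"{top:<25} {'ok' if ok else 'NOT FOUND - verify by hand'}")
```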
What it does is decrease the need for entry-level developers and make it more difficult for them to get the experience they need to become proficient enough to be useful.