I don't think the author of the article understands how AI works either; to get into Phase 2 and 3 that early, the tech would have to be very lucky.
I've worked on the hardware side at Fortune 500 companies for over a decade. I currently own two software companies that are nearly 20 years old. I built out some of the earliest neural networks that eventually led to LLMs. I built my first PC over 46 years ago, at the age of 6, with my father, who worked for IBM.
I understand the hardware side may lag, but I've also worked in these fields for a long time and understand the obstacles ahead.
I was at the forefront of AI technology. I don't profess to be an expert, but I do think I have a decent grasp of how it works.
If you have a specific question, please ask.
Thank you for your response, Jaxon! Glad to hear you're experienced in AI and hardware, though somehow that makes me even more worried. I have plenty of questions and would love to keep this discussion going. Please don't take my earlier reply personally; my frustration is aimed at the constant wave of AI-is-everything articles, not at you.
I've never worked for an S&P 500 firm or in an ML-first business. But, as they say, you don't need to be a baker to taste stale bread: I've spent countless hours using large language models (LLMs) on real-world use cases. Since the GPT-2 era, I've logged roughly 1,500–2,000 hours across coding, architecture planning, learning, book summarization, email drafting, and personal tasks like travel planning. At first I bought into the hype completely and it really blew me away.
My journey has gone from peak enthusiasm (around the GPT-3.5 release) to post-hype calm over the past three years; these days I sleep peacefully and don't fear job loss at all. My LLM usage has dropped from ~90% of my working time to ~5%.
Three questions for you:
1. Why do you believe a non‑deterministic tool like an LLM can replace Phase‑2/3 jobs when it still struggles with many Phase‑1 tasks? (I’ll detail Phase‑1 issues below.)
2. What makes the timeline so specific? Why 2027 rather than 2028 (or 2030)?
3. Where is the paradigm shift exactly? Transformer research debuted in 2017, and since then I've seen improvements, but no genuine paradigm break.
Ten reasons why the LLM boom smells like a bubble to me:
1. LLMs generate averaged content. Code, research, design: the output usually lands somewhere between “bad” and “good enough”, not good, not very good, and definitely not exceptional. Because that's what they are internally, right? Sophisticated averaging calculators running on faster hardware (see the toy sketch after this list).
2. In software engineering, coding was never the bottleneck. Writing a function takes minutes once you understand the codebase and requirements. LLMs address ~10% of the problem, not the 90% that involves deep context and process alignment, and they have extreme problems with large codebases and legacy systems (no, it's not just a context problem). And guess what, they struggle even with that 10%: one study suggests programmers may actually be better off without them (https://arxiv.org/abs/2507.09089), though the trial was small.
3. Foundation models fall short. The industry is pivoting to smaller, task‑specific models, which come with their own flaws.
4. The business case for LLMs is really weak. Despite billions invested, most AI companies remain unprofitable (see https://www.wheresyoured.at/wheres-the-money/). Only NVIDIA, with AMD slowly catching up, earns real returns by selling “shovels.”
5. Users still dislike chatbots and AI-driven support. Three years post-GPT-3.5, 77% of people find chatbots frustrating and 88% prefer a human (https://hbr.org/2025/05/fixing-chatbots-requires-psychology-not-technology). Isn't that supposed to be the main use case for LLMs?
6. Legal and research accuracy is shaky. LLMs omit critical details, misread tables, and can be easily tricked, which makes them unreliable for summaries (https://www.theguardian.com/technology/2025/jul/14/scientists-reportedly-hiding-ai-text-prompts-in-academic-papers-to-receive-positive-peer-reviews).
7. AI-generated graphics and content. You can tell an image is AI-generated at first sight; they just look bad, and a junior designer on Fiverr does better (though that may be subjective). After so many hours with these tools, I also spot AI-generated text in seconds: it feels flat, an average string of words with no heart in it. Even a casual reader can sense this in today's best models, which is why I don't see them replacing skilled copywriters anytime soon. Read something like https://maalvika.substack.com/p/being-too-ambitious-is-a-clever-form and you'll see that LLMs are still miles away from writing with that level of voice and heart, and probably never will be.
8. Non-deterministic by nature. Finance, banking, accounting: these simply can't tolerate the ~1–2% numerical errors LLMs introduce, and that isn't fixable with better hardware or more training (https://www.microsoft.com/en-us/research/publication/table-meets-llm-can-large-language-models-understand-structured-table-data-a-benchmark-and-empirical-study/).
9. “AI-powered” no-code tools disappoint. Projects like Lovable and v0 churn out generic templates: nothing we don't already have, and nothing that can't be done just as easily with no-code tools that have no AI at all.
10. Even the early adopters struggle. Salesforce embraced AI and laid off a lot of people, yet they don't seem happy with the results (https://www.theinformation.com/articles/ai-giving-salesforce-boost). Similar stories abound (https://medium.com/@ashume/it-layoffs-or-ai-smokescreen-what-2024-2025-industry-data-actually-reveals-a5fd0ffa969f). And remember Devin, the “software-engineer killer”? Its hype fizzled (https://www.reddit.com/r/ChatGPTCoding/comments/1jbsefc/what_happened_to_devin/).
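To make points 1 and 8 concrete, here is a minimal toy sketch of next-token generation (the vocabulary and logits are invented for illustration, not taken from any real model): greedy decoding always returns the single most probable, i.e. most “average”, continuation, while temperature sampling makes the same prompt give different answers on different runs.

```python
import numpy as np

# Toy next-token distribution for the prompt "The report was ..."
# (vocabulary and logits are made up purely for illustration).
vocab  = ["fine", "good", "adequate", "brilliant", "disastrous"]
logits = np.array([3.0, 2.8, 2.5, 0.5, 0.3])

def next_token(temperature: float, rng: np.random.Generator) -> str:
    if temperature == 0:
        # Greedy decoding: deterministic, always the most probable token,
        # which is also the safest / most "average" continuation.
        return vocab[int(np.argmax(logits))]
    # Temperature sampling: softmax over scaled logits, then a random draw,
    # so the same prompt can yield different outputs on different runs.
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return str(rng.choice(vocab, p=probs))

rng = np.random.default_rng()
print("greedy :", [next_token(0.0, rng) for _ in range(3)])  # identical every time
print("sampled:", [next_token(0.8, rng) for _ in range(3)])  # varies run to run
```

Either way you lose: greedy output gravitates to the blandest continuation, and sampled output stops being reproducible.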
Why deterministic accuracy still matters
Jobs like diagnostics, trading, or financial analysis can't tolerate even a 0.1% error rate, let alone 1–2%. Until LLMs overcome this, their role will stay narrow. Hardware is racing ahead, and I'm pretty sure it's the software that will lag, not the hardware. Even the model creators seem to be shifting focus toward UX/DX tweaks now (better APIs, chat features) rather than fundamental breakthroughs. Claude 3.5 often outperforms 3.7 (and even 4 in some cases) at coding, which suggests the creators are training these models without really understanding how to fix the common flaws.
The much-touted “agentic AI” is basically multiple averaging calculators wired together. We've moved from “prompt engineering” to “context engineering,” and tomorrow it will be some other buzzword, all to keep selling the LLM as the thing of the future and to frame every failure as your skills issue, not the LLM's!
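Here is what I mean by “calculators wired together”, as a hedged sketch of an agent loop. call_llm() and run_tool() are hypothetical placeholders, not any vendor's API; the point is only that there is no new mechanism underneath, just a probabilistic generator called in a loop with its own output fed back in.

```python
import random

# Hypothetical skeleton of an "agentic" loop. call_llm() and run_tool() are
# made-up stand-ins for a probabilistic text generator and a deterministic
# tool (search, shell, calculator); neither is a real API.

def call_llm(prompt: str) -> str:
    # Stand-in for sampling a completion: sometimes the model decides it is
    # done, sometimes it asks for another tool call - no guarantee either is right.
    return random.choice(["FINAL: here is my answer", "search for more context"])

def run_tool(action: str) -> str:
    return f"(tool output for: {action})"

def agent(task: str, max_steps: int = 10) -> str:
    context = f"Task: {task}\n"
    for _ in range(max_steps):
        # Each step is just another sampled completion over the growing context;
        # any mistake made here is carried forward into every later step.
        step = call_llm(context + "What should I do next?")
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        context += f"Action: {step}\nObservation: {run_tool(step)}\n"
    return "gave up"

print(agent("summarize last quarter's numbers"))
```

Chaining the calls doesn't remove the randomness; it just gives each random step more chances to derail the next one.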
Emerging headwinds
1. Legal and licensing barriers. Charging AI crawlers (https://blog.cloudflare.com/introducing-pay-per-crawl/) will raise training costs, and courts may limit access to copyrighted works. The days of easy, cheap training data are over, and as a reminder, these companies are not yet profitable :)
2. Goodhart's law in play (https://en.wikipedia.org/wiki/Goodhart%27s_law): models are being trained to outperform benchmarks while delivering less and less quality to users, as the Grok 4 reception suggests (https://www.reddit.com/r/singularity/comments/1lyzqzg/grok_4_disappointment_is_evidence_that_benchmarks/).
Just a few years ago, CEOs were telling us that blockchain, NFTs, and the Metaverse were the next big paradigm shift: everyone would wear an Oculus and meet “offline” without leaving home. https://spyglass.org/vision-pro-weak-sales/
Now those same voices claim that AI delivers “real value” where blockchain failed. But what value, exactly? For most people, LLMs have simply become a fancier way to search - and they still lean on Google to pull the right sources. One SEO study shows that the majority of LLM answers are still lifted from Google's top-10 results, because the models still need deterministic tools underneath, so how could this be the end of search?
To be clear, I'm not against AI or LLMs. They're impressive tools. They're just not as good as the hype suggests, and articles that say otherwise make me want to face-palm. Hitting the deadlines you mention would take a lot of luck, because we still need a breakthrough on the scale of electricity, the light bulb, or the telephone - something fundamentally new, not merely a larger LLM or interconnected LLMs (a.k.a. agents) asking themselves whether they are correct. That will never bring the error rate to 0%.
Sources: https://www.windowscentral.com/software-apps/meta-chief-ai-scientist-ai-is-not-replacing-people
And on why we need a bigger shift than today's approach: https://www.axios.com/2017/12/15/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524
Many industries can't tolerate the non-determinism and high error rates of today's models; some require error rates below 0.1%. Until that changes, mass job losses to LLMs seem very, very unlikely. Ironically, the best thing AI has done so far is steer more people back toward no-code tools, which reduce those very errors.
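Some quick back-of-the-envelope arithmetic on why those error rates matter (the accuracy figures below are illustrative assumptions, not measured benchmarks): if a workflow only succeeds when every single step is right, per-step reliability compounds.

```python
# Probability that an N-step workflow completes with zero errors,
# assuming independent steps (illustrative numbers, not benchmarks).
for per_step_accuracy in (0.99, 0.999):
    for steps in (10, 20, 50):
        p_ok = per_step_accuracy ** steps
        print(f"{per_step_accuracy:.3f} accuracy, {steps:2d} steps -> "
              f"{p_ok:.1%} chance of an error-free run")
```

Even at 99.9% per step, a 50-step process only comes out clean about 95% of the time; at 99% it drops to roughly 60%. That is exactly why deterministic tooling still wins wherever a single error is expensive.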
Wrapping up: after nearly 2,000 hours of hands-on work, I now use LLMs mainly for:
1. Search assistance.
2. Small code snippets / autocompletion (think Copilot-style).
3. Learning support while reading books (asking questions, condensing unfamiliar topics).
4. Scaffolding projects (though I'm still thinking of dropping this, since it fails a lot).
That's it. That's what I've found to be a good fit after all this time working with these tools. They're impressive, but nowhere near world-changing - at least not yet. When I see breathless predictions of mass job losses by 2027, I can't help but shake my head.
Meta is spending millions to recruit the world's best engineers because it still needs a true breakthrough - we're at a point similar to the Cold War and Los Alamos, except this time it's playing out inside private companies. If current LLMs were already sufficient, that hiring spree wouldn't be necessary. Hitting the timelines you cited will take both a major discovery and a fair amount of luck; the dates are not impossible, just highly optimistic.
While writing this comment I stumbled on a thread where people have already roasted this article: https://www.reddit.com/r/BetterOffline/comments/1lzskmw/the_ai_layoff_tsunami_is_coming_for_red_america/ PS: You forgot to mention you're a professional poker player? I love poker :)