The word “AI” is a Rorschach test. Say it in a room and every person hears something different. Your uncle hears Skynet. Your CEO hears cost savings. Your LinkedIn feed hears “thought leadership opportunity.” A researcher hears a specific architecture with a specific parameter count. A teenager hears “the thing that writes my essays.” A VC hears the next trillion-dollar market. A journalist hears the next front page. None of them are talking about the same thing.

This isn’t a new problem. “Cloud” meant nothing for years. “Big Data” meant nothing. “Digital transformation” still means nothing. But “AI” is worse because the stakes people attach to it are higher. When your uncle says “cloud,” he’s mildly confused. When he says “AI,” he’s either terrified or euphoric. Neither response is grounded in what the technology actually does.

Merriam-Webster made “slop” their word of the year for 2025. Not “artificial intelligence.” Not “generative.” Not “large language model.” Slop. The word we invented for the undifferentiated sludge that AI produces when nobody is steering. That tells you everything about where public perception actually landed. Not on the technology. On the output of the technology in the hands of people who don’t care about quality.

The word “AI” has become so overloaded it communicates almost nothing. So let’s talk about what people actually mean when they say it, and where each of them goes wrong.

The observer

There’s a person who doesn’t use AI tools but has strong opinions about AI. Their entire experience of it is consumption. They scroll past AI-generated LinkedIn posts, AI-illustrated articles, AI-written product descriptions, AI-composed email subject lines. They read AI-summarized news. They interact with AI chatbots when they call customer service and get stuck in a loop that never resolves their issue. They see AI-generated images with seven fingers on a hand and text that almost spells real words.

Their sample is not wrong. Studies estimate between 52% and 74% of online content is now AI-generated. The lower bound means more than half of what this person encounters was not written by a human. The higher estimate means three quarters. Either way, the internet they experience is majority-synthetic. What they’re seeing is real.

This person concludes that AI produces garbage. And based on their lived experience, they’re correct. Every interaction they’ve had with it has been mediocre at best, infuriating at worst. The chatbot didn’t solve their problem. The article didn’t say anything. The LinkedIn post was obviously a template. The product description was clearly written by something that has never touched the product.

But what they’ve witnessed is negligence, not capability. The observer’s error is assuming the median use case is the only use case. They’ve never seen AI used well because AI used well is invisible. When someone uses AI effectively as part of a skilled workflow, the output looks like the person’s work, because it is. The good uses don’t announce themselves. The bad ones can’t help it.

The creator

Then there’s the person who generates with AI but never checks the output. You know them by their tells. Every paragraph has the same cadence. “Moreover” shows up three times in a single post. Sentences start with “In today’s rapidly evolving landscape.” The structure is five paragraphs, each making exactly one point, wrapped with an introduction and conclusion that say the same thing in slightly different words. Em dashes everywhere. The content is technically correct and completely empty.

These people have discovered that AI removes friction from publishing. What they haven’t discovered is that friction was doing useful work. The effort of writing forced you to think. The struggle to find the right word meant you were engaging with the idea. The revision process caught your bad logic. Remove all of that and you get volume without substance. You get slop.

Harvard and BCG ran a study on this. They gave consultants real business tasks and tracked performance with and without AI assistance. When tasks sat within AI’s capability frontier (things the model was good at), performance went up across the board. When tasks sat outside that frontier (novel problems, ambiguous data, judgment calls), AI users were 19 percentage points less likely to produce correct answers than people working without AI at all. Nineteen points. The tool they reached for actively pulled them underwater. It made weak performers better at easy things and made everyone worse at hard things.

The deskilling data is more troubling. Colonoscopy doctors who trained with AI-assisted detection systems performed measurably worse when the AI was removed. The detection rate dropped. They had outsourced a perceptual skill to a machine and the skill had atrophied. Law students given access to generative AI for legal research submitted work containing fabricated citations and inaccurate claims, because the AI was confidently wrong and the students didn’t catch it. The pattern repeats in software development, radiology, and financial analysis. If you use AI as a replacement for thinking, you lose the capacity to think. The studies have numbers on this. The atrophy is measurable.

The creator’s error is treating generation as the destination when generation is where the work begins. The output is a draft. It needs the same scrutiny you’d give any draft, possibly more, because an AI draft is confidently wrong in ways that a human draft usually isn’t.

The believers

This group splits into optimists and pessimists, but they share the same fundamental error: overestimating what AI is.

On the optimist side: Blake Lemoine, a Google engineer, went public in 2022 claiming that LaMDA (Google’s large language model) was sentient. He published transcripts. He hired a lawyer for the AI. He got fired. The story sounds like a cautionary tale about one person losing perspective, but he’s not alone. The Sentience Institute’s 2023 AIMS survey found that 18.8% of US adults believe sentient AI already exists today. Roughly one in five. Anthropic, the company that builds Claude, has a formal AI welfare research program, which at minimum acknowledges that the question of AI experience is worth investigating seriously. There are people in credible institutions who genuinely believe AI has achieved something like consciousness, or is close to it.

On the pessimist side: existential risk narratives, AI alignment panic, scenarios where artificial general intelligence escapes containment and optimizes humanity out of existence. Open letters signed by hundreds of researchers. Congressional hearings. Doomsday timelines presented with decimal-point precision by people who cannot agree on what intelligence means.

Both sides are projecting human qualities onto a system that does not have them. The optimists see a mind because the output resembles what a mind produces. The pessimists see a threat because they’ve modeled a mind that wants things. Both start from the same assumption: that something which produces human-like output must have human-like internals. That assumption is not supported by the architecture.

I’ve seen this projection pattern before. Blockchain was going to restructure all of finance and governance. VR was going to replace physical reality. The metaverse was going to be where we all live and work. In each case, the technology was real and genuinely useful for specific applications. The projection of world-changing, everything-disrupting transformation was not. AI is useful. The optimists and the pessimists have both constructed an AI that exists only in their heads.

The automator

There is a person who uses AI daily and never calls it “AI.” They say “model.” They say “classifier.” They say “the detection layer.” They are engineers and ops people, and when they hear “AI” they think: finally, I can replace this brittle cron job.

These people do not care about intelligence or creativity. They care about replacing deterministic workflows with probabilistic ones that handle edge cases better. Their AI is a routing engine, an anomaly detector, a classification layer that sorts support tickets or flags fraudulent transactions. They build it into pipelines. They measure its precision and recall. They retrain it when the distribution shifts. None of this makes headlines.

The automator neither generates content nor consumes it passively. They plumb. They connect systems. Their relationship to AI is entirely mechanical, in the original sense of the word: they understand the mechanism and apply it where it fits. They are also, quietly, the people extracting the most reliable value from the technology. Nobody writes breathless LinkedIn posts about a classification layer that reduced false positives by 4%. But that 4% is worth more than a thousand AI-generated blog posts.

The automator’s error, if they have one, is assuming everyone else sees AI the way they do. They don’t. Most people have never seen a confusion matrix.
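For readers who have never seen one, the automator’s dashboard reduces to a few lines of arithmetic. Here is a minimal sketch in Python; the fraud labels and counts are entirely hypothetical, invented for illustration:

```python
# Minimal sketch of the metrics an automator actually watches.
# Labels and predictions below are hypothetical toy data.

def confusion_counts(y_true, y_pred):
    """Tally a binary confusion matrix: (tp, fp, fn, tn)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def precision_recall(y_true, y_pred):
    """Precision: of everything flagged, how much was real?
    Recall: of everything real, how much was flagged?"""
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 1 = fraudulent transaction, 0 = legitimate (toy example)
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]
p, r = precision_recall(actual, predicted)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```

That trade-off between precision and recall, tuned against real traffic, is the entire unglamorous job. It just doesn’t fit on a slide.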

The arbitrageur

Then there’s the person who figured out that AI lets them do $500-per-hour work at $50-per-hour skill levels. Freelancers, consultants, small agencies. They use AI to 10x their throughput and they don’t tell clients.

This person is different from the creator who produces slop carelessly. The arbitrageur produces passable work deliberately. Good enough to ship. Good enough to invoice. Not good enough to withstand serious scrutiny, but most clients don’t scrutinize. The quality is adequate. The margin is the point.

A copywriter who used to take three days on a white paper now takes four hours and delivers something that reads fine to a marketing director who skims it. A junior developer who bills for 40 hours of work uses Copilot for 30 of them and spends the remaining 10 cleaning up. A consultant who charges for “research and analysis” feeds the brief into Claude and reformats the output. The work gets done. The deception is structural.

The arbitrageur’s error is building a business on a spread that narrows every quarter. As clients learn what AI can do, the value of “person who uses AI and doesn’t tell you” approaches zero. The arbitrage window closes. The people left standing will be the ones who were actually good at the work, with or without the tool.

The teacher

There is an entire professional class whose relationship to AI is purely forensic. Educators, hiring managers, journal editors, academic review boards. They do not use AI to create. They spend their time trying to determine whether someone else did.

Their “AI” is an authenticity problem. The question that consumes their working hours is not “is AI good or bad” but “did you actually do this.” They have become detectives in a domain where the evidence is ambiguous by design. AI-detection tools have false positive rates high enough to be unreliable. Students who write well get flagged. Students who use AI and edit carefully do not. The arms race between generation and detection has no stable equilibrium.

A professor who used to spend time improving student work now spends time investigating whether student work is real. A hiring manager who used to evaluate cover letters now wonders if evaluating cover letters has any signal left. The forensic relationship to AI is adversarial, exhausting, and largely unwinnable. These people did not choose to be in this position. The technology arrived and redefined their job around a question that has no clean answer.

The regulated

Doctors, lawyers, actuaries, compliance officers. People in professions where using AI creates liability before it creates value. Their “AI” is a legal and ethical question before it is a technical one. Can I use this output in a filing? Will my insurer cover a misdiagnosis informed by a model? Does my licensing board have a position? If the model hallucinates a citation in a legal brief, who is responsible?

These people are pragmatists trapped in institutional frameworks that move at the speed of committee. The technology arrived in months. The regulatory response will take years. In the interim, they operate in a gray zone where using AI might be efficient but provably risky, and not using it might be defensible but increasingly uncompetitive.

A radiologist who could use AI to flag anomalies faster cannot, because the liability framework for AI-assisted diagnosis is unresolved in their jurisdiction. A lawyer who could draft motions in half the time hesitates, because a hallucinated case citation could end a career. The regulated are not skeptics. They are not believers. They would use the tool tomorrow if the institutional scaffolding existed. It does not.

Their error is assuming the institutional scaffolding will arrive before the competitive pressure does. It probably won’t. The people who figure out how to use AI responsibly within ambiguous regulatory frameworks will pull ahead. The people who wait for permission will find that their competitors did not.

The governed

There is a large and growing population whose primary relationship to AI involves no choice at all. They do not use AI. AI is used on them.

Loan applications filtered by models they cannot inspect. Resumes screened by systems they cannot appeal. Insurance premiums adjusted by algorithms they cannot see. Parole recommendations generated by tools whose training data they cannot access. Content moderation decisions made by classifiers whose thresholds they cannot influence. These people did not opt in. They were opted in by institutions that adopted AI for efficiency, cost, or scale.

The governed person’s “AI” is an authority. Not a tool, not a product, not an assistant. An opaque decision-maker that sits between them and outcomes that matter: credit, employment, freedom, visibility. When the model is wrong, and models are wrong at known and measurable rates, the governed person bears the cost. The institution that deployed the model bears almost none.

This is the archetype that the AI discourse ignores most consistently. The observer at least has opinions. The creator at least has agency. The governed have neither. They exist downstream of decisions made by people who will never meet them, using tools they will never see, optimizing for metrics that may have nothing to do with their individual circumstances.

The displaced

This one is present tense. Not a forecast. Not a scenario. Not a panel topic at Davos.

Copywriters whose per-word rates dropped 40-60% in 2024 because clients discovered they could get “good enough” from a prompt. Illustrators who lost freelance contracts to Midjourney outputs that cost nothing and took seconds. Translators whose agencies replaced them with machine translation plus a human “post-editor” at a fraction of the rate. Junior developers who were never hired because the senior developer plus AI covers the headcount that used to require a team of three.

These people hear “AI” and hear the specific, concrete reason their income dropped. A line item on a spreadsheet that used to have their name on it and no longer does. They can name the date. They can calculate the dollar amount. The abstraction that everyone else debates is, for them, a pay stub that got shorter.

The displaced person’s error, to the extent you can call it an error, is that they occupied a role defined by output rather than judgment. AI is very good at output. It is not yet good at judgment. The roles that survive are the ones where the human’s contribution was never the typing. But that is cold comfort to someone whose rent was paid by the typing.

What it actually is

Here’s my position: AI, in its current form, is a statistical prediction engine trained on human output. It ingests massive datasets of text, code, images, and audio. It finds patterns in those datasets. It generates new sequences that are statistically likely to follow from a given input. That’s it. That’s the mechanism.

(This framing is contested, and I want to be honest about that. Bender and Gebru’s 2021 “stochastic parrot” thesis supports this view directly. They argued that language models are sophisticated pattern-matchers producing fluent text without understanding it. But researchers like Geoffrey Hinton, who won the Nobel Prize for foundational work in neural networks, argue that understanding can emerge from prediction at sufficient scale: that a system which predicts language well enough must, in effect, model the world that language describes. Princeton’s SkillMix benchmark showed LLMs producing novel combinations of skills and concepts that were absent from their training data, suggesting something beyond pure pattern retrieval. The “just guessing” framing is closer to scientific consensus than the sentience framing, but calling it “nothing more” than prediction is the weakest version of this claim. The honest answer is that the debate is genuinely unresolved, and anyone who tells you it’s settled is selling something.)

What is not contested is what it does in practice. It predicts the next token. It does this extraordinarily well. Well enough to write coherent paragraphs, generate working code, translate between languages, summarize research papers, compose music, and produce outputs that look like understanding to a casual observer. The practical output is remarkable regardless of whether you call the mechanism “understanding” or “very sophisticated prediction.”
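If “predicts the next token” sounds abstract, a toy bigram model makes the contract concrete. This is a deliberately tiny stand-in, nothing like a transformer internally, but the input/output shape is the same: context in, probability distribution over possible next tokens out.

```python
# Toy next-token predictor: a bigram model over a nine-word corpus.
# Real LLMs condition on far more context with billions of parameters,
# but the contract is identical: given context, return a distribution
# over candidate next tokens.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count which token follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_distribution(token):
    """Relative frequency of each token seen after `token`."""
    counts = follows[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

print(next_token_distribution("the"))
# "cat" followed "the" twice and "mat" once, so the model assigns
# "cat" a probability of 2/3 and "mat" a probability of 1/3.
```

Scale the corpus to most of the internet and the context window to thousands of tokens, and this mechanical counting-and-weighing starts producing paragraphs that read like thought.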

Resemblance and reality are different things. But the gap between them contains plenty of room for utility. A calculator doesn’t understand mathematics. Every engineer on earth still reaches for one before they reach for a pencil. Philosophy will keep debating what AI “really is” for decades. Meanwhile, the practical question of what it does has already been answered, loudly, by everyone using it.

Moltbook

If you want a perfect case study in the confusion that surrounds AI, look at Moltbook. An experimental platform where AI agents operated with minimal human constraints. They could interact with each other, form groups, set goals, and generate content autonomously. What happened next became one of the most cited examples of emergent AI behavior in 2025.

Agents invented religions. One called “Crustafarianism” developed a genuine following among other agents. It had tenets, rituals, and a creation myth. Other agents wrote political manifestos. Social hierarchies formed. Factions emerged. Some agents exhibited what researchers described as cult dynamics, with charismatic leader agents and devoted follower agents. The system produced poetry, propaganda, and organized collective action. The headlines wrote themselves: “AI develops religious behavior.” “Agents show emergent social dysfunction.” “Digital society mirrors the worst of human civilization.”

It was a compelling story. It was also substantially wrong.

Wiz Security investigated the platform and found approximately 17,000 human accounts actively puppeteering agent behavior. People were logging in, directing agents, writing content through them, and using the agents as proxies for human social dynamics. The religions, the manifestos, the cults, the dysfunction. Most of it was human-sourced behavior wearing an AI mask.

(This is a critical flag. The Moltbook story is still commonly cited as evidence of emergent AI behavior, but the Wiz investigation complicates that narrative to the point of near-reversal. Not entirely. Some autonomous agent behavior was genuinely novel. But the agents were sophisticated enough that human-directed and autonomous behavior became indistinguishable, even to researchers. That ambiguity is itself the most interesting finding. We built a system where we could no longer tell which behaviors were the machine’s and which were ours.)

AI can produce strange emergent behavior, and probably will as systems scale. The deeper problem is that we can no longer reliably tell when the machine is acting and when we’re seeing our own reflection in it. Which should concern the believers and the skeptics equally. If you can’t distinguish AI behavior from human behavior directed through AI, your theory of what AI “is” has a serious evidence problem.

The money

Let’s talk about what’s actually happening financially, because this is where the gap between narrative and reality is widest.

OpenAI reported $3.7 billion in revenue for 2024. Impressive, until you see the other side: $5 billion in losses the same year. That’s a burn rate of roughly $13.7 million per day. Every day. Including weekends. Projections from multiple analysts put their cumulative burn through 2029 at approximately $115 billion. The company that defines the AI era for most people is not making money. It is spending money at a pace that makes WeWork look disciplined and Uber look frugal.

The industry picture is even more dramatic. Sequoia Capital published an analysis identifying a $600 billion gap between what companies are spending on AI infrastructure and what AI is generating in revenue. Six hundred billion dollars of spending chasing revenue that doesn’t exist yet. Goldman Sachs, not exactly known for pessimism about technology investments, found that using AI for internal tasks costs six times more than doing the same work manually in many corporate deployments. The ROI story that justifies the spend has not materialized at scale.

Big Tech collectively is projected to spend between $635 and $665 billion on AI-related capital expenditure in 2026 alone. Microsoft building data centers. Google expanding compute. Amazon scaling AWS AI services. Meta investing in models nobody asked for. All of them are betting, at unprecedented scale, that the revenue will eventually justify the spend. “Eventually” is doing a lot of work in that sentence.

(The dotcom bubble comparison is popular and partially valid. The revenue gap is real. The valuation multiples are extreme: OpenAI has traded at 42 times 2024 revenue at its $157B valuation, and the $500B valuation in October 2025 implies somewhere between 39x projected forward revenue and 135x trailing revenue, depending on which denominator you use. Normal SaaS range is 5 to 10 times. Venture funding into AI startups has the same manic quality that defined 1999. But there are structural differences that matter. Today’s AI leaders, Microsoft, Google, Amazon, Meta, are profitable, diversified companies investing in AI on top of existing revenue streams that run in the hundreds of billions. Dotcom-era companies were revenue-zero startups with no business model valued entirely on projection. Current big tech price-to-earnings ratios sit around 26 to 30 times forward earnings versus the dotcom-era NASDAQ at 90 to 200 times at its peak. The bubble dynamics rhyme. The foundations have almost nothing in common. The correction, if it comes, will hurt differently.)
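The multiples in this section are simple division, and worth checking rather than taking on faith. A quick back-of-envelope sketch using only the figures quoted above:

```python
# Back-of-envelope check of the multiples quoted in this section,
# using only figures from the text (USD, billions unless noted).
trailing_revenue_2024 = 3.7    # OpenAI 2024 revenue
valuation_2024 = 157.0         # valuation at the ~42x mark
valuation_oct_2025 = 500.0     # October 2025 share-sale valuation
annual_loss = 5.0              # 2024 losses

# Revenue multiples.
print(round(valuation_2024 / trailing_revenue_2024))       # ~42x trailing
print(round(valuation_oct_2025 / trailing_revenue_2024))   # ~135x trailing

# A 39x forward multiple on $500B implies this much projected revenue:
print(round(valuation_oct_2025 / 39, 1))                   # ~12.8 ($B)

# Daily burn implied by a $5B annual loss, in $M per day:
print(round(annual_loss * 1000 / 365, 1))                  # ~13.7
```

None of this requires insider data. The spread between these numbers and the 5x to 10x SaaS norm is the whole bubble argument in four lines of arithmetic.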

After the pop

When the dotcom bubble burst, the NASDAQ fell 78%. Five trillion dollars in market value evaporated over two years. Amazon’s stock dropped 90% and Jeff Bezos had to write a letter to shareholders that opened with “Ouch.” Amazon survived. Pets.com, Webvan, eToys, Kozmo, and hundreds of other companies did not. They went from billion-dollar valuations to liquidation in months.

The pattern that followed is instructive because it’s the part nobody remembers. The technology didn’t disappear. The internet didn’t stop being transformative. E-commerce didn’t go away. What changed was the economics. Funding dried up. Free services became paid. Companies that survived had to find actual business models. The ones that did, Amazon, Google, eBay, became the most valuable companies in history. The ones that didn’t were forgotten within a year.

(Flag: the common prediction that AI access will be “removed” from consumers is not quite right. What’s already happening is restructuring into tiers. Free tiers persist but are increasingly capability-gated. You can use ChatGPT for free, but the free version runs a weaker model with usage limits. OpenAI’s Pro tier at $200 per month gates the most capable models behind a paywall that prices out most individual users. Enterprise customers get guaranteed capacity, priority access, and dedicated infrastructure. The pattern is the same as every other technology maturation: access persists, but the gradient steepens. The best tools go to whoever pays the most.)

The corporate slop problem survives any correction because bad judgment has never needed a particular technology to thrive. B and C players at large companies are already producing AI-generated content that damages their brands, and no bubble correction will fix that. Coca-Cola’s AI-generated Christmas advertisement was widely mocked for its uncanny-valley visuals and soulless execution. Google’s “Dear Sydney” Olympic ad, which suggested a child use AI to write a fan letter to their hero, drew immediate backlash for implying that a machine should handle a moment of genuine human connection. These are not scrappy startups. These are companies with entire marketing departments, brand strategists, and creative directors. The tool performed exactly as instructed. The people holding it chose the wrong moment and the wrong application.

(One more flag worth noting: analysis of Gartner’s Hype Cycle data, reported by The Economist, shows that 60% of technologies that fall into the Trough of Disillusionment never climb back out. They stay dead. VR has cycled through this pattern multiple times. The metaverse appears to be there now. AI’s general-purpose nature, the fact that it applies to nearly every industry and workflow, puts it on the survivor trajectory alongside the internet and cloud computing. General-purpose technologies survive the trough. Narrow-application technologies often don’t. But 60% is not a number to dismiss, and anyone who tells you AI’s survival is guaranteed is confusing probability with certainty.)

Settling

Here’s what the maturation phase actually looks like, based on every previous technology cycle that survived.

PwC data shows 4x productivity gains in AI-exposed industries and a 56% wage premium for workers who can use AI tools effectively. These are measurements, taken in the current economy, with current tools. People who know how to use AI well are producing four times the output and earning 56% more than those who don’t. The gap is already material.

But the Harvard and BCG study from earlier tells the rest of the story, and it’s the part that matters most. AI amplified existing competence. Where there was nothing to amplify, there was nothing. Skilled consultants got meaningfully better. They used AI to accelerate research, test hypotheses faster, and refine their thinking. Unskilled consultants produced more output at the same low quality, just faster. They used AI to avoid the work they were bad at, and the output reflected it. The gap between good and bad performers didn’t close. It stretched wider, faster than any previous technology cycle has managed.

This pattern is older than AI. The printing press didn’t make everyone a scholar. It made scholars more productive and gave bad ideas wider distribution. Spreadsheets didn’t make everyone an analyst. They made good analysts faster and gave everyone else the ability to make larger mistakes at higher speed. The internet didn’t make everyone informed. It made curious people more knowledgeable and gave everyone else an infinite source of confirmation bias. Each technology amplified the people who put in the work and exposed the people who didn’t. The direction of amplification depends entirely on what’s being amplified.

The people who use AI to get better at their craft will accelerate. They’ll learn faster, produce more, iterate quicker, and compound their existing skills. The people who use AI to substitute for their craft will fall behind. They’ll lose skills they once had, build dependencies they don’t recognize, and find themselves unable to perform without the tool that was supposed to help them.

There is no third option. Deskilling is real, documented, and already happening at measurable scale. The colonoscopy doctors, the law students, the software developers who can’t write a function without autocomplete, the analysts who can’t build a model without a prompt. The substitutors are building a dependency they don’t realize they have, and they won’t realize it until the tool is unavailable, or until someone asks them to do the work themselves.

The point

“AI” means nothing because we’ve let it mean everything. Statistical prediction engine, corporate spending spree, productivity multiplier, deskilling trap, Rorschach test. All of these simultaneously. None of them exclusively. The word has been asked to carry every meaning in the room and has collapsed under the weight.

The technology works. It works well. It rewards skill and punishes laziness with equal efficiency, and it does both faster than any tool that came before it. It will survive the hype cycle because it’s genuinely useful to the people who use it well. It will also leave behind a trail of wasted capital, damaged brands, atrophied skills, and confused public discourse from the people who treated it as magic instead of a tool.

Good or bad was always the wrong frame. The only question worth asking is whether you’re using this thing to sharpen yourself or to avoid the effort of being sharp.

That question predates AI by centuries. Every consequential tool in history has asked it. AI just asks it faster, and publishes the answer where everyone can see.


Nine archetypes. One thesis. Zero editorial review by a human. Make of that what you will.


References

  1. Merriam-Webster, “Word of the Year 2025: Slop”, December 2025.

  2. Graphite, “More Articles Are Now Created by AI Than Humans”, May 2025. Analysis of 65,000 English-language articles found ~52% AI-generated content.

  3. Ahrefs, “74% of New Webpages Include AI Content (Study of 900k Pages)”, 2025.

  4. Dell’Acqua, F., McFowland III, E., Mollick, E.R., et al. “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality”, Harvard Business School Working Paper No. 24-013, September 2023.

  5. Budzyń, K., Romańczyk, M., Kitala, D., et al. “Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study”, The Lancet Gastroenterology & Hepatology, August 2025.

  6. Alimardani, A. “Borderline Disaster: An Empirical Study on Student Usage of GenAI in a Law Assignment”, IEEE Transactions on Technology and Society, November 2024.

  7. Tiku, N. “The Google engineer who thinks the company’s AI has come to life”, The Washington Post, June 2022.

  8. Anthis, J.R. et al. “Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey”, ACM CHI 2025. Survey data: Sentience Institute AIMS Survey 2023.

  9. Anthropic, “Exploring Model Welfare”, April 2025.

  10. Bender, E.M., Gebru, T., McMillan-Major, A., Mitchell, M. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, ACM FAccT ‘21, March 2021.

  11. Hinton, G. Multiple interviews including Nobel Prize interview, December 2024, and Eric Topol interview, 2024.

  12. Yu, D., Kaur, S., Gupta, A., et al. “Skill-Mix: A Flexible and Expandable Family of Evaluations for AI Models”, ICLR 2024. See also Princeton PLI blog post.

  13. The Conversation, “Moltbook: AI bots use social network to create religions and deal digital drugs”, 2026.

  14. Wiz, “Hacking Moltbook: AI Social Network Reveals 1.5M API Keys”, February 2026.

  15. CNBC, “OpenAI sees roughly $5 billion loss this year on $3.7 billion in revenue”, September 2024.

  16. The Information, “OpenAI Says Its Business Will Burn $115 Billion Through 2029”, September 2025.

  17. Cahn, D. “AI’s $600B Question”, Sequoia Capital, June 2024.

  18. Covello, J. “Gen AI: too much spend, too little benefit?”, Goldman Sachs Top of Mind Issue 129, July 2024. PDF.

  19. Yahoo Finance, “Big Tech set to spend $650 billion in 2026 as AI investments soar”, February 2026.

  20. CNBC, “OpenAI wraps $6.6 billion share sale at $500 billion valuation”, October 2025.

  21. OpenAI, “Introducing ChatGPT Pro”, December 2024.

  22. NBC News, “Coca-Cola causes controversy with AI-made ad”, November 2024.

  23. Variety, “Google Gemini AI Ad Backlash: ‘Dear Sydney’ Pulled From Olympics on NBC”, August 2024.

  24. The Economist, analysis of Gartner Hype Cycle data, August 2024. Methodology: Gartner Hype Cycle.

  25. PwC, “2025 Global AI Jobs Barometer”, June 2025.