WEBSITES AND COMPANIES THAT COLLAPSED DUE TO AI

Case Studies

1. Builder.ai

Introduction:
Builder.ai (formerly Engineer.ai) was a startup offering no-code/low-code, AI-powered software and app development, promising that its platform (called "Natasha") could build custom apps automatically with minimal human labour. The company raised large funding rounds (including backing from Microsoft and the Qatar sovereign wealth fund) and achieved "unicorn" status, with a valuation over US$1 billion. (Wikipedia)

What Went Wrong / Collapse Details:

  • It was exposed that much of the "AI work" was actually being done by a large number of human engineers in India (reportedly ~700) rather than by automated AI systems, meaning the company's AI claims were greatly exaggerated. (Business Standard)
  • The startup massively inflated its revenue figures (for example, claiming ~US$220 million for 2024 when real revenue was far lower, closer to ~US$55 million) and had other financial irregularities. (techmakura.com)
  • Governance failures: weak oversight, long stretches without a CFO, auditing problems, and misleading statements to investors. (techaimag.com)
  • Once these issues were exposed, investor confidence collapsed; a creditor (Viola Credit) seized the company's funds, insolvency proceedings began, and a large part of the workforce was laid off. (Wikipedia)

Conclusion / Lessons:
Builder.ai's collapse shows the dangers of "AI washing" (presenting human work as AI), of exaggerated claims and over-promising without technological or product proof, of weak corporate governance, and of business models that depend on investor hype rather than real value. For investors, employees, and customers alike, due diligence is critical: claims of automation need to be backed by transparency.


2. Babylon Health

Introduction:
Babylon Health was a digital healthcare provider built around a virtual-clinic app that used AI for triage and consultations. It was ambitious, offering telemedicine, AI diagnostic tools, and virtual check-ins, and was often promoted as combining AI with human medical expertise. The company expanded across many countries and built a large user base. (Wikipedia)

What Went Wrong / Collapse Details:

  • Babylon Health faced operational difficulties and financial stress: revenue and costs never aligned with expectations, and the company could not fully deliver on the promise of AI-powered health services at scale. (Wikipedia)
  • The US business filed for bankruptcy in August 2023, and the UK business entered administration shortly after. Babylon had scaled too fast, miscalculated on regulation and business model, and could not sustainably serve so many markets. (Wikipedia)

Conclusion / Lessons:
Even in fields where AI has strong potential, such as healthcare and diagnostics, delivering safe, reliable, and scalable AI services is very hard. Regulatory, ethical, trust, cost, and clinical-validation issues are major hurdles. Overpromising, or assuming too aggressively that AI will cut costs or risks, leads to failure when the realities of liability, regulation, and complexity catch up.


Additional Example / Partial Case: Chegg

Chegg hasn't collapsed outright, but it is a good example of a company suffering greatly because of AI features in search and content summarization. Chegg has taken legal action, claiming that Google's AI summaries divert traffic away from its site and harm its business. (The Washington Post)

This shows how AI features built into large platforms (search engines, aggregators) can disrupt the business models of content providers that depend on user traffic and content consumption. Even companies that are not destroyed outright can face serious existential threats.


General Conclusions & Takeaways

From these cases, some common lessons emerge:

  1. Hype vs Reality
    Many companies market themselves as AI-driven or largely automated in order to attract investment. If the technology doesn't match the marketing, or if human labour is masked as automation, the discrepancy can lead to collapse when it is uncovered.
  2. Transparency & Governance Matter
    Sound financial reporting, honest disclosures, and strong oversight (e.g. a sitting CFO and proper audits) are essential. Without them, overstatements, fraud, or mismanagement can go unnoticed until it is too late.
  3. Scalability & Cost Structure Risks
    AI can promise efficiencies, but running AI systems (models, inference, maintenance), meeting compliance and regulatory requirements, and scaling operations all carry costs. Many firms underestimate the costs or overestimate the benefits.
  4. Regulation, Ethics, Trust
    In the healthcare, content, legal, and education sectors especially, using AI safely and ethically is hard. Regulatory action, loss of user trust, and lawsuits are real risks.
  5. Competition from Platform AI
    When large platforms (Google, etc.) integrate AI features such as summaries and direct answers, companies that depend on traffic, content, or certain services can lose their competitive edge; Chegg is an example. Businesses must find defensible value beyond what large-platform AI can replicate.
  6. Avoid Overreliance on External Funding & Inflated Valuations
    Many collapses happen when a startup's cash burn is high and it depends on future funding rounds. When that funding stops (because of macroeconomic shifts, revealed shortcomings, or regulator interest), collapse can be swift.
