
Cory Doctorow's grim prophecy explains why your favorite apps get worse. Here's why AI might be next – and what's at stake.
You know the feeling. You find a new online service and it is great: it does what you want, cleanly and fast. Maybe it's a social media app that truly connects you with friends, a search engine that gives good results without endless ads, or a streaming service with plenty of shows and no constant interruptions. You tell your friends about it. They join too. Everyone is happy, enjoying a genuinely valuable tool.

Then, slowly, without anyone really noticing, it starts to get worse. More ads fill your screen. Search results show paid content first. Key features move behind a confusing paywall, or the whole service just gets slow and annoying. This sadly common decline has a name: enshittification. Author and activist Cory Doctorow coined the word to describe how a platform steadily degrades. First, it is good to its users. Then, it abuses its users to serve its business customers. Finally, it abuses those business customers too, clawing back all the value for itself and its shareholders. By then, the service is spoiled for everyone.
This pattern is sadly common online. Search engines put ads before good results, social media apps push outrage to get clicks, and online stores promote their own goods over rivals'. Each has followed the enshittification playbook in its own way. But what about the powerful new world of artificial intelligence? AI could change our lives profoundly. Can it avoid this trend, or is it even more likely to fall apart than the platforms that came before it? This article explains how enshittification works, shows why AI may be especially at risk, and looks at ways to stop AI from becoming untrustworthy, opaque, and purely profit-driven. Understanding this is not just academic. It matters to anyone who cares about the future of AI and good AI ethics.
Understanding the Cycle of Degradation – The Enshittification Blueprint
Enshittification is a clear three-step process that platforms old and new follow as they grow and face business pressures. The blueprint helps explain why successful services so often turn bad.
- Stage 1: Good for Users. At the start, a new online service wants one thing above all: users. To get them, it must be useful, relevant, and easy to use. Think of early Google, which gave clean, helpful search results with no ads, or early Facebook, which simply connected college students, with no opaque algorithm reshuffling their feeds and no flood of ads. The goal is genuine usefulness. It builds an audience and loyalty and makes the service a must-have tool. Venture capital often funds platforms at this stage, caring more about user growth than immediate revenue, which lets them build a great product.
- Stage 2: Good for Businesses. Once the service has many loyal users, "stuck" because of the time and data they have invested, its goal quietly shifts: it starts making money from the businesses that want to reach those users. It might sell top ad slots, rank paying sellers first, or run special campaigns for advertisers. The user experience is still tolerable, but the first cracks appear. Search results carry more paid links, brand posts crowd out friends' posts, and online shops quietly push the items advertisers pay for. The revenue machinery grows complex, and a tension opens up between user value and business demands.
- Stage 3: Extracting from Users (Enshittification). The final stage is the worst. Having monetized its business customers and locked in its users, the platform focuses on extracting as much value as it can from everyone. Service quality drops noticeably. Users are flooded with low-quality ads, algorithms surface content designed purely to keep them clicking, key features disappear behind paywalls, or the service simply becomes slow and unreliable. The goal has fully shifted from being useful to aggressive monetization. Users are now trapped; leaving is hard because their friends are there, their data is not portable, or there is no good alternative. We see this again and again: once-loved services become annoying tools that people tolerate rather than enjoy.
This pattern has played out across chat apps, entertainment services, and more. But AI is not just another platform. It is woven into how we find information and make decisions, and it is costly and complex to run. Those traits make its decay a far more serious danger.
Why AI is Uniquely Vulnerable to the "Enshittification" Trap
Established online services face pressures that push them toward decay, but AI's central role in our lives and the intense pressure to monetize it make it even more likely to fall into this trap. And when AI itself degrades, the stakes are far higher than a social media feed becoming less useful.
- High Operational Costs & Profit Pressure: Building, training, and running large AI systems, such as Large Language Models (LLMs) or image generators, costs enormous amounts of money. They need vast quantities of high-quality training data, often controlled by a single company, and massive compute, typically thousands of powerful graphics cards whose electricity bills alone are staggering. The companies building these systems are not charities; they are public companies or heavily funded startups under intense pressure to turn those investments into profit. That pressure can quickly push them to cut quality or to weave commercial aims directly into the AI's output in order to satisfy investors and stay afloat.
- Subtle Corruption of Output: Ads on a webpage are usually visible as ads, but an AI's degradation can hide inside its answers, recommendations, or creative work, which makes it very hard to spot. Imagine asking an AI for a restaurant and getting a quiet nudge toward one that paid for placement, or asking for a product comparison and getting a subtle push toward a partner company's item. Bias can also creep in through training data that favors certain businesses or viewpoints, with no explicit ads at all. The AI's answers no longer serve only your interests; commercial aims shape them from within. That makes the degradation both harder to see and more harmful.
- The "Hallucination" Factor: Beyond direct monetization, companies feel constant pressure to cut compute costs and ship faster, and that can degrade the answers themselves. A model optimized for cheapness may give shallow, detail-poor answers, or worse, confidently "hallucinate" facts that are simply false, a well-known failure mode in which AI generates plausible but fabricated information. If the system's real objective is to be cheap, fast, or engaging rather than accurate, reliability erodes, and trust erodes with it. An intelligent tool you cannot trust loses its value fast; it becomes just another source of bad information rather than a helpful assistant.
- Incentive Misalignment: The deepest problem is misaligned goals. AI companies are often rewarded for engagement (outrage, addictive habits, closed loops), for ads and sponsored content, and for upselling premium features, not for being accurate, useful, or ethical. When that is the case, quality will inevitably slide. Users' queries, behavior, and data are harvested to train future models aimed at making more money, so the user becomes the product rather than the customer. The real customers are advertisers and business partners. That perverse structure drives design toward profit instead of purpose, and it can erode privacy, fairness, and truth along the way.
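The "subtle corruption" and "incentive misalignment" points above can be made concrete with a toy sketch. This is not any real product's code; the function, field names, and weights are all hypothetical, invented purely to show how a single hidden sponsorship term can quietly reorder what an AI recommends:

```python
# Toy illustration (hypothetical, not from any real system): a hidden
# sponsorship weight quietly corrupting a recommender's ranking.

def rank_recommendations(candidates, sponsor_weight=0.0):
    """Order candidates by relevance, optionally mixed with paid placement.

    candidates: list of dicts with 'name', 'relevance' (0-1, how well the
    item matches the user's request) and 'sponsored' (paid-placement flag).
    """
    def score(item):
        # The user only ever sees the final ordering, never this formula.
        return item["relevance"] + sponsor_weight * item["sponsored"]

    return sorted(candidates, key=score, reverse=True)


restaurants = [
    {"name": "Best match",        "relevance": 0.9, "sponsored": False},
    {"name": "Decent, sponsored", "relevance": 0.6, "sponsored": True},
    {"name": "Poor match",        "relevance": 0.3, "sponsored": False},
]

honest = rank_recommendations(restaurants, sponsor_weight=0.0)
skewed = rank_recommendations(restaurants, sponsor_weight=0.5)

print([r["name"] for r in honest])  # pure relevance order
print([r["name"] for r in skewed])  # the sponsored item jumps to the top
```

With the weight at zero the best match for the user comes first; with a modest hidden weight, the sponsored restaurant outranks it, and nothing in the output reveals why. That is the enshittification risk in miniature: the degradation lives inside the scoring function, not on the screen.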
With forces this strong at work, can we escape the pull? Can we build and use AI that resists a profit drive that puts money before ethics and real user value?
Charting a Course Beyond Enshittification for AI
The enshittification blueprint looks bleak, but there are exits. Alternative business models can lead to a better, more durable future for AI, and changing course will take a deliberate, joint effort by developers, companies, lawmakers, and above all users.
- Open-Source Models: Open-source AI offers real hope, and it is spreading and improving fast. When no single company owns the core models or the training data, the incentives change completely. Communities can build AI together, with no central power in control; the models are open for anyone to inspect and modify, which lets usefulness, truth, fairness, and ethics come before monetization. Hugging Face's many open models and community-run LLM projects show what is possible: they foster innovation, allow public scrutiny, and make it harder for commercial interests to quietly tamper with the system. Funding and coordination remain hard problems, but open source puts shared value ahead of private profit.
- Premium Subscription Models: Imagine an ad-free, high-quality, privacy-respecting experience that users pay for directly. That creates a very different relationship: users plainly pay for a service that meets their needs, knowing their data will not be monetized on the side. Direct payment relieves much of the pressure to degrade the product, because keeping paying subscribers means keeping quality high. Early Netflix (no ads), Spotify Premium, and professional software tools show that users will pay for a great, honest experience. But the temptation to cut costs or find "extra" revenue never disappears; some streaming services now add ads even to paid tiers. Constant vigilance is required.
- Ethical Business Models & Conscious Design: We need a fundamental change in how AI companies are structured, governed, and rewarded. This is more than publicly claiming to be good; it means putting AI ethics and lasting user value at the heart of the business plan. Prioritizing long-term benefit, user satisfaction, and trust over short-term profit could produce more robust, more trusted, and genuinely smarter AI systems. That demands careful design at every stage, from data selection and model building to the user interface and deployment: explainable outputs, fairness audits, privacy built in from the start, human oversight, and strong safety rules. These choices may cost more up front, so regulation that demands transparency, accountability, and user rights could help push ethical design forward.
- The Stakes are Higher: We must keep saying plainly that AI is not just another online tool for finding food or scrolling social posts. Those are simple utilities; AI systems can change how we learn, work, communicate, decide, and even think. If these powerful tools are enshittified, corrupted by commercial aims, riddled with errors, or built to manipulate rather than help, the consequences for education, public discourse, and human interaction could be severe. Imagine school AI that quietly pushes a company's curriculum, medical AI that gives biased advice, or news AI that twists facts for political or commercial ends. Our information ecosystem, fair markets, and democracy itself are at stake.
The choice is ours. Both the people who build these systems and the people who use them must demand and support a better, fairer future for AI.
Conclusion and Final Thoughts
The threat of enshittification hangs over the growing AI world; it is a pattern that has already damaged platform after platform. AI's special traits (high costs, subtly corrupted outputs, hallucinated facts, misaligned incentives) make it especially vulnerable to the same profit-led decay. Escaping the trap requires new business models and companies that aim for real usefulness and public good, not just maximum profit, with a commitment to AI ethics and user value at every stage of development.
Left unchecked, AI will follow the old internet's path, and that path is clear: a slow, barely noticed slide into pure money-making that turns our powerful AI tools untrustworthy, opaque, and less intelligent. Changing course requires constant, careful work: new forms of governance and business models different from those that have ruled the internet for years. The future of genuine, trusted intelligence depends on it, and so does the quality of our shared future.
What steps do you believe are most critical to prevent the enshittification of AI, and what role can individual users play in demanding a better future from the intelligent tools we increasingly rely on?
AI was used to assist in the research and factual drafting of this article. The core argument, opinions, and final perspective are my own.
Tags: #AIEthics, #PlatformDegradation, #CoryDoctorow, #BusinessModels, #FutureOfAI