AI companies aren’t going to tell you this, so I will: the tools you’re using every day are getting progressively worse, and they know it.
It’s not a bug. It’s not bad updates. It’s the inevitable result of AI systems training on an internet that’s now mostly AI-generated content.
They’re eating their own output, and the quality is collapsing faster than anyone wants to admit publicly.
Here’s how to spot the contamination yourself – and what to do about it before it tanks your work.
The Test AI Companies Hope You Never Run
Open any AI image generator right now. Generate “realistic human hand holding a smartphone.”
Do it five times.
Count the fingers. Notice the weird joint angles. See how some attempts are perfect while others are bizarrely wrong? That’s not random variation – that’s a model that’s losing confidence in what hands actually look like.
Now take a prompt you ran six months ago on any AI tool you’ve been using consistently, run it again today, and compare the results.
I’ve been doing this weekly since March. The degradation is measurable and consistent. But you won’t see AI companies publishing these comparisons.
Because the trend line goes in the wrong direction.
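If you want this comparison to be mechanical rather than vibes-based, automate the archiving step. Here’s a minimal sketch, assuming the OpenAI Images API purely as a stand-in for whatever generator you actually use; the probe prompt, run count, and folder layout are illustrative choices, not a standard:

```python
# Re-run a fixed probe prompt and archive date-stamped outputs for
# later side-by-side comparison. The OpenAI Images API is used here
# only as an example provider; substitute your own tool's API.
import datetime
import pathlib
import urllib.request

from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

PROBE_PROMPT = "realistic human hand holding a smartphone"
ARCHIVE = pathlib.Path("probe_archive")
RUNS = 5

client = OpenAI()
today = datetime.date.today().isoformat()  # e.g. "2025-06-02"

for i in range(RUNS):
    result = client.images.generate(
        model="dall-e-3",  # illustrative model choice
        prompt=PROBE_PROMPT,
        size="1024x1024",
        n=1,
    )
    out = ARCHIVE / today / f"run_{i}.png"
    out.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(result.data[0].url, str(out))
    print(f"saved {out}")
```

Schedule it weekly and you get dated folders you can line up side by side – which is all the degradation test really needs.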
What They’re Not Telling You
Here’s what’s actually happening behind the scenes:
AI models need massive training data. They used to scrape the internet, which was mostly human-created content. That worked fine.
But now? The internet is 50-60% AI-generated content, and that share is climbing fast. Every time they retrain their models to “improve” them, they’re inadvertently feeding them synthetic data created by previous AI models.
It’s like training a chef by only letting them eat at restaurants run by their own students. Eventually, everyone’s cooking degrades toward the same mediocre mean.
AI companies know this. They’re just betting you won’t notice until they figure out a solution.
Spoiler: there isn’t one with current approaches.
The Five Signs of AI Contamination
1. Inconsistent Quality on Identical Prompts
Run the same prompt three times. If you get wildly different quality levels – one perfect, one mediocre, one broken – the model’s confidence is degrading. It’s literally guessing now.
2. Everything Looks Vaguely Familiar
AI-generated designs all starting to feel similar? That’s because they’re training on each other’s output. Pinterest, Dribbble, Behance – contaminated with AI work that looks like AI work that was inspired by AI work.
It’s convergent evolution toward generic.
3. Weird Artifacts in Previously-Reliable Features
Your background removal tool used to nail hair edges. Now it occasionally just… doesn’t. That’s not a bug – it’s the model forgetting patterns it used to know because those patterns are being diluted by AI-generated training data that got them slightly wrong.
4. The “Uncanny Valley” Feeling You Can’t Explain
Something feels off but you can’t pinpoint why. Proportions are slightly wrong. Color relationships are weird. It’s technically competent but somehow soulless.
Trust that feeling. It’s your brain detecting patterns that violate reality.
5. Results That Were Good Last Month Are Worse This Month
This is the big one. If you’re keeping examples of AI outputs over time, you can literally watch quality degrade. Same prompt, worse results. That’s model collapse in action.
Why This Is Unfixable (Right Now)
“Can’t they just filter out AI-generated content from training data?”
In theory, yes. In practice:
AI-generated content is increasingly indistinguishable from human content – even to other AIs trying to detect it. The arms race between generation and detection is already lost.
Users are incentivized to pass off AI content as human-made, especially on platforms that penalize or exclude AI work. So the data is lying about its origin.
Even “human-created” content now includes AI-assisted elements. Where do you draw the line? The graphic designer who used AI for the background? The writer who used AI to polish their prose? It’s all contaminated.
Researchers estimate that by 2026, over 90% of online content will be AI-generated or AI-influenced.
The training data well is poisoned. You can’t unpollute it.
What AI Companies Are Actually Doing
They’re trying three things, none of which are working:
Synthetic data filtering. Doesn’t work when AI can’t reliably detect AI content. False positives remove good human data. False negatives let garbage through.
Human curation. Doesn’t scale. The internet is too big. By the time humans curate a dataset, it’s already outdated and contaminated.
Watermarking. Voluntary, easily removed, and doesn’t solve the problem of content that’s already been generated and uploaded without watermarks.
What they’re not doing: admitting the problem publicly, because an admission would tank investor confidence in AI’s trajectory.
How to Spot Contamination in Your Workflow
The Reference Check
When AI gives you design inspiration, run a reverse image search on it. If it appears across dozens of sites with no clear origin or attribution, it’s probably AI-generated reference material contaminating your judgment.
The Consistency Test
Generate the same asset 10 times. If quality varies wildly, the model is unstable. Don’t trust it for anything important.
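You can put a rough number on “varies wildly.” Here’s a sketch that perceptually hashes one batch of same-prompt outputs and compares them pairwise; it assumes the images are already saved to a folder, and the distance intuition is a rule of thumb, not a calibrated metric:

```python
# Quantify run-to-run instability: hash every output of one prompt
# with a perceptual hash, then compare pairwise Hamming distances.
# Dependencies: pip install Pillow imagehash
import itertools
import pathlib

import imagehash
from PIL import Image

# Point this at one batch of same-prompt outputs, e.g. a dated folder
# produced by the archiving sketch earlier in this article.
batch = pathlib.Path("probe_archive/2025-06-02")

hashes = [imagehash.phash(Image.open(p)) for p in sorted(batch.glob("*.png"))]

# Subtracting two pHashes gives a Hamming distance: near 0 means
# near-identical images; 30+ on the default 64-bit hash means the
# images are essentially unrelated.
distances = sorted(a - b for a, b in itertools.combinations(hashes, 2))
print(f"pairwise distances: {distances}")
print(f"spread: {distances[-1] - distances[0]}")
```

If some pairs sit near zero while others land in the thirties, the model is handing you fundamentally different pictures for one prompt. That’s the instability to watch for.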
The Reality Audit
Compare AI output to real-world examples. Does that UI pattern actually exist in shipped products? Do those color combinations appear in physical spaces? If AI is your only reference, you’re in a feedback loop.
The Degradation Timeline
Keep examples of AI outputs from different months. Review them quarterly. If quality is declining, that tool is dying. Find alternatives while you still can.
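If you’ve been archiving dated outputs with the earlier sketch, part of this review can be automated. Here’s a sketch that measures how far each batch has drifted from the earliest one. One caveat: drift is not the same thing as quality loss, so still review the images by eye; a steady climb is a prompt to look closer, not proof of collapse.

```python
# Quarterly review pass over a dated probe archive: pHash every batch
# and report the mean distance from the earliest batch. Same folder
# layout and dependencies as the sketches above.
import pathlib
import statistics

import imagehash
from PIL import Image

ARCHIVE = pathlib.Path("probe_archive")

def batch_hashes(folder: pathlib.Path) -> list:
    return [imagehash.phash(Image.open(p)) for p in sorted(folder.glob("*.png"))]

# ISO-dated folder names sort chronologically, so the first is the baseline.
batches = {d.name: batch_hashes(d) for d in sorted(ARCHIVE.iterdir()) if d.is_dir()}
baseline_date, baseline = next(iter(batches.items()))

for date, hashes in batches.items():
    # Mean Hamming distance from each image in this batch to the baseline batch.
    drift = statistics.mean(h - b for h in hashes for b in baseline)
    print(f"{date}: mean distance from {baseline_date} baseline = {drift:.1f}")
```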
What This Means for Your Work
Stop using AI as a final step. Ideation? Fine. Rapid exploration? Great. Final production output? Absolutely not. Always have human judgment as the last filter.
Build contamination-free reference libraries. Curate design inspiration you know comes from real humans working on real products. Photos of physical spaces. Screenshots of actual shipped software. Scan this stuff yourself if you have to.
Verify every AI-generated claim. User research, competitive analysis, market data – if AI produced it, the model behind it may itself have been trained on AI-generated research. Verify against primary sources.
Trust your instincts over AI confidence. If something feels wrong, it probably is. Your judgment, trained on reality, is more reliable than AI trained on recursive synthetic data.
Document everything. Keep records of what works and why. Build your own knowledge base that isn’t contaminated by AI eating itself.
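The knowledge base can be as simple as an append-only log file. A sketch – the field names here are just my suggestion:

```python
# Append-only JSONL log of what AI tools produced and whether it held up.
# The schema is a suggestion, not a standard.
import datetime
import json
import pathlib

LOG = pathlib.Path("ai_output_log.jsonl")

def log_result(tool: str, prompt: str, verdict: str, notes: str = "") -> None:
    entry = {
        "date": datetime.date.today().isoformat(),
        "tool": tool,
        "prompt": prompt,
        "verdict": verdict,  # e.g. "usable", "needed rework", "broken"
        "notes": notes,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

log_result("image-gen", "realistic human hand holding a smartphone",
           "broken", "six fingers on two of five runs")
```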
Why This Makes You More Valuable
Here’s the part AI companies really don’t want you to realize: as their tools degrade, human creative judgment becomes more valuable.
The ability to spot when AI output is wrong, understand why it failed, and make correct decisions despite the tools lying to you – this is now a premium skill.
Because AI can only recognize patterns in its training data. And its training data is getting exponentially worse.
You’re not becoming obsolete. You’re becoming the quality filter that AI can’t replicate.
What to Do This Week
Run the tests I described. Compare AI outputs to your saved examples from months ago. Search for design inspiration and try to identify what’s AI-generated versus human-made.
You’ll start seeing the contamination everywhere.
Then adjust your workflow:
- Use AI for speed, not for truth
- Verify everything against reality
- Trust your judgment over tool confidence
- Build reference libraries of confirmed human work
The designers who survive this aren’t the ones who reject AI or blindly trust it.
They’re the ones who learned to spot when AI is lying – and know what to do about it.
The Uncomfortable Reality
Your AI tools are trained on garbage now. Not because the companies are incompetent, but because the internet itself is mostly garbage. AI-generated garbage, trained on AI-generated garbage, in an exponentially degrading spiral.
This isn’t getting fixed. A real fix would require one of three things:
1. Rebuilding the internet with guaranteed human-only content (impossible)
2. Fundamentally different AI architectures that don’t depend on internet-scale training data (years away, maybe never)
3. Accepting that AI quality peaked sometime in 2024 and managing the decline
Option 3 is what’s actually happening. They’re just not saying it out loud.
So now you know. Your AI tools are contaminated. Quality is degrading. And the companies selling them to you are hoping you don’t notice until they can figure out what to do about it.
At least now you can spot it yourself.
And adjust your workflow before the contamination spreads to your actual design decisions.