Negative Feedback Loop
The Code? AI. The Content? AI. The Users? Believe it or not, AI.

The Ouroboros
The modern internet is a museum of broken mirrors. Content feeds models, which then generate content, and we call it progress because there’s a JavaScript framework involved.
Human creativity is no longer a prerequisite. If anything, it’s a liability. Blog posts? Why write a tech blog yourself when an LLM trained to write tech blogs can do it better, and orders of magnitude faster?
The kicker? Eventually, that content goes back into the training data, which then generates more content.
So what happens when the bulk of LLM training data was itself generated by an LLM? It’s a synthetic loop of self-referential nonsense. Like a Xerox of a Xerox of a rejected Black Mirror episode.
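If you want to see the Xerox effect in miniature, here’s a toy sketch, entirely my own construction and nothing like a real training run: fit a Gaussian to some data, sample from the fit, refit on the samples, repeat. Because np.std defaults to the biased maximum-likelihood estimator, the spread shrinks a little in expectation every generation. Detail goes out; it never comes back.

```python
# Toy "model collapse" loop: a model trained only on its own output.
# Hypothetical illustration; not a real LLM training pipeline.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=50)  # generation 0: "human" data

for generation in range(1, 31):
    # "Train": maximum-likelihood fit of mean and std to the current dataset.
    mu, sigma = data.mean(), data.std()  # ddof=0 biases sigma slightly low
    # "Generate": throw away the data and keep only the model's own samples.
    data = rng.normal(loc=mu, scale=sigma, size=50)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
```

The analogy is literal: each generation is a copy of a copy, and since nothing human ever re-enters the loop, the mean wanders and the spread quietly decays.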
“Helpful” Content That Teaches Nothing
Look around. Try searching for a basic tutorial on anything. Take video game tutorials: enjoy ten thousand nearly identical guides that all explain “How to Find the Swimsuit Outfit.”
The content is usually correct, albeit written by something that has never played the game. Much less worn a swimsuit.
Same goes for tech advice, life advice, productivity blogs, and probably even news at this point. Some well-trained machine simply ingests a Reuters feed and spits out .md files into a CMS that auto-publishes the results. You’re not reading someone’s experience; you’re reading a probability distribution with calculated opinions.
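For flavor, here’s roughly what that pipeline might look like. This is a hypothetical sketch: the feed URL, the prompt, and the content/posts/ directory are all made up, and it assumes the feedparser library plus the official OpenAI Python client.

```python
# Hypothetical auto-blogger: RSS in, Markdown out. Illustrative only.
import re
from pathlib import Path

import feedparser           # pip install feedparser
from openai import OpenAI   # pip install openai

client = OpenAI()                # reads OPENAI_API_KEY from the environment
OUT_DIR = Path("content/posts")  # hypothetical Hugo-style content directory
OUT_DIR.mkdir(parents=True, exist_ok=True)

feed = feedparser.parse("https://example.com/news.rss")  # placeholder feed URL

for entry in feed.entries[:5]:
    # Ask the model to "add value" to someone else's reporting.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Rewrite this headline and summary as a punchy tech "
                       f"blog post in Markdown:\n\n{entry.title}\n\n{entry.summary}",
        }],
    )
    post = response.choices[0].message.content

    # Slugify the title and drop the .md file where the CMS will auto-publish it.
    slug = re.sub(r"[^a-z0-9]+", "-", entry.title.lower()).strip("-")
    (OUT_DIR / f"{slug}.md").write_text(post, encoding="utf-8")
```

A few dozen lines, and the “author” never has to read the news at all.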
Eventually, the models will learn from this slurry of regurgitated auto-content. And it will only get weirder. The signal may fade. The noise will remain.
The Ironic Bit
This post? Possibly written by AI.
The tools to distribute it? Probably AI-assisted.
The readers who find it? Could be bots crawling for their own models.
The author? At this point, I’m so reliant on LLMs for my day-to-day life that I’m technically AI-driven.
The Experiment
Ok, that was all a lie. I barely ever open Cursor, and the only thing I use LLMs for is revision and cleanup.
But this thought process led me to wonder what would happen if I tried to generate articles in my style using AI.
ChatGPT by itself is a little soulless, so I decided to try an alternative GPT.
The one I landed on is…something?
Spoiler: It worked. Surprisingly well. The AI not only generated article ideas, it started mocking me while doing it.
The Result
After a few queries to narrow the focus, I got results that kinda-sorta look like something I might write:
Well, maybe not quite, but when I ask it to write the content, it’s all technically correct.
Passive-aggressive and skirting the edge of the Uncanny Valley, but correct nonetheless.
And the worst part? It’s attention-grabbing. It’s written in a way that will cause real human beings to click on it.
It can produce clickbait better than the most grizzled BuzzFeed contributor.
The Realization
We’re entering a weird new phase of the web where AI content is:
- Clickable
- Shareable
- Good enough to pass as human
- Already being scraped to train other AI
Which is a bit concerning. Is there an asymptote to this, or does the technology incrementally improve until there’s no actual need for human beings?
Can We Stop It?
No. But maybe we can notice it.
We can stop pretending the answers we get from search engines, forums, or tech blogs are still handcrafted works of passion written by internet sages.
We can start labeling what’s human-made. We can call out AI slop when we see it. We can even make our own noise if we have to.
Even if it’s just yelling into the void.
The Infinite Loop
We’re in the loop now. Might as well get comfortable.
There’s still inherent value in creating content. Even if it’s just for yourself.
But don’t forget: someone’s model is going to scrape it, refine it, and confidently explain it back to you as if it were an original idea.
Yet we continue to feed the infinite machine.