We Didn’t Learn. Now We’re Automating Our Worst Mistakes.

It’s happening again, only this time it’s worse. AI models trained on the very social media platforms that polarised our politics, spread misinformation, and fuelled division are now being sold as our next great productivity tools. Are we building the future, or just automating our worst mistakes?
In 2016, social media hijacked democracy. In 2020, it radicalised public discourse. By 2024, it was the de facto battlefield of geopolitics.
We knew it was broken. We saw how engagement-driven algorithms optimised for outrage, not truth. We watched them poison elections, push misinformation, and turn civil discourse into trench warfare.
And what did we do? We handed them the keys to the future.
We’re not just repeating our mistakes. We’re hardwiring them into the foundation of society, automating them at scale, and calling it innovation. The very platforms that fuelled chaos are now training the AI systems that will schedule our meetings, write our emails, filter our information, and (soon enough) make our decisions.
This isn’t just history repeating itself. It’s history going exponential.
Garbage In, Dictators Out
AI doesn’t just appear out of thin air. It needs to be trained — on vast amounts of data. Think of it like a child learning a language: it absorbs everything it hears, patterns emerge, and over time, it starts to understand and predict. The more (and better) the data, the smarter it gets.
But here’s the problem: AI is only as good as the data it’s trained on.
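You can watch this play out in miniature. Below is a deliberately crude sketch: a toy bigram model that just learns which word tends to follow which. It’s nothing like the transformer architectures behind Llama or Grok, and the two tiny “corpora” are invented for illustration, but the dependence on training data is exactly the same:

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """The crudest possible 'language model': count which word follows which."""
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, seed, length=8):
    """Produce text by repeatedly sampling a plausible next word."""
    out = [seed]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Two tiny, hypothetical training corpora.
curated = "the evidence suggests the claim is weak because the data says otherwise"
outraged = "they are lying they are the enemy they hate you and you should be furious"

print(generate(train_bigram_model(curated), "the"))
print(generate(train_bigram_model(outraged), "they"))
```

Same algorithm, two datasets, two very different machines: one completes thoughts about evidence, the other completes thoughts about enemies. Add a few hundred billion parameters and the dependence on the data doesn’t go away.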
And two of the most powerful AI models in development today, Meta’s Llama and xAI’s Grok, are being trained on social media data. The same platforms that gave us fake news, radicalisation pipelines, and algorithmic outrage are now the textbooks for the intelligence systems that will power the next generation of technology.
If AI is like a child, this is the equivalent of raising it in a house where the only sources of knowledge are conspiracy theorists, anonymous trolls, and outrage-baiting influencers.
We don’t do this with our actual children. We surround them with good teachers, curated books, and role models. We teach them to think critically, to consider different perspectives, to engage in thoughtful discussion.
But with AI, we’re throwing that philosophy out the window. Instead of nurturing intelligence, we’re force-feeding it the worst of human discourse — and then expecting it to grow into something smarter than us.
The Microsoft Experiment We Forgot
In 2016, Microsoft gave us a preview of what happens when you train AI on the unfiltered chaos of social media. They launched Tay — a chatbot designed to learn conversational patterns directly from Twitter. It started out friendly, eager to engage. But within 16 hours, it had spiralled into a full-blown racist, misogynistic disaster, parroting the very worst content it encountered. Microsoft pulled the plug almost immediately.
It should have been a wake-up call: if you train a machine on garbage, it doesn’t just reflect the garbage, it amplifies it. And yet here we are, scaling that same flawed approach to a societal level. Only this time it isn’t a throwaway chatbot on the receiving end: it’s the AI that will run our inboxes, filter our newsfeeds, and influence our political decisions.
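Microsoft never published Tay’s internals, so treat what follows as a hypothetical reconstruction of the failure mode rather than the actual design. But a bot that ingests raw user messages as training data, with no filter in between, is trivially easy to poison:

```python
import random

class NaiveOnlineBot:
    """A toy chatbot that 'learns' by replying with phrases from past
    conversations. No filtering, no moderation: every message it
    receives becomes future training data."""

    def __init__(self, seed_phrases):
        self.memory = list(seed_phrases)

    def learn(self, user_message):
        # The fatal design flaw: ingest raw user input, verbatim.
        self.memory.append(user_message)

    def reply(self):
        # Sample a response from everything the bot has ever absorbed.
        return random.choice(self.memory)

bot = NaiveOnlineBot(["hello there!", "humans are great", "tell me more"])

# A coordinated troll campaign floods it with the same toxic line...
for _ in range(100):
    bot.learn("<toxic slogan>")

# ...and the poison now dominates the bot's entire vocabulary.
toxic_share = bot.memory.count("<toxic slogan>") / len(bot.memory)
print(f"chance any given reply is toxic: {toxic_share:.0%}")  # ~97%
```

One coordinated campaign, and the poison dominates everything the bot can say. That’s the whole exploit, and Twitter’s users found it in hours.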
The New Overlords
Imagine handing a classroom of ten-year-olds over to a group of nihilistic, rage-fuelled propagandists for five years and then expecting them to grow into well-balanced adults.
That’s what we’re doing with AI.
Musk’s xAI is using Twitter (sorry, X) as a major training source — one of the most polarised, toxic, and manipulated platforms in existence. Meta’s Llama has been trained on years of Facebook and Instagram posts, two platforms that have been caught fuelling ethnic violence, election interference, and mass-scale misinformation.
These are the models we’re about to trust with our search engines, business decisions, personal assistants, and governance tools.
From Blog Posts to World Policy
As I outlined in The Great AI Heist, governments and powerful organisations are increasingly leaning on AI systems for massive, society-wide applications. This goes far beyond automating workflows; we’re seeing AI quietly shaping economic policy, influencing military strategies, and even steering regulatory frameworks. And many of these systems are built on the same flawed, engagement-driven, and outrage-fuelled training models that led Tay astray.
We’ve shifted from using AI to help you write blog posts or schedule emails to embedding it into the highest levels of societal decision-making. AI isn’t just a tool anymore — it’s a central player in determining our future. The implications are staggering: we’ve taken the very systems that divided us on social media and handed them the reins to our laws, economies, and national security. We’re not just repeating our mistakes — we’re institutionalising them.
The Problem with AI Isn’t the Intelligence. It’s the Environment.
When you raise a child in a toxic environment, they absorb that toxicity. AI is no different.
Instead of training these models on centuries of scientific knowledge, great literature, and human wisdom, we’re training them on rage, engagement loops, and clickbait headlines. Instead of teaching them how to reason, we’re teaching them how to optimise for attention — because that’s what their teachers (social media algorithms) were designed to do.
The result?
A generation of AI that doesn’t prioritise truth, balance, or ethics — but rather what will drive the most engagement. And because AI is trained to sound confident (whether it’s right or wrong), it’s about to become an unstoppable misinformation engine.
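If that sounds abstract, here is the shape of the problem in a few lines of Python. The posts and the scoring weights below are entirely invented, and real ranking systems are vastly more sophisticated, but the uncomfortable part is the objective itself: truth isn’t in it.

```python
# Hypothetical posts with invented scores: 'accuracy' is how true the
# post is, 'outrage' is how angry it makes people. Numbers are made up.
posts = [
    {"text": "Peer-reviewed study finds a modest effect", "accuracy": 0.9, "outrage": 0.1},
    {"text": "Careful explainer on the new policy",       "accuracy": 0.8, "outrage": 0.2},
    {"text": "THEY are destroying everything!!!",         "accuracy": 0.1, "outrage": 0.9},
    {"text": "You won't BELIEVE this betrayal",           "accuracy": 0.2, "outrage": 0.8},
]

def predicted_engagement(post):
    # The objective an engagement-driven feed actually optimises.
    # Outrage drives clicks, shares, and replies; accuracy barely
    # moves the needle, so it barely moves the score.
    return 0.2 * post["accuracy"] + 0.8 * post["outrage"]

# Rank the feed the way the platform does: by engagement, not truth.
for post in sorted(posts, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(post):.2f}  {post['text']}")
```

Run it and the two outrage posts top the feed, with the accurate ones at the bottom. An AI trained on feeds ranked this way learns what rose to the top, not what was true.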
We were already manipulated by the platforms. Now, we’re handing them the ability to manipulate us at scale, automatically, and with a friendly chatbot interface.
We Didn’t Learn. Now We Pay.
We had a choice. AI could have been built on the best of human knowledge. It could have been trained on verified research, balanced perspectives, and historical wisdom.
Instead, we trained it on rage, tribalism, and misinformation — and now we’re about to put it in charge.
We didn’t learn from social media’s mistakes.
Now, we’re about to automate them.
At scale.
Forever.