After reading Cathy O’Neil’s brilliant Weapons of Math Destruction, I found myself wondering how her insights on algorithms and big data would apply to the rise of Generative AI (GenAI). While GenAI holds great promise to revolutionize industries, it also risks replicating many of the same systemic issues highlighted in her analysis.
In her book, Cathy warned about the dangers of opaque, unfair, and unaccountable systems: algorithms that perpetuate biases and create harmful feedback loops. Today’s GenAI systems, despite their transformative capabilities, could easily fall into the same traps if developers do not have these concerns on their radar.
GenAI systems, like large language models, are often described as “black boxes.” Their outputs, whether text, images, or code, are the result of incredibly complex processes, and it is extremely difficult, if not impossible, to fully explain the reasoning or criteria behind any given conclusion.
And this is already happening through the rampant adoption of GenAI models in the market without proper scrutiny, for instance when they are allowed to screen job applications and deliver outcomes in an opaque manner (echoing the case of Amazon’s CV-screening algorithm, which was found to reinforce sexist biases).
GenAI learns from massive datasets scraped from the internet—data that reflects the biases, stereotypes, and inequities of our world.
GenAI is being deployed at unprecedented scale across industries, from customer service to content creation to medicine. However, much of this power is concentrated in the hands of a few large tech companies.
When biased AI systems are used to make decisions without proper human oversight, they can create self-reinforcing cycles of harm.
To avoid turning GenAI into a “Weapon of Math Destruction,” we must take proactive steps to address these risks. Cathy’s solutions for algorithmic systems are just as relevant today:
As someone deeply passionate about reducing bias and promoting fairness in technology, I believe these solutions aren’t just idealistic. They’re necessary.
The stakes are too high to ignore. Generative AI is shaping how we work, communicate, and make decisions. If we fail to address its risks now, we may entrench inequality on a massive scale.
Cathy O’Neil’s work reminds us that algorithms are not inherently fair; they reflect the values of those who create them. Let’s ensure that as we build the future of AI, we prioritize transparency, fairness, and accountability.