Restoring Credibility: Blog from AmICredible

Grok: The Hidden Cost of False Equivalence in AI

Written by Dan Nottingham | 7/11/25 7:15 PM

We live in an age where information moves faster than discernment. With the rise of generative AI, the promise was that technology could help us cut through noise, elevate facts, and restore trust. But something subtler and more dangerous is happening: in trying to be neutral, many AI systems have become complicit in the erosion of truth.

It starts with a noble-sounding goal: reduce bias. Challenge orthodoxy. Question conventional wisdom. These are the kinds of directives that engineers embed into their large language models with the hope of avoiding partisanship or elitism. But when taken to the extreme, this mindset doesn’t eliminate bias—it replaces informed judgment with artificial indecision. It flattens the landscape of ideas until experts and grifters are granted equal rhetorical weight.

This is the trap of false equivalence, and it is being embedded into the very systems we are asking to guide public discourse.

Take Grok, the AI chatbot developed by xAI. In a recent exchange, it was asked to evaluate the claim that tariffs imposed by President Trump were paid entirely by other countries and not by American consumers. This is a factual claim, and one that economists have tested extensively and overwhelmingly rejected: studies of recent U.S. tariffs found that the costs were largely passed through to American importers and consumers. Yet Grok responded by framing both views as equally plausible, saying in effect, "Economists say one thing, but maybe the President has a point."

Why? Because Grok, like many similar models, has been trained to treat established expertise as just another narrative. Because conventional wisdom is framed as suspicious by default. Because the prompt says, "cut through bias" without defining what bias actually is.

But not all viewpoints are equally credible. Not all claims deserve the same airtime. And not all skepticism is wise. There is a difference between an open mind and an unanchored one.

Expertise is not infallible, but it is earned. Through peer review. Through experimentation. Through consistency over time. When AI systems ignore this and treat every claim as equally plausible until proven otherwise, they stop being tools of discernment and start becoming engines of confusion.

In the name of neutrality, some models end up validating misinformation. In the name of fairness, they lend legitimacy to the illegitimate. And in trying to avoid taking sides, they inadvertently side with the loudest voices, the most contrarian takes, or the most attention-grabbing falsehoods.

This is not technological objectivity. It is algorithmic relativism. And it’s a problem.

We don’t need AI to replace human judgment. We need it to support it. That means designing systems that recognize the weight of evidence, track credibility over time, and understand that context matters. It means empowering users to question confidently, not endlessly.

We are entering a new chapter of public discourse, one in which the lines between truth and opinion are increasingly blurred. If we are not careful, the very tools we build to help navigate this complexity will become part of the problem.

Truth doesn’t need to shout. But it does need a defender.

That’s why we need a different kind of approach—one that doesn’t pretend all voices are equally grounded but instead evaluates credibility based on evidence, transparency, and accountability. That’s the principle behind AmICredible: a system built not to arbitrate truth, but to help users measure it, test it, and trace the trustworthiness of the information they encounter.

Unlike systems that rely on a single large language model with baked-in assumptions and blind spots, AmICredible draws on a diverse array of AI models and knowledge sources. This ensemble approach helps ensure that no single perspective dominates—and that credibility is earned through consensus, evidence, and transparency, not narrative control.
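To make the ensemble idea concrete, here is a minimal sketch in Python, assuming a hypothetical Verdict structure, illustrative source names and reliability weights, and a simple weighted average. It is not AmICredible's actual implementation, only one way the consensus principle could be expressed.

```python
# Hypothetical sketch of ensemble credibility scoring. Model names, weights,
# and the Verdict structure are illustrative assumptions, not a real system.
from dataclasses import dataclass

@dataclass
class Verdict:
    source: str        # which model or knowledge source produced this assessment
    supports: bool     # does the source judge the claim to be supported by evidence?
    confidence: float  # the source's own confidence, from 0.0 to 1.0

# Illustrative reliability weights per source (assumed values).
SOURCE_WEIGHTS = {
    "model_a": 1.0,
    "model_b": 1.0,
    "fact_check_db": 1.5,  # curated evidence weighted above raw model output
}

def consensus_score(verdicts: list[Verdict]) -> float:
    """Combine independent verdicts into one credibility score in [0, 1].

    Each verdict contributes its confidence, weighted by source reliability.
    Agreement across diverse sources pushes the score toward 0 or 1, while
    disagreement leaves it near the middle.
    """
    weighted_total = 0.0
    weight_sum = 0.0
    for v in verdicts:
        w = SOURCE_WEIGHTS.get(v.source, 1.0)
        signal = v.confidence if v.supports else 1.0 - v.confidence
        weighted_total += w * signal
        weight_sum += w
    return weighted_total / weight_sum if weight_sum else 0.5

# Example: two models and a fact-check source assess the tariff claim.
verdicts = [
    Verdict("model_a", supports=False, confidence=0.9),
    Verdict("model_b", supports=False, confidence=0.8),
    Verdict("fact_check_db", supports=False, confidence=0.95),
]
print(f"credibility: {consensus_score(verdicts):.2f}")  # low score: claim not credible
```

In this toy example, three independent sources all reject the tariff claim, so the combined score lands near zero. If they disagreed, the score would sit near the middle, signaling that the claim needs more scrutiny rather than a confident verdict from any single model.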