
An Open Letter to Tech Industry Leaders

Dan Nottingham

To those leading the next generation of technology — founders, executives, engineers, investors, and visionaries — we are at a turning point. Artificial intelligence now has the power to inform billions of people faster than any platform in history. That power carries a responsibility we can’t delegate to algorithms. It demands human judgment, transparency, and restraint.

The recent creation of Grokpedia — an AI-driven alternative to Wikipedia — is a case study in what not to do. It was launched with the promise of being “less woke,” more balanced, and free from the biases that supposedly infect Wikipedia. Its founders declared that it would tell “the truth, the whole truth, and nothing but the truth.” The intention may have been noble. The execution was not.

By attempting to correct one form of perceived bias, Grokpedia simply replaced it with another. Bias isn’t removed when you reverse the direction of ideology — it’s multiplied. When a system sets out to be “less” of one worldview, it’s already declared allegiance to another. That is not neutrality; it’s partisanship disguised as objectivity.

Even more troubling is the decision to remove the human element — the editors, fact-checkers, and open debate that make a knowledge platform credible. Wikipedia, for all its imperfections, exposes its process to the world. You can trace every edit, question every citation, and see where human disagreement shaped the outcome. Grokpedia hides that process behind an opaque AI model. It turns truth into a proprietary product, manufactured by an algorithm that can’t be interrogated.

This isn’t just a philosophical problem. It’s a practical one. AI systems inherit the values of their creators. When you claim to build “the truth” while excluding diverse human oversight, you’re not eliminating bias — you’re automating it. You’re teaching machines to speak with certainty in areas where humility is required. That approach doesn’t build trust in AI; it destroys it.

As leaders, we have a duty to recognize the difference between using AI to assist discovery and using it to define reality. The first approach empowers people. The second controls them. Truth cannot be engineered through ideology, and it cannot thrive without transparency.

AmICredible takes the opposite approach. Instead of replacing human editors, it empowers them. Instead of claiming neutrality, it shows its sources. Instead of promising “the truth,” it helps users understand how credibility is determined — with evidence, transparency, and accountability. AI, when used responsibly, should strengthen human reasoning, not override it.

If we want the public to trust artificial intelligence, we must earn that trust by example. We cannot fight misinformation by creating new monopolies on truth. We cannot claim objectivity while silencing the debate that defines it.

Technology doesn’t need to pick sides. It needs to protect the process by which truth emerges. That’s what credibility really means — and it’s what will separate the companies that shape the future from those that simply distort it.

Sincerely,
Dan Nottingham
Founder, AmICredible
