Last week, two separate jury verdicts landed against Meta and Google in cases tied to social media's impact on young people. The rulings, and the legal conversation building around them, are focused primarily on product design: the mechanics of endless scroll, the pull of notification loops, the architecture of feeds engineered to keep users engaged.
The mental health crisis these platforms have contributed to is real, serious, and long overdue for accountability. And as that reckoning unfolds, it also opens the door to a broader question: what else needs to change? Because the same design choices that engineered addiction also shaped the information environment millions of people navigate every day.
The current legal arguments have carefully avoided the content question, in part because Section 230 of the Communications Decency Act still provides significant protection to platforms for user-generated content. The lawsuits that have gained traction focus on how the products are built, not the content that fills them.
However, the design of a platform and the information it surfaces are deeply connected. The feeds engineered to maximize engagement are the same feeds responsible for how news, information, and content reach users every day. In this way, the container and the content are inseparable. A feed optimized for engagement will surface what is emotionally activating, not what is accurate. That is not a coincidence. It is the product working as intended.
This is why credibility transparency matters as a direct response to the engagement-first model. When users can see whether a claim has been verified before they share it, the mechanics of viral amplification lose some of their grip. The design problem and the content problem have the same solution: giving people better information about the information they're seeing.
The lawsuits are asking platforms to reckon with how their design harms the end user. The harder question is what that design does to people's trust in the platforms themselves.
This is not a theoretical problem. The evidence has been building for years.
According to Gallup's 2025 data, only 28% of Americans trust the mass media to report the news fully, accurately, and fairly, one of the lowest levels ever recorded. The Pew Research Center's 2026 report found that most U.S. adults say they regularly encounter news they believe is inaccurate, and about half say it is difficult to determine what information is true.
Those numbers reflect a public that has stopped assuming the information environment is reliable. People are navigating it with skepticism as their default, and largely without tools to help them do it well.
Social platforms have become the dominant distribution channel for news without building any of the infrastructure that news organizations historically maintained to support credibility: editorial standards, source verification, correction processes, transparency about sourcing. Instead, they built algorithms that reward virality and emotional engagement, then stepped back from responsibility for what those algorithms amplified.
Accountability for platforms is not just a legal concept. It is a product decision.
The most effective platforms of the next decade will not be the ones that successfully defend themselves in court. They will be the ones that invest in credibility as a feature. That means building transparency into the feed, not just into the terms of service. It means making it easier for users to evaluate the sources they encounter, not just easier to engage with them.
This is not idealism. It is what the research shows users actually want. Pew Research's data indicates that most Americans say they would know how to verify a news story, but only a fraction are confident that others can do the same. There is a clear public appetite for tools that make credibility more legible. Platforms just aren’t building them. And if they did, at this point, would the public even trust that those tools are impartial?
Here is where the accountability gap becomes personal. The legal cases exist because platforms were not self-regulating. Credibility tools fill a parallel gap; they give individuals and communities accountability without waiting for platforms to build that infrastructure themselves. The courtroom is one form of this. Choosing to verify before you amplify is another.
The conversation that began in these courtrooms is about harm reduction, which is important and urgent. It also creates hope that this is just the start: beyond the legal reckoning lies an opening for platforms, users, and the broader information ecosystem to actively restore confidence in what people encounter every day.
What would it look like for platforms to treat credibility as a design priority rather than an afterthought?
It would look like users being able to see not just who shared something, but whether the underlying claim has been verified. It would look like context sitting alongside content, and signals of credibility that travel with the information instead of getting stripped away as it spreads. It would look like making it easy to challenge something questionable, not just to amplify it.
While platforms work through their legal reckoning, and while we hope that what comes next leads to genuinely healthier online spaces, we don't have to wait to do something small and meaningful right now. The volume of information, the uncertainty about what is true, the exhaustion of not knowing what to trust: these are real pressures that contribute to the overwhelm so many people feel online. Checking the credibility of what you read before you believe it or share it won't solve everything. But it is one small, concrete step that puts you back in control.
AmICredible was built for exactly that moment. Paste any statement, headline, or claim into the platform and get a credibility score backed by sourced evidence. Not to tell you what to think, but to give you solid ground to stand on. A little more confidence in what you're reading. A compass to help you navigate the noise. That is a small thing. But small things, practiced by enough people, are how the broader culture starts to shift.
The verdicts against Meta and Google are a beginning, not an endpoint. The harder work is rebuilding confidence in the information landscape, and that work belongs to everyone: platforms and individuals alike.
Because the trust problem does not stop at product design. It runs through every piece of content the product delivers.
AmICredible is a platform that helps people evaluate the credibility of online statements using AI and sourced evidence. Learn more at amicredible.ai.