I feel slightly guilty for a month-long silence, but I think you’ll find I’ve spent my time well. I’ve been up to my eyeballs in data analysis for some projects related to topics I’ve covered here, and I think I’m covering new ground. That’s where I’ve needed to focus, and it’s why I’ve been AWOL.
During this hiatus, I finally cancelled my New York Times subscription. The final straw was an article (that I can no longer access) written by two seasoned journalists, Alan Blinder and Michael C. Bender, about the feud between Harvard University1 and the present Administration. I was writing an installment about it, and even created a graphic to go with it.
As I started to go through this highly publicized case to understand the issues, I saw that Mr. Bender, an allegedly experienced journalist with a history degree from Ohio State, quoted an administration official as stating that the Administration was cutting off funding because Harvard had somehow broken the rules. Specifically:
The administration preferred to work with Harvard and encouraged the university to “come to the negotiating table in good faith” instead of grandstanding, said Harrison Fields, a White House spokesman. “Harvard is showboating,” Mr. Fields said. “But they know more than anyone that not playing ball is going to hurt their team. They need to be in compliance with federal law in order to get federal funds.”
Someone with Mr. Bender’s credentials should know better. Mr. Fields isn’t a lawyer, so he has no authority on what is or is not legal. More importantly, in our system of Government, the Executive Branch does not get to decide whether Federal law was broken2. That’s the function of the Judiciary Branch. In addition, the rules that were allegedly broken were created by Executive Order, when making the rules is the function of the Legislative Branch. This is grade-school civics: the “separation of powers”. Yet here he was, bothsidesing, giving equal weight to the University’s (solid) position and the Administration’s (absurd) one. I’ve had enough of this so-called “reporting”.
Since my Facebook feed (such as it is) leans toward Trump supporters (mostly long-lost friends from high school), I also saw several rant-posts along the lines of “Why does the government have to pay Harvard, when Harvard is so rich?”, straight out of the Fox altiverse. That framing lends legitimacy to the “debate” and ignores the fact that the Government awarded research grants to the University competitively. Unsurprisingly, the current Administration is breaking those contracts unilaterally, but it was galling that the New York Times wouldn’t call them on it. So I’m done.
On with it:
The fundamental structure of information consumption has transformed profoundly during my lifetime. As I analyze these changes through both historical and scientific lenses, I’ve identified a critical inflection point: the collapse of traditional information gatekeeping mechanisms alongside an unprecedented proliferation of unfiltered content.
The Knowledge-Information Paradox
Empirically, there is a counterintuitive relationship between information access and knowledge acquisition. Smartphones put unprecedented access to information repositories in nearly every pocket, yet that access hasn’t translated into corresponding knowledge gains3. A 2023 survey commissioned by the US Chamber Foundation revealed that seven in ten Americans cannot pass basic civics assessments covering fundamental democratic concepts, with only half able to identify which governmental branch transforms bills into law4.
More troubling is the knowledge stratification phenomenon: As information access has expanded, knowledge distribution has grown increasingly uneven across demographic groups.5 Despite twenty-five years of digital revolution, NAEP civics assessment data shows that average 8th-grade performance in 2022 was statistically unchanged from 1998. Unprecedented access to information by a generation raised on electronic media produced no measurable improvement in civic knowledge, while performance gaps between high and low achievers have persisted.6 This pattern represents what researchers term “knowledge fragmentation,” a condition where increased information availability paradoxically leads to more specialized but less integrated understanding. The mechanism likely involves the dissolution of shared information processing systems (traditional media gatekeepers) without adequate replacement by individual filtering competencies.
The Vanishing Gatekeeper Function
Traditionally, editors, publishers, and broadcast directors served as cognitive filters who selected, contextualized, and prioritized information before it reached us. The newspaper editor’s decision to feature the Yankees game over your child’s Little League triumph wasn’t a value judgment but a demographic calculation based on collective audience interest. Now you can get a customized newsfeed that features just the news you want, ignoring news that matters more broadly.
The information ecosystem that has disappeared operated on a shared foundation. While imperfect (and frequently prone to non-trivial institutional biases), it created common reference points that facilitated social cohesion. Research from Northwestern University’s Local News Initiative and Harvard’s Shorenstein Center documents that communities without local newspapers experience weakened social cohesion, declining awareness of regional affairs, and increased political polarization7.
The dissolution of these shared reference points presents a measurable societal challenge. When everyone is tasked with “doing their own research,” we observe two problematic outcomes:
Cognitive overload: The average person encounters approximately 34GB of information daily (equivalent to reading 174 newspapers), but can only meaningfully process about 0.5% of this volume (roughly one newspaper’s worth).8 A quick sanity check of this arithmetic appears after this list.
Confirmation cascades: Without external validation mechanisms, individuals naturally gravitate toward content that reinforces existing beliefs, creating self-reinforcing information bubbles that grow increasingly resistant to correction.
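For scale, here is a back-of-the-envelope check of those cognitive-overload figures. The 34GB and 174-newspaper numbers come from the Bohn & Short study cited in the notes; treating 0.5% as the meaningfully processable fraction is my reading of the claim above, not a figure from the study:

```python
# Back-of-the-envelope check of the information-overload figures.
DAILY_INTAKE_GB = 34.0        # Bohn & Short (2009): average daily consumption
NEWSPAPERS_PER_DAY = 174      # the study's newspaper-equivalent of that volume
PROCESSABLE_FRACTION = 0.005  # ~0.5% meaningfully processed (assumption)

gb_per_newspaper = DAILY_INTAKE_GB / NEWSPAPERS_PER_DAY
processed_gb = DAILY_INTAKE_GB * PROCESSABLE_FRACTION
processed_newspapers = NEWSPAPERS_PER_DAY * PROCESSABLE_FRACTION

print(f"One newspaper-equivalent: {gb_per_newspaper:.3f} GB")  # ~0.195 GB
print(f"Meaningfully processed: {processed_gb:.2f} GB, "
      f"or ~{processed_newspapers:.1f} newspapers")            # ~0.17 GB, ~0.9 papers
```

The 0.5% fraction works out to just under one newspaper-equivalent per day, consistent with the figure quoted above.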
Crowd-sourced Gatekeepers
Simply lamenting the loss of Walter Cronkite-style trusted arbiters is not a practical solution. We cannot culturally or legally restrict information flow without risking our hard-won freedoms. However, these gatekeepers’ central function, separating signal from noise, remains essential. And it’s become increasingly clear that the “platforms” are either unable or unwilling to take on that responsibility, primarily because it necessarily involves human oversight that is difficult to control and expensive to scale.
But the volume of information is beyond human capability. We still need scalable, algorithmic verification systems that provide transparent, verifiable evaluation of content credibility. Several approaches are being explored, from decentralized verification networks to AI-assisted fact-checking systems9. Integrating large language models with reputation scoring systems represents a promising direction among many emerging solutions.
A reputation-based approach differs fundamentally from engagement metrics like “likes” or shares, which measure popularity rather than accuracy. Instead, it creates computationally tractable networks of expertise that can evaluate claims within their proper domains. Initial findings suggest that crowd-sourced accuracy ratings could be leveraged to improve the quality of shared content and reduce misinformation propagation by shifting user attention to accuracy rather than engagement-driven factors.10
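To make that distinction concrete, here is a minimal sketch, in Python, of how a feed might rank items by crowd-sourced accuracy ratings instead of raw engagement. The data, field names, and fallback prior are hypothetical illustrations, not any platform’s actual design:

```python
from statistics import mean

# Hypothetical feed items: engagement counts plus crowd accuracy ratings in [0, 1].
items = [
    {"headline": "Shocking claim, zero sourcing", "likes": 9400,
     "accuracy_ratings": [0.1, 0.2, 0.0, 0.1]},
    {"headline": "Dry but well-sourced report", "likes": 120,
     "accuracy_ratings": [0.9, 0.8, 1.0]},
]

def engagement_rank(feed):
    """Status quo: popularity decides visibility."""
    return sorted(feed, key=lambda item: item["likes"], reverse=True)

def accuracy_rank(feed, min_ratings=2):
    """Alternative: mean crowd accuracy decides visibility. Items with too
    few ratings fall back to a neutral 0.5 prior rather than being buried."""
    def score(item):
        ratings = item["accuracy_ratings"]
        return mean(ratings) if len(ratings) >= min_ratings else 0.5
    return sorted(feed, key=score, reverse=True)

for item in accuracy_rank(items):
    print(item["headline"])  # the well-sourced report now leads the feed
```

The key design choice is that the ranking signal is orthogonal to popularity: a viral falsehood gets no boost from its virality.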
LLMs as Truth Mediators
Large language models have three features that make them valuable in this context:
Pattern recognition across vast collections of text, enabling rapid flagging of potential factual inconsistencies
The ability to trace claims back to their origins through citation networks
The capacity to quantify the uncertainty of statements rather than ruling on which statement is correct (a sketch of one approach follows this list)
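As a sketch of that third capacity: one simple, model-agnostic way to surface an LLM’s uncertainty is self-consistency sampling, i.e. asking the model the same question several times at nonzero temperature and measuring how much its answers agree. The `sample_model` function below is a hypothetical stand-in for whatever model API is actually in use:

```python
from collections import Counter

def sample_model(prompt: str, n: int = 10) -> list[str]:
    """Hypothetical stand-in: call your LLM n times at nonzero temperature
    and return its n short answers. Replace with a real API call."""
    raise NotImplementedError

def consistency_score(prompt: str, n: int = 10) -> tuple[str, float]:
    """Return the modal answer and the fraction of samples that agree with it.
    Low agreement is a cue to present the claim as uncertain, or route it to
    human reviewers, rather than assert it as fact."""
    answers = sample_model(prompt, n)
    top_answer, top_count = Counter(answers).most_common(1)[0]
    return top_answer, top_count / len(answers)

# Example (hypothetical threshold; flag_for_expert_review is a stand-in too):
# answer, agreement = consistency_score("Which branch of government makes laws?")
# if agreement < 0.7:
#     flag_for_expert_review(answer)
```

Agreement across samples is a crude proxy for confidence, but it maps naturally onto the reputation networks discussed below: low-agreement claims are exactly the ones worth a human expert’s time.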
However, their limitations are equally significant. Without a grounding in expert verification networks, these models risk amplifying falsehoods with persuasive eloquence. Recent research shows hallucination rates varying dramatically by domain and task: while the newest models achieve rates as low as 1.5-1.8% for general tasks, specialized domains like medical literature can see rates of 28-39%. Like George Santos or Jon Lovitz’s SNL character Tommy Flanagan, LLMs often express more confidence when generating incorrect information11.
This is why reputation systems are essential. We can create verification mechanisms that scale with our information environment by computationally representing expertise networks, not merely through credentialing but by demonstrating proficiency in specific knowledge domains. In fact, rather than employing armies of “content moderators,” the community itself can be the human-in-the-loop needed to verify the AI’s assertions, much like Wikipedia or open-source software.
Emerging Implementations
Various initiatives are attempting to address these challenges. I’ve written several installments about SciValidate12, a conceptual framework that proposes visual indicators of scientific validity backed by transparent expertise networks. This system is envisioned as building upon existing academic infrastructure, using identifiers like ORCID and publication records to establish baseline reputation scores across domains.
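As a sketch of what that baseline computation might look like, the toy function below collapses a publication record into per-domain scores. The scoring rule, field names, and data are my invention for illustration, not SciValidate’s actual design; the key shown is just a sample identifier in ORCID’s format:

```python
from collections import defaultdict

# Hypothetical publication records keyed by ORCID iD.
publications = {
    "0000-0002-1825-0097": [
        {"domain": "thermodynamics", "citations": 210, "peer_reviewed": True},
        {"domain": "thermodynamics", "citations": 45,  "peer_reviewed": True},
        {"domain": "economics",      "citations": 3,   "peer_reviewed": False},
    ],
}

def baseline_reputation(orcid: str) -> dict[str, float]:
    """Collapse a publication record into per-domain scores summing to 1.
    Toy rule: citations capped to damp outliers, small bonus for peer review."""
    raw: dict[str, float] = defaultdict(float)
    for pub in publications.get(orcid, []):
        weight = min(pub["citations"], 100) / 100   # cap runaway citation counts
        weight += 0.2 if pub["peer_reviewed"] else 0.0
        raw[pub["domain"]] += weight
    total = sum(raw.values()) or 1.0
    return {domain: round(score / total, 2) for domain, score in raw.items()}

print(baseline_reputation("0000-0002-1825-0097"))
# {'thermodynamics': 0.98, 'economics': 0.02}
```

Whatever the real scoring rule ends up being, the important property is the output’s shape: reputation is a vector over domains, not a single scalar “trustworthiness” number.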
Other promising approaches include blockchain-based reputation systems, collaborative fact-checking networks, and AI-assisted information quality assessment tools. Each faces significant technical and social challenges, from identity verification to expertise boundary definition to platform integration.
Yet these implementation challenges pale compared to the societal cost of continuing without effective truth validation mechanisms. The current information environment operates as what economists call a “market for lemons”: when consumers cannot distinguish quality, good products are driven out, and content deteriorates in a downward spiral.
Moving Beyond Gatekeepers to Networks of Trust
The operative question isn’t whether we need information validation mechanisms, but how to design systems that preserve individual agency while providing meaningful quality signals. The answer lies not in restoring centralized gatekeepers but in creating decentralized, transparent networks of expertise.
In this framework, authority becomes fluid and domain-specific. A physicist might have a high reputation in thermodynamics but receive no special weighting when discussing economic policy. This approach preserves digital media’s democratizing potential while addressing its most problematic failures.
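Concretely, a verdict on a claim could weight each reviewer’s vote by their reputation in the claim’s domain only, so the physicist’s standing in thermodynamics buys them nothing on an economics question. A minimal sketch, with all names and numbers hypothetical, building on the per-domain scores sketched earlier:

```python
# Per-domain reputations, e.g. from the baseline computation sketched earlier.
reputation = {
    "physicist": {"thermodynamics": 0.95, "economics": 0.05},
    "economist": {"economics": 0.80},
}

def weighted_verdict(votes: dict[str, bool], domain: str) -> float:
    """Return support in [0, 1] for a claim, weighting each voter by their
    reputation in the claim's domain. Voters with no standing there count ~0."""
    weights = {voter: reputation.get(voter, {}).get(domain, 0.0) for voter in votes}
    total = sum(weights.values())
    if total == 0:
        return 0.5  # no domain expertise among voters: stay agnostic
    support = sum(weight for voter, weight in weights.items() if votes[voter])
    return support / total

# The physicist's eminence barely moves an economics verdict...
print(weighted_verdict({"physicist": True, "economist": False}, "economics"))       # ~0.06
# ...but dominates in their own field.
print(weighted_verdict({"physicist": True, "economist": False}, "thermodynamics"))  # 1.0
```

Authority here is a property not of the person but of the (person, domain) pair, which is what makes it fluid in the sense described above.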
The challenge requires collaborative development across disciplines, from computer science to behavioral psychology to media studies. To make such systems trustworthy and effective, developers should focus as much on the social dynamics that drive broad adoption as on the underlying technology.
Ultimately, I believe we’re witnessing both an information crisis and the painful evolution of our collective knowledge systems. The gatekeepers of yesterday cannot return, nor should they. However, the current environment has failed to preserve their essential function: separating accurate signal from the distraction of noise. We need mechanisms suited to our digital reality. Integrating AI systems with human expertise networks offers our most promising path forward.
Full disclosure #1: I graduated from Harvard in 1988. Full disclosure #2: I have never donated anything besides my time to the school for any reason. There’s tribal affiliation, but nothing else in it for me.
Standard journalism practice treats even notorious criminals as ‘alleged’ until conviction. The Times denied Harvard this basic courtesy.
Pew Research Center. (2021). “Mobile Technology and Home Broadband 2021.” https://www.pewresearch.org/internet/2021/06/03/mobile-technology-and-home-broadband-2021/
US Chamber Foundation. “New Study Finds Alarming Lack of Civic Literacy Among Americans.” This contemporary survey reveals that more than 70% of Americans fail basic civics tests despite unprecedented access to information. https://www.uschamberfoundation.org/civics/new-study-finds-alarming-lack-of-civic-literacy-among-americans
Bonfadelli, H. (2002). “The Internet and Knowledge Gaps: A Theoretical and Empirical Investigation.” European Journal of Communication, 17(1), 65-84. https://doi.org/10.1177/0267323102017001607
National Assessment of Educational Progress. (2022). “Civics Assessment.” NAEP data shows that while average 8th-grade civics scores in 2022 were only 2 points lower than 2018 and not significantly different from 1998, performance gaps persist, with the bottom quartile experiencing decline while top performers remain stable. https://www.nationsreportcard.gov/civics/
Northwestern University’s Local News Initiative documents that communities losing local newspapers experience decreased social cohesion, reduced civic engagement, and increased political polarization. Harvard’s Shorenstein Center research shows similar patterns where local news loss correlates with weakened community ties.
Bohn, R., & Short, J. (2009). “How Much Information? 2009 Report on American Consumers.” Global Information Industry Center, University of California, San Diego. This study reported that Americans consume about 34GB of information daily, equivalent to approximately 174 newspapers.
Current approaches include decentralized blockchain-based verification networks like Fact Protocol (2024) and systems described in “Blockchain-Based Platform to Fight Disinformation Using Crowd Wisdom and Artificial Intelligence” (Applied Sciences, 2023), as well as automated fact-checking systems like ClaimBuster, Full Fact’s AI tools used across 30 countries, and commercial platforms like Originality.AI.
Pennycook, G., & Rand, D. G. (2021). “Shifting attention to accuracy can reduce misinformation online.” Nature, 592(7855), 590-595. https://doi.org/10.1038/s41586-021-03344-2
Recent benchmarking studies show significant variation in LLM hallucination rates: while GPT-4 variants achieve 1.5-1.8% rates for general tasks, specialized domains see much higher rates, with medical reference generation showing 28.6% hallucination rates for GPT-4. Notably, a recent study found that models are significantly more likely to use confident language when hallucinating than when referring to a factual base.