Modern news publishers have a big problem.
The industry standard is to publish reports -- even incomplete ones -- within minutes of critical events.
However, today's fact-checking processes take up to *13 hours* to correct information in those rushed reports.
And if any mistakes slip through the cracks, as the saying goes, that misinformation travels halfway around the world before the truth can even put on its pants.
In our polarized world, misinformation has shown significant power to shape public opinion and influence the direction of our societies. If we want to share a *trustworthy* digital landscape, one that empowers us to better understand our world and make better decisions, we must create new tools to validate our online information -- and fact-checking is a pillar of that effort.
My name is Sam Butler, and I'm working on a project called CrowdFact. Our objectives are to (a) reduce the current response time on misinformation from 13 hours to a matter of minutes, and (b) provide a model for stabilizing and protecting the digital information landscape in advance of the worldwide 2020 election cycles.
Our tool solves the aforementioned problems by bringing fact-checking power *outside* the newsroom. With our browser overlay technology, anyone can publicly fact-check claims -- on *any* piece of online content -- and "bridge" them to supporting/refuting evidence, which will be visible to all other users looking at that content.
Here's a video of our tool in action:
https://www.youtube.com/watch?v=FphXsJS7wtA#t=02m05s
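To make the mechanics concrete, here's a minimal sketch of how a single bridge could be represented as data. The shape and field names (`contentUrl`, `claimText`, `stance`, `evidenceUrl`) are illustrative assumptions, not our actual schema.

```typescript
// A hypothetical shape for a "bridge": a public link between a claim
// found in online content and a piece of supporting or refuting evidence.
interface Bridge {
  id: string;                         // unique identifier for this bridge
  contentUrl: string;                 // the page containing the claim
  claimText: string;                  // the exact claim being bridged
  stance: "supports" | "contradicts"; // how the evidence relates to the claim
  evidenceUrl: string;                // where the cited evidence lives
  createdBy: string;                  // user who created the bridge
  createdAt: Date;                    // when the bridge was created
}

// Example: a bridge contradicting a claim, citing an outside source.
const example: Bridge = {
  id: "bridge-001",
  contentUrl: "https://example.com/breaking-news",
  claimText: "The collapse was caused by an earthquake.",
  stance: "contradicts",
  evidenceUrl: "https://example.org/official-engineering-report",
  createdBy: "user-42",
  createdAt: new Date(),
};
```

In principle, everything the overlay renders -- highlights, evidence links, moderation state -- could be derived from collections of records like this.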
Presuming users have fact-checking competence and good intentions, this solution is robust.
-- But when they *don't?*
That is a question we, as a digital community, must address.
In our tool, when users cite (or "bridge") evidence to *contradict* an existing claim, that claim is highlighted in *red* to alert other readers of potential misinformation. When users bridge evidence to *support* an existing claim, that claim is highlighted in *green* to signal validity to other readers. When users have bridged both *contradicting and supporting* evidence for a given claim, that claim is highlighted in *yellow* to alert other readers that it is questionable and up for debate.
See the accompanying image for a visual representation.
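In code, that color-coding rule might look like the minimal sketch below; the `Stance` type and `highlightColor` helper are illustrative names rather than our actual implementation.

```typescript
type Stance = "supports" | "contradicts";

// Hypothetical rule for coloring a claim based on the stances of its bridges:
// red = only contradicting evidence, green = only supporting evidence,
// yellow = both kinds of evidence, no highlight = no bridges yet.
function highlightColor(bridgeStances: Stance[]): "red" | "green" | "yellow" | "none" {
  const hasSupport = bridgeStances.includes("supports");
  const hasContradiction = bridgeStances.includes("contradicts");

  if (hasSupport && hasContradiction) return "yellow"; // questionable, up for debate
  if (hasContradiction) return "red";                  // potential misinformation
  if (hasSupport) return "green";                      // signals validity
  return "none";                                       // nothing bridged yet
}

// Example: one supporting and one contradicting bridge -> "yellow".
console.log(highlightColor(["supports", "contradicts"]));
```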
The biggest problem here is that we don't have an objective way to *evaluate* the quality of bridged evidence -- including whether the citation actually bears on the claim and whether the referenced source is trustworthy.
In short, someone can make a random "contradicting" bridge on a completely valid claim, causing that claim to be highlighted in red and appear untrustworthy. Someone can likewise make a random "supporting" bridge on a completely false claim, causing it to be highlighted in green and appear trustworthy.
If other users fail to explore these bridges and simply take the color of the highlighted text at face value, they can come away with the impression that misinformation is true and real information is false, which is a dangerous outcome.
So the question becomes: how *can* we objectively evaluate the quality of bridges?
Several solutions come to mind, but each of them has its own flaws.
1. Internal platform moderation
Whenever a user bridges a claim to supporting/contradicting evidence, that bridge is evaluated by internal staff to determine its validity. Only after passing this moderation will a bridge be publicly displayed (this gating step is sketched below).
However, what if a user makes a legitimate bridge to contradict a disreputable claim in an article -- and, while that bridge awaits moderation, other users read the article's misinformation and accept it as true?
This demonstrates the tradeoff between speed and reliability in any fact-checking effort.
Flaws: Slows down the tool's fact-checking capabilities, doesn't scale with user activity, potential for platform + moderator biases
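To illustrate the gating step referenced above, here is a minimal sketch that assumes each bridge carries a moderation status; the status values and the `publiclyVisible` helper are hypothetical.

```typescript
// Hypothetical moderation states for a bridge awaiting internal review.
type ModerationStatus = "pending" | "approved" | "rejected";

interface ModeratedBridge {
  id: string;
  status: ModerationStatus;
}

// Only approved bridges are shown to the public; pending and rejected
// bridges stay hidden, which is the source of the latency tradeoff.
function publiclyVisible(bridges: ModeratedBridge[]): ModeratedBridge[] {
  return bridges.filter((b) => b.status === "approved");
}

// Example: a freshly submitted bridge is invisible until a moderator approves it.
console.log(publiclyVisible([{ id: "bridge-001", status: "pending" }])); // []
```

The latency problem described above lives entirely in the gap between "pending" and "approved".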
2. Crowdsourced moderation
When users bridge claims to supporting/contradicting evidence, that bridge is immediately visible to (at least some portion of) the existing user base. Any time a user comes across a supporting/contradicting bridge, they will soon have an incentive (based on a cryptocurrency rewards protocol) to explore that bridge and flag it if it appears unsubstantiated (this flag-and-reward loop is sketched below).
With a trusted crowd of user-moderators -- who could potentially view supporting/contradicting bridges prior to the rest of the community -- this could be an effective and scalable solution.
Likewise, if the broader user base responds positively to the incentive and takes it upon itself to evaluate any encountered bridges, this could also be an effective and scalable solution.
Flaws: Allows unmoderated bridges to become publicly visible, risking that others may erroneously internalize them. If relying on user-moderators, you are susceptible to their biases while slowing down the tool's fact-checking capabilities.
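Here's a minimal sketch of how that flag-and-reward loop could work, assuming a flag threshold and a fixed token reward per review; the threshold, reward amount, and helper names are illustrative assumptions, not a finalized protocol.

```typescript
// Hypothetical crowdsourced-moderation state for a single bridge.
interface CrowdModeratedBridge {
  id: string;
  flaggedBy: Set<string>; // users who flagged this bridge as unsubstantiated
  hidden: boolean;        // hidden pending review once enough flags accumulate
}

const FLAG_THRESHOLD = 3;       // assumed number of flags before hiding
const FLAG_REWARD_TOKENS = 0.5; // assumed token reward for reviewing a bridge

// Record a flag, hide the bridge if the threshold is met, and return the
// token reward credited to the flagging user.
function flagBridge(bridge: CrowdModeratedBridge, userId: string): number {
  bridge.flaggedBy.add(userId);
  if (bridge.flaggedBy.size >= FLAG_THRESHOLD) {
    bridge.hidden = true;
  }
  return FLAG_REWARD_TOKENS;
}

// Example: the third distinct flag hides the bridge pending review.
const flagged: CrowdModeratedBridge = { id: "bridge-007", flaggedBy: new Set(["a", "b"]), hidden: false };
flagBridge(flagged, "c");
console.log(flagged.hidden); // true
```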
3. Tangible disincentives
As described in the previous section, our technology will include a platform-specific cryptocurrency which can reward users for their fact-checking efforts.
Similarly, we can create opportunity costs for publishing misinformation and disreputable bridges.
Through their activity on our platform, users will have an opportunity to earn a share of daily token rewards. If users are found to post misinformation or disreputable bridges, they could lose a portion of their projected token rewards to disincentivize that foul play (a simple version of this arithmetic is sketched below).
The biggest strengths of this solution are that it (a) preemptively disincentivizes bad actors, and (b) incentivizes bad actors to personally correct their own disreputable activity.
Flaws: If users are willing to accept this disincentive in exchange for publishing misinformation, this solution is ineffective
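A simple version of that reward-and-penalty arithmetic might look like the sketch below; the penalty rate and the `dailyTokenPayout` helper are assumptions about how the protocol could be tuned, not final parameters.

```typescript
// Hypothetical daily payout: users earn their projected share of the day's
// token pool, minus a penalty for each confirmed piece of misinformation
// or disreputable bridge they published.
const PENALTY_PER_VIOLATION = 0.25; // assumed fraction of projected rewards lost per violation

function dailyTokenPayout(projectedReward: number, confirmedViolations: number): number {
  const penalty = projectedReward * PENALTY_PER_VIOLATION * confirmedViolations;
  return Math.max(0, projectedReward - penalty); // payout never goes negative
}

// Example: a user projected to earn 10 tokens, with 2 confirmed violations,
// receives 10 - (10 * 0.25 * 2) = 5 tokens.
console.log(dailyTokenPayout(10, 2)); // 5
```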
4. Machine-learning verification
Use algorithms to (a) analyze all available information about a given bridge and the evidence that it cites to support/refute a claim, (b) calculate the projected validity of the bridge, and (c) determine whether or not to make this bridge publicly available.
ML verification can be a near-instantaneous and reliable source of moderation. Furthermore, the moderation process can be tuned to include human collaboration for improved reliability.
For example, if the algorithm calculates a low trust score for a given bridge, that bridge can be instantly flagged and sent to a human moderator to determine the proper action (this routing step is sketched below).
Flaws: The algorithm itself is a source of bias. Is CNN a reputable source? Is Fox News? Breitbart? Is the data from which the algorithm learns which sources are reputable and disreputable itself reliable? Do the biases of the humans creating this algorithm diminish its objectivity? What *is* the objective truth that this algorithm is based on?
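Here's a minimal sketch of the threshold routing described above, assuming the model exposes a trust score between 0 and 1; the threshold value and the `routeBridge` helper are illustrative, not a real scoring model.

```typescript
// Hypothetical routing decision based on a model's trust score for a bridge.
type BridgeDecision = "publish" | "human_review";

const TRUST_THRESHOLD = 0.7; // assumed score below which a bridge is escalated

// Bridges the model trusts go live immediately; low-scoring bridges are
// flagged and sent to a human moderator, as described above.
function routeBridge(trustScore: number): BridgeDecision {
  return trustScore >= TRUST_THRESHOLD ? "publish" : "human_review";
}

// Example: a low-scoring bridge is escalated rather than published.
console.log(routeBridge(0.35)); // "human_review"
```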
Conclusion:
We have the technology to build new fact-checking protocols for the Internet and keep pace with the instantaneous supply of new online information.
However, as those solutions involve more crowdsourced participation and machine learning (and a higher volume of fact-checking activity in general), a new challenge arises.
*How can we fact-check the fact checks?*
This is a complex problem facing all technologies in the anti-misinformation space, with the consequences affecting all of us who rely on digital information.
We've shared some of our ideas for solutions -- and now, we'd love to hear yours.
Share feedback on our proposals, and propose any ideas you have to improve the speed and reliability of next-gen fact-checking tools.