Four Ways to Deal with Fake News Online

“Fake news” has been the phrase du jour across the political, media, and technology domains over the past couple of weeks, as a number of people have suggested false news stories may have swung the result of the US presidential election. There seems to be widespread agreement that something more needs to be done and, though initial comments from Facebook CEO Mark Zuckerberg suggested he didn’t think it was a serious problem, Facebook now appears to be taking things more seriously. Yet even with this consensus on the nature and seriousness of the problem, there’s little consensus so far on how it is to be solved.

As I see it, there are four main approaches that Facebook (and, to some extent, other companies that are major conduits for news) can take at this point:

  • Do nothing – keep things more or less as they are
  • Leverage algorithms and artificial intelligence – put computers to work to detect and block false stories
  • Use human curation by employees – put teams of people to work on detecting and squashing false stories
  • Use human curation by users – leverage the user base to flag and block false content.

Let’s look at each of these in turn.

Do nothing

This is in many ways the status quo, though it’s becoming increasingly untenable. Through a combination of a commitment to free and open speech, a degree of apathy, and perhaps even despair at finding workable solutions, many sites and services have simply kept the doors open to any and all content, with no attempt to detect or downgrade that which is not truthful. Mark Zuckerberg has offered in Facebook’s defense the argument that truth is in the eye of the beholder and that to take sides would, in at least some cases, be a political statement. There is real merit to this argument: not all the content some people might consider false is factually so, and in some cases the falsehood is more a matter of opinion. But the reality is that much of the content likely to have most swayed votes is demonstrably incorrect, so this argument has its limits. No one is arguing that Facebook should attempt to divide one set of op-eds from another, merely that it should stop allowing clearly false and, in some cases, libelous content.

Put the computers to work

When every big technology company under the sun is talking up its AI chops, it seems high time to put machine learning and other computing technology to work on detecting and blocking fake news. If AI can analyze the content of your emails or Facebook posts to serve up more relevant ads, then surely the same AI can be trained to analyze the content of a news article and determine whether it’s true or not. I am, of course, being slightly facetious here – we’ve already seen the failure of Facebook’s Trending Stories algorithm to filter out fake stories. But the reality is that computers likely could go a long way toward making some of these determinations. Both Google and Facebook have now banned their ad networks from being used on fake news sites, so it’s clear they have some idea of how to determine whether entire sites fit into that category. It shouldn’t be too much of a leap to apply the same algorithms to the News Feed and Trending Stories. But it’s likely computers by themselves will produce both false positives and false negatives, so the answer almost certainly isn’t to rely entirely on machines to make these determinations.
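To make the idea concrete, here is a toy illustration of the kind of text classifier such an effort might start from: a tiny bag-of-words Naive Bayes in plain Python. Everything below, including the example headlines and their labels, is invented for illustration; a production system would need far more data, richer features, and human review.

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    # examples: list of (text, label) pairs; labels are "fake" or "real"
    counts = {"fake": Counter(), "real": Counter()}
    totals = Counter()
    for text, label in examples:
        for tok in tokenize(text):
            counts[label][tok] += 1
        totals[label] += 1
    return counts, totals

def classify(counts, totals, text):
    vocab = set(counts["fake"]) | set(counts["real"])
    scores = {}
    for label in ("fake", "real"):
        # start from the log prior for this label
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for tok in tokenize(text):
            # Laplace smoothing so unseen words don't zero out the score
            score += math.log((counts[label][tok] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Invented training snippets, purely for demonstration
examples = [
    ("pope endorses candidate shocking secret", "fake"),
    ("miracle cure doctors hate this secret", "fake"),
    ("senate passes budget bill after debate", "real"),
    ("court rules on appeal in patent case", "real"),
]
counts, totals = train(examples)
print(classify(counts, totals, "shocking secret endorsement"))  # → fake
```

Even a sketch this small makes the false-positive/false-negative point obvious: a satirical piece and a fabricated one can share exactly the same vocabulary, which is why machines alone can’t be trusted with the final call.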

Human curation by employees

The next option is to put employees to work on this problem, scanning popular articles to see whether they are fundamentally based in fact or fiction. That might work at a very high level, focusing only on those articles being shared by the greatest number of people, but it obviously wouldn’t work for the long tail of content: the sheer volume would be overwhelming. Facebook, in particular, has tried this approach with Trending Stories and then, in the face of criticism over perceived political bias, fired its curation team. Accusations of political bias are certainly worth considering here; any set of human beings will be subject to their own personal interpretations. However, given clear guidelines that err on the side of letting content slip through the net, such concerns should not be prohibitive. The reality is that any algorithm will have to be trained by human beings in the first place, so the human element can never be eliminated entirely.


Human curation by users

The last option (and I need to give my friend Aaron Miller some credit for these ideas) is to allow users to play a role. Mark Zuckerberg hinted in a Facebook post this week that the company is working on some projects to allow users to flag content as being false, so it’s likely this is part of Facebook’s plan. How many of us during this election cycle have seen friends share content we know to be fake but were loath to leave a comment pointing this out for fear of being sucked into a political argument? The option to anonymously flag to Facebook, if not to the user, that the content being shared was fake might be more palatable. If Facebook could aggregate this feedback in such a way that the data would eventually be fed back to those sharing or viewing the content, it could make a real difference.

Such content could come with a “health warning” of sorts – rather than being blocked, it would simply be accompanied by a statement suggesting a significant number of users had marked it as potentially being false. In an ideal world, the system would go further still and allow users (or Facebook employees) to suggest sources providing evidence of the falsehood, including myth-debunking sites such as Snopes or simply mainstream, respectable news sources. These could then appear alongside the content being shared as a counterpoint.
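As a rough sketch of how that flag aggregation might work, here is some Python; the threshold, minimum sample size, and field names are all invented assumptions on my part, not anything Facebook has announced:

```python
# Hypothetical aggregation of anonymous "this is false" flags, attaching a
# health warning once enough viewers have flagged a story.
FLAG_THRESHOLD = 0.05   # warn once 5% of viewers flag it (invented number)
MIN_VIEWS = 1000        # don't warn on tiny samples

def annotate(story):
    # story: dict with view and flag counts, e.g. from an analytics store
    views, flags = story["views"], story["flags"]
    if views >= MIN_VIEWS and flags / views >= FLAG_THRESHOLD:
        story["warning"] = (
            "A significant number of users have marked this story "
            "as potentially false."
        )
    return story

story = annotate({"views": 20_000, "flags": 1_800})
print("warning" in story)  # → True
```

The minimum-views guard matters: a handful of politically motivated flags on a barely seen story shouldn’t be enough to trigger the warning.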

Experimentation is the key

Facebook’s internal motto for developers for a long time was “move fast and break things” though it’s since been replaced by the much less iconoclastic “move fast with stable infrastructure”. The reality is news sharing on Facebook is already broken, so moving fast and experimenting with various solutions isn’t likely to make things any worse. The answer to the fake news problem probably doesn’t actually lie in any of the four approaches I’ve proposed but in a combination of them. Computers have a vital role to play but need to be trained and supervised by human employees. For any of this to work at scale, the computers likely also need training from users, too. But doing nothing can no longer be the default option. Facebook and others need to move quickly to find solutions to these problems. There will be teething problems along the way, but it’s better to work through some challenges than throw our hands up in despair and walk away.

Published by

Jan Dawson

Jan Dawson is Founder and Chief Analyst at Jackdaw Research, a technology research and consulting firm focused on consumer technology. During his sixteen years as a technology analyst, Jan has covered everything from DSL to LTE, and from policy and regulation to smartphones and tablets. As such, he brings a unique perspective to the consumer technology space, pulling together insights on communications and content services, device hardware and software, and online services to provide big-picture market analysis and strategic advice to his clients. Jan has worked with many of the world’s largest operators, device and infrastructure vendors, online service providers and others to shape their strategies and help them understand the market. Prior to founding Jackdaw, Jan worked at Ovum for a number of years, most recently as Chief Telecoms Analyst, responsible for Ovum’s telecoms research agenda globally.

19 thoughts on “Four Ways to Deal with Fake News Online”

  1. Where there’s a will there’s a way, but… is there a will?
    1- Producers of fake news are obviously happy with it. The problem is not even web-only; what about press, radio, TV, and political speech?
    2- Readers of fake news are happy with it too. They could get real news from trustworthy outfits simply by switching channels or typing a different URL. They choose not to.

    And intermediaries between fake news producers and consumers probably have no power to enforce any type of truth-checking, lest they be disintermediated into irrelevance or avoided the way old-school news sources have been. Welcome to Nwitter, the Twitter for nutters, and to RageBook. Alienating nutters and cutting them off from the sensible stuff even more would actually be counterproductive. I sometimes lurk on the Free Republic forum… people there are dissociated from any kind of reality. You want to reel them in, not cut them off.

    The issue is deeper than flagging nutty news on the web. When the US lets creationism, intelligent design, … be taught alongside evolution as science, not religion, it’s a whole country giving up on Reason. When politicians and old media are allowed to blatantly lie with no repercussions, that sets the tone for what everyone else / new media can (should? it brings in a lot of money!) do. An education system that doesn’t teach critical thinking, the difference between science and religion, between feelings and facts, sets the stage for what happened.

    France has a few limits on free speech, mostly about denying the Holocaust and inciting racial (I think LGBT too, now) hatred. Obviously not ideal, but probably better than trying to have the profit motive (Google and FB ads) contort itself back into supporting common sense when it has been, and still is, supporting nuts.

    Side note: I find that adding “must be successful” to the definition of “innovation”, in a way no sensible dictionary ever has, follows the same pattern, eh?

    1. Very true.
      Upon his defeat in his party’s primary for yet another term as NY mayor, the late Ed Koch was asked if he would run as an independent. Using his signature wit, he replied…

      “No, the people have spoken, and the people… should be punished.”

      And there you have it.

  2. Depriving purveyors of fake news of a revenue stream is a useful first step, but it won’t stop politically motivated operators. Rather than focusing on individual fake news items, you would need to focus on the overall phenomenon of how fake news spreads. There are several ways of preventing obviously fake news from going viral:
    1) Stop algorithms from spreading obviously fake news sources in the “stuff you might like” category, which is not actually recommended by your friends.
    2) Delay the obviously fake news recommendations from actual friends by 6-24 hours. This slows down the spread considerably without raising any freedom of speech concerns.
    3) Only load obviously fake news from friends after tapping the “this might be fake” button. The power of the default will cause many people to skip the extra step and move on.
    4) Mark obviously fake news as being fake, so that readers know not to take it seriously and might feel embarrassed to share it unless they feel very strongly about it.
    5) Make it just a little bit harder to “share” or “like” obviously fake news, by creating an artificial delay, asking “are you sure”, or putting the share button in a hamburger menu.
    The internet removed the friction in a lot of communication, which promotes the spreading of dangerous and damaging nonsense stories. Facebook and others can easily put back some friction that makes launching viral nonsense much harder (i.e. take away that digital bullhorn). They can easily achieve an 80% reduction without treading on anybody’s right to free speech and without investing much time and money.
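    A minimal sketch of the delay-based friction in points 2 and 3 might look like this in Python (the six-hour window, function, and field names are invented for illustration):

```python
import datetime as dt

DELAY = dt.timedelta(hours=6)  # invented delay window

def visible_in_feed(post, now, flagged_as_fake):
    # Non-suspect posts appear immediately; suspect ones are held back
    # until the delay window has elapsed since they were shared.
    if not flagged_as_fake:
        return True
    return now - post["shared_at"] >= DELAY

now = dt.datetime(2016, 11, 20, 12, 0)
fresh = {"shared_at": dt.datetime(2016, 11, 20, 11, 0)}
old = {"shared_at": dt.datetime(2016, 11, 20, 1, 0)}
print(visible_in_feed(fresh, now, flagged_as_fake=True))  # → False
print(visible_in_feed(old, now, flagged_as_fake=True))    # → True
```

    Nothing is blocked outright; the suspect story simply loses the speed advantage that lets viral nonsense outrun any correction.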

    1. How about “fake but entertaining”? I am subscribed to several news feeds in Facebook related to engineering and I know some of the postings there are “fake news”, but they are entertaining and believable, so I would recommend them to friends.

      1. Your fake-but-funny engineering posts might end up being delivered a bit slowly – so what? And, yes, the April Fools’ joke industry might suffer a bit of a setback if the “jokes” are labelled as “fake”. Again, is this much of a loss?

  3. When judging accusations, I think it’s a real good idea to judge not only the accused, but the quality of the accuser’s arguments and motives, as well as the listener’s.

    All parties are responsible. As always, I consider the “Analogue Version”, which simply means finding applicable cases within existing law. “Do Nothing” already does a great deal. More often than not we don’t really need “new laws for a digital age”.

    My vote… “Do Nothing”
    Do we data mine rumor propagation between people on the telephone? Of course not.
    We do have slander and libel laws however…

  4. “…many sites and services have simply kept the doors open to any and all content, with no attempt to detect or downgrade that which is not truthful.”

    Do you mean factual? I think the only thing we can require from a news story is that it be factual, not truthful. Even if the facts are correct, the author can be choosy and present them in a certain way to prove his opinion (“lying by omission”). No one has a monopoly on the truth (no AI, human curation, users…), so asking an article to be truthful in an absolute sense is too much, I think. Users should use their common sense to distinguish between true and false.

    Even The New York Times, which has a mission of “never fail”, missed the polls. Their digital editors say they focus more on “making your life better” than on making real investigative stories. I think they should do more legwork if they want to keep up with the facts, and not sit in a digital newsroom relying on technology to make stories.

    So, all things considered, I wonder if fighting “fake news” is like tilting at windmills.

      1. Thank you, I will look for it. For my part, I recommend Mark Twain’s story “On the Decay of the Art of Lying”.

    1. That’s what good news is: not only true, but relevant and prioritized. I think before becoming fake, news became relatable rather than important. Once the nightly news becomes entertainment, not information, fake news follows.

      1. “Once the nightly news becomes entertainment, not information, fake news follows.”

        Haha, I am curious if we can take fake news just as sentiment in this case, since their authors did not have time to fact-check the information overnight and fell for the “hotness” factor? So much for the number of likes…

        Truth (in a poetic sense) is like water: there is no harm in consuming it, and our bodies largely consist of it. So if the general population likes carbonated sugary drinks or beer or alcohol, what we can do is tax them, and in “writing terms” that means asking for critical feedback.

      2. To a lot of people, relatable news is intrinsically important. Since news has always been a form of storytelling (no one ever gives “just the facts”, ever), it is always entertainment at some level. There has to be some reason why a particular set of facts is important vs. another set that doesn’t get reported.

        And the nightly news, and the news media in general, always need money, so there has to be something compelling for segmented viewers to make it marketable to advertisers.

        Traditional news sources are facing (along with many 20th-century institutions) the same “establishment” issues Hillary faced, to which Trump and Brexit seemed the answer (as yet TBD, in reality).


  5. I believe this is nothing new as far as Apple is concerned; there has been fake news about them for a very long time, especially the “Apple is doomed” variety.

    So how did we deal with it? Nothing.

  6. I don’t think there is anything we can do. Never underestimate the ability of people to get around obstacles: as soon as you make something idiot-proof, someone will make a better idiot. The irony is that as fake news rises, so does Snopes’ traffic.

    According to Snopes, fake news isn’t the problem. It is a failing of the media industry.

    There are a number of problems. Fake news is merely a symptom. What’s the real problem? The snake oil salesman or the buyer?


    1. Plus, non-fake news isn’t very good either. I just caught a moment of Donald Trump’s “60 Minutes” interview: “I don’t want to sue the Clintons, they’re good people.” Obvious follow-up: “Didn’t you call Hillary Clinton the Devil just last week?” Except, not…

    2. “As soon as you make something idiot proof, someone will make a better idiot.”

      Apparently the problem is in idiotic news-sharing practices (I am guilty of it too).

      “If you are the kind of person who is inclined to like Donald Trump but also who is inclined to like the pope, the stories that you need to see are the psychologically difficult ones that pick at the tension between your identity as a Republican partisan and your identity as a Catholic.”
