A New AI “Journalist” Is Rewriting the News to Remove Bias

by Tim Sloane


ProPublica has documented the very real problem that AI trained on poor data often absorbs our biases and automates them, as described in its article “Machine Bias: Risk Assessments in Criminal Sentencing.” In a complete reversal, the story quoted below describes an AI that purports to evaluate a news story across multiple sources and then write a neutral, fact-based article:

“Want your news delivered with the icy indifference of a literal robot? You might want to bookmark the newly launched site Knowhere News. Knowhere is a startup that combines machine learning technologies and human journalists to deliver the facts on popular news stories.

Here’s how it works. First, the site’s artificial intelligence (AI) chooses a story based on what’s popular on the internet right now. Once it picks a topic, it looks at more than a thousand news sources to gather details. Left-leaning sites, right-leaning sites – the AI looks at them all.

Then, the AI writes its own “impartial” version of the story based on what it finds (sometimes in as little as 60 seconds). This take on the news contains the most basic facts, with the AI striving to remove any potential bias. The AI also takes into account the “trustworthiness” of each source, something Knowhere’s co-founders preemptively determined. This ensures a site with a stellar reputation for accuracy isn’t overshadowed by one that plays a little fast and loose with the facts.

For some of the more political stories, the AI produces two additional versions labeled “Left” and “Right.” Those skew pretty much exactly how you’d expect from their headlines:

  • Impartial: “US to add citizenship question to 2020 census”
  • Left: “California sues Trump administration over census citizenship question”
  • Right: “Liberals object to inclusion of citizenship question on 2020 census”

Some controversial but not necessarily political stories receive “Positive” and “Negative” spins:

  • Impartial: “Facebook scans things you send on messenger, Mark Zuckerberg admits”
  • Positive: “Facebook reveals that it scans Messenger for inappropriate content”
  • Negative: “Facebook admits to spying on Messenger, ‘scanning’ private images and links”

Even the images used with the stories occasionally reflect the content’s bias. The “Positive” Facebook story features CEO Mark Zuckerberg grinning, while the “Negative” one has him looking like his dog just died.”
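The trust weighting described above — editor-assigned source scores that keep a reputable outlet from being drowned out by a less careful one — can be sketched in a few lines. Everything here is an illustrative assumption (the source names, scores, claims, and the 0.5 threshold), not Knowhere’s actual implementation:

```python
from collections import defaultdict

# Editor-assigned trust scores in [0, 1] (hypothetical values).
SOURCE_TRUST = {
    "wire_service": 0.9,
    "major_daily": 0.8,
    "partisan_blog": 0.3,
}

def select_claims(reports, threshold=0.5):
    """Keep claims whose trust-weighted support exceeds `threshold`.

    `reports` maps a source name to the set of claims it makes.
    A claim's support is the total trust of the sources asserting it,
    divided by the trust of all sources consulted, so a claim backed
    only by low-trust outlets is dropped.
    """
    total_trust = sum(SOURCE_TRUST[s] for s in reports)
    support = defaultdict(float)
    for source, claims in reports.items():
        for claim in claims:
            support[claim] += SOURCE_TRUST[source]
    return {c for c, w in support.items() if w / total_trust > threshold}

reports = {
    "wire_service": {"census adds citizenship question"},
    "major_daily": {"census adds citizenship question",
                    "California plans lawsuit"},
    "partisan_blog": {"question targets liberals"},
}
print(select_claims(reports))
# → {'census adds citizenship question'}
```

In this toy run, only the claim corroborated by both high-trust sources survives; the single-source claims fall below the threshold, which mirrors the “most basic facts” behavior the article describes.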

Of course, this is unlikely to remediate the spread of fake news; after all, an analysis of 4.5 million tweets found that falsehoods are 70 percent more likely to be shared than the truth. That said, many people simply don’t have the time to validate the facts in every news item they read, so perhaps AI can help.

Overview by Tim Sloane, VP, Payments Innovation at Mercator Advisory Group

Read the quoted story here
