How AI Can Moderate Comments, Eliminate Trolls


(Image: pixone/iStockphoto)

As any online publisher or even blogger knows, reader comments are a great sign of engagement. But comments also open the door to spam and verbal abuse. Consequently, many publishers restrict comments or hold them until a human moderator can assess whether they should be posted. Now AI can help speed up that process tremendously.

The problem with human moderators is that human limitations keep them from handling a huge influx of comments. That was the problem the New York Times faced in balancing reader demand for comments against an editorial standard of civility for all published comments. Its solution was a partnership with Jigsaw, an incubator owned by Google's parent company, Alphabet.

Back in September, the Times announced the partnership in an article that set out the challenge facing its 14 moderators, who are tasked with reviewing the comments on the 10% of articles that allow them. That alone amounted to 11,000 comments a day. Inviting readers to try their hand at moderating, the article was entitled Approve or Reject: Can You Moderate Five New York Times Comments?

I took the test. The official summary of my results was: "You moderated 4 out of 5 comments as the Community desk would have, and it took you 81 seconds. Moderating the 11,000 comments posted to nytimes.com each day would take you 49.5 hours."

That summation was followed by this: "Don't feel too bad; reviewing all of these comments takes us a long time, too." According to my calculations, however, the Times actually allows more time for its moderators. With 14 people working eight hours a day, the number of working hours each day would be 112, more than twice the number of hours they said my rate of moderation would require.
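The arithmetic behind those figures can be checked in a few lines. This is just a back-of-the-envelope verification using the numbers quoted above (81 seconds for 5 comments, 11,000 comments a day, 14 moderators on eight-hour days):

```python
# Figures quoted above: 5 comments moderated in 81 seconds,
# 11,000 comments per day, 14 moderators working 8-hour days.
seconds_per_comment = 81 / 5                            # 16.2 seconds each
hours_needed = 11_000 * seconds_per_comment / 3600      # time at my pace
staff_hours_per_day = 14 * 8                            # moderator capacity

print(round(hours_needed, 1))    # 49.5 hours, matching the quiz summary
print(staff_hours_per_day)       # 112 working hours available per day
```

So the desk's stated 49.5 hours is indeed well under the 112 staff-hours available each day.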

That investment of so many hours is not something they regret; they regard it as a requisite part of building "one of the best communities on the web." However, they recognize that necessity dictates a new approach. That's where machine assistance enters the picture, enabling the same number of humans to effectively moderate a much larger number of comments and to cut review delays.

Flash forward to June 13, 2017, and the Gray Lady herself announces: The Times Sharply Increases Articles Open for Comments, Using Google's Technology. Using what it calls "Moderator," the digital paper now allows comments on "all our top stories" for a span of eight hours, extending to 24 hours for "comments in both the News and Opinion sections."

This comment expansion is made possible by the addition of Jigsaw's machine learning technology that can "prioritize comments for moderation," and even let comments post without human intervention. "Its judgments are based on more than 16 million moderated Times comments, going back to 2007." That formed the basis for Jigsaw's "machine learning algorithm that predicts what a Times moderator might do with future comments."

As with most machine learning projects, Moderator will evolve. Initially, the majority of comments are to be assigned a "summary score," based on "three factors: their potential for obscenity, toxicity, and likelihood to be rejected." But as the system continues to learn, and as Times editors come to believe they can rely on it, the plan is to move toward automated moderation for the comments that show strong indications of qualifying for approval.

FastCompany explained that in this scoring system, zero is the best ranking and 100 the worst. Comments at either extreme wouldn't need further moderation, but the human moderators can work more efficiently by concentrating on the comments that fall into a middle range. Bassey Etim, the Community Editor for nytimes.com, who wrote the Times piece introducing Moderator, expects that to result in an eight- to tenfold increase in efficiency for the humans involved (which explains how the online paper can increase comments from 10% to 80% of its content).
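The triage logic described above can be sketched in a few lines. This is only an illustration of the idea, not the Times' actual system: the use of the maximum of the three factor scores as the summary score, and the threshold values, are hypothetical choices made here for the example.

```python
def summary_score(obscenity, toxicity, rejection_likelihood):
    """Combine the three factor scores (each 0-100, with 0 best and
    100 worst) into one summary score. Taking the maximum is a
    placeholder for whatever combination the real system uses."""
    return max(obscenity, toxicity, rejection_likelihood)

def route_comment(score, approve_below=10, reject_above=90):
    """Route a comment by its summary score: very low scores could be
    auto-approved, very high scores auto-rejected, and the ambiguous
    middle band queued for a human moderator. Thresholds are made up."""
    if score < approve_below:
        return "auto-approve"
    if score > reject_above:
        return "auto-reject"
    return "human-review"
```

Under this sketch, `route_comment(summary_score(5, 3, 2))` would auto-approve, while a borderline score of 50 would land in the human queue; the efficiency gain comes from humans only ever seeing that middle band.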

Certainly, the algorithm benefits the Times by fostering engagement without allowing it to run amok. But what does Jigsaw gain from the partnership, aside from a huge amount of data to play with in developing its machine learning algorithm? According to FastCompany, it's about common goals. The article quotes Patricia Georgiou, Jigsaw's head of partnerships, saying that this media outlet, like a few others Jigsaw selected to work with, "aligned with our goal," namely, to find a way to "improve online conversation."

Georgiou also clarified that it is challenging to train the machine learning model to recognize "what is a toxic comment," as well as the attributes within "a comment that would cause somebody to leave the conversation." That's why the algorithm requires such a large body of data, one that extends beyond the Times' comments to include input from other publishers that have contributed to the project.

One interesting note about the state of online article comments emerges in contrasting the FastCompany article and the Times article that address the same topic and were published the same day. The former has no comments; the latter has close to 500.

While I didn't read through all the comments, I did read through a number of them. Some pointed out that the machine learning system will absorb whatever biases the human moderators may have if it is trained on what they have approved or rejected. That is a valid point, though I think it is naïve for anyone to consider a media outlet to be completely objective.

I find that those comments that extend the conversation on the topic raised in the article really enhance the reader experience. It's a shame to have that spoiled by people who cannot distinguish between reasoned argument and ad hominem attacks.

In my view, even if the AI system is influenced by human biases, it is still a good thing if it allows the community members to comment. What's your view? You can comment here and be heard.

Ariella Brown

Ariella Brown is a social media consultant, editor, and freelance writer who frequently writes about the application of technology to business. She holds a PhD in English from the City University of New York. Her Twitter handle is @AriellaBrown.



Re: Active Discussion
  • 7/2/2017 11:54:01 PM

@Broadway: During the election, I had a couple of friends announce via Facebook that they were going to avoid the social site until the election was over. They then got into conversations with their 90 to 500 friends about why they were leaving, all the while posting other memes and such.

It's kind of like Jodie Foster thanking millions of viewers for respecting her privacy each and every time she receives an award.

Re: Active Discussion
  • 6/30/2017 3:07:38 PM

'I think it's an unfortunate outlet for those who might have some type of mental illness.' Seth, couldn't agree more. Outlet is the operative word here. Such easy access for venting sentiments, devoid of any clear ideas and of no intellectual benefit.

Re: Active Discussion
  • 6/28/2017 11:30:40 PM

Seth, there seems to be a steady drumbeat of research and news stories about how social media is bad for us, as well as editorials from people claiming that they've had enough and that they're quitting for their mental well-being. Just over the weekend, I read a piece where the author compared Twitter to pornography: whereas porn distorts our sexual impressions, Twitter distorts our political understanding. I agree ... though I'm not ready to quit!

Re: Active Discussion
  • 6/28/2017 7:52:34 AM

@SethBreedlove My daughter must have seen some of those sites. She refers to some conspiracy theories that seem rather off-the-wall to me. But on the internet you can find all kinds of people, so you really have to learn to filter out the deranged from the merely quirky.

Re: Active Discussion
  • 6/27/2017 9:41:45 PM

Social media can be somewhat addictive and I think it's an unfortunate outlet for those who might have some type of mental illness.    I've seen some Facebook pages that were akin to UFO conspiracy sites.  I've been in a few discussions where once I've seen another person's page I just politely drop the conversation. 

Re: Active Discussion
  • 6/26/2017 6:25:52 PM

@Broadway it certainly seems that way. I enjoy a good debate myself but not the kind of thing in which everything boils down to ad hominem attacks. 

Re: Active Discussion
  • 6/26/2017 6:22:00 PM

Ariella, it is certainly a sport and hobby for many enthusiastic trolls these days. A lot of languishing writing and entertainment career ambitions get nourished by amateur provocateurs on discussion boards, dueling back and forth for hours in their parents' basements.

Re: Active Discussion
  • 6/26/2017 9:17:24 AM

@Lyndon_Henry Yes, better imperfect moderation than no discussion, in my view. Several months back, IMDb removed its discussion sections without giving a reason, but I suspect it was because people often started trading insults with those who disagreed with their points of view. The site probably did not want to invest in either the manpower or the AI to effectively monitor the comments that went up, and so opted to eliminate them instead. It's a shame because the site is much less interesting now.

Re: Active Discussion
  • 6/26/2017 9:11:01 AM

@Broadway Oh, it's still up, it's just quite a number of years old now, and so not something that would likely come up on a first or even second page of Google searches. What I don't think is still up is an online board I worked for as a volunteer moderator way, way back. I don't remember any major issues, though there were two reasons for that. One: it was an academic board, so only those with academic interests were reading and posting. Two: it was still early days for online discussion, so people did not yet regard the internet as a forum for trading insults the way some regard the comment section on YouTube and the like.

 

Re: Active Discussion
  • 6/26/2017 7:05:40 AM

@Lyndon, I agree. I just found some valid emails in my spam folder, which is run by an AI machine.
