
Filter by positivity: This new A.I. could detoxify online comment threads


How do you solve a problem like the internet? It’s a question that, frankly, would have made little sense even a quarter of a century ago. The internet, with its ability to spread both information and democratic values to every far-flung corner of the Earth, was the answer.

Asking for a cure for the internet was like asking for a cure for the cure for cancer. Here in 2020, the picture is a bit more muddied. Yes, the internet is astonishingly brilliant for all sorts of things. But it also poses problems, from the spread of fake news to, well, the digital cesspit that is every YouTube comments section ever. To put it another way, the internet can be all kinds of toxic. How do we clean it up?


There are no simple answers here. Is algorithmic or human-driven censorship the answer? Should we close all comments sections on controversial topics? Does a privately-owned platform really need to feel obligated to provide everyone with a voice? How does blocking fringe opinions for the public good tally with the internet’s dream of giving a voice to everyone?

Researchers at Carnegie Mellon University have created an intriguing new tool they believe might help. It’s an artificial intelligence algorithm that works not by blocking negative speech, but rather by highlighting or amplifying “help speech” to make it easier to find. In the process, they hope it might further the cybertopian ambition of making the internet a genuine voice for the voiceless.

A voice for the voiceless


The A.I. devised by the team, from Carnegie Mellon’s Language Technologies Institute, sifts through YouTube comments and highlights those that defend or sympathize with, in this instance, disenfranchised minorities such as the Rohingya community. The Muslim Rohingya people have been subject to ongoing persecution by the Myanmar government since October 2016. The genocidal crisis has forced more than a million Rohingya to flee to neighboring countries. It’s a desperate plight involving religious persecution and ethnic cleansing, but you wouldn’t necessarily know it from many of the comments on local social media, where posts on the other side of the issue vastly outnumber supportive ones.

“We developed a framework for championing the cause of a disenfranchised minority — in this case the Rohingyas — to automatically detect web content supporting them,” Ashique Khudabukhsh, a project scientist in the Computer Science Department at Carnegie Mellon, told Digital Trends. “We focused on YouTube, a social media platform immensely popular in South Asia. Our analyses revealed that a large number of comments about the Rohingyas were disparaging to them. We developed an automated method to detect comments championing their cause which would otherwise be drowned out by a vast number of harsh, negative comments.”

“From a general framework perspective, our work differs from traditional hate speech detection work where the main focus is on blocking the negative content, [although this is] an active and highly important research area,” Khudabukhsh continued. “In contrast, our work of detecting supportive comments — what we call help speech — marks a new direction of improving online experience through amplifying the positives.”

To train their A.I. filtering system, the researchers gathered more than a quarter of a million YouTube comments. Using cutting-edge linguistic modeling techniques, they created an algorithm that can rapidly scour those comments and highlight the ones which side with the Rohingya community. Automated semantic analysis of user comments is, as you might expect, not easy. In the Indian subcontinent alone, there are 22 major languages. There are also frequent spelling mistakes and non-standard spelling variations to deal with when assessing language.
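The article doesn’t detail the team’s exact models, but the general recipe is familiar: label a sample of comments as supportive or not, then train a classifier to generalize. As a minimal sketch only (the comments, labels, and model choice below are invented for illustration, not the researchers’ pipeline), a character-level TF-IDF classifier in scikit-learn captures the idea; character n-grams also happen to be more forgiving of the non-standard spellings the researchers describe than whole-word features.

```python
# Toy sketch of a "help speech" classifier. All data here is invented;
# a real system would need thousands of labeled comments across languages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "These people deserve safety and our support",
    "Send them all back where they came from",
    "The world must not look away from this crisis",
    "They are the problem, not the victims",
]
labels = [1, 0, 1, 0]  # 1 = help speech, 0 = not

model = make_pipeline(
    # Character n-grams tolerate misspellings and spelling variants
    # better than word-level tokens alone.
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(comments, labels)
```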

Accentuate the positive

Nonetheless, the A.I. developed by the team was able to greatly increase the visibility of positive comments. More importantly, it was able to do this far more rapidly than would be possible for a human moderator, who could never manually sift through large volumes of comments in real time and pin particular ones. This could be particularly important in scenarios in which one side may have limited skills in a dominant language, limited access to the internet, or higher-priority issues (read: avoiding persecution) that take precedence over participating in online conversations.
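To make that concrete, here is how the toy classifier sketched above could score a stream of incoming comments and flag likely help speech at machine speed. Again, the comments and the 0.5 threshold are illustrative assumptions, not the team’s actual settings.

```python
# Continuing the sketch above: score new comments and surface the
# highest-confidence supportive ones, faster than any human moderator.
new_comments = [
    "Why is nobody helping these families?",
    "Fake news, nothing is happening there",
    "Donated today, I hope others do the same",
]

# Probability that each comment is help speech, per the toy model.
scores = model.predict_proba(new_comments)[:, 1]

# Highlight anything above an (arbitrary) 0.5 confidence threshold.
for comment, score in sorted(zip(new_comments, scores), key=lambda p: -p[1]):
    flag = "HIGHLIGHT" if score > 0.5 else "         "
    print(f"{flag} {score:.2f}  {comment}")
```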


“We have all experienced being that one friend who stood up for another friend in their absence,” Khudabukhsh continued. “Now consider this at a global scale. What if you are not there in a global discussion about you, and cannot defend yourself? How can A.I. help in this situation? We call this a 21st century problem: migrant crises in the era of ubiquitous internet where refugee voices are few and far between. Going forward, we feel that geopolitical issues, climate and resource-driven reasons may trigger new migrant crises and our work to defend at-risk communities in the online world is highly important.”

But is simply highlighting certain minority voices enough, or is this merely an algorithmic version of the trotted-out-every-few-years concept of launching a news outlet that tells only good news? Perhaps in some ways, but it also goes far beyond merely highlighting token comments without offering ways to address broader problems. With that in mind, the researchers have already expanded the project to look at ways in which A.I. can be used to amplify positive content in different, but similarly high-impact, social scenarios. One example is online discussions during heightened political tension between nuclear adversaries. This work, which the team will present at the European Conference on Artificial Intelligence (ECAI 2020) in June, could be used to help detect and surface hostility-defusing content. Similar technology could be created for a wealth of other scenarios, with suitable tailoring for each.


“The basic premise of how a community can be helped depends on the community in question,” said Khudabukhsh. “Even different refugee crises would require different notions of helping. For instance, crises where contagious disease breakout is a major issue, providing medical assistance can be of immense help. For some economically disadvantaged group, highlighting success stories of people in the community could be a motivating factor. Hence, each community would require different nuanced help speech classifiers to find positive content automatically. Our work provides a blueprint for that.”

No easy fixes

As fascinating as this work is, there are no easy fixes when it comes to solving the problem of online speech. Part of the challenge is that the internet as it currently exists rewards loud voices. Google’s PageRank algorithm, for instance, ranks web pages on their perceived importance by counting the number and quality of links to a page. Trending topics on Twitter are dictated by what the largest number of people are tweeting about. Comments sections frequently highlight those opinions which provoke the strongest reactions.
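For readers curious about the mechanics, PageRank itself boils down to a short power iteration: every page repeatedly redistributes its score along its outbound links, so pages with many well-ranked inbound links rise. Here is a toy sketch; the four-page link graph is invented, and 0.85 is the damping factor from the original PageRank paper.

```python
# Minimal PageRank power iteration over a hypothetical link graph.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
damping = 0.85  # damping factor from the original PageRank paper
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):  # iterate until the scores settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outbound in links.items():
        # Each page splits its current rank evenly among its links.
        for target in outbound:
            new_rank[target] += damping * rank[page] / len(outbound)
    rank = new_rank

print(rank)  # pages with more, better-ranked inbound links score higher
```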

The unimaginably large number of voices on the internet can drown out dissenting ones, often marginalizing voices that, at least in theory, have the same platform as anyone else.

Changing that is going to take a whole lot more than one cool YouTube comments-scouring algorithm. It’s not a bad start, though.
