Users question AI’s ability to moderate online harassment


New Cornell University research finds that both the type of moderator (human or AI) and the “temperature” of harassing content online, that is, how ambiguous or clear-cut it is, influence people’s perceptions of the moderation decision and of the moderation system.

The study, published in Big Data & Society, used a custom social media site on which people can post pictures of food and comment on other posts. The site runs Truman, an open-source simulation engine that mimics other users’ behaviors (likes, comments, posts) through preprogrammed bots created and curated by the researchers.

The Truman platform—named after the 1998 film “The Truman Show”—was developed at the Cornell Social Media Lab led by Natalie Bazarova, professor of communication.

“The Truman platform allows researchers to create a controlled yet realistic social media experience for participants, with social and design versatility to examine a variety of research questions about human behaviors in social media,” Bazarova said. “Truman has been an incredibly useful tool, both for my group and other researchers to develop, implement and test designs and dynamic interventions, while allowing for the collection and observation of people’s behaviors on the site.”
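The real Truman platform is an open-source web application, and its implementation details are beyond this article. Purely as an illustration of the idea it describes, preprogrammed bots replaying researcher-curated actions on a fixed schedule so that every participant sees the same social activity, here is a minimal Python sketch. All names (Action, SimulationEngine, the bot handles) are hypothetical and do not come from Truman’s codebase.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical sketch of a Truman-style simulation engine: bots replay
# researcher-curated actions (posts, likes, comments) at preset times,
# so every participant experiences the same simulated "community."

@dataclass(order=True)
class Action:
    time: float                       # seconds after the participant joins
    bot: str = field(compare=False)   # which preprogrammed bot acts
    kind: str = field(compare=False)  # "post", "like", or "comment"
    payload: str = field(compare=False, default="")

class SimulationEngine:
    def __init__(self, script):
        self.queue = list(script)     # the researcher-curated action script
        heapq.heapify(self.queue)     # ordered by each action's timestamp

    def run_until(self, now):
        """Release every scripted action whose timestamp has passed."""
        while self.queue and self.queue[0].time <= now:
            a = heapq.heappop(self.queue)
            print(f"[t={a.time:>5.0f}s] {a.bot} -> {a.kind}: {a.payload}")

# Example script: three bots interact around one food post.
script = [
    Action(10, "bot_amy", "post", "Homemade ramen tonight!"),
    Action(45, "bot_ben", "comment", "Looks delicious."),
    Action(60, "bot_cara", "like"),
]
SimulationEngine(script).run_until(now=120)
```

Because the script is fixed ahead of time, the experience feels live to participants while remaining identical, and therefore comparable, across experimental conditions.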

For the study, nearly 400 participants were told they’d be beta-testing a new social media platform. They were randomly assigned to one of six experimental conditions, varying both the source of the content moderation decision (other users, AI or no source identified) and the type of harassing comment they saw (ambiguous or clear).

Participants were asked to log in at least twice a day for two days. During that time, they were exposed to a harassing comment, either ambiguous or clearly harassing, posted between two other users (bots) and moderated by a human, an AI or an unknown source.
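The paper does not publish its randomization code, but the 3 × 2 design (moderation source crossed with comment type) is easy to picture. The sketch below, with illustrative condition labels and a hypothetical assign helper, randomly assigns participant IDs to the six conditions.

```python
import random
from itertools import product

# Three moderation sources x two comment types = six conditions,
# mirroring the design described in the study (labels are illustrative).
SOURCES = ["other users", "AI", "no source identified"]
COMMENTS = ["ambiguous harassment", "clear harassment"]
CONDITIONS = list(product(SOURCES, COMMENTS))

def assign(participants, seed=42):
    """Randomly assign each participant ID to one of the six conditions."""
    rng = random.Random(seed)
    return {p: rng.choice(CONDITIONS) for p in participants}

assignments = assign(range(1, 397))  # "nearly 400" participants
source, comment = assignments[1]
print(f"Participant 1 sees a {comment} comment moderated by: {source}")
```

A production experiment would typically block-randomize to keep the six groups balanced in size; simple random choice is enough to show the structure.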

The researchers found that users are generally more likely to question AI moderators, particularly how much they can trust the moderation decision and how accountable the moderation system is, but only when the content is inherently ambiguous. For a clearly harassing comment, trust was approximately the same whether the moderation came from an AI, a human or an unknown source.

“It’s interesting to see that any kind of contextual ambiguity resurfaces inherent biases regarding potential machine errors,” said Marie Ozanne, the study’s first author and assistant professor of food and beverage management.

Ozanne said trust in the moderation decision and perception of system accountability—i.e., whether the system is perceived to act in the best interest of all users—are both subjective judgments, and “when there is doubt, an AI seems to be questioned more than a human or an unknown moderation source.”

The researchers suggest that future work should look at how social media users would react if they saw humans and AI moderators working together, with machines able to handle large amounts of data and humans able to parse comments and detect subtleties in language.

“Even if AI could effectively moderate content,” they wrote, “there is a [need for] human moderators as rules in community are constantly changing, and cultural contexts differ.”


More information:
Marie Ozanne et al., “Shall AI moderators be made visible? Perception of accountability and trust in moderation systems on social media platforms,” Big Data & Society (2022). DOI: 10.1177/20539517221115666

Provided by Cornell University


Citation: Users question AI’s ability to moderate online harassment (2022, October 31), retrieved 31 October 2022 from https://techxplore.com/news/2022-10-users-ai-ability-moderate-online.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
