Free Imvu Credits✌✌Imvu Credits Hack


Mubeen Khan

Oct 21, 2022, 7:03:30 AM
Spectrum Labs has created natural language AI that can parse online text for toxic behavior. Now it has also figured out how to identify healthy online behavior and reward it.


With 80% of U.S. online gamers having experienced harassment, this integration builds on existing content moderation AI, which removes toxic content, by adding the ability to recognize and reward pro-social behaviors by users.
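To make the idea concrete, here is a deliberately simplified sketch of what labeling chat messages as toxic, pro-social, or neutral could look like. This is not Spectrum Labs' technology: a real system uses trained language models, while this toy uses hypothetical keyword lists purely for illustration.

```python
# Toy sketch only (not Spectrum Labs' actual model): label chat messages
# as "toxic", "pro-social", or "neutral" using made-up keyword lists.
# A production moderation system would use a trained language model.

TOXIC_TERMS = {"idiot", "loser", "shut up"}
PRO_SOCIAL_TERMS = {"welcome", "thank you", "great job", "nice work"}

def label_message(text: str) -> str:
    lowered = text.lower()
    if any(term in lowered for term in TOXIC_TERMS):
        return "toxic"
    if any(term in lowered for term in PRO_SOCIAL_TERMS):
        return "pro-social"
    return "neutral"

for msg in [
    "Welcome to the room, glad you joined!",
    "You're such an idiot.",
    "What time does the event start?",
]:
    print(label_message(msg))
```

A platform could then act on these labels asymmetrically: remove or hide "toxic" messages, and surface or reward "pro-social" ones, which is the combination the article describes.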
Since implementing Spectrum Labs' toxic-content moderation AI, Together Labs has seen directionally positive retention results when comparing moderated rooms with unmoderated ones, said Maura Welch, vice president of marketing at Together Labs, in an email to GamesBeat.
“We have been known as the company that deployed AI for the purposes of detecting the really bad stuff on the platform,” said Justin Davis, CEO of Spectrum Labs. “And it takes a lot of R&D investment to detect any given behavior in any given language up to a certain level of acceptable standards, so that a customer can not only learn from the data that’s created from that, but really take action against users in a scalable manner.”
Davis said that with Spectrum Labs’ help, Together Labs has been able to enjoy a sizable increase in user retention.
“We’ve just done a really good job of deploying our technology in a way that drives user engagement and retention,” Davis said. “As you know, customer acquisition costs have never been higher, the demand for people’s attention has never been higher. And so once you get someone on your platform, you’ve got to do a really good job at making sure that they have a great first time experience on that platform. And if you do a good job with that first time visit per user, they’re typically six times more likely to return.”
That retention comes back to benefit revenue, and the moderation also serves to remove toxic users.
“But you don’t build a positive or healthy community just by removing the bad,” Davis said. “We started brainstorming in Los Angeles on how we take that technology and apply it for identifying not the bad behaviors and bad users but to identify the good people.”
Early results show that Average Revenue per Paying User (ARPPU) was 12% higher for users who were protected from seeing toxic speech, Davis said. Both companies believe that combining toxic-content moderation AI with the ability to recognize positive user behavior will result in better first-time engagement, retention and revenue.
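For readers unfamiliar with the metric, ARPPU is simply total revenue divided by the number of paying users. The cohort figures below are invented for illustration; only the 12% lift mirrors the article's claim.

```python
# Illustrative arithmetic only: ARPPU = revenue / paying users.
# The cohort revenue figures here are made up; only the 12% lift
# mirrors the figure reported in the article.

def arppu(revenue: float, paying_users: int) -> float:
    return revenue / paying_users

unprotected = arppu(50_000.00, 1_000)  # $50.00 per paying user
protected = arppu(56_000.00, 1_000)    # $56.00 per paying user

lift = (protected - unprotected) / unprotected
print(f"ARPPU lift: {lift:.0%}")  # → ARPPU lift: 12%
```

Note that ARPPU differs from ARPU (revenue averaged over all users): restricting the denominator to paying users isolates spending behavior from conversion rate.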
