Reddit teams up with Oterlu for content moderation

Social Samosa

Reddit has announced that Oterlu, a developer of machine learning-powered content moderation tools, will be joining the company in an effort to maintain safety and efficient content moderation.

Co-founded by former Google Trust & Safety lead Alex Gee, Oterlu specializes in monitoring content on platforms and enforcing content policies. As part of the deal, the team will work to ensure that communities on the Reddit platform stay safe, healthy, and authentic.

This development is part of Reddit's efforts to invest in and grow the internal Safety teams that oversee the Content Policy and work to equip volunteer moderators with new and advanced safety tooling. According to Reddit, these investments have led to reductions in the amount of harmful content on the platform, and the company will continue looking for ways to enhance its ability to prevent, detect, and remove harmful content.


As part of Reddit’s Safety team, the new employees from Oterlu will develop native machine-learning moderation models that can quickly and accurately detect harmful content across various languages. Their experience will also be leveraged to equip Reddit moderators with new, advanced safety tools.

The Oterlu team brings expertise in building algorithms that use natural language processing, artificial intelligence models, and machine learning to detect nuances in unwanted behavior such as bullying, harassment, and grooming. Once on board, the team will help accelerate the scale, sophistication, and internationalization of Reddit’s automated safety capabilities, both internally and for moderators.
