Overview

RATI (Rights. Action. Technology. Inclusion.) Foundation is based in Mumbai and works to address violence against children and women in both on-ground and online spaces.
To create safe online spaces, the foundation reports problematic content to social media platforms such as Meta and Snapchat. This work involves reviewing large amounts of video on the internet that might contain intimate content. Since this work can be stressful and traumatic, we want to evaluate the feasibility of building an automated solution that reduces the manual labour needed to do it.
Please note that the goal of the task is not to assign a classification label such as "NSFW" or "porn" to an entire video. Instead, the goal is to detect portions where non-consensual sexual imagery might be present, so that those portions can be reviewed by humans.
The scope of this task would be to:

- Review literature on computer vision and machine learning techniques suitable for this task
- Evaluate off-the-shelf pre-trained models and available FOSS software projects
- Find responsible and ethical ways of sourcing data (if needed) for this
- Present your work to the RATI Foundation to get feedback on concerns around victim privacy and safety
- Integrate the tool as an operator in Feluda
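To make the "detect portions, not whole videos" goal concrete, here is a minimal sketch of how per-frame classifier scores could be grouped into time ranges for human review. This is an illustration, not a prescribed design: the `flag_segments` function, the sampling rate, and the threshold are all hypothetical, and the per-frame scores are stubbed as plain floats where a real pipeline would get them from an off-the-shelf image classifier.

```python
# Hypothetical sketch: merge per-frame scores into contiguous flagged
# segments. scores[i] is the (stubbed) classifier score for the frame
# sampled at time i / fps; in practice these would come from a
# pre-trained model evaluated on frames extracted from the video.

def flag_segments(scores, fps=1.0, threshold=0.8):
    """Return (start_sec, end_sec) ranges where score >= threshold.

    Adjacent flagged frames are merged into a single range, so a human
    reviewer sees a few candidate intervals instead of raw frame hits.
    """
    segments = []
    start = None
    for i, score in enumerate(scores):
        t = i / fps
        if score >= threshold:
            if start is None:
                start = t  # open a new flagged segment
        elif start is not None:
            segments.append((start, t))  # close the open segment
            start = None
    if start is not None:  # segment runs to the end of the video
        segments.append((start, len(scores) / fps))
    return segments

# Example: sampling 1 frame per second; frames 2-4 and 7 exceed 0.8.
print(flag_segments([0.1, 0.2, 0.9, 0.95, 0.85, 0.1, 0.3, 0.9], fps=1.0))
```

The key design point this sketch shows is that the output is a short list of reviewable intervals rather than a single label for the whole video, which matches the human-in-the-loop goal above.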
Safe Practices
Because of the nature of the media involved in this work, we recommend starting only after discussing your approach with us and RATI. Feel free to comment on this issue to schedule an introductory call with us.
Potential Impact
Social media platforms rely on takedown requests from users and from organizations like RATI to keep their platforms safe. The amount of user-generated content being uploaded and shared on the internet is increasing exponentially, and manual review of this content is one of the bottlenecks to scaling up takedown efforts. An open-source solution that combines technology and community for RATI's use case would set a good precedent for developing community-managed FOSS software for online safety.
Domains
Online Trust and Safety, Content Moderation, Machine Learning, Social Science