
Retrieve portions containing non consensual sexual imagery in a video #364

Open
dennyabrain opened this issue Jul 16, 2024 · 1 comment
Labels
enhancement New feature or request help wanted Extra attention is needed ml

@dennyabrain (Contributor)

Overview

The RATI (Rights. Action. Technology. Inclusion.) Foundation is based in Mumbai and works to address violence against children and women in both on-ground and online spaces.

To create safe online spaces, they report problematic content to social media platforms such as Meta and Snapchat. This work involves reviewing a large number of videos on the internet that might contain intimate content. Since this work can be stressful and traumatic, we want to evaluate the feasibility of building an automated solution that reduces the manual labour involved.

Please note that the goal of the task is not to assign a classification label such as "nsfw" or "porn" to an entire video. Instead, the goal is to detect portions where non-consensual sexual imagery might be present, so that those portions can be reviewed by humans.

The scope of this task would be to:

  1. Review the literature on computer vision and machine learning techniques suitable for this task
  2. Evaluate off-the-shelf pre-trained models and available FOSS software projects
  3. Find responsible and ethical ways of sourcing data for this (if needed)
  4. Present your work to the RATI Foundation to get feedback on concerns around victim privacy and safety
  5. Integrate the tool as an operator in Feluda
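To make the "detect portions, not classify the whole video" goal concrete, here is a minimal sketch of how per-frame classifier scores could be merged into time segments for human review. This is an illustration, not a proposed implementation: `flag_segments` is a hypothetical helper, and the scores are assumed to come from any pre-trained image classifier run on frames sampled from the video; the threshold and gap values are placeholders.

```python
from typing import List, Tuple


def flag_segments(
    timestamps: List[float],
    scores: List[float],
    threshold: float = 0.8,
    max_gap: float = 2.0,
) -> List[Tuple[float, float]]:
    """Merge flagged frames into (start, end) segments, in seconds.

    scores[i] is a per-frame probability (e.g. from an NSFW image
    classifier) evaluated at timestamps[i]. Frames scoring above
    `threshold` that lie within `max_gap` seconds of each other are
    merged into one segment, so a human reviewer sees a handful of
    time ranges instead of the whole video.
    """
    segments: List[Tuple[float, float]] = []
    start = end = None
    for t, s in zip(timestamps, scores):
        if s < threshold:
            continue
        if start is None:
            # First flagged frame opens a new segment.
            start = end = t
        elif t - end <= max_gap:
            # Close enough to the previous flagged frame: extend.
            end = t
        else:
            # Gap too large: close the current segment, open a new one.
            segments.append((start, end))
            start = end = t
    if start is not None:
        segments.append((start, end))
    return segments


# Example: frames sampled at 1 fps with made-up classifier scores.
ts = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
sc = [0.1, 0.9, 0.95, 0.2, 0.1, 0.85, 0.9, 0.1]
print(flag_segments(ts, sc))  # → [(1.0, 2.0), (5.0, 6.0)]
```

The key design point for this use case is that the output is a review queue of time ranges, not a verdict; thresholds would need to be tuned conservatively (favouring recall) since missed segments are costlier than extra human review.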

Safe Practices

Because of the sensitive nature of the media involved in this work, we recommend starting only after discussing your approach with us and with RATI. Feel free to comment on this issue to schedule an introductory call.

Potential Impact

Social media platforms rely on take-down requests from users and from organizations like RATI to keep their platforms safe. The amount of user-generated content being uploaded and shared on the internet is increasing exponentially, and manual review is one of the bottlenecks to scaling up take-down efforts. An open source solution that combines technology and community for RATI's use case would set a good precedent for developing community-managed FOSS software for online safety.

Domains

Online Trust and Safety, Content Moderation, Machine Learning, Social Science


dennyabrain commented Oct 20, 2024

Came across this "pretrained model for detecting lewd images" developed and open-sourced by Bumble.
