When Roblox moderators dream of superpowers, they dream of Rotector: a powerful application built with Go that uses AI and smart algorithms to find inappropriate Roblox accounts.
Important
This project is currently in an ALPHA state with frequent breaking changes - do not use this in production yet. This is a community-driven initiative and is not affiliated with, endorsed by, or sponsored by Roblox Corporation. More details in the Disclaimer section.
Beta is coming...
- Features
- Prerequisites
- Architecture
- Efficiency
- Reviewing
- Roadmap
- FAQ
- Contributing
- License
- Disclaimer
| Fast AI-Assisted Workflow | In-Depth User Investigation |
|---|---|
| Easily review flagged accounts within seconds with the help of AI and an overview of profile details and violations, helping moderators make smart decisions. | Moderators can easily explore a user's outfits, friends, and groups, providing a clear understanding of the user's activity. |

| Multi-Format Translation | Activity Log Browser |
|---|---|
| The review menu features translation capabilities, supporting natural languages, Morse code, and binary, ensuring effective review of content across different languages and encodings. | The log browser lets administrators run detailed queries of moderation actions by user, action type, or date range, providing thorough audit trails. |

| Streamer Mode | Session State Preservation |
|---|---|
| Streamer mode provides additional privacy by censoring sensitive user information in the review menu. This feature is particularly useful for content creators and moderators who want to use the tool while maintaining confidentiality. | Review sessions are preserved across channels and servers, allowing moderators to seamlessly resume their work from where they left off. |

| Training Mode | Review Modes |
|---|---|
| Community members who aren't official moderators can participate by upvoting/downvoting accounts based on whether they think an account breaks the rules, helping surface accounts that need urgent review. | Moderators can switch between Standard Mode (ban/clear) and Review Mode (upvote/downvote), and between reviewing Flagged and Confirmed users. |

| User Queue System | Recheck Users |
|---|---|
| Want to manually check a specific user? Users can be added to priority queues for workers to check for potential violations. | Users can be rechecked right from the review menu if the analysis is wrong or their information is outdated. |

| Appeal System | User/Group Lookup |
|---|---|
| Users can appeal flagged accounts through an intuitive ticket system. Automated verification filters out illegitimate appeals, and moderators can process the rest with simple accept/reject actions. | Moderators can quickly look up and review specific users or groups by their ID/UUID, allowing targeted investigation of flagged accounts. |

| Live Statistics Dashboard | AI Moderation Assistant |
|---|---|
| The dashboard displays live hourly statistics, including an AI-generated analysis message, active reviewers, active workers, and other metrics for real-time performance tracking. | Moderators can consult an AI assistant for guidance on moderation decisions, analysis of user behavior patterns, and recommendations. |
...and so much more to come!
Warning
This tool requires significant resources and technical expertise to run properly. It is not recommended for casual users without the necessary infrastructure.
- Go 1.23.X
- PostgreSQL 17.2 (with TimescaleDB 2.17.1 extension)
- DragonflyDB 1.25.X or Redis 7.4.X
- Google AI Studio paid API key (uses Gemini 1.5 Flash-8B by default)
- Proxies to avoid rate limits (recommended 40 per worker)
- Discord Bot token
Rotector uses a multi-worker system to process and analyze Roblox accounts efficiently, with each type of worker responsible for different parts of the detection and maintenance processes.
Tip
Interested in seeing how well it performs? Check out our test results in the Efficiency section.
AI Friend Worker
The AI friend worker systematically analyzes user networks to identify inappropriate content and behavior patterns. Here's how it works:
```mermaid
flowchart TB
Start([Start Worker]) --> GetBatch[Get Next Batch<br>of Users]
subgraph Processing [User Processing]
direction TB
subgraph DataCollection [Data Collection]
direction LR
FetchInfo[Fetch Basic Info] --> |Parallel| GetGroups[Groups]
FetchInfo --> |Parallel| GetFriends[Friends]
FetchInfo --> |Parallel| GetGames[Games]
end
subgraph Analysis [Content Analysis]
direction LR
GroupCheck[Check Groups<br>for Flags] --> FriendCheck[Check Friends<br>for Flags] --> Translate[Translate<br>Description] --> AICheck[AI Content<br>Analysis]
end
DataCollection --> Analysis
Analysis --> Validation{Validate<br>Results}
Validation -->|Failed| RetryQueue[Add to<br>Retry Queue]
subgraph GroupTracking [Group Tracking]
direction LR
TrackGroups[Track User's<br>Groups]
end
subgraph EnrichData [Data Enrichment]
direction LR
GetThumbnails[Fetch Thumbnails] --> GetOutfits[Fetch Outfits]
GetOutfits --> GetFollowers[Get Follower<br>Count]
GetFollowers --> GetFollowing[Get Following<br>Count]
end
Validation -->|Passed| GroupTracking
GroupTracking --> EnrichData
EnrichData --> PopularCheck{Popular User<br>Check}
PopularCheck -->|Yes| HighConfidence[Set High<br>Confidence Flag]
PopularCheck -->|No| SaveDB[(Save to<br>Database)]
HighConfidence --> SaveDB
end
GetBatch --> Processing
RetryQueue --> GetBatch
SaveDB --> GetBatch
```
The worker continuously processes users in batches, with built-in safeguards (a simplified sketch follows this list):
- Pauses when flagged user count exceeds threshold
- Validates AI results against original content
- Maintains retry queue for failed validations
- Enriches flagged users with additional data for review
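Here is a minimal sketch of that loop. Every name in it (`Worker`, `nextBatch`, `analyze`, the threshold value) is a hypothetical stand-in, not Rotector's actual internals:

```go
// Sketch of the friend worker's batch loop with its safeguards.
package main

import (
	"context"
	"fmt"
	"time"
)

type User struct{ ID uint64 }

type Result struct {
	Flagged  bool
	Evidence string
}

const pauseThreshold = 1000 // hypothetical safeguard limit

type Worker struct {
	retry        []User
	flaggedCount int
}

// nextBatch drains the retry queue first, then pulls fresh users
// (stubbed here; the real worker reads from its data sources).
func (w *Worker) nextBatch(ctx context.Context) []User {
	if len(w.retry) > 0 {
		batch := w.retry
		w.retry = nil
		return batch
	}
	return []User{{ID: 1}, {ID: 2}}
}

// analyze stands in for the group/friend checks, description
// translation, and AI content analysis stages.
func (w *Worker) analyze(ctx context.Context, u User) (Result, error) {
	return Result{Flagged: u.ID%2 == 0, Evidence: "..."}, nil
}

// validate confirms the AI result references real profile content.
func (w *Worker) validate(u User, r Result) bool {
	return r.Evidence != ""
}

func (w *Worker) Run(ctx context.Context, cycles int) {
	for i := 0; i < cycles; i++ {
		// Safeguard: back off while too many users await review.
		if w.flaggedCount > pauseThreshold {
			time.Sleep(time.Minute)
			continue
		}
		for _, u := range w.nextBatch(ctx) {
			r, err := w.analyze(ctx, u)
			if err != nil || !w.validate(u, r) {
				w.retry = append(w.retry, u) // retry failed validations
				continue
			}
			if r.Flagged {
				w.flaggedCount++ // enrichment and DB save would go here
				fmt.Println("flagged user", u.ID)
			}
		}
	}
}

func main() {
	(&Worker{}).Run(context.Background(), 1)
}
```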
Going into more detail about the detection process:

- **Smart Scoring**: We analyze multiple factors, including friend networks, group memberships, and account information, to identify patterns of inappropriate content. Our system is tuned to catch both clear and subtle violations while minimizing false positives.
- **AI Analysis**: Our AI only flags accounts with evidence of violations. While this means some borderline cases might be missed, it ensures high confidence in flagged accounts.
- **Validation System**: When the AI flags content, we validate that the cited content actually exists on the user's profile. This extra verification step helps prevent false positives and maintains system reliability (a minimal sketch follows this list).
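As a rough illustration of the validation idea, the check below rejects AI output that cites text not actually present on a profile. The `Profile` fields and the helper are hypothetical, not Rotector's actual code:

```go
// Illustrative validation step: every piece of evidence the AI cites
// must actually appear in the user's profile text, or the flag is
// rejected.
package main

import (
	"fmt"
	"strings"
)

type Profile struct {
	Name        string
	Description string
}

// validateEvidence rejects AI output that cites text not present on
// the profile, guarding against hallucinated violations.
func validateEvidence(p Profile, evidence []string) bool {
	content := strings.ToLower(p.Name + " " + p.Description)
	for _, e := range evidence {
		if !strings.Contains(content, strings.ToLower(e)) {
			return false
		}
	}
	return true
}

func main() {
	p := Profile{Name: "example", Description: "hello world"}
	fmt.Println(validateEvidence(p, []string{"hello"}))  // true
	fmt.Println(validateEvidence(p, []string{"absent"})) // false
}
```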
What We Don't Flag:
- Accounts with just a single flagged friend/follower
- Normal friendship conversations
- Regular emojis or internet slang
- Art without inappropriate themes
- Gender/orientation discussions
- Normal roleplay activities
- Regular bad language (handled by Roblox filters)
AI Group Worker
The AI group worker analyzes group member lists to identify inappropriate accounts. Here's how it works:
```mermaid
flowchart TB
Start([Start Worker]) --> GetGroup[Get Next Group<br>to Process]
subgraph Processing [User Processing]
direction TB
subgraph DataCollection [Data Collection]
direction LR
FetchMembers[Fetch Member List] --> |For each member| FetchInfo[Fetch Basic Info]
FetchInfo --> |Parallel| GetGroups[Groups]
FetchInfo --> |Parallel| GetFriends[Friends]
FetchInfo --> |Parallel| GetGames[Games]
end
subgraph Analysis [Content Analysis]
direction LR
GroupCheck[Check Groups<br>for Flags] --> FriendCheck[Check Friends<br>for Flags] --> Translate[Translate<br>Description] --> AICheck[AI Content<br>Analysis]
end
DataCollection --> Analysis
Analysis --> Validation{Validate<br>Results}
Validation -->|Failed| RetryQueue[Add to<br>Retry Queue]
subgraph GroupTracking [Group Tracking]
direction LR
TrackGroups[Track User's<br>Groups]
end
subgraph EnrichData [Data Enrichment]
direction LR
GetThumbnails[Fetch Thumbnails] --> GetOutfits[Fetch Outfits]
GetOutfits --> GetFollowers[Get Follower<br>Count]
GetFollowers --> GetFollowing[Get Following<br>Count]
end
Validation -->|Passed| GroupTracking
GroupTracking --> EnrichData
EnrichData --> PopularCheck{Popular User<br>Check}
PopularCheck -->|Yes| HighConfidence[Set High<br>Confidence Flag]
PopularCheck -->|No| SaveDB[(Save to<br>Database)]
HighConfidence --> SaveDB
end
GetGroup --> Processing
RetryQueue --> GetGroup
SaveDB --> GetGroup
```
The key differences from the friend worker are that it:
- Processes members from inappropriate groups
- Uses cursor pagination to handle large member lists (sketched below)
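A minimal sketch of the cursor-pagination loop, with a hypothetical `Page` type and a stubbed `fetchPage` standing in for the real group-membership client:

```go
// Hypothetical sketch of paging through a group's member list.
package main

import "fmt"

type Page struct {
	Members    []uint64
	NextCursor string // empty when the last page is reached
}

func fetchPage(groupID uint64, cursor string) Page {
	// Stub: a real client would call the group-membership endpoint here.
	if cursor == "" {
		return Page{Members: []uint64{1, 2, 3}, NextCursor: "abc"}
	}
	return Page{Members: []uint64{4, 5}}
}

func main() {
	var cursor string
	for {
		page := fetchPage(12345, cursor)
		for _, id := range page.Members {
			fmt.Println("queue member for analysis:", id)
		}
		if page.NextCursor == "" {
			break // no more pages
		}
		cursor = page.NextCursor
	}
}
```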
Going into more detail about the detection process:

- **Group Analysis**: The system tracks the groups each flagged user belongs to. Groups that exceed a certain threshold of flagged members are themselves flagged for review (a minimal sketch follows this list).
- **False Positives**: Large groups, such as fan groups, may be flagged simply because of their member count. After manual review, cleared groups are whitelisted to prevent future flags, though administrators can reverse this status if needed.
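A rough sketch of such a threshold check follows. The 50-member floor and 5% ratio are illustrative assumptions, not Rotector's actual tuning:

```go
// Sketch of the percentage-threshold idea: a group is flagged when
// enough of its members are flagged, unless it has been whitelisted.
package main

import "fmt"

type GroupStats struct {
	MemberCount  int
	FlaggedCount int
	Whitelisted  bool // set after manual review clears the group
}

func shouldFlag(g GroupStats) bool {
	if g.Whitelisted || g.MemberCount == 0 {
		return false
	}
	// Both an absolute floor and a percentage guard against flagging
	// huge fan groups on raw counts alone.
	ratio := float64(g.FlaggedCount) / float64(g.MemberCount)
	return g.FlaggedCount >= 50 && ratio >= 0.05
}

func main() {
	fanGroup := GroupStats{MemberCount: 100000, FlaggedCount: 60}
	smallERP := GroupStats{MemberCount: 300, FlaggedCount: 55}
	fmt.Println(shouldFlag(fanGroup)) // false: only 0.06% flagged
	fmt.Println(shouldFlag(smallERP)) // true: over 18% flagged
}
```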
Maintenance Worker
The maintenance worker keeps the database healthy by cleaning up old data, checking for banned/locked accounts, and flagging groups:
```mermaid
flowchart TB
Start([Start Worker]) --> Loop[Start Maintenance Cycle]
subgraph Processing [Maintenance Processing]
direction TB
subgraph BannedUsers [Process Banned Users]
direction LR
GetUsers[Get Users to Check] --> CheckBanned[Check for<br>Banned Users]
CheckBanned --> RemoveBanned[Move to<br>Banned Table]
end
subgraph LockedGroups [Process Locked Groups]
direction LR
GetGroups[Get Groups to Check] --> CheckLocked[Check for<br>Locked Groups]
CheckLocked --> RemoveLocked[Move to<br>Locked Table]
end
subgraph ClearedItems [Process Cleared Items]
direction LR
PurgeUsers[Remove Old<br>Cleared Users] --> PurgeGroups[Remove Old<br>Cleared Groups]
end
subgraph Tracking [Process Group Tracking]
direction LR
GetTracking[Get Groups to<br>Track] --> FetchInfo[Fetch Group Info<br>from API]
FetchInfo --> CheckThresholds[Check Percentage<br>Thresholds]
CheckThresholds --> |Exceeds Threshold| SaveGroups[Save Flagged<br>Groups]
end
subgraph UserThumbnails [Process User Thumbnails]
direction LR
GetUserBatch[Get Users for<br>Thumbnail Update] --> FetchUserThumbs[Fetch User<br>Thumbnails]
FetchUserThumbs --> UpdateUserThumbs[Update User<br>Thumbnails]
end
subgraph GroupThumbnails [Process Group Thumbnails]
direction LR
GetGroupBatch[Get Groups for<br>Thumbnail Update] --> FetchGroupThumbs[Fetch Group<br>Thumbnails]
FetchGroupThumbs --> UpdateGroupThumbs[Update Group<br>Thumbnails]
end
BannedUsers --> LockedGroups
LockedGroups --> ClearedItems
ClearedItems --> Tracking
Tracking --> UserThumbnails
UserThumbnails --> GroupThumbnails
end
Loop --> Processing
Processing --> Wait[Wait 5 Minutes]
Wait --> Loop
```
The worker continuously:
- Checks for and removes banned users
- Checks for and removes locked groups
- Purges old cleared users/groups
- Flags groups containing flagged users
- Runs on a five-minute cycle, as shown in the flowchart (a minimal sketch follows this list)
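A minimal sketch of that cycle, with placeholder task bodies standing in for the real cleanup logic:

```go
// Sketch of the maintenance cycle: run each task in order, then wait
// for the next five-minute tick, matching the flowchart above.
package main

import (
	"fmt"
	"time"
)

func main() {
	tasks := []struct {
		name string
		run  func()
	}{
		{"process banned users", func() { /* move banned users */ }},
		{"process locked groups", func() { /* move locked groups */ }},
		{"purge cleared items", func() { /* delete old cleared rows */ }},
		{"check group thresholds", func() { /* flag tracked groups */ }},
		{"refresh thumbnails", func() { /* update user/group thumbs */ }},
	}
	ticker := time.NewTicker(5 * time.Minute)
	defer ticker.Stop()
	for {
		for _, t := range tasks {
			fmt.Println("running:", t.name)
			t.run()
		}
		<-ticker.C // wait for the next five-minute cycle
	}
}
```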
Queue Worker
The queue worker processes user verification requests from different priority queues:
```mermaid
flowchart TB
Start([Start Worker]) --> GetBatch[Get Next Batch<br>Max 50 Items]
subgraph Processing [Queue Processing]
direction TB
subgraph QueueCheck [Queue Management]
direction TB
CheckHigh[Check High Priority] --> RemainingH{Batch<br>Full?}
RemainingH -->|No| CheckNormal[Check Normal Priority<br>Get up to Remaining]
RemainingH -->|Yes| Process
CheckNormal --> RemainingN{Batch<br>Full?}
RemainingN -->|No| CheckLow[Check Low Priority<br>Get up to Remaining]
RemainingN -->|Yes| Process
CheckLow --> Process[Process<br>Batch]
end
subgraph ItemProcess [Item Processing]
direction LR
UpdateStatus[Set Status to<br>Processing] --> FetchInfo[Fetch User<br>Information]
FetchInfo --> AICheck[Run AI<br>Analysis]
AICheck --> Validate{Validate<br>Results}
Validate -->|Failed| RetryQueue[Add to<br>Retry Queue]
Validate -->|Passed| UpdateQueue[Update Queue<br>Status]
end
QueueCheck --> ItemProcess
end
GetBatch --> Processing
RetryQueue --> GetBatch
UpdateQueue --> GetBatch
```
The worker:
- Processes items in priority order (High → Normal → Low)
- Updates queue status for tracking
- Handles validation failures with retries
- Runs continuously with smart batching (a minimal sketch follows this list)
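Here is a minimal sketch of the priority-order batching, using in-memory slices as hypothetical stand-ins for the persistent queues:

```go
// Sketch of priority-order batching: fill up to 50 slots from the
// high-priority queue first, then normal, then low.
package main

import "fmt"

const maxBatch = 50

// takeUpTo removes and returns at most n items from the front of q.
func takeUpTo(q *[]uint64, n int) []uint64 {
	if n > len(*q) {
		n = len(*q)
	}
	out := (*q)[:n]
	*q = (*q)[n:]
	return out
}

func nextBatch(high, normal, low *[]uint64) []uint64 {
	batch := takeUpTo(high, maxBatch)
	batch = append(batch, takeUpTo(normal, maxBatch-len(batch))...)
	batch = append(batch, takeUpTo(low, maxBatch-len(batch))...)
	return batch
}

func main() {
	high := []uint64{1, 2}
	normal := []uint64{3, 4, 5}
	var low []uint64
	fmt.Println(nextBatch(&high, &normal, &low)) // [1 2 3 4 5]
}
```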
Stats Worker
The stats worker collects and processes statistical data for analysis:
```mermaid
flowchart TB
Start([Start Worker]) --> WaitHour[Wait for Next Hour]
subgraph Processing [Stats Processing]
direction TB
subgraph Collection [Data Collection]
direction LR
GetStats[Get Current Stats] --> SaveStats[Save Hourly<br>Snapshot]
end
subgraph Analysis [Stats Analysis]
direction LR
GetHistory[Get Historical<br>Stats] --> AIAnalysis[Generate AI<br>Analysis]
AIAnalysis --> UpdateMessage[Update Welcome<br>Message]
end
subgraph Cleanup [Data Cleanup]
PurgeOld[Remove Old Stats<br>>30 Days]
end
Collection --> Analysis
Analysis --> Cleanup
end
WaitHour --> Processing
Processing --> WaitHour
```
The worker:
- Takes hourly statistical snapshots (the timing loop is sketched below)
- Generates AI analysis of trends
- Updates welcome messages
- Cleans up old data
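A minimal sketch of the hourly timing loop, with the three processing stages reduced to placeholder comments:

```go
// Sketch of the hourly cadence: sleep until the top of the next hour,
// snapshot, analyze, clean up, repeat.
package main

import (
	"fmt"
	"time"
)

func main() {
	for {
		next := time.Now().Truncate(time.Hour).Add(time.Hour)
		time.Sleep(time.Until(next)) // wait for the next hour boundary

		fmt.Println("snapshot at", next.Format(time.RFC3339))
		// 1. save the hourly stats snapshot
		// 2. generate AI analysis and update the welcome message
		// 3. purge snapshots older than 30 days
	}
}
```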
Middleware Layers
Rotector uses a sophisticated middleware chain to ensure reliable and efficient API interactions. Here's how requests are processed:
```mermaid
flowchart TB
Start([API Request]) --> Layer1
subgraph Layer1 [Layer 1: Proxy Routing]
ProxyLayer[Load Distribution<br>Endpoint Cooldowns]
end
subgraph Layer2 [Layer 2: Caching]
RedisCache[Redis Cache<br>1 Hour TTL]
end
subgraph Layer3 [Layer 3: Efficiency]
SingleFlight[Single Flight<br>Deduplicates Concurrent<br>Requests]
end
subgraph Layer4 [Layer 4: Reliability]
RetryLogic[Retry with<br>Exponential Backoff]
end
subgraph Layer5 [Layer 5: Fault Tolerance]
CircuitBreaker[Circuit Breaker<br>Prevents Cascading Failures]
end
Layer1 --> Layer2
Layer2 --> Layer3
Layer3 --> Layer4
Layer4 --> Layer5
Layer5 --> RobloxAPI[(Roblox API)]
style RobloxAPI fill:#f96,stroke:#333
```
Each layer serves a specific purpose:

- **Proxy Routing (Layer 1)**
  - Distributes requests across multiple proxies
  - Manages endpoint-specific cooldowns per proxy
  - Helps avoid IP-based rate limits
- **Redis Caching (Layer 2)**
  - Caches responses for 1 hour
  - Reduces load on Roblox API
  - Improves response times
- **Request Deduplication (Layer 3)**
  - Combines identical concurrent requests
  - Reduces unnecessary API calls
  - Uses Go's singleflight pattern
- **Retry Logic (Layer 4)**
  - Handles transient failures
  - Uses exponential backoff
  - Configurable retry limits
- **Circuit Breaker (Layer 5)**
  - Prevents cascading failures
  - Automatic recovery after timeout
  - Configurable failure thresholds
Requests flow through the chain layer by layer, with each middleware adding its own optimization, which ensures maximum efficiency while maintaining reliability. A simplified sketch of this layering follows.
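As a rough illustration (not axonet's actual implementation), each layer can be pictured as an `http.RoundTripper` wrapper. The retry layer below is a simplified stand-in; the proxy, cache, singleflight, and circuit-breaker layers would wrap the transport the same way:

```go
// Conceptual sketch of one middleware layer as an http.RoundTripper
// wrapper; layers nest by wrapping each other's transports.
package main

import (
	"fmt"
	"net/http"
	"time"
)

type retryTransport struct {
	next    http.RoundTripper
	retries int
}

func (t retryTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	for attempt := 0; ; attempt++ {
		resp, err := t.next.RoundTrip(req)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // success: hand the response up the chain
		}
		if attempt >= t.retries {
			return resp, err // out of retries: give up
		}
		if err == nil {
			resp.Body.Close() // discard the failed attempt before retrying
		}
		time.Sleep((100 * time.Millisecond) << attempt) // exponential backoff
	}
}

func main() {
	client := &http.Client{
		Transport: retryTransport{next: http.DefaultTransport, retries: 3},
	}
	resp, err := client.Get("https://example.com")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

A production version would also need to guard against retrying requests whose bodies have already been consumed; the sketch sidesteps this by using a GET.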
Rotector is built to efficiently handle large amounts of data while keeping resource usage at a reasonable level. Here's a performance snapshot from one of our test runs on a shared VPS:
Note
These results should be viewed as illustrative rather than definitive. Performance can vary significantly due to various factors such as API response times, proxy performance, system resources, configuration, and more. Not all of the VPS resources were used.
- OS: Ubuntu 24.04
- CPU: Intel Xeon Gold 6150 with 8 vCores @ 2.693GHz
- RAM: 24 GB
- Network: 1 Gbit/s
- Location: Germany
- Version: `bd7281c`
- Time Given: 1 hour
- Workers: 15 AI friend workers, 5 maintenance workers
- Proxies: 500 shared proxies
Metric | Current Run | Previous Run |
---|---|---|
Users Scanned | 740 | 1,001 |
Users Flagged | 12,427 | 14,800 |
Groups Flagged | 95 | 167 |
Requests Sent | 79,082 | 300,195 |
Bandwidth Used | 932.09 MB | 2.83 GB |
Avg Concurrent Requests | 653 | 1,060 |
Avg Requests Per Second | 6 | 12 |
Avg Bandwidth Per Request | 12.07 KB | 9.88 KB |
AI Cost | $0.16 | $0.07 |
AI Calls (CT) | 17,845 | 13,089 |
AI Calls (GC) | 6,158 | 5,720 |
AI Latency (CT) | ~0.017s | ~0.017s |
AI Latency (GC) | ~1.265s | ~1.038s |
Redis Memory Usage | 1.48 GB | 702.62 MB |
Redis Key Count | 385,700 | 204,172 |
Note
CT and GC in the metrics refer to CountTokens and GenerateContent calls to the Gemini API respectively.
At the current rate, a 24-hour runtime would theoretically flag approximately 298,248 users (12,427 per hour × 24 hours) at an AI cost of only $3.84 ($0.16 × 24). In practice, the number of newly flagged users would likely decline as more users are added to the database. If Rotector maintained this detection rate, it could potentially flag hundreds of thousands of inappropriate accounts in just a week!
A brief analysis of the results shows that almost all users were flagged accurately, with some false positives, which is to be expected. These false positives are typically borderline cases or profiles too vague to be confidently judged inappropriate.
We discovered several large groups of inappropriate accounts that have managed to avoid detection by traditional moderation techniques:
- Group with 1934 flagged users (34XXXX55)
- Group with 1719 flagged users (45XXXX3)
- Group with 1680 flagged users (34XXXX41)
- Group with 1521 flagged users (65XXXX7)
- Group with 1401 flagged users (34XXXX64)
- Group with 1063 flagged users (35XXXX31)
- ... and many more with hundreds of flagged users
Smaller groups have also been identified by our detection algorithm, which considers the percentage of flagged users in a group rather than just raw numbers. This includes small ERP communities and pools of alt accounts that conventional moderation methods might overlook. All groups were accurately flagged with no false positives.
The current run shows fewer flagged users and groups than the previous run. This was expected: improvements to the detection algorithm reduced false positives.
We've also made significant improvements to the networking side. With optimizations in request patterns and strategies, the current run used only roughly a third of the bandwidth compared to the previous run (932.09 MB vs 2.83 GB).
These results keep improving as we refine the detection algorithm and the networking stack. The biggest limitation, however, is the number of proxies available, due to their high cost. Proxies are necessary because workers gather all of a user's data upfront, which requires many requests per second. This pre-loading approach means that when moderators review flagged accounts, they get near-instant access to all user information without waiting for additional API requests.
With more proxies, or a way around rate limits entirely, we could potentially scan over 100 times more users per hour on the same VPS resources. This is theoretically possible because Rotector is built with performance in mind.
Rotector has two methods for reviewing flagged accounts: one designed for community members and another for official moderators. This dual approach promotes community involvement while allowing official moderators to handle the final decisions.
Anyone can assist in reviewing flagged accounts through a specially designed Training Mode. To ensure confidentiality, this mode censors user information and hides external links. Participants upvote or downvote based on their assessment of whether an account violates the rules, which helps surface accounts that need urgent review by official moderators.
This system helps official moderators in several ways:
- Finds the most serious cases quickly
- Gives moderators extra input for their decisions
- Helps train new moderators
Official moderators have better tools and permissions for reviewing accounts. They are able to:
- Access all account information (unless they turn on streamer mode)
- Request workers to recheck accounts
- View logs of all moderation activities
- Toggle between standard mode and training mode
- Make changes to the database
What sets this mode apart is that moderators have the authority to take all necessary actions regarding flagged accounts. While community votes provide input, it is the moderators who ultimately decide the fate of these accounts.
This roadmap shows our major upcoming features, but we've got even more in the works! We're always adding new features based on what the community suggests.
- **Moderation Tools**
  - Appeal process system
  - Inventory viewer
- **Scanning Capabilities**
  - Group content detection (wall posts, names, descriptions)
- **Public API (Available in Beta)**
  - RPC/REST API for developers to integrate with
  - Script for Roblox game developers to integrate with
How do I set this up myself?
Detailed setup instructions will be available during the beta phase when the codebase is more stable. During alpha, we're focusing on making frequent changes, which makes maintaining documentation difficult.
What's the story behind Rotector?
Rotector started when jaxron developed two important libraries on September 23, 2024: RoAPI.go and axonet, which became the backbone of Rotector's networking and API interaction capabilities.
Rotector's official development began privately on October 13, 2024, driven by jaxron's concerns about inappropriate behavior on Roblox and a desire to help protect young players. The project was made public for the alpha testing phase on November 8, 2024.
While Roblox already has moderators, there are so many users that it's hard to catch every inappropriate account easily. Some Roblox staff have also acknowledged that it's difficult to handle all the reports they get. Sometimes, inappropriate accounts and groups stay active even after being reported.
Rotector helps by finding these accounts automatically. Our goal is to make moderation easier and help keep the Roblox community, especially young players, safer.
Why is Rotector open-sourced?
We believe in transparency and the power of open source. By making our code public, anyone can understand how the tool works and it's also a great way for people to learn about online safety and moderation tools.
While we welcome feedback, ideas, and contributions, this open-source release is mainly to show how the tool works and help others learn from it.
Can I use Rotector without the Discord bot?
Yes, but the Discord bot makes reviewing accounts much easier. The main features (finding and flagging inappropriate accounts) work fine without Discord, but you'll need to build your own way to review the accounts that get flagged. All flagged users and groups are stored in the `flagged_users` and `flagged_groups` tables in the database, which you can query directly (see the sketch below).
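For example, here is a minimal sketch of reading flagged accounts straight from PostgreSQL. The `id` and `reason` column names are assumptions; check the actual schema before relying on them:

```go
// Hypothetical example of querying flagged accounts directly.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver
)

func main() {
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/rotector?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query(`SELECT id, reason FROM flagged_users LIMIT 10`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id uint64
		var reason string
		if err := rows.Scan(&id, &reason); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("user %d flagged: %s\n", id, reason)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```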
Why use Discord instead of a custom web interface?
Discord already has everything we need for reviewing accounts - buttons, dropdowns, forms, and rich embeds. Using Discord lets us focus on making Rotector better instead of building a whole new interface from scratch.
Are proxies and cookies necessary to use Rotector?
Proxies are required as Rotector makes lots of requests per second. While cookies are mentioned in the settings, we don't use them for anything at the moment.
The `config.toml` file includes cooldown settings for each endpoint that let you control how many requests Rotector makes to Roblox's API.
Will users who have stopped their inappropriate behavior be removed from the database?
No, past rule violations remain in the database, even if users say they've changed. This can be useful for law enforcement investigations and for future safety concerns.
Some users temporarily clean up their profiles, only to return to breaking the rules later. This policy isn't about denying second chances but about keeping the platform safe, especially for young users.
Why did Rotector switch from GPT-4o mini to Gemini?
We made the switch to Gemini because it is 4 times cheaper than GPT-4o mini, offers 5 times faster output speed, and has 2 times lower latency, while maintaining the same level of accuracy in identifying inappropriate content. This change allows us to achieve more with a smaller budget, introduce new features that were previously unaffordable, and ensure the project's long-term sustainability.
Who inspired the creation of Rotector?
Ruben Sim, a YouTuber and former game developer, helped inspire Rotector. His work exposing Roblox's moderation problems, especially through the Moderation for Dummies Twitter account, showed what one person could do even without special tools. We are deeply grateful for his contributions which helped pave the way for our project.
How did "Rotector" get its name?
The name comes from three ideas:
- Protector: We want to protect Roblox players from inappropriate content
- Detector: We find inappropriate accounts
- "Ro-" prefix: From "Roblox", the platform we work with
We follow the Contributor Covenant Code of Conduct. If you're interested in contributing to this project, please abide by its terms.
If you're feeling extra supportive, you can always buy us a coffee! ☕
This project is licensed under the GNU General Public License v2.0 - see the LICENSE file for details.
Roblox is a registered trademark of Roblox Corporation. "Rotector" and the Rotector logo are not affiliated with, endorsed by, or sponsored by Roblox Corporation.
Rotector is free software: you can redistribute it under the terms of the GNU General Public License version 2 as published by the Free Software Foundation. You may modify the software for your own use. If you distribute modified versions, you must do so under the same GPL v2 license and make the source code of your modifications available.
While Rotector only accesses publicly available information through Roblox's API, users should be aware that:
- This tool should not be used to harass or target specific users
- Any automated scanning and excessive requests may violate Roblox's Terms of Service
- Users are responsible for respecting Roblox's rate limits
Powered by modern technologies.