Security monitoring tool that helps protect AI models from common attacks.
Author: DeWitt Gibson https://www.linkedin.com/in/dewitt-gibson/
The tool provides these key security features:

- Input validation: size limits, value range checks, null/empty input detection, and format validation
- Attack detection: adversarial pattern detection, unusual input structure detection, and gradient analysis
- Request monitoring: request logging, rate monitoring, repeated input detection, and pattern analysis
- Reporting: validation results, detected issues, request analysis, and security recommendations
A comprehensive security toolkit for protecting AI model deployments, including model protection, threat monitoring, and security assessment capabilities.
- AIModelProtector: Real-time protection and monitoring for AI model endpoints
- AISecurityMonitor: Advanced threat detection and team notification system
- ChatbotThreatModeler: Threat assessment and security evaluation for chatbot implementations
pip install -r requirements.txt
Required dependencies:

numpy>=1.21.0
pandas>=1.3.0

The typing, logging, smtplib, and email modules used by the toolkit ship with the Python standard library and do not need to be installed separately.
```python
from ai_model_protector import AIModelProtector
from ai_security_monitor import AISecurityMonitor
from chatbot_threat_modeler import ChatbotThreatModeler

# Initialize base protection
protector = AIModelProtector(
    model_name="production_model",
    input_constraints={
        "max_size": 1000000,
        "max_value": 1.0,
        "min_value": -1.0,
        "max_gradient": 50
    }
)

# Setup security monitoring
monitor = AISecurityMonitor(
    model_name="production_model",
    alert_settings={
        "email_recipients": ["security@company.com"],
        "smtp_settings": {
            "server": "smtp.company.com",
            "port": 587,
            "sender": "ai-alerts@company.com",
            "use_tls": True
        },
        "alert_thresholds": {
            "max_requests_per_minute": 100,
            "suspicious_pattern_threshold": 0.8
        }
    }
)

# Initialize threat modeling
threat_modeler = ChatbotThreatModeler()


def process_model_request(input_data, request_metadata):
    # 1. Check security protections
    security_check = protector.protect(input_data)
    if not security_check["allow_inference"]:
        return {"error": "Security check failed", "details": security_check}

    # 2. Monitor for threats
    threat_assessment = monitor.detect_threat({
        "ip_address": request_metadata["ip"],
        "input_data": input_data
    })
    if threat_assessment["severity"] == "high":
        return {"error": "Request blocked due to security risk"}

    # 3. Process request if safe ("model" is your already-loaded model object)
    try:
        prediction = model.predict(input_data)
        return {"prediction": prediction}
    except Exception as e:
        monitor.log_incident({
            "type": "inference_error",
            "details": str(e)
        })
        return {"error": "Processing failed"}
```
AIModelProtector features:

- Input validation and sanitization
- Pattern analysis for adversarial attacks
- Request rate limiting
- Anomaly detection
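As a rough illustration of the kinds of checks listed above, a standalone validator for the constraints from the quick-start example might look like the sketch below. This is an illustration of the technique, not the library's internal implementation:

```python
import numpy as np

def validate_input(data, max_size=1_000_000, min_value=-1.0, max_value=1.0):
    """Illustrative input validation: size, null/empty, and value-range checks."""
    issues = []
    arr = np.asarray(data, dtype=float)
    if arr.size == 0:
        issues.append("empty input")
    if arr.size > max_size:
        issues.append("input exceeds size limit")
    if np.isnan(arr).any():
        issues.append("null (NaN) values present")
    if arr.size and (arr.min() < min_value or arr.max() > max_value):
        issues.append("values outside allowed range")
    return {"allow_inference": not issues, "issues": issues}
```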
AISecurityMonitor features:

- Real-time threat detection
- Team notifications
- Incident logging
- Traffic analysis
- IP-based monitoring
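Traffic analysis and IP-based monitoring come down to tracking request rates per source. A minimal per-IP rate tracker, written independently of the library's own implementation, could be sketched as:

```python
import time
from collections import defaultdict, deque

class RequestRateTracker:
    """Tracks per-IP request timestamps over a sliding one-minute window."""

    def __init__(self, max_requests_per_minute=100):
        self.max_requests = max_requests_per_minute
        self.history = defaultdict(deque)

    def record(self, ip_address):
        """Record a request; return False once the caller exceeds the per-minute limit."""
        now = time.time()
        window = self.history[ip_address]
        window.append(now)
        # Drop timestamps older than 60 seconds
        while window and now - window[0] > 60:
            window.popleft()
        return len(window) <= self.max_requests
```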
ChatbotThreatModeler features:

- Security control assessment
- Risk scoring
- Threat identification
- Compliance checking
- Recommendation generation
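Risk scoring in threat modeling typically weighs identified threats against the controls that mitigate them. A toy scoring function along those lines is shown below; it is not the ChatbotThreatModeler API, and the threat fields and control names are hypothetical:

```python
def risk_score(threats, implemented_controls):
    """Toy risk score: average residual severity, discounted for mitigated threats."""
    if not threats:
        return 0.0
    residual = [
        t["severity"] * (0.2 if t["mitigation"] in implemented_controls else 1.0)
        for t in threats
    ]
    return sum(residual) / len(residual)

# Example: two threats, one mitigated by an implemented control
threats = [
    {"name": "prompt injection", "severity": 0.9, "mitigation": "input_filtering"},
    {"name": "data exfiltration", "severity": 0.7, "mitigation": "output_scanning"},
]
print(risk_score(threats, implemented_controls={"input_filtering"}))  # 0.44
```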
Environment variables:

SECURITY_LOG_PATH=/path/to/logs
ALERT_SMTP_SERVER=smtp.company.com
ALERT_SMTP_PORT=587
ALERT_SENDER=ai-security@company.com
ALERT_RECIPIENTS=security-team@company.com
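These variables can be read at startup with the standard library. The variable names match the list above; splitting the recipients on commas is an assumption about how multiple addresses would be supplied:

```python
import os

alert_config = {
    "log_path": os.environ.get("SECURITY_LOG_PATH", "./logs"),
    "smtp_server": os.environ["ALERT_SMTP_SERVER"],
    "smtp_port": int(os.environ.get("ALERT_SMTP_PORT", "587")),
    "sender": os.environ["ALERT_SENDER"],
    "recipients": os.environ["ALERT_RECIPIENTS"].split(","),
}
```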
```python
SECURITY_CONFIG = {
    "max_requests_per_minute": 100,
    "suspicious_pattern_threshold": 0.8,
    "max_failed_attempts": 5,
    "session_timeout": 3600,      # seconds
    "min_request_interval": 1.0   # seconds
}
```
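The failed-attempt and minimum-interval settings can be enforced with simple bookkeeping. A sketch, assuming SECURITY_CONFIG as defined above and per-IP tracking dictionaries of our own invention:

```python
import time

_last_request = {}     # ip -> timestamp of last request
_failed_attempts = {}  # ip -> consecutive failure count

def allow_request(ip_address, config=SECURITY_CONFIG):
    """Reject callers that retry too quickly or have too many consecutive failures."""
    now = time.time()
    if _failed_attempts.get(ip_address, 0) >= config["max_failed_attempts"]:
        return False
    last = _last_request.get(ip_address)
    _last_request[ip_address] = now
    if last is not None and now - last < config["min_request_interval"]:
        return False
    return True

def record_failure(ip_address):
    _failed_attempts[ip_address] = _failed_attempts.get(ip_address, 0) + 1
```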
- API Key Management
  - Rotate keys regularly
  - Use separate keys for different environments
  - Monitor key usage
- Logging (see the log rotation sketch after this list)
  - Enable comprehensive logging
  - Store logs securely
  - Implement log rotation
- Monitoring
  - Set up alerts for suspicious activities
  - Monitor resource usage
  - Track error rates
- Regular Assessment
  - Run threat modeling weekly
  - Review security controls
  - Update security thresholds
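For log rotation specifically, the standard library's logging.handlers.RotatingFileHandler covers the basics. A minimal setup, with an illustrative file name and size limits, looks like:

```python
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(
    "ai_security.log",         # illustrative path; see SECURITY_LOG_PATH above
    maxBytes=5 * 1024 * 1024,  # rotate after roughly 5 MB
    backupCount=10,            # keep the ten most recent files
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("ai_security")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("security logging initialized")
```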
Please see CONTRIBUTING.md for guidelines on contributing to this project.
MIT License - See LICENSE.md for details