diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-aws-bedrock-invocations-without-guardrails-detected-by-a-single-user-over-a-session.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-aws-bedrock-invocations-without-guardrails-detected-by-a-single-user-over-a-session.asciidoc new file mode 100644 index 0000000000..12521aac97 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-aws-bedrock-invocations-without-guardrails-detected-by-a-single-user-over-a-session.asciidoc @@ -0,0 +1,118 @@ +[[prebuilt-rule-8-15-12-aws-bedrock-invocations-without-guardrails-detected-by-a-single-user-over-a-session]] +=== AWS Bedrock Invocations without Guardrails Detected by a Single User Over a Session + +Identifies multiple AWS Bedrock executions in a one minute time window without guardrails by the same user in the same account over a session. Multiple consecutive executions implies that a user may be intentionally attempting to bypass security controls, by not routing the requests with the desired guardrail configuration in order to access sensitive information, or possibly exploit a vulnerability in the system. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 10m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html +* https://atlas.mitre.org/techniques/AML.T0051 +* https://atlas.mitre.org/techniques/AML.T0054 +* https://www.elastic.co/security-labs/elastic-advances-llm-security + +*Tags*: + +* Domain: LLM +* Data Source: AWS Bedrock +* Data Source: AWS S3 +* Resources: Investigation Guide +* Use Case: Policy Violation +* Mitre Atlas: T0051 +* Mitre Atlas: T0054 + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Amazon Bedrock Invocations without Guardrails Detected by a Single User Over a Session.* + + +Using Amazon Bedrock Guardrails during model invocation is critical for ensuring the safe, reliable, and ethical use of AI models. +Guardrails help manage risks associated with AI usage and ensure the output aligns with desired policies and standards. + + +*Possible investigation steps* + + +- Identify the user account that caused multiple model violations over a session without desired guardrail configuration and whether it should perform this kind of action. +- Investigate the user activity that might indicate a potential brute force attack. +- Investigate other alerts associated with the user account during the past 48 hours. +- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day? +- Examine the account's prompts and responses in the last 24 hours. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours. + + +*False positive analysis* + + +- Verify the user account that caused multiple policy violations by a single user over session, is not testing any new model deployments or updated compliance policies in Amazon Bedrock guardrails. 
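+
+When weighing whether this is deliberate bypass or legitimate testing, it can also help to look at how the unguarded invocations cluster in time rather than only at their overall count. The following ES|QL sketch is an illustration only; it reuses the `logs-aws_bedrock.invocation-*` data stream and `gen_ai.guardrail_id` field from the rule query below, and the five-invocation threshold is an arbitrary example value.
+
+[source, js]
+----------------------------------
+from logs-aws_bedrock.invocation-*
+// keep only invocations that did not pass through a guardrail
+| where gen_ai.guardrail_id is NULL
+// bucket the unguarded invocations into one-minute windows per user
+| eval time_window = date_trunc(1 minute, @timestamp)
+| stats unguarded_invocations = count() by user.id, time_window
+| where unguarded_invocations > 5
+| sort time_window desc
+----------------------------------
+
+Grouping by both `user.id` and `time_window` helps separate a short burst of unguarded test calls from sustained activity across many windows.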
+ + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Identify if the attacker is moving laterally and compromising other Amazon Bedrock Services. + - Identify any regulatory or legal ramifications related to this activity. +- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access bedrock and ensure that the least privilege principle is being followed. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Setup + + + +*Setup* + + +This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation: + +https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws_bedrock.invocation-* +// create time window buckets of 1 minute +| eval time_window = date_trunc(1 minute, @timestamp) +| where gen_ai.guardrail_id is NULL +| KEEP @timestamp, time_window, gen_ai.guardrail_id , user.id +| stats model_invocation_without_guardrails = count() by user.id +| where model_invocation_without_guardrails > 5 +| sort model_invocation_without_guardrails desc + +---------------------------------- diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-aws-iam-login-profile-added-for-root.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-aws-iam-login-profile-added-for-root.asciidoc new file mode 100644 index 0000000000..710bcebb22 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-aws-iam-login-profile-added-for-root.asciidoc @@ -0,0 +1,169 @@ +[[prebuilt-rule-8-15-12-aws-iam-login-profile-added-for-root]] +=== AWS IAM Login Profile Added for Root + +Detects when an AWS IAM login profile is added to a root user account and is self-assigned. Adversaries, with temporary access to the root account, may add a login profile to the root user account to maintain access even if the original access key is rotated or disabled. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Cloud +* Data Source: AWS +* Data Source: Amazon Web Services +* Data Source: AWS IAM +* Use Case: Identity and Access Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Investigating AWS IAM Login Profile Added for Root* + + +This rule detects when a login profile is added to the AWS root account. Adding a login profile to the root account, especially if self-assigned, is highly suspicious as it might indicate an adversary trying to establish persistence in the environment. 
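+
+Before stepping through the guide below, it can help to pull the root identity's recent CloudTrail activity into a single view. The following ES|QL sketch is illustrative only; the field names follow the rule query, while the index pattern and the 24-hour window are example assumptions to adjust for your environment.
+
+[source, js]
+----------------------------------
+from logs-aws.cloudtrail*
+| where event.dataset == "aws.cloudtrail"
+    and aws.cloudtrail.user_identity.type == "Root"
+    and @timestamp > now() - 24 hours
+// summarize what the root identity has done recently, and from where
+| stats action_count = count() by event.provider, event.action, source.address, aws.cloudtrail.user_identity.access_key_id
+| sort action_count desc
+----------------------------------
+
+Unexpected providers, API calls, or source addresses surfaced here feed directly into the investigation and correlation steps that follow.
+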
+ + +*Possible Investigation Steps* + + +- **Identify the Source and Context of the Action**: + - Examine the `source.address` field to identify the IP address from which the request originated. + - Check the geographic location (`source.address`) to determine if the access is from an expected or unexpected region. + - Look at the `user_agent.original` field to identify the tool or browser used for this action. + - For example, a user agent like `Mozilla/5.0` might indicate interactive access, whereas `aws-cli` or SDKs suggest scripted activity. + +- **Confirm Root User and Request Details**: + - Validate the root user's identity through `aws.cloudtrail.user_identity.arn` and ensure this activity aligns with legitimate administrative actions. + - Review `aws.cloudtrail.user_identity.access_key_id` to identify if the action was performed using temporary or permanent credentials. This access key could be used to pivot into other actions. + +- **Analyze the Login Profile Creation**: + - Review the `aws.cloudtrail.request_parameters` and `aws.cloudtrail.response_elements` fields for details of the created login profile. + - For example, confirm the `userName` of the profile and whether `passwordResetRequired` is set to `true`. + - Compare the `@timestamp` of this event with other recent actions by the root account to identify potential privilege escalation or abuse. + +- **Correlate with Other Events**: + - Investigate for related IAM activities, such as: + - `CreateAccessKey` or `AttachUserPolicy` events targeting the root account. + - Unusual data access, privilege escalation, or management console logins. + - Check for any anomalies involving the same `source.address` or `aws.cloudtrail.user_identity.access_key_id` in the environment. + +- **Evaluate Policy and Permissions**: + - Verify the current security policies for the root account: + - Ensure password policies enforce complexity and rotation requirements. + - Check if MFA is enforced on the root account. + - Assess the broader IAM configuration for deviations from least privilege principles. + + +*False Positive Analysis* + + +- **Routine Administrative Tasks**: Adding a login profile might be a legitimate action during certain administrative processes. Verify with the relevant AWS administrators if this event aligns with routine account maintenance or emergency recovery scenarios. + +- **Automation**: If the action is part of an approved automation process (e.g., account recovery workflows), consider excluding these activities from alerting using specific user agents, IP addresses, or session attributes. + + +*Response and Remediation* + + +- **Immediate Access Review**: + - Disable the newly created login profile (`aws iam delete-login-profile`) if it is determined to be unauthorized. + - Rotate or disable the credentials associated with the root account to prevent further abuse. + +- **Enhance Monitoring and Alerts**: + - Enable real-time monitoring and alerting for IAM actions involving the root account. + - Increase the logging verbosity for root account activities. + +- **Review and Update Security Policies**: + - Enforce MFA for all administrative actions, including root account usage. + - Restrict programmatic access to the root account by disabling access keys unless absolutely necessary. + +- **Conduct Post-Incident Analysis**: + - Investigate how the credentials for the root account were compromised or misused. + - Strengthen the security posture by implementing account-specific guardrails and continuous monitoring. 
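+
+To support the monitoring improvements above, a recurring review query can keep credential-related root activity visible after the incident is closed. This ES|QL sketch is a starting point only: the field names follow the rule query, while the seven-day window and the extra event names (for example `UpdateLoginProfile` and `CreateAccessKey`) are illustrative assumptions rather than part of this rule.
+
+[source, js]
+----------------------------------
+from logs-aws.cloudtrail*
+| where event.dataset == "aws.cloudtrail"
+    and event.provider == "iam.amazonaws.com"
+    and aws.cloudtrail.user_identity.type == "Root"
+    and event.action in ("CreateLoginProfile", "UpdateLoginProfile", "CreateAccessKey", "CreateUser")
+    and @timestamp > now() - 7 days
+// one row per credential-related root action for periodic review
+| keep @timestamp, event.action, event.outcome, source.address, user_agent.original, aws.cloudtrail.user_identity.access_key_id
+| sort @timestamp desc
+----------------------------------
+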
+ + +*Additional Resources* + + +- AWS documentation on https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateLoginProfile.html[Login Profile Management]. + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws.cloudtrail* metadata _id, _version, _index +| where + // filter for CloudTrail logs from IAM + event.dataset == "aws.cloudtrail" + and event.provider == "iam.amazonaws.com" + + // filter for successful CreateLoginProfile API call + and event.action == "CreateLoginProfile" + and event.outcome == "success" + + // filter for Root member account + and aws.cloudtrail.user_identity.type == "Root" + + // filter for an access key existing which sources from AssumeRoot + and aws.cloudtrail.user_identity.access_key_id IS NOT NULL + + // filter on the request parameters not including UserName which assumes self-assignment + and NOT TO_LOWER(aws.cloudtrail.request_parameters) LIKE "*username*" +| keep + @timestamp, + aws.cloudtrail.request_parameters, + aws.cloudtrail.response_elements, + aws.cloudtrail.user_identity.type, + aws.cloudtrail.user_identity.arn, + aws.cloudtrail.user_identity.access_key_id, + cloud.account.id, + event.action, + source.address + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-aws-iam-user-created-access-keys-for-another-user.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-aws-iam-user-created-access-keys-for-another-user.asciidoc new file mode 100644 index 0000000000..e466ba93cc --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-aws-iam-user-created-access-keys-for-another-user.asciidoc @@ -0,0 +1,166 @@ +[[prebuilt-rule-8-15-12-aws-iam-user-created-access-keys-for-another-user]] +=== AWS IAM User Created Access Keys For Another User + +An adversary with access to a set of compromised credentials may attempt to persist or escalate privileges by creating a new set of credentials for an existing user. This rule looks for use of the IAM `CreateAccessKey` API operation to create new programmatic access keys for another IAM user. 
+ +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-6m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://hackingthe.cloud/aws/exploitation/iam_privilege_escalation/#iamcreateaccesskey +* https://cloud.hacktricks.xyz/pentesting-cloud/aws-security/aws-persistence/aws-iam-persistence +* https://permiso.io/blog/lucr-3-scattered-spider-getting-saas-y-in-the-cloud +* https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateAccessKey.html + +*Tags*: + +* Domain: Cloud +* Data Source: AWS +* Data Source: Amazon Web Services +* Data Source: AWS IAM +* Use Case: Identity and Access Audit +* Tactic: Privilege Escalation +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 5 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating AWS IAM User Created Access Keys For Another User* + + +AWS access keys created for IAM users or root user are long-term credentials that provide programmatic access to AWS. +With access to the `iam:CreateAccessKey` permission, a set of compromised credentials could be used to create a new +set of credentials for another user for privilege escalation or as a means of persistence. This rule uses https://www.elastic.co/guide/en/security/master/rules-ui-create.html#create-esql-rule[ES|QL] +to look for use of the `CreateAccessKey` operation where the user.name is different from the user.target.name. + + + +*Possible investigation steps* + + +- Identify both related accounts and their role in the environment. +- Review IAM permission policies for the user identities. +- Identify the applications or users that should use these accounts. +- Investigate other alerts associated with the accounts during the past 48 hours. +- Investigate abnormal values in the `user_agent.original` field by comparing them with the intended and authorized usage and historical data. Suspicious user agent values include non-SDK, AWS CLI, custom user agents, etc. +- Contact the account owners and confirm whether they are aware of this activity. +- Considering the source IP address and geolocation of the user who issued the command: + - Do they look normal for the calling user? + - If the source is an EC2 IP address, is it associated with an EC2 instance in one of your accounts or is the source IP from an EC2 instance that's not under your control? + - If it is an authorized EC2 instance, is the activity associated with normal behavior for the instance role or roles? Are there any other alerts or signs of suspicious activity involving this instance? +- If you suspect the account has been compromised, scope potentially compromised assets by tracking servers, services, and data accessed by the account in the last 24 hours. + - Determine what other API calls were made by the user. + - Assess whether this behavior is prevalent in the environment by looking for similar occurrences involving other users. + + +*False positive analysis* + + +- False positives may occur due to the intended usage of the IAM `CreateAccessKey` operation. Verify the `aws.cloudtrail.user_identity.arn` should use this operation against the `user.target.name` account. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. 
+ - Rotate user credentials + - Remove the newly created credentials from the affected user(s) +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Assess the criticality of affected services and servers. + - Work with your IT team to identify and minimize the impact on users. + - Identify if the attacker is moving laterally and compromising other accounts, servers, or services. + - Identify any regulatory or legal ramifications related to this activity. +- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. + - Rotate secrets or delete API keys as needed to revoke the attacker's access to the environment. + - Work with your IT teams to minimize the impact on business operations during these actions. +- Remove unauthorized new accounts, and request password resets for other IAM users. +- Consider enabling multi-factor authentication for users. +- Review the permissions assigned to the implicated user to ensure that the least privilege principle is being followed. +- Implement security best practices https://aws.amazon.com/premiumsupport/knowledge-center/security-best-practices/[outlined] by AWS. +- Take the actions needed to return affected systems, data, or services to their normal operational levels. +- Identify the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws.cloudtrail-* metadata _id, _version, _index +| where event.provider == "iam.amazonaws.com" + and event.action == "CreateAccessKey" + and event.outcome == "success" + and user.name != user.target.name +| keep + @timestamp, + cloud.region, + event.provider, + event.action, + event.outcome, + user.name, + source.address, + user.target.name, + user_agent.original, + aws.cloudtrail.request_parameters, + aws.cloudtrail.response_elements, + aws.cloudtrail.user_identity.arn, + aws.cloudtrail.user_identity.type + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Credentials +** ID: T1098.001 +** Reference URL: https://attack.mitre.org/techniques/T1098/001/ +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Credentials +** ID: T1098.001 +** Reference URL: https://attack.mitre.org/techniques/T1098/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-github-protected-branch-settings-changed.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-github-protected-branch-settings-changed.asciidoc new file mode 100644 index 0000000000..9a19caaa76 --- /dev/null +++ 
b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-github-protected-branch-settings-changed.asciidoc @@ -0,0 +1,63 @@ +[[prebuilt-rule-8-15-12-github-protected-branch-settings-changed]] +=== GitHub Protected Branch Settings Changed + +This rule detects setting modifications for protected branches of a GitHub repository. Branch protection rules can be used to enforce certain workflows or requirements before a contributor can push changes to a branch in your repository. Changes to these protected branch settings should be investigated and verified as legitimate activity. Unauthorized changes could be used to lower your organization's security posture and leave you exposed for future attacks. + +*Rule type*: eql + +*Rule indices*: + +* logs-github.audit-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Cloud +* Use Case: Threat Detection +* Tactic: Defense Evasion +* Data Source: Github + +*Version*: 206 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Rule query + + +[source, js] +---------------------------------- +configuration where event.dataset == "github.audit" + and github.category == "protected_branch" and event.type == "change" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-github-repository-deleted.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-github-repository-deleted.asciidoc new file mode 100644 index 0000000000..34c225fa94 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-github-repository-deleted.asciidoc @@ -0,0 +1,59 @@ +[[prebuilt-rule-8-15-12-github-repository-deleted]] +=== GitHub Repository Deleted + +This rule detects when a GitHub repository is deleted within your organization. Repositories are a critical component used within an organization to manage work, collaborate with others and release products to the public. Any delete action against a repository should be investigated to determine it's validity. Unauthorized deletion of organization repositories could cause irreversible loss of intellectual property and indicate compromise within your organization. 
+ +*Rule type*: eql + +*Rule indices*: + +* logs-github.audit-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Cloud +* Use Case: Threat Detection +* Use Case: UEBA +* Tactic: Impact +* Data Source: Github + +*Version*: 203 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Rule query + + +[source, js] +---------------------------------- +configuration where event.module == "github" and event.dataset == "github.audit" and event.action == "repo.destroy" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Impact +** ID: TA0040 +** Reference URL: https://attack.mitre.org/tactics/TA0040/ +* Technique: +** Name: Data Destruction +** ID: T1485 +** Reference URL: https://attack.mitre.org/techniques/T1485/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-high-number-of-cloned-github-repos-from-pat.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-high-number-of-cloned-github-repos-from-pat.asciidoc new file mode 100644 index 0000000000..d52ecc2bf4 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-high-number-of-cloned-github-repos-from-pat.asciidoc @@ -0,0 +1,61 @@ +[[prebuilt-rule-8-15-12-high-number-of-cloned-github-repos-from-pat]] +=== High Number of Cloned GitHub Repos From PAT + +Detects a high number of unique private repo clone events originating from a single personal access token within a short time period. + +*Rule type*: threshold + +*Rule indices*: + +* logs-github.audit-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-6m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Cloud +* Use Case: Threat Detection +* Use Case: UEBA +* Tactic: Execution +* Data Source: Github + +*Version*: 204 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:"github.audit" and event.category:"configuration" and event.action:"git.clone" and +github.programmatic_access_type:("OAuth access token" or "Fine-grained personal access token") and +github.repository_public:false + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Serverless Execution +** ID: T1648 +** Reference URL: https://attack.mitre.org/techniques/T1648/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-possible-consent-grant-attack-via-azure-registered-application.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-possible-consent-grant-attack-via-azure-registered-application.asciidoc new file mode 100644 index 0000000000..9db9bb642d --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-possible-consent-grant-attack-via-azure-registered-application.asciidoc @@ -0,0 +1,147 @@ +[[prebuilt-rule-8-15-12-possible-consent-grant-attack-via-azure-registered-application]] +=== Possible Consent Grant Attack 
via Azure-Registered Application + +Detects when a user grants permissions to an Azure-registered application or when an administrator grants tenant-wide permissions to an application. An adversary may create an Azure-registered application that requests access to data such as contact information, email, or documents. + +*Rule type*: query + +*Rule indices*: + +* filebeat-* +* logs-azure* +* logs-o365* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants?view=o365-worldwide +* https://www.cloud-architekt.net/detection-and-mitigation-consent-grant-attacks-azuread/ +* https://docs.microsoft.com/en-us/defender-cloud-apps/investigate-risky-oauth#how-to-detect-risky-oauth-apps + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Data Source: Microsoft 365 +* Use Case: Identity and Access Audit +* Resources: Investigation Guide +* Tactic: Initial Access + +*Version*: 213 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Possible Consent Grant Attack via Azure-Registered Application* + + +In an illicit consent grant attack, the attacker creates an Azure-registered application that requests access to data such as contact information, email, or documents. The attacker then tricks an end user into granting that application consent to access their data either through a phishing attack, or by injecting illicit code into a trusted website. After the illicit application has been granted consent, it has account-level access to data without the need for an organizational account. Normal remediation steps like resetting passwords for breached accounts or requiring multi-factor authentication (MFA) on accounts are not effective against this type of attack, since these are third-party applications and are external to the organization. + +Official Microsoft guidance for detecting and remediating this attack can be found https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants[here]. + + +*Possible investigation steps* + + +- From the Azure AD portal, Review the application that was granted permissions: + - Click on the `Review permissions` button on the `Permissions` blade of the application. + - An app should require only permissions related to the app's purpose. If that's not the case, the app might be risky. + - Apps that require high privileges or admin consent are more likely to be risky. +- Investigate the app and the publisher. The following characteristics can indicate suspicious apps: + - A low number of downloads. + - Low rating or score or bad comments. + - Apps with a suspicious publisher or website. + - Apps whose last update is not recent. This might indicate an app that is no longer supported. +- Export and examine the https://docs.microsoft.com/en-us/defender-cloud-apps/manage-app-permissions#oauth-app-auditing[Oauth app auditing] to identify users affected. + + +*False positive analysis* + + +- This mechanism can be used legitimately. Malicious applications abuse the same workflow used by legitimate apps. Thus, analysts must review each app consent to ensure that only desired apps are granted access. 
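+
+Because malicious grants reuse the same workflow as legitimate ones, how often a given user or source grants consent is often the clearest signal during this review. The ES|QL sketch below is only an illustration: it assumes Azure audit logs are shipped to a `logs-azure.auditlogs-*` data stream and that `user.name` and `source.address` are populated for these events, and the seven-day window is an arbitrary example.
+
+[source, js]
+----------------------------------
+from logs-azure.auditlogs-*
+| where azure.auditlogs.operation_name == "Consent to application"
+    and @timestamp > now() - 7 days
+// count recent consent grants per user and source address
+| stats consent_grants = count() by user.name, source.address
+| sort consent_grants desc
+----------------------------------
+
+A user or source address granting consent far more often than its peers is a reasonable candidate to prioritize in the response steps below.
+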
+ + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Assess the criticality of affected services and servers. + - Work with your IT team to identify and minimize the impact on users. + - Identify if the attacker is moving laterally and compromising other accounts, servers, or services. + - Identify any regulatory or legal ramifications related to this activity. +- Disable the malicious application to stop user access and the application access to your data. +- Revoke the application Oauth consent grant. The `Remove-AzureADOAuth2PermissionGrant` cmdlet can be used to complete this task. +- Remove the service principal application role assignment. The `Remove-AzureADServiceAppRoleAssignment` cmdlet can be used to complete this task. +- Revoke the refresh token for all users assigned to the application. Azure provides a https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Revoke-AADSignInSessions[playbook] for this task. +- https://docs.microsoft.com/en-us/defender-cloud-apps/manage-app-permissions#send-feedback[Report] the application as malicious to Microsoft. +- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords or delete API keys as needed to revoke the attacker's access to the environment. Work with your IT teams to minimize the impact on business operations during these actions. +- Investigate the potential for data compromise from the user's email and file sharing services. Activate your Data Loss incident response playbook. +- Disable the permission for a user to set consent permission on their behalf. + - Enable the https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-admin-consent-workflow[Admin consent request] feature. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:(azure.activitylogs or azure.auditlogs or o365.audit) and + ( + azure.activitylogs.operation_name:"Consent to application" or + azure.auditlogs.operation_name:"Consent to application" or + event.action:"Consent to application." 
+ ) and + event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-unusual-high-confidence-content-filter-blocks-detected.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-unusual-high-confidence-content-filter-blocks-detected.asciidoc new file mode 100644 index 0000000000..920ace6a6e --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-unusual-high-confidence-content-filter-blocks-detected.asciidoc @@ -0,0 +1,124 @@ +[[prebuilt-rule-8-15-12-unusual-high-confidence-content-filter-blocks-detected]] +=== Unusual High Confidence Content Filter Blocks Detected + +Detects repeated high-confidence 'BLOCKED' actions coupled with specific 'Content Filter' policy violation having codes such as 'MISCONDUCT', 'HATE', 'SEXUAL', INSULTS', 'PROMPT_ATTACK', 'VIOLENCE' indicating persistent misuse or attempts to probe the model's ethical boundaries. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 10m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html +* https://atlas.mitre.org/techniques/AML.T0051 +* https://atlas.mitre.org/techniques/AML.T0054 +* https://www.elastic.co/security-labs/elastic-advances-llm-security + +*Tags*: + +* Domain: LLM +* Data Source: AWS Bedrock +* Data Source: AWS S3 +* Use Case: Policy Violation +* Mitre Atlas: T0051 +* Mitre Atlas: T0054 + +*Version*: 5 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Amazon Bedrock Guardrail High Confidence Content Filter Blocks.* + + +Amazon Bedrock Guardrail is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications. + +It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices. + +Through Guardrail, organizations can enable Content filter for Hate, Insults, Sexual Violence and Misconduct along with Prompt Attack filters prompts +to prevent the model from generating content on specific, undesired subjects, and they can establish thresholds for harmful content categories. + + +*Possible investigation steps* + + +- Identify the user account whose prompts caused high confidence content filter blocks and whether it should perform this kind of action. +- Investigate other alerts associated with the user account during the past 48 hours. +- Consider the time of day. 
If the user is a human (not a program or script), did the activity take place during a normal time of day? +- Examine the account's prompts and responses in the last 24 hours. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours. + + +*False positive analysis* + + +- Verify the user account that queried denied topics, is not testing any new model deployments or updated compliance policies in Amazon Bedrock guardrails. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Identify if the attacker is moving laterally and compromising other Amazon Bedrock Services. + - Identify any regulatory or legal ramifications related to this activity. +- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access bedrock and ensure that the least privilege principle is being followed. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Setup + + + +*Setup* + + +This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation: + +https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws_bedrock.invocation-* +| MV_EXPAND gen_ai.compliance.violation_code +| MV_EXPAND gen_ai.policy.confidence +| MV_EXPAND gen_ai.policy.name +| where gen_ai.policy.action == "BLOCKED" and gen_ai.policy.name == "content_policy" and gen_ai.policy.confidence LIKE "HIGH" and gen_ai.compliance.violation_code IN ("HATE", "MISCONDUCT", "SEXUAL", "INSULTS", "PROMPT_ATTACK", "VIOLENCE") +| keep user.id, gen_ai.compliance.violation_code +| stats block_count_per_violation = count() by user.id, gen_ai.compliance.violation_code +| SORT block_count_per_violation DESC +| keep user.id, gen_ai.compliance.violation_code, block_count_per_violation +| STATS violation_count = SUM(block_count_per_violation) by user.id +| WHERE violation_count > 5 +| SORT violation_count DESC + +---------------------------------- diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-unusual-high-denied-sensitive-information-policy-blocks-detected.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-unusual-high-denied-sensitive-information-policy-blocks-detected.asciidoc new file mode 100644 index 0000000000..f845315248 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-unusual-high-denied-sensitive-information-policy-blocks-detected.asciidoc @@ -0,0 +1,119 @@ +[[prebuilt-rule-8-15-12-unusual-high-denied-sensitive-information-policy-blocks-detected]] +=== Unusual High Denied Sensitive Information Policy Blocks Detected + +Detects repeated compliance violation 'BLOCKED' actions coupled with specific 
policy name such as 'sensitive_information_policy', indicating persistent misuse or attempts to probe the model's denied topics. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 10m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html +* https://atlas.mitre.org/techniques/AML.T0051 +* https://atlas.mitre.org/techniques/AML.T0054 +* https://www.elastic.co/security-labs/elastic-advances-llm-security + +*Tags*: + +* Domain: LLM +* Data Source: AWS Bedrock +* Data Source: AWS S3 +* Use Case: Policy Violation +* Mitre Atlas: T0051 +* Mitre Atlas: T0054 + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Amazon Bedrock Guardrail High Sensitive Information Policy Blocks.* + + +Amazon Bedrock Guardrail is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications. + +It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices. + +Through Guardrail, organizations can define "sensitive information filters" to prevent the model from generating content on specific, undesired subjects, +and they can establish thresholds for harmful content categories. + + +*Possible investigation steps* + + +- Identify the user account that queried sensitive information and whether it should perform this kind of action. +- Investigate other alerts associated with the user account during the past 48 hours. +- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day? +- Examine the account's prompts and responses in the last 24 hours. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours. + + +*False positive analysis* + + +- Verify the user account that queried denied topics, is not testing any new model deployments or updated compliance policies in Amazon Bedrock guardrails. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Identify if the attacker is moving laterally and compromising other Amazon Bedrock Services. + - Identify any regulatory or legal ramifications related to this activity. +- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access bedrock and ensure that the least privilege principle is being followed. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Setup + + + +*Setup* + + +This rule requires that guardrails are configured in AWS Bedrock. 
For more information, see the AWS Bedrock documentation: + +https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws_bedrock.invocation-* +| MV_EXPAND gen_ai.policy.name +| where gen_ai.policy.action == "BLOCKED" and gen_ai.compliance.violation_detected == "true" and gen_ai.policy.name == "sensitive_information_policy" +| keep user.id +| stats sensitive_information_block = count() by user.id +| where sensitive_information_block > 5 +| sort sensitive_information_block desc + +---------------------------------- diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-unusual-high-denied-topic-blocks-detected.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-unusual-high-denied-topic-blocks-detected.asciidoc new file mode 100644 index 0000000000..3adbbacc7b --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-unusual-high-denied-topic-blocks-detected.asciidoc @@ -0,0 +1,119 @@ +[[prebuilt-rule-8-15-12-unusual-high-denied-topic-blocks-detected]] +=== Unusual High Denied Topic Blocks Detected + +Detects repeated compliance violation 'BLOCKED' actions coupled with specific policy name such as 'topic_policy', indicating persistent misuse or attempts to probe the model's denied topics. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 10m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html +* https://atlas.mitre.org/techniques/AML.T0051 +* https://atlas.mitre.org/techniques/AML.T0054 +* https://www.elastic.co/security-labs/elastic-advances-llm-security + +*Tags*: + +* Domain: LLM +* Data Source: AWS Bedrock +* Data Source: AWS S3 +* Use Case: Policy Violation +* Mitre Atlas: T0051 +* Mitre Atlas: T0054 + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Amazon Bedrock Guardrail High Denied Topic Blocks.* + + +Amazon Bedrock Guardrail is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications. + +It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices. + +Through Guardrail, organizations can define "denied topics" to prevent the model from generating content on specific, undesired subjects, +and they can establish thresholds for harmful content categories, including hate speech, violence, or offensive language. + + +*Possible investigation steps* + + +- Identify the user account that queried denied topics and whether it should perform this kind of action. +- Investigate other alerts associated with the user account during the past 48 hours. +- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day? +- Examine the account's prompts and responses in the last 24 hours. 
+- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours. + + +*False positive analysis* + + +- Verify the user account that queried denied topics, is not testing any new model deployments or updated compliance policies in Amazon Bedrock guardrails. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Identify if the attacker is moving laterally and compromising other Amazon Bedrock Services. + - Identify any regulatory or legal ramifications related to this activity. +- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access bedrock and ensure that the least privilege principle is being followed. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Setup + + + +*Setup* + + +This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation: + +https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws_bedrock.invocation-* +| MV_EXPAND gen_ai.policy.name +| where gen_ai.policy.action == "BLOCKED" and gen_ai.compliance.violation_detected == "true" and gen_ai.policy.name == "topic_policy" +| keep user.id +| stats denied_topics = count() by user.id +| where denied_topics > 5 +| sort denied_topics desc + +---------------------------------- diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-unusual-high-word-policy-blocks-detected.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-unusual-high-word-policy-blocks-detected.asciidoc new file mode 100644 index 0000000000..dd2dfb4ef0 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rule-8-15-12-unusual-high-word-policy-blocks-detected.asciidoc @@ -0,0 +1,119 @@ +[[prebuilt-rule-8-15-12-unusual-high-word-policy-blocks-detected]] +=== Unusual High Word Policy Blocks Detected + +Detects repeated compliance violation 'BLOCKED' actions coupled with specific policy name such as 'word_policy', indicating persistent misuse or attempts to probe the model's denied topics. 
+ +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 10m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html +* https://atlas.mitre.org/techniques/AML.T0051 +* https://atlas.mitre.org/techniques/AML.T0054 +* https://www.elastic.co/security-labs/elastic-advances-llm-security + +*Tags*: + +* Domain: LLM +* Data Source: AWS Bedrock +* Data Source: AWS S3 +* Use Case: Policy Violation +* Mitre Atlas: T0051 +* Mitre Atlas: T0054 + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Amazon Bedrock Guardrail High Word Policy Blocks.* + + +Amazon Bedrock Guardrail is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications. + +It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices. + +Through Guardrail, organizations can define "word filters" to prevent the model from generating content on profanity, undesired subjects, +and they can establish thresholds for harmful content categories, including hate speech, violence, or offensive language. + + +*Possible investigation steps* + + +- Identify the user account whose prompts contained profanity and whether it should perform this kind of action. +- Investigate other alerts associated with the user account during the past 48 hours. +- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day? +- Examine the account's prompts and responses in the last 24 hours. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours. + + +*False positive analysis* + + +- Verify the user account that queried denied topics, is not testing any new model deployments or updated compliance policies in Amazon Bedrock guardrails. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Identify if the attacker is moving laterally and compromising other Amazon Bedrock Services. + - Identify any regulatory or legal ramifications related to this activity. +- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access bedrock and ensure that the least privilege principle is being followed. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Setup + + + +*Setup* + + +This rule requires that guardrails are configured in AWS Bedrock. 
For more information, see the AWS Bedrock documentation: + +https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws_bedrock.invocation-* +| MV_EXPAND gen_ai.policy.name +| where gen_ai.policy.action == "BLOCKED" and gen_ai.compliance.violation_detected == "true" and gen_ai.policy.name == "word_policy" +| keep user.id +| stats profanity_words= count() by user.id +| where profanity_words > 5 +| sort profanity_words desc + +---------------------------------- diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rules-8-15-12-appendix.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rules-8-15-12-appendix.asciidoc new file mode 100644 index 0000000000..704f166568 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rules-8-15-12-appendix.asciidoc @@ -0,0 +1,17 @@ +["appendix",role="exclude",id="prebuilt-rule-8-15-12-prebuilt-rules-8-15-12-appendix"] += Downloadable rule update v8.15.12 + +This section lists all updates associated with version 8.15.12 of the Fleet integration *Prebuilt Security Detection Rules*. + + +include::prebuilt-rule-8-15-12-aws-iam-login-profile-added-for-root.asciidoc[] +include::prebuilt-rule-8-15-12-aws-bedrock-invocations-without-guardrails-detected-by-a-single-user-over-a-session.asciidoc[] +include::prebuilt-rule-8-15-12-unusual-high-denied-sensitive-information-policy-blocks-detected.asciidoc[] +include::prebuilt-rule-8-15-12-unusual-high-denied-topic-blocks-detected.asciidoc[] +include::prebuilt-rule-8-15-12-unusual-high-word-policy-blocks-detected.asciidoc[] +include::prebuilt-rule-8-15-12-aws-iam-user-created-access-keys-for-another-user.asciidoc[] +include::prebuilt-rule-8-15-12-unusual-high-confidence-content-filter-blocks-detected.asciidoc[] +include::prebuilt-rule-8-15-12-possible-consent-grant-attack-via-azure-registered-application.asciidoc[] +include::prebuilt-rule-8-15-12-github-protected-branch-settings-changed.asciidoc[] +include::prebuilt-rule-8-15-12-high-number-of-cloned-github-repos-from-pat.asciidoc[] +include::prebuilt-rule-8-15-12-github-repository-deleted.asciidoc[] diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rules-8-15-12-summary.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rules-8-15-12-summary.asciidoc new file mode 100644 index 0000000000..e1a5667687 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rules-8-15-12-summary.asciidoc @@ -0,0 +1,34 @@ +[[prebuilt-rule-8-15-12-prebuilt-rules-8-15-12-summary]] +[role="xpack"] +== Update v8.15.12 + +This section lists all updates associated with version 8.15.12 of the Fleet integration *Prebuilt Security Detection Rules*. + + +[width="100%",options="header"] +|============================================== +|Rule |Description |Status |Version + +|<> | Detects when an AWS IAM login profile is added to a root user account and is self-assigned. Adversaries, with temporary access to the root account, may add a login profile to the root user account to maintain access even if the original access key is rotated or disabled. | new | 1 + +|<> | Identifies multiple AWS Bedrock executions in a one minute time window without guardrails by the same user in the same account over a session. 
Multiple consecutive executions imply that a user may be intentionally attempting to bypass security controls by not routing requests through the desired guardrail configuration, in order to access sensitive information or possibly exploit a vulnerability in the system. | new | 1
+
+|<> | Detects repeated compliance violation 'BLOCKED' actions coupled with a specific policy name, such as 'sensitive_information_policy', indicating persistent misuse or attempts to probe the model's denied topics. | new | 1
+
+|<> | Detects repeated compliance violation 'BLOCKED' actions coupled with a specific policy name, such as 'topic_policy', indicating persistent misuse or attempts to probe the model's denied topics. | new | 1
+
+|<> | Detects repeated compliance violation 'BLOCKED' actions coupled with a specific policy name, such as 'word_policy', indicating persistent misuse or attempts to probe the model's denied topics. | new | 1
+
+|<> | An adversary with access to a set of compromised credentials may attempt to persist or escalate privileges by creating a new set of credentials for an existing user. This rule looks for use of the IAM `CreateAccessKey` API operation to create new programmatic access keys for another IAM user. | update | 5
+
+|<> | Detects repeated high-confidence 'BLOCKED' actions coupled with specific 'Content Filter' policy violations with codes such as 'MISCONDUCT', 'HATE', 'SEXUAL', 'INSULTS', 'PROMPT_ATTACK', and 'VIOLENCE', indicating persistent misuse or attempts to probe the model's ethical boundaries. | update | 5
+
+|<> | Detects when a user grants permissions to an Azure-registered application or when an administrator grants tenant-wide permissions to an application. An adversary may create an Azure-registered application that requests access to data such as contact information, email, or documents. | update | 213
+
+|<> | This rule detects setting modifications for protected branches of a GitHub repository. Branch protection rules can be used to enforce certain workflows or requirements before a contributor can push changes to a branch in your repository. Changes to these protected branch settings should be investigated and verified as legitimate activity. Unauthorized changes could be used to lower your organization's security posture and leave you exposed to future attacks. | update | 206
+
+|<> | Detects a high number of unique private repo clone events originating from a single personal access token within a short time period. | update | 204
+
+|<> | This rule detects when a GitHub repository is deleted within your organization. Repositories are a critical component used within an organization to manage work, collaborate with others, and release products to the public. Any delete action against a repository should be investigated to determine its validity. Unauthorized deletion of organization repositories could cause irreversible loss of intellectual property and indicate compromise within your organization.
| update | 203 + +|============================================== diff --git a/docs/detections/prebuilt-rules/prebuilt-rules-downloadable-updates.asciidoc b/docs/detections/prebuilt-rules/prebuilt-rules-downloadable-updates.asciidoc index 9d700b57e0..3cb1510108 100644 --- a/docs/detections/prebuilt-rules/prebuilt-rules-downloadable-updates.asciidoc +++ b/docs/detections/prebuilt-rules/prebuilt-rules-downloadable-updates.asciidoc @@ -13,6 +13,10 @@ For previous rule updates, please navigate to the https://www.elastic.co/guide/e |Update version |Date | New rules | Updated rules | Notes +|<> | 10 Dec 2024 | 5 | 6 | +This release includes new rules for the AWS and AWS Bedrock integrations. New rules for AWS include detection for persistence. New rules for AWS Bedrock include detection for LLM prompt injection and LLM jailbreak. Additionally, significant rule tuning for AWS, GitHub, AWS Bedrock, and Azure rules has been added for better rule efficacy and performance. + + |<> | 27 Nov 2024 | 1 | 0 | This release includes a new rule for AWS integration privilege escalation detection. @@ -70,3 +74,4 @@ include::downloadable-packages/8-15-8/prebuilt-rules-8-15-8-summary.asciidoc[lev include::downloadable-packages/8-15-9/prebuilt-rules-8-15-9-summary.asciidoc[leveloffset=+1] include::downloadable-packages/8-15-10/prebuilt-rules-8-15-10-summary.asciidoc[leveloffset=+1] include::downloadable-packages/8-15-11/prebuilt-rules-8-15-11-summary.asciidoc[leveloffset=+1] +include::downloadable-packages/8-15-12/prebuilt-rules-8-15-12-summary.asciidoc[leveloffset=+1] diff --git a/docs/detections/prebuilt-rules/prebuilt-rules-reference.asciidoc b/docs/detections/prebuilt-rules/prebuilt-rules-reference.asciidoc index 698157fdea..78aedfefc6 100644 --- a/docs/detections/prebuilt-rules/prebuilt-rules-reference.asciidoc +++ b/docs/detections/prebuilt-rules/prebuilt-rules-reference.asciidoc @@ -28,6 +28,8 @@ and their rule type is `machine_learning`. |<> |Identifies multiple violations of AWS Bedrock guardrails by the same user in the same account over a session. Multiple violations implies that a user may be intentionally attempting to cirvumvent security controls, access sensitive information, or possibly exploit a vulnerability in the system. |[Domain: LLM], [Data Source: AWS Bedrock], [Data Source: AWS S3], [Resources: Investigation Guide], [Use Case: Policy Violation], [Mitre Atlas: T0051], [Mitre Atlas: T0054] |8.13.0 |4 +|<> |Identifies multiple AWS Bedrock executions in a one minute time window without guardrails by the same user in the same account over a session. Multiple consecutive executions imply that a user may be intentionally attempting to bypass security controls, by not routing the requests with the desired guardrail configuration in order to access sensitive information, or possibly exploit a vulnerability in the system. |[Domain: LLM], [Data Source: AWS Bedrock], [Data Source: AWS S3], [Resources: Investigation Guide], [Use Case: Policy Violation], [Mitre Atlas: T0051], [Mitre Atlas: T0054] |8.13.0 |1 + +|<> |Detects the use of the AWS CLI with the `--endpoint-url` argument, which allows users to specify a custom endpoint URL for AWS services. This can be leveraged by adversaries to redirect API requests to non-standard or malicious endpoints, potentially bypassing typical security controls and logging mechanisms.
This behavior may indicate an attempt to interact with unauthorized or compromised infrastructure, exfiltrate data, or perform other malicious activities under the guise of legitimate AWS operations. |[Data Source: Elastic Defend], [Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Command and Control] |None |1 |<> |Identifies the creation of an AWS log trail that specifies the settings for delivery of log data. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Use Case: Log Auditing], [Tactic: Collection] |None |207 @@ -112,6 +114,8 @@ and their rule type is `machine_learning`. |<> |Identifies the deletion of a specified AWS Identity and Access Management (IAM) resource group. Deleting a resource group does not delete resources that are members of the group; it only deletes the group structure. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS IAM], [Tactic: Impact] |None |206 +|<> |Detects when an AWS IAM login profile is added to a root user account and is self-assigned. Adversaries, with temporary access to the root account, may add a login profile to the root user account to maintain access even if the original access key is rotated or disabled. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS IAM], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |8.13.0 |1 + |<> |Identifies when an AWS IAM login profile is added to a user. Adversaries may add a login profile to an IAM user who typically does not have one and is used only for programmatic access. This can be used to maintain access to the account even if the original access key is rotated or disabled. This is a building block rule and does not generate alerts on its own. It is meant to be used for correlation with other rules to detect suspicious activity. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS IAM], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Rule Type: BBR] |None |2 |<> |Identifies AWS IAM password recovery requests. An adversary may attempt to gain unauthorized AWS access by abusing password recovery mechanisms. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS Signin], [Use Case: Identity and Access Audit], [Tactic: Initial Access] |None |206 @@ -124,7 +128,7 @@ and their rule type is `machine_learning`. |<> |Identifies the addition of a user to a specified group in AWS Identity and Access Management (IAM). |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Tactic: Persistence], [Resources: Investigation Guide] |None |209 -|<> |An adversary with access to a set of compromised credentials may attempt to persist or escalate privileges by creating a new set of credentials for an existing user. This rule looks for use of the IAM `CreateAccessKey` API operation to create new programmatic access keys for another IAM user. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS IAM], [Use Case: Identity and Access Audit], [Tactic: Privilege Escalation], [Tactic: Persistence], [Resources: Investigation Guide] |8.13.0 |4 +|<> |An adversary with access to a set of compromised credentials may attempt to persist or escalate privileges by creating a new set of credentials for an existing user. 
This rule looks for use of the IAM `CreateAccessKey` API operation to create new programmatic access keys for another IAM user. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS IAM], [Use Case: Identity and Access Audit], [Tactic: Privilege Escalation], [Tactic: Persistence], [Resources: Investigation Guide] |8.13.0 |5 |<> |Identifies attempts to disable or schedule the deletion of an AWS KMS Customer Managed Key (CMK). Deleting an AWS KMS key is destructive and potentially dangerous. It deletes the key material and all metadata associated with the KMS key and is irreversible. After a KMS key is deleted, the data that was encrypted under that KMS key can no longer be decrypted, which means that data becomes unrecoverable. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS KMS], [Use Case: Log Auditing], [Tactic: Impact] |None |106 @@ -262,9 +266,9 @@ and their rule type is `machine_learning`. |<> |Identifies a modification on the dsHeuristics attribute on the bit that holds the configuration of groups excluded from the SDProp process. The SDProp compares the permissions on protected objects with those defined on the AdminSDHolder object. If the permissions on any of the protected accounts and groups do not match, the permissions on the protected accounts and groups are reset to match those of the domain's AdminSDHolder object, meaning that groups excluded will remain unchanged. Attackers can abuse this misconfiguration to maintain long-term access to privileged accounts in these groups. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Active Directory], [Resources: Investigation Guide], [Use Case: Active Directory Monitoring], [Data Source: System] |8.14.0 |212 -|<> |Detects when an administrator role is assigned to an Okta group. An adversary may attempt to assign administrator privileges to an Okta group in order to assign additional permissions to compromised user accounts and maintain access to their target organization. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Persistence] |8.14.0 |308 +|<> |Detects when an administrator role is assigned to an Okta group. An adversary may attempt to assign administrator privileges to an Okta group in order to assign additional permissions to compromised user accounts and maintain access to their target organization. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Persistence] |8.15.0 |409 -|<> |Identifies when an administrator role is assigned to an Okta user. An adversary may attempt to assign an administrator role to an Okta user in order to assign additional permissions to a user account and maintain access to their target's environment. |[Data Source: Okta], [Use Case: Identity and Access Audit], [Tactic: Persistence] |8.14.0 |308 +|<> |Identifies when an administrator role is assigned to an Okta user. An adversary may attempt to assign an administrator role to an Okta user in order to assign additional permissions to a user account and maintain access to their target's environment. |[Data Source: Okta], [Use Case: Identity and Access Audit], [Tactic: Persistence] |8.15.0 |409 |<> |Detects writing executable files that will be automatically launched by Adobe on launch. 
|[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Resources: Investigation Guide], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Microsoft Defender for Endpoint] |8.14.0 |414 @@ -300,23 +304,23 @@ and their rule type is `machine_learning`. |<> |Monitors for the deletion of the kernel ring buffer events through dmesg. Attackers may clear kernel ring buffer events to evade detection after installing a Linux kernel module (LKM). |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Data Source: Elastic Endgame], [Data Source: Auditd Manager] |None |5 -|<> |Detects attempts to create an Okta API token. An adversary may create an Okta API token to maintain access to an organization's network while they work to achieve their objectives. An attacker may abuse an API token to execute techniques such as creating user accounts or disabling security rules or policies. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Persistence] |8.14.0 |308 +|<> |Detects attempts to create an Okta API token. An adversary may create an Okta API token to maintain access to an organization's network while they work to achieve their objectives. An attacker may abuse an API token to execute techniques such as creating user accounts or disabling security rules or policies. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Persistence] |8.15.0 |409 -|<> |Detects attempts to deactivate an Okta application. An adversary may attempt to modify, deactivate, or delete an Okta application in order to weaken an organization's security controls or disrupt their business operations. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Impact] |8.14.0 |309 +|<> |Detects attempts to deactivate an Okta application. An adversary may attempt to modify, deactivate, or delete an Okta application in order to weaken an organization's security controls or disrupt their business operations. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Impact] |8.15.0 |410 -|<> |Detects attempts to deactivate an Okta network zone. Okta network zones can be configured to limit or restrict access to a network based on IP addresses or geolocations. An adversary may attempt to modify, delete, or deactivate an Okta network zone in order to remove or weaken an organization's security controls. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Use Case: Network Security Monitoring], [Tactic: Defense Evasion] |8.14.0 |309 +|<> |Detects attempts to deactivate an Okta network zone. Okta network zones can be configured to limit or restrict access to a network based on IP addresses or geolocations. An adversary may attempt to modify, delete, or deactivate an Okta network zone in order to remove or weaken an organization's security controls. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Use Case: Network Security Monitoring], [Tactic: Defense Evasion] |8.15.0 |410 -|<> |Detects attempts to deactivate an Okta policy. An adversary may attempt to deactivate an Okta policy in order to weaken an organization's security controls. For example, an adversary may attempt to deactivate an Okta multi-factor authentication (MFA) policy in order to weaken the authentication requirements for user accounts. 
|[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Defense Evasion] |8.14.0 |309 +|<> |Detects attempts to deactivate an Okta policy. An adversary may attempt to deactivate an Okta policy in order to weaken an organization's security controls. For example, an adversary may attempt to deactivate an Okta multi-factor authentication (MFA) policy in order to weaken the authentication requirements for user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Defense Evasion] |8.15.0 |410 -|<> |Detects attempts to deactivate a rule within an Okta policy. An adversary may attempt to deactivate a rule within an Okta policy in order to remove or weaken an organization's security controls. |[Use Case: Identity and Access Audit], [Tactic: Defense Evasion], [Data Source: Okta] |8.14.0 |310 +|<> |Detects attempts to deactivate a rule within an Okta policy. An adversary may attempt to deactivate a rule within an Okta policy in order to remove or weaken an organization's security controls. |[Use Case: Identity and Access Audit], [Tactic: Defense Evasion], [Data Source: Okta] |8.15.0 |411 -|<> |Detects attempts to delete an Okta application. An adversary may attempt to modify, deactivate, or delete an Okta application in order to weaken an organization's security controls or disrupt their business operations. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Impact] |8.14.0 |308 +|<> |Detects attempts to delete an Okta application. An adversary may attempt to modify, deactivate, or delete an Okta application in order to weaken an organization's security controls or disrupt their business operations. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Impact] |8.15.0 |409 -|<> |Detects attempts to delete an Okta network zone. Okta network zones can be configured to limit or restrict access to a network based on IP addresses or geolocations. An adversary may attempt to modify, delete, or deactivate an Okta network zone in order to remove or weaken an organization's security controls. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Use Case: Network Security Monitoring], [Tactic: Defense Evasion] |8.14.0 |309 +|<> |Detects attempts to delete an Okta network zone. Okta network zones can be configured to limit or restrict access to a network based on IP addresses or geolocations. An adversary may attempt to modify, delete, or deactivate an Okta network zone in order to remove or weaken an organization's security controls. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Use Case: Network Security Monitoring], [Tactic: Defense Evasion] |8.15.0 |410 -|<> |Detects attempts to delete an Okta policy. An adversary may attempt to delete an Okta policy in order to weaken an organization's security controls. For example, an adversary may attempt to delete an Okta multi-factor authentication (MFA) policy in order to weaken the authentication requirements for user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Defense Evasion] |8.14.0 |309 +|<> |Detects attempts to delete an Okta policy. An adversary may attempt to delete an Okta policy in order to weaken an organization's security controls. For example, an adversary may attempt to delete an Okta multi-factor authentication (MFA) policy in order to weaken the authentication requirements for user accounts. 
|[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Defense Evasion] |8.15.0 |410 -|<> |Detects attempts to delete a rule within an Okta policy. An adversary may attempt to delete an Okta policy rule in order to weaken an organization's security controls. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Defense Evasion] |8.14.0 |309 +|<> |Detects attempts to delete a rule within an Okta policy. An adversary may attempt to delete an Okta policy rule in order to weaken an organization's security controls. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Defense Evasion] |8.15.0 |410 |<> |Adversaries may attempt to disable the Auditd service to evade detection. Auditd is a Linux service that provides system auditing and logging. Disabling the Auditd service can prevent the system from logging important security events, which can be used to detect malicious activity. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Data Source: Elastic Endgame] |None |1 @@ -334,31 +338,31 @@ and their rule type is `machine_learning`. |<> |Adversaries may install a root certificate on a compromised system to avoid warnings when connecting to their command and control servers. Root certificates are used in public key cryptography to identify a root certificate authority (CA). When a root certificate is installed, the system or application will trust certificates in the root's chain of trust that have been signed by the root certificate. |[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend] |None |106 -|<> |Detects attempts to modify an Okta application. An adversary may attempt to modify, deactivate, or delete an Okta application in order to weaken an organization's security controls or disrupt their business operations. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Impact] |8.14.0 |308 +|<> |Detects attempts to modify an Okta application. An adversary may attempt to modify, deactivate, or delete an Okta application in order to weaken an organization's security controls or disrupt their business operations. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Impact] |8.15.0 |409 -|<> |Detects attempts to modify an Okta network zone. Okta network zones can be configured to limit or restrict access to a network based on IP addresses or geolocations. An adversary may attempt to modify, delete, or deactivate an Okta network zone in order to remove or weaken an organization's security controls. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Use Case: Network Security Monitoring], [Tactic: Defense Evasion] |8.14.0 |309 +|<> |Detects attempts to modify an Okta network zone. Okta network zones can be configured to limit or restrict access to a network based on IP addresses or geolocations. An adversary may attempt to modify, delete, or deactivate an Okta network zone in order to remove or weaken an organization's security controls. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Use Case: Network Security Monitoring], [Tactic: Defense Evasion] |8.15.0 |410 -|<> |Detects attempts to modify an Okta policy. An adversary may attempt to modify an Okta policy in order to weaken an organization's security controls. 
For example, an adversary may attempt to modify an Okta multi-factor authentication (MFA) policy in order to weaken the authentication requirements for user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Defense Evasion] |8.14.0 |309 +|<> |Detects attempts to modify an Okta policy. An adversary may attempt to modify an Okta policy in order to weaken an organization's security controls. For example, an adversary may attempt to modify an Okta multi-factor authentication (MFA) policy in order to weaken the authentication requirements for user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Defense Evasion] |8.15.0 |410 -|<> |Detects attempts to modify a rule within an Okta policy. An adversary may attempt to modify an Okta policy rule in order to weaken an organization's security controls. |[Use Case: Identity and Access Audit], [Tactic: Defense Evasion], [Data Source: Okta] |8.14.0 |310 +|<> |Detects attempts to modify a rule within an Okta policy. An adversary may attempt to modify an Okta policy rule in order to weaken an organization's security controls. |[Use Case: Identity and Access Audit], [Tactic: Defense Evasion], [Data Source: Okta] |8.15.0 |411 |<> |Identifies the execution of macOS built-in commands to mount a Server Message Block (SMB) network share. Adversaries may use valid accounts to interact with a remote network share using SMB. |[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Lateral Movement], [Data Source: Elastic Defend] |None |107 -|<> |Detects attempts to reset an Okta user's enrolled multi-factor authentication (MFA) factors. An adversary may attempt to reset the MFA factors for an Okta user's account in order to register new MFA factors and abuse the account to blend in with normal activity in the victim's environment. |[Tactic: Persistence], [Use Case: Identity and Access Audit], [Data Source: Okta] |8.14.0 |309 +|<> |Detects attempts to reset an Okta user's enrolled multi-factor authentication (MFA) factors. An adversary may attempt to reset the MFA factors for an Okta user's account in order to register new MFA factors and abuse the account to blend in with normal activity in the victim's environment. |[Tactic: Persistence], [Use Case: Identity and Access Audit], [Data Source: Okta] |8.15.0 |410 |<> |Identifies discovery request `DescribeInstanceAttribute` with the attribute userData and instanceId in AWS CloudTrail logs. This may indicate an attempt to retrieve user data from an EC2 instance. Adversaries may use this information to gather sensitive data from the instance or to identify potential vulnerabilities. This is a building block rule that does not generate an alert on its own, but serves as a signal for anomalous activity. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: Amazon EC2], [Use Case: Log Auditing], [Tactic: Discovery], [Rule Type: BBR] |None |2 -|<> |Identifies attempts to revoke an Okta API token. An adversary may attempt to revoke or delete an Okta API token to disrupt an organization's business operations. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Impact] |8.14.0 |309 +|<> |Identifies attempts to revoke an Okta API token. An adversary may attempt to revoke or delete an Okta API token to disrupt an organization's business operations. 
|[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Impact] |8.15.0 |410 |<> |Identifies attempts to unload the Elastic Endpoint Security kernel extension via the kextunload command. |[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend] |None |106 -|<> |Detects attempts to bypass Okta multi-factor authentication (MFA). An adversary may attempt to bypass the Okta MFA policies configured for an organization in order to obtain unauthorized access to an application. |[Data Source: Okta], [Use Case: Identity and Access Audit], [Tactic: Credential Access] |8.14.0 |310 +|<> |Detects attempts to bypass Okta multi-factor authentication (MFA). An adversary may attempt to bypass the Okta MFA policies configured for an organization in order to obtain unauthorized access to an application. |[Data Source: Okta], [Use Case: Identity and Access Audit], [Tactic: Credential Access] |8.15.0 |411 |<> |Attackers may try to access private keys, e.g. ssh, in order to gain further authenticated access to the environment. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Data Source: Elastic Defend], [Rule Type: BBR], [Data Source: Sysmon], [Data Source: Elastic Endgame], [Data Source: System] |8.14.0 |106 |<> |Identifies potential brute-force attempts against Microsoft 365 user accounts by detecting a high number of failed login attempts or login sources within a 30-minute window. Attackers may attempt to brute force user accounts to gain unauthorized access to Microsoft 365 services. |[Domain: Cloud], [Domain: SaaS], [Data Source: Microsoft 365], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Credential Access] |8.13.0 |311 -|<> |Identifies when an Okta user account is locked out 3 times within a 3 hour window. An adversary may attempt a brute force or password spraying attack to obtain unauthorized access to user accounts. The default Okta authentication policy ensures that a user account is locked out after 10 failed authentication attempts. |[Use Case: Identity and Access Audit], [Tactic: Credential Access], [Data Source: Okta] |8.14.0 |311 +|<> |Identifies when an Okta user account is locked out 3 times within a 3 hour window. An adversary may attempt a brute force or password spraying attack to obtain unauthorized access to user accounts. The default Okta authentication policy ensures that a user account is locked out after 10 failed authentication attempts. |[Use Case: Identity and Access Audit], [Tactic: Credential Access], [Data Source: Okta] |8.15.0 |412 |<> |This rule detects successful authentications via PAM grantors that are not commonly used. This could indicate an attacker is attempting to escalate privileges or maintain persistence on the system by modifying the default PAM configuration. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Credential Access], [Tactic: Persistence], [Data Source: Auditd Manager] |None |1 @@ -714,29 +718,29 @@ and their rule type is `machine_learning`. |<> |Finder Sync plugins enable users to extend Finder’s functionality by modifying the user interface. Adversaries may abuse this feature by adding a rogue Finder Plugin to repeatedly execute malicious payloads for persistence. 
|[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Elastic Defend] |None |206 -|<> |Detects a first occurrence event for a personal access token (PAT) not seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |Detects a first occurrence event for a personal access token (PAT) not seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 |<> |Identifies when a user is observed for the first time in the last 14 days authenticating using the deviceCode protocol. The device code authentication flow can be abused by attackers to phish users and steal access tokens to impersonate the victim. By its very nature, device code should only be used when logging in to devices without keyboards, where it is difficult to enter emails and passwords. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Use Case: Identity and Access Audit], [Tactic: Credential Access] |None |1 -|<> |Detects an interaction with a private GitHub repository from a new IP address not seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |Detects an interaction with a private GitHub repository from a new IP address not seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 -|<> |Detects a new private repo interaction for a GitHub user not seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |Detects a new private repo interaction for a GitHub user not seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 -|<> |Detects a new IP address used for a GitHub PAT not previously seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Initial Access], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |Detects a new IP address used for a GitHub PAT not previously seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Initial Access], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 -|<> |Detects a new IP address used for a GitHub user not previously seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Initial Access], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |Detects a new IP address used for a GitHub user not previously seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Initial Access], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 -|<> |Identifies the first occurrence of an Okta user session started via a proxy. |[Tactic: Initial Access], [Use Case: Identity and Access Audit], [Data Source: Okta] |8.14.0 |104 +|<> |Identifies the first occurrence of an Okta user session started via a proxy. |[Tactic: Initial Access], [Use Case: Identity and Access Audit], [Data Source: Okta] |8.15.0 |205 -|<> |A new PAT was used for a GitHub user not previously seen in the last 14 days. 
|[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Persistence], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |A new PAT was used for a GitHub user not previously seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Persistence], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 -|<> |Detects a new private repo interaction for a GitHub PAT not seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |Detects a new private repo interaction for a GitHub PAT not seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 |<> |Identifies the first occurrence of an AWS Security Token Service (STS) `GetFederationToken` request made by a user within the last 10 days. The `GetFederationToken` API call allows users to request temporary security credentials to access AWS resources. Adversaries may use this API to obtain temporary credentials to access resources they would not normally have access to. |[Domain: Cloud], [Data Source: Amazon Web Services], [Data Source: AWS], [Data Source: AWS STS], [Use Case: Threat Detection], [Tactic: Defense Evasion] |None |1 -|<> |Detects a new user agent used for a GitHub PAT not previously seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Initial Access], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |Detects a new user agent used for a GitHub PAT not previously seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Initial Access], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 -|<> |Detects a new user agent used for a GitHub user not previously seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Initial Access], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |Detects a new user agent used for a GitHub user not previously seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Initial Access], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 |<> |This rule detects the first time a principal calls AWS Cloudwatch `CreateStack` or `CreateStackSet` API. Cloudformation is used to create a single collection of cloud resources called a stack, via a defined template file. An attacker with the appropriate privileges could leverage Cloudformation to create specific resources needed to further exploit the environment. This is a new terms rule that looks for the first instance of this behavior in the last 10 days for a role or IAM user within a particular account. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: Cloudformation], [Use Case: Asset Visibility], [Tactic: Execution] |None |1 @@ -812,21 +816,21 @@ and their rule type is `machine_learning`. |<> |This rule detects a suspicious egress network connection attempt from a Git hook script. Git hooks are scripts that Git executes before or after events such as: commit, push, and receive. An attacker can abuse these features to execute arbitrary commands on the system, establish persistence or to initialize a network connection to a remote server and exfiltrate data or download additional payloads. 
|[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Tactic: Execution], [Tactic: Defense Evasion], [Data Source: Elastic Defend] |None |2 -|<> |Detects the deletion of a GitHub app either from a repo or an organization. |[Domain: Cloud], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: Github] |8.12.0 |103 +|<> |Detects the deletion of a GitHub app either from a repo or an organization. |[Domain: Cloud], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: Github] |8.13.0 |204 -|<> |This rule detects when a member is granted the organization owner role of a GitHub organization. This role provides admin level privileges. Any new owner role should be investigated to determine its validity. Unauthorized owner roles could indicate compromise within your organization and provide unlimited access to data and settings. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Persistence], [Data Source: Github] |8.12.0 |105 +|<> |This rule detects when a member is granted the organization owner role of a GitHub organization. This role provides admin level privileges. Any new owner role should be investigated to determine its validity. Unauthorized owner roles could indicate compromise within your organization and provide unlimited access to data and settings. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Persistence], [Data Source: Github] |8.13.0 |206 -|<> |Access to private GitHub organization resources was revoked for a PAT. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Impact], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |Access to private GitHub organization resources was revoked for a PAT. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Impact], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 -|<> |This rule detects setting modifications for protected branches of a GitHub repository. Branch protection rules can be used to enforce certain workflows or requirements before a contributor can push changes to a branch in your repository. Changes to these protected branch settings should be investigated and verified as legitimate activity. Unauthorized changes could be used to lower your organization's security posture and leave you exposed for future attacks. |[Domain: Cloud], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Github] |8.12.0 |105 +|<> |This rule detects setting modifications for protected branches of a GitHub repository. Branch protection rules can be used to enforce certain workflows or requirements before a contributor can push changes to a branch in your repository. Changes to these protected branch settings should be investigated and verified as legitimate activity. Unauthorized changes could be used to lower your organization's security posture and leave you exposed for future attacks. |[Domain: Cloud], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Github] |8.13.0 |206 -|<> |A new GitHub repository was created. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |A new GitHub repository was created. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 -|<> |This rule detects when a GitHub repository is deleted within your organization. 
Repositories are a critical component used within an organization to manage work, collaborate with others and release products to the public. Any delete action against a repository should be investigated to determine it's validity. Unauthorized deletion of organization repositories could cause irreversible loss of intellectual property and indicate compromise within your organization. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Impact], [Data Source: Github] |8.12.0 |102 +|<> |This rule detects when a GitHub repository is deleted within your organization. Repositories are a critical component used within an organization to manage work, collaborate with others and release products to the public. Any delete action against a repository should be investigated to determine its validity. Unauthorized deletion of organization repositories could cause irreversible loss of intellectual property and indicate compromise within your organization. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Impact], [Data Source: Github] |8.13.0 |203 + -|<> |This rule is part of the "GitHub UEBA - Unusual Activity from Account Pack", and leverages alert data to determine when multiple alerts are executed by the same user in a timespan of one hour. Analysts can use this to prioritize triage and response, as these alerts are a higher indicator of compromised user accounts or PATs. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: Higher-Order Rule], [Data Source: Github] |None |1 +|<> |This rule is part of the "GitHub UEBA - Unusual Activity from Account Pack", and leverages alert data to determine when multiple alerts are executed by the same user in a timespan of one hour. Analysts can use this to prioritize triage and response, as these alerts are a higher indicator of compromised user accounts or PATs. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: Higher-Order Rule], [Data Source: Github] |8.13.0 |101 + -|<> |A GitHub user was blocked from access to an organization. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Impact], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |A GitHub user was blocked from access to an organization. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Impact], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 |<> |Drive and Docs is a Google Workspace service that allows users to leverage Google Drive and Google Docs. Access to files is based on inherited permissions from the child organizational unit the user belongs to which is scoped by administrators. Typically if a user is removed, their files can be transferred to another user by the administrator. This service can also be abused by adversaries to transfer files to an adversary account for potential exfiltration. |[Domain: Cloud], [Data Source: Google Workspace], [Tactic: Collection], [Resources: Investigation Guide] |None |107 @@ -874,11 +878,11 @@ and their rule type is `machine_learning`. |<> |A machine learning job has detected unusually high mean of RDP session duration. Long RDP sessions can be used to evade detection mechanisms via session persistence, and might be used to perform tasks such as lateral movement, that might require uninterrupted access to a compromised machine.
|[Use Case: Lateral Movement Detection], [Rule Type: ML], [Rule Type: Machine Learning], [Tactic: Lateral Movement] |None |4 -|<> |Detects a high number of unique private repo clone events originating from a single personal access token within a short time period. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Data Source: Github] |8.12.0 |103 +|<> |Detects a high number of unique private repo clone events originating from a single personal access token within a short time period. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Data Source: Github] |8.13.0 |204 -|<> |Detects when an Okta client address has a certain threshold of Okta user authentication events with multiple device token hashes generated for single user authentication. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access] |8.14.0 |103 +|<> |Detects when an Okta client address has a certain threshold of Okta user authentication events with multiple device token hashes generated for single user authentication. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access] |8.15.0 |203 -|<> |Identifies a high number of Okta user password reset or account unlock attempts. An adversary may attempt to obtain unauthorized access to Okta user accounts using these methods and attempt to blend in with normal activity in their target's environment and evade detection. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Defense Evasion] |8.14.0 |311 +|<> |Identifies a high number of Okta user password reset or account unlock attempts. An adversary may attempt to obtain unauthorized access to Okta user accounts using these methods and attempt to blend in with normal activity in their target's environment and evade detection. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Defense Evasion] |8.15.0 |412 |<> |This rule identifies a high number (10) of process terminations via pkill from the same host within a short time period. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Impact], [Resources: Investigation Guide], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Auditd Manager] |None |112 @@ -1018,7 +1022,7 @@ and their rule type is `machine_learning`. |<> |Indicates the creation of a scheduled task. Adversaries can use these to establish persistence, move laterally, and/or escalate privileges. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Elastic Defend], [Data Source: Sysmon] |8.14.0 |208 -|<> |Detects multi-factor authentication (MFA) deactivation with no subsequent re-activation for an Okta user account. An adversary may deactivate MFA for an Okta user account in order to weaken the authentication requirements for the account. 
|[Tactic: Persistence], [Use Case: Identity and Access Audit], [Data Source: Okta], [Domain: Cloud] |8.14.0 |311 +|<> |Detects multi-factor authentication (MFA) deactivation with no subsequent re-activation for an Okta user account. An adversary may deactivate MFA for an Okta user account in order to weaken the authentication requirements for the account. |[Tactic: Persistence], [Use Case: Identity and Access Audit], [Data Source: Okta], [Domain: Cloud] |8.15.0 |412 |<> |Detects when multi-factor authentication (MFA) is disabled for a Google Workspace organization. An adversary may attempt to modify a password policy in order to weaken an organization’s security controls. |[Domain: Cloud], [Data Source: Google Workspace], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |206 @@ -1042,7 +1046,7 @@ and their rule type is `machine_learning`. |<> |This rules identifies a process created from an executable with a space appended to the end of the filename. This may indicate an attempt to masquerade a malicious file as benign to gain user execution. When a space is added to the end of certain files, the OS will execute the file according to it's true filetype instead of it's extension. Adversaries can hide a program's true filetype by changing the extension of the file. They can then add a space to the end of the name so that the OS automatically executes the file when it's double-clicked. |[Domain: Endpoint], [OS: Linux], [OS: macOS], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend] |None |7 -|<> |A member was removed or their invitation to join was removed from a GitHub Organization. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Impact], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |A member was removed or their invitation to join was removed from a GitHub Organization. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Impact], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 |<> |Identifies the creation of a memory dump file with an unusual extension, which can indicate an attempt to disguise a memory dump as another file type to bypass security defenses. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Rule Type: BBR] |None |2 @@ -1140,7 +1144,7 @@ and their rule type is `machine_learning`. |<> |Identify the modification of the msPKIAccountCredentials attribute in an Active Directory User Object. Attackers can abuse the credentials roaming feature to overwrite an arbitrary file for privilege escalation. ms-PKI-AccountCredentials contains binary large objects (BLOBs) of encrypted credential objects from the credential manager store, private keys, certificates, and certificate requests. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Data Source: Active Directory], [Tactic: Privilege Escalation], [Use Case: Active Directory Monitoring], [Data Source: System] |8.14.0 |113 -|<> |Detects attempts to modify or delete a sign on policy for an Okta application. An adversary may attempt to modify or delete the sign on policy for an Okta application in order to remove or weaken an organization's security controls. |[Tactic: Persistence], [Use Case: Identity and Access Audit], [Data Source: Okta] |8.14.0 |309 +|<> |Detects attempts to modify or delete a sign on policy for an Okta application. 
An adversary may attempt to modify or delete the sign on policy for an Okta application in order to remove or weaken an organization's security controls. |[Tactic: Persistence], [Use Case: Identity and Access Audit], [Data Source: Okta] |8.15.0 |410 |<> |Managed Object Format (MOF) files can be compiled locally or remotely through mofcomp.exe. Attackers may leverage MOF files to build their own namespaces and classes into the Windows Management Instrumentation (WMI) repository, or establish persistence using WMI Event Subscription. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: Elastic Defend], [Data Source: Microsoft Defender for Endpoint], [Data Source: Elastic Endgame], [Data Source: System], [Data Source: Crowdstrike] |None |4 @@ -1160,19 +1164,19 @@ and their rule type is `machine_learning`. |<> |This rule uses alert data to determine when multiple alerts in different phases of an attack involving the same host are triggered. Analysts can use this to prioritize triage and response, as these hosts are more likely to be compromised. |[Use Case: Threat Detection], [Rule Type: Higher-Order Rule] |None |4 -|<> |This rule detects when a specific Okta actor has multiple device token hashes for a single Okta session. This may indicate an authenticated session has been hijacked or is being used by multiple devices. Adversaries may hijack a session to gain unauthorized access to Okta admin console, applications, tenants, or other resources. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access], [Domain: SaaS] |8.14.0 |204 +|<> |This rule detects when a specific Okta actor has multiple device token hashes for a single Okta session. This may indicate an authenticated session has been hijacked or is being used by multiple devices. Adversaries may hijack a session to gain unauthorized access to Okta admin console, applications, tenants, or other resources. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access], [Domain: SaaS] |8.15.0 |304 |<> |Identifies multiple logon failures followed by a successful one from the same source address. Adversaries will often brute force login attempts across multiple users with a common or known password, in an attempt to gain access to accounts. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide], [Data Source: System] |8.14.0 |111 |<> |Identifies multiple consecutive logon failures from the same source address and within a short time interval. Adversaries will often brute force login attempts across multiple users with a common or known password, in an attempt to gain access to accounts. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide], [Data Source: System] |8.14.0 |110 -|<> |Detects when a user has started multiple Okta sessions with the same user account and different session IDs. This may indicate that an attacker has stolen the user's session cookie and is using it to access the user's account from a different location. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Lateral Movement] |8.14.0 |105 +|<> |Detects when a user has started multiple Okta sessions with the same user account and different session IDs. This may indicate that an attacker has stolen the user's session cookie and is using it to access the user's account from a different location. 
|[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Lateral Movement] |8.15.0 |206 -|<> |Detects when Okta user authentication events are reported for multiple users with the same device token hash behind a proxy. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access] |8.14.0 |105 +|<> |Detects when Okta user authentication events are reported for multiple users with the same device token hash behind a proxy. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access] |8.15.0 |206 -|<> |Detects when a certain threshold of Okta user authentication events are reported for multiple users from the same client address. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access] |8.14.0 |103 +|<> |Detects when a certain threshold of Okta user authentication events are reported for multiple users from the same client address. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access] |8.15.0 |203 -|<> |Detects when a high number of Okta user authentication events are reported for multiple users in a short time frame. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access] |8.14.0 |103 +|<> |Detects when a high number of Okta user authentication events are reported for multiple users in a short time frame. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access] |8.15.0 |203 |<> |Windows Credential Manager allows you to create, view, or delete saved credentials for signing into websites, connected applications, and networks. An adversary may abuse this to list or dump credentials stored in the Credential Manager for saved usernames and passwords. This may also be performed in preparation of lateral movement. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Data Source: System] |8.14.0 |111 @@ -1226,15 +1230,15 @@ and their rule type is `machine_learning`. |<> |Identifies the use of the Exchange PowerShell cmdlet, Set-CASMailbox, to add a new ActiveSync allowed device. Adversaries may target user email to collect sensitive information. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Tactic: Execution], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: System], [Data Source: Microsoft Defender for Endpoint], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Crowdstrike] |8.14.0 |311 -|<> |This rule detects when a new GitHub App has been installed in your organization account. GitHub Apps extend GitHub's functionality both within and outside of GitHub. 
When an app is installed it is granted permissions to read or modify your repository and organization data. Only trusted apps should be installed and any newly installed apps should be investigated to verify their legitimacy. Unauthorized app installation could lower your organization's security posture and leave you exposed for future attacks. |[Domain: Cloud], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: Github] |8.12.0 |103 +|<> |This rule detects when a new GitHub App has been installed in your organization account. GitHub Apps extend GitHub's functionality both within and outside of GitHub. When an app is installed it is granted permissions to read or modify your repository and organization data. Only trusted apps should be installed and any newly installed apps should be investigated to verify their legitimacy. Unauthorized app installation could lower your organization's security posture and leave you exposed for future attacks. |[Domain: Cloud], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: Github] |8.13.0 |204 + -|<> |Detects when a new member is added to a GitHub organization as an owner. This role provides admin level privileges. Any new owner roles should be investigated to determine it's validity. Unauthorized owner roles could indicate compromise within your organization and provide unlimited access to data and settings. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Persistence], [Data Source: Github] |8.12.0 |105 +|<> |Detects when a new member is added to a GitHub organization as an owner. This role provides admin level privileges. Any new owner roles should be investigated to determine their validity. Unauthorized owner roles could indicate compromise within your organization and provide unlimited access to data and settings. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Persistence], [Data Source: Github] |8.13.0 |206 + -|<> |Detects events where Okta behavior detection has identified a new authentication behavior. |[Use Case: Identity and Access Audit], [Tactic: Initial Access], [Data Source: Okta] |8.14.0 |105 +|<> |Detects events where Okta behavior detection has identified a new authentication behavior. |[Use Case: Identity and Access Audit], [Tactic: Initial Access], [Data Source: Okta] |8.15.0 |206 + -|<> |Detects the creation of a new Identity Provider (IdP) by a Super Administrator or Organization Administrator within Okta. |[Use Case: Identity and Access Audit], [Tactic: Persistence], [Data Source: Okta] |8.14.0 |104 +|<> |Detects the creation of a new Identity Provider (IdP) by a Super Administrator or Organization Administrator within Okta. |[Use Case: Identity and Access Audit], [Tactic: Persistence], [Data Source: Okta] |8.15.0 |205 + -|<> |A new user was added to a GitHub organization. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Persistence], [Rule Type: BBR], [Data Source: Github] |8.12.0 |103 +|<> |A new user was added to a GitHub organization. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Persistence], [Rule Type: BBR], [Data Source: Github] |8.13.0 |204 |<> |Identifies a new or modified federation domain, which can be used to create a trust between O365 and an external identity provider. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Identity and Access Audit], [Tactic: Privilege Escalation] |None |207 @@ -1252,17 +1256,17 @@
|<> |Identifies the modification of the Microsoft Office "Office Test" Registry key, a registry location that can be used to specify a DLL which will be executed every time an MS Office application is started. Attackers can abuse this to gain persistence on a compromised host. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Data Source: Elastic Endgame], [Data Source: Microsoft Defender for Endpoint], [Data Source: SentinelOne] |8.13.0 |103 -|<> |Identifies a high number of failed Okta user authentication attempts from a single IP address, which could be indicative of a brute force or password spraying attack. An adversary may attempt a brute force or password spraying attack to obtain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Tactic: Credential Access], [Data Source: Okta] |8.14.0 |311 +|<> |Identifies a high number of failed Okta user authentication attempts from a single IP address, which could be indicative of a brute force or password spraying attack. An adversary may attempt a brute force or password spraying attack to obtain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Tactic: Credential Access], [Data Source: Okta] |8.15.0 |412 -|<> |Detects when Okta FastPass prevents a user from authenticating to a phishing website. |[Tactic: Initial Access], [Use Case: Identity and Access Audit], [Data Source: Okta] |8.14.0 |206 +|<> |Detects when Okta FastPass prevents a user from authenticating to a phishing website. |[Tactic: Initial Access], [Use Case: Identity and Access Audit], [Data Source: Okta] |8.15.0 |307 -|<> |Detects sign-in events where authentication is carried out via a third-party Identity Provider (IdP). |[Use Case: Identity and Access Audit], [Tactic: Initial Access], [Data Source: Okta] |8.14.0 |105 +|<> |Detects sign-in events where authentication is carried out via a third-party Identity Provider (IdP). |[Use Case: Identity and Access Audit], [Tactic: Initial Access], [Data Source: Okta] |8.15.0 |206 -|<> |Okta ThreatInsight is a feature that provides valuable debug data regarding authentication and authorization processes, which is logged in the system. Within this data, there is a specific field called threat_suspected, which represents Okta's internal evaluation of the authentication or authorization workflow. When this field is set to True, it suggests the presence of potential credential access techniques, such as password-spraying, brute-forcing, replay attacks, and other similar threats. |[Use Case: Identity and Access Audit], [Data Source: Okta] |8.14.0 |308 +|<> |Okta ThreatInsight is a feature that provides valuable debug data regarding authentication and authorization processes, which is logged in the system. Within this data, there is a specific field called threat_suspected, which represents Okta's internal evaluation of the authentication or authorization workflow. When this field is set to True, it suggests the presence of potential credential access techniques, such as password-spraying, brute-forcing, replay attacks, and other similar threats. |[Use Case: Identity and Access Audit], [Data Source: Okta] |8.15.0 |409 -|<> |A user has initiated a session impersonation granting them access to the environment with the permissions of the user they are impersonating. This would likely indicate Okta administrative access and should only ever occur if requested and expected. 
|[Use Case: Identity and Access Audit], [Tactic: Credential Access], [Data Source: Okta] |8.14.0 |310 +|<> |A user has initiated a session impersonation granting them access to the environment with the permissions of the user they are impersonating. This would likely indicate Okta administrative access and should only ever occur if requested and expected. |[Use Case: Identity and Access Audit], [Tactic: Credential Access], [Data Source: Okta] |8.15.0 |411 -|<> |Detects when a specific Okta actor has multiple sessions started from different geolocations. Adversaries may attempt to launch an attack by using a list of known usernames and passwords to gain unauthorized access to user accounts from different locations. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Initial Access] |8.14.0 |203 +|<> |Detects when a specific Okta actor has multiple sessions started from different geolocations. Adversaries may attempt to launch an attack by using a list of known usernames and passwords to gain unauthorized access to user accounts from different locations. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Initial Access] |8.15.0 |303 |<> |Identifies the occurence of files uploaded to OneDrive being detected as Malware by the file scanning engine. Attackers can use File Sharing and Organization Repositories to spread laterally within the company and amplify their access. Users can inadvertently share these files without knowing their maliciousness, giving adversaries opportunity to gain initial access to other endpoints in the environment. |[Domain: Cloud], [Data Source: Microsoft 365], [Tactic: Lateral Movement] |None |206 @@ -1316,11 +1320,11 @@ and their rule type is `machine_learning`. |<> |Identifies the creation of a new port forwarding rule. An adversary may abuse this technique to bypass network segmentation restrictions. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Command and Control], [Tactic: Defense Evasion], [Resources: Investigation Guide], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Microsoft Defender for Endpoint] |8.14.0 |413 -|<> |Detects when a user grants permissions to an Azure-registered application or when an administrator grants tenant-wide permissions to an application. An adversary may create an Azure-registered application that requests access to data such as contact information, email, or documents. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft 365], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |212 +|<> |Detects when a user grants permissions to an Azure-registered application or when an administrator grants tenant-wide permissions to an application. An adversary may create an Azure-registered application that requests access to data such as contact information, email, or documents. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft 365], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |213 |<> |This rule detects a known command and control pattern in network events. The FIN7 threat group is known to use this command and control technique, while maintaining persistence in their target's network. 
|[Use Case: Threat Detection], [Tactic: Command and Control], [Domain: Endpoint], [Data Source: PAN-OS] |None |106 -|<> |Detects possible Denial of Service (DoS) attacks against an Okta organization. An adversary may attempt to disrupt an organization's business operations by performing a DoS attack against its Okta service. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Impact] |8.14.0 |308 +|<> |Detects possible Denial of Service (DoS) attacks against an Okta organization. An adversary may attempt to disrupt an organization's business operations by performing a DoS attack against its Okta service. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Impact] |8.15.0 |409 |<> |Active Directory Integrated DNS (ADIDNS) is one of the core components of AD DS, leveraging AD's access control and replication to maintain domain consistency. It stores DNS zones as AD objects, a feature that, while robust, introduces some security issues, such as wildcard records, mainly because of the default permission (Any authenticated users) to create DNS-named records. Attackers can create wildcard records to redirect traffic that doesn't explicitly match records contained in the zone, becoming the Man-in-the-Middle and being able to abuse DNS similarly to LLMNR/NBNS spoofing. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Data Source: Active Directory], [Use Case: Active Directory Monitoring], [Data Source: System] |8.14.0 |103 @@ -1486,7 +1490,7 @@ and their rule type is `machine_learning`. |<> |Identifies potentially malicious processes communicating via a port paring typically not associated with SSH. For example, SSH over port 2200 or port 2222 as opposed to the traditional port 22. Adversaries may make changes to the standard port a protocol uses to bypass filtering or muddle analysis/parsing of network data. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Command and Control], [OS: macOS], [Data Source: Elastic Defend] |None |6 -|<> |Detects when an attacker abuses the Multi-Factor authentication mechanism by repeatedly issuing login requests until the user eventually accepts the Okta push notification. An adversary may attempt to bypass the Okta MFA policies configured for an organization to obtain unauthorized access. |[Use Case: Identity and Access Audit], [Tactic: Credential Access], [Data Source: Okta] |8.14.0 |106 +|<> |Detects when an attacker abuses the Multi-Factor authentication mechanism by repeatedly issuing login requests until the user eventually accepts the Okta push notification. An adversary may attempt to bypass the Okta MFA policies configured for an organization to obtain unauthorized access. |[Use Case: Identity and Access Audit], [Tactic: Credential Access], [Data Source: Okta] |8.15.0 |207 |<> |Identifies a Secure Shell (SSH) client or server process creating or writing to a known SSH backdoor log file. Adversaries may modify SSH related binaries for persistence or credential access via patching sensitive functions to enable unauthorized access or to log SSH credentials for exfiltration. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Tactic: Credential Access], [Data Source: Elastic Endgame], [Data Source: Elastic Defend] |None |110 @@ -1614,7 +1618,7 @@ and their rule type is `machine_learning`. 
|<> |This rule monitors for the execution of a suspicious sudo command that is leveraged in CVE-2019-14287 to escalate privileges to root. Sudo does not verify the presence of the designated user ID and proceeds to execute using a user ID that can be chosen arbitrarily. By using the sudo privileges, the command "sudo -u#-1" translates to an ID of 0, representing the root user. This exploit may work for sudo versions prior to v1.28. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Elastic Defend], [Use Case: Vulnerability], [Data Source: Elastic Endgame], [Data Source: Auditd Manager] |None |4 -|<> |This rule detects potential sudo token manipulation attacks through process injection by monitoring the use of a debugger (gdb) process followed by a successful uid change event during the execution of the sudo process. A sudo token manipulation attack is performed by injecting into a process that has a valid sudo token, which can then be used by attackers to activate their own sudo token. This attack requires ptrace to be enabled in conjunction with the existence of a living process that has a valid sudo token with the same uid as the current user. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Elastic Defend] |None |5 +|<> |This rule detects potential sudo token manipulation attacks through process injection by monitoring the use of a debugger (gdb) process followed by a successful uid change event during the execution of the sudo process. A sudo token manipulation attack is performed by injecting into a process that has a valid sudo token, which can then be used by attackers to activate their own sudo token. This attack requires ptrace to be enabled in conjunction with the existence of a living process that has a valid sudo token with the same uid as the current user. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Elastic Defend] |None |7 |<> |This rule monitors for the usage of the built-in Linux DebugFS utility to access a disk device without root permissions. Linux users that are part of the "disk" group have sufficient privileges to access all data inside of the machine through DebugFS. Attackers may leverage DebugFS in conjunction with "disk" permissions to read sensitive files owned by root, such as the shadow file, root ssh private keys or other sensitive files that may allow them to further escalate privileges. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Elastic Defend] |None |6 @@ -1642,7 +1646,7 @@ and their rule type is `machine_learning`. |<> |Identifies a privilege escalation attempt via exploiting CVE-2022-38028 to hijack the print spooler service execution. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Tactic: Defense Evasion], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: Microsoft Defender for Endpoint], [Data Source: SentinelOne] |8.14.0 |203 -|<> |Detects when an attacker abuses the Multi-Factor authentication mechanism by repeatedly issuing login requests until the user eventually accepts the Okta push notification. An adversary may attempt to bypass the Okta MFA policies configured for an organization to obtain unauthorized access. 
|[Use Case: Identity and Access Audit], [Tactic: Credential Access], [Data Source: Okta] |8.14.0 |312 +|<> |Detects when an attacker abuses the Multi-Factor authentication mechanism by repeatedly issuing login requests until the user eventually accepts the Okta push notification. An adversary may attempt to bypass the Okta MFA policies configured for an organization to obtain unauthorized access. |[Use Case: Identity and Access Audit], [Tactic: Credential Access], [Data Source: Okta] |8.15.0 |413 |<> |This rule monitors for the execution of suspicious commands via screen and tmux. When launching a command and detaching directly, the commands will be executed in the background via its parent process. Attackers may leverage screen or tmux to execute commands while attempting to evade detection. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Data Source: Elastic Endgame] |None |5 @@ -1710,7 +1714,7 @@ and their rule type is `machine_learning`. |<> |Identifies modifications to the root crontab file. Adversaries may overwrite this file to gain code execution with root privileges by exploiting privileged file write or move related vulnerabilities. |[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Elastic Defend] |None |106 -|<> |Identifies instances where a process is executed with user/group ID 0 (root), and a real user/group ID that is not 0. This is indicative of a process that has been granted SUID/SGID permissions, allowing it to run with elevated privileges. Attackers may leverage a misconfiguration for exploitation in order to escalate their privileges to root, or establish a backdoor for persistence. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Tactic: Persistence], [Data Source: Elastic Defend] |None |3 +|<> |Identifies instances where a process is executed with user/group ID 0 (root), and a real user/group ID that is not 0. This is indicative of a process that has been granted SUID/SGID permissions, allowing it to run with elevated privileges. Attackers may leverage a misconfiguration for exploitation in order to escalate their privileges to root, or establish a backdoor for persistence. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Tactic: Persistence], [Data Source: Elastic Defend] |None |5 |<> |Identifies a privilege escalation attempt via a rogue Windows directory (Windir) environment variable. This is a known primitive that is often combined with other vulnerabilities to elevate privileges. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: Microsoft Defender for Endpoint], [Data Source: SentinelOne] |8.14.0 |308 @@ -1976,11 +1980,11 @@ and their rule type is `machine_learning`. |<> |A statistical model has identified command-and-control (C2) beaconing activity with high confidence. Beaconing can help attackers maintain stealthy communication with their C2 servers, receive instructions and payloads, exfiltrate data and maintain persistence in a network. 
|[Domain: Network], [Use Case: C2 Beaconing Detection], [Tactic: Command and Control] |None |5 -|<> |Detects a sequence of suspicious activities on Windows hosts indicative of credential compromise, followed by efforts to undermine multi-factor authentication (MFA) and single sign-on (SSO) mechanisms for an Okta user account. |[Tactic: Persistence], [Use Case: Identity and Access Audit], [Data Source: Okta], [Data Source: Elastic Defend], [Rule Type: Higher-Order Rule], [Domain: Endpoint], [Domain: Cloud] |8.14.0 |104 +|<> |Detects a sequence of suspicious activities on Windows hosts indicative of credential compromise, followed by efforts to undermine multi-factor authentication (MFA) and single sign-on (SSO) mechanisms for an Okta user account. |[Tactic: Persistence], [Use Case: Identity and Access Audit], [Data Source: Okta], [Data Source: Elastic Defend], [Rule Type: Higher-Order Rule], [Domain: Endpoint], [Domain: Cloud] |8.15.0 |205 |<> |Adversaries may create or modify the Sublime application plugins or scripts to execute a malicious payload each time the Sublime application is started. |[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Elastic Defend] |None |108 -|<> |Detects successful single sign-on (SSO) events to Okta applications from an unrecognized or "unknown" client device, as identified by the user-agent string. This activity may be indicative of exploitation of a vulnerability in Okta's Classic Engine, which could allow an attacker to bypass application-specific sign-on policies, such as device or network restrictions. The vulnerability potentially enables unauthorized access to applications using only valid, stolen credentials, without requiring additional authentication factors. |[Domain: SaaS], [Data Source: Okta], [Use Case: Threat Detection], [Use Case: Identity and Access Audit], [Tactic: Initial Access] |8.14.0 |103 +|<> |Detects successful single sign-on (SSO) events to Okta applications from an unrecognized or "unknown" client device, as identified by the user-agent string. This activity may be indicative of exploitation of a vulnerability in Okta's Classic Engine, which could allow an attacker to bypass application-specific sign-on policies, such as device or network restrictions. The vulnerability potentially enables unauthorized access to applications using only valid, stolen credentials, without requiring additional authentication factors. |[Domain: SaaS], [Data Source: Okta], [Use Case: Threat Detection], [Use Case: Identity and Access Audit], [Tactic: Initial Access] |8.15.0 |204 |<> |This rule monitors for the usage of the sudo -l command, which is used to list the allowed and forbidden commands for the invoking user. Attackers may execute this command to enumerate commands allowed to be executed with sudo permissions, potentially allowing to escalate privileges to root. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Discovery], [Data Source: Elastic Defend] |None |6 @@ -2000,7 +2004,7 @@ and their rule type is `machine_learning`. |<> |Identify read access to a high number of Active Directory object attributes. The knowledge of objects properties can help adversaries find vulnerabilities, elevate privileges or collect sensitive information. 
|[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Discovery], [Data Source: System], [Data Source: Active Directory], [Data Source: Windows] |8.14.0 |102 -|<> |Detects when a user reports suspicious activity for their Okta account. These events should be investigated, as they can help security teams identify when an adversary is attempting to gain access to their network. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Initial Access] |8.14.0 |308 +|<> |Detects when a user reports suspicious activity for their Okta account. These events should be investigated, as they can help security teams identify when an adversary is attempting to gain access to their network. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Initial Access] |8.15.0 |409 |<> |Identifies the creation of the Antimalware Scan Interface (AMSI) DLL in an unusual location. This may indicate an attempt to bypass AMSI by loading a rogue AMSI module instead of the legit one. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Endgame], [Resources: Investigation Guide], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Microsoft Defender for Endpoint] |8.14.0 |313 @@ -2182,7 +2186,7 @@ and their rule type is `machine_learning`. |<> |Identifies suspicious child processes of frequently targeted Microsoft Office applications (Word, PowerPoint, and Excel). These child processes are often launched during exploitation of Office applications or by documents with malicious macros. |[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Initial Access], [Data Source: Elastic Defend] |None |207 -|<> |Identifies a high volume of `pbpaste` executions, which may indicate a bash loop continuously collecting clipboard contents, potentially allowing an attacker to harvest user credentials or other sensitive information. |[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Credential Access], [Data Source: Jamf Protect], [Data Source: Elastic Defend] |8.12.0 |1 +|<> |Identifies a high volume of `pbpaste` executions, which may indicate a bash loop continuously collecting clipboard contents, potentially allowing an attacker to harvest user credentials or other sensitive information. |[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Credential Access], [Data Source: Jamf Protect], [Data Source: Elastic Defend] |None |1 |<> |This rule monitors the syslog log file for error messages related to the rc.local process. The rc.local file is a script that is executed during the boot process on Linux systems. Attackers may attempt to modify the rc.local file to execute malicious commands or scripts during system startup. This rule detects error messages such as "Connection refused," "No such file or directory," or "command not found" in the syslog log file, which may indicate that the rc.local file has been tampered with. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence] |None |2 @@ -2264,9 +2268,9 @@ and their rule type is `machine_learning`. |<> |Monitors for the elevation of regular user permissions to root permissions through a previously unknown executable. Attackers may attempt to evade detection by hijacking the execution flow and hooking certain functions/syscalls through a rootkit in order to provide easy access to root via a special modified command. 
|[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Tactic: Defense Evasion], [Data Source: Elastic Defend] |None |4 -|<> |Identifies unauthorized access attempts to Okta applications. |[Tactic: Initial Access], [Use Case: Identity and Access Audit], [Data Source: Okta] |8.14.0 |309 +|<> |Identifies unauthorized access attempts to Okta applications. |[Tactic: Initial Access], [Use Case: Identity and Access Audit], [Data Source: Okta] |8.15.0 |410 -|<> |Identifies a failed OAuth 2.0 token grant attempt for a public client app using client credentials. This event is generated when a public client app attempts to exchange a client credentials grant for an OAuth 2.0 access token, but the request is denied due to the lack of required scopes. This could indicate compromised client credentials in which an adversary is attempting to obtain an access token for unauthorized scopes. This is a [New Terms](https://www.elastic.co/guide/en/security/master/rules-ui-create.html#create-new-terms-rule) rule where the `okta.actor.display_name` field value has not been seen in the last 14 days regarding this event. |[Domain: SaaS], [Data Source: Okta], [Use Case: Threat Detection], [Use Case: Identity and Access Audit], [Tactic: Defense Evasion] |8.14.0 |104 +|<> |Identifies a failed OAuth 2.0 token grant attempt for a public client app using client credentials. This event is generated when a public client app attempts to exchange a client credentials grant for an OAuth 2.0 access token, but the request is denied due to the lack of required scopes. This could indicate compromised client credentials in which an adversary is attempting to obtain an access token for unauthorized scopes. This is a [New Terms](https://www.elastic.co/guide/en/security/master/rules-ui-create.html#create-new-terms-rule) rule where the `okta.actor.display_name` field value has not been seen in the last 14 days regarding this event. |[Domain: SaaS], [Data Source: Okta], [Use Case: Threat Detection], [Use Case: Identity and Access Audit], [Tactic: Defense Evasion] |8.15.0 |205 |<> |Detects changes to registry persistence keys that are not commonly used or modified by legitimate programs. This could be an indication of an adversary's attempt to persist in a stealthy manner. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Elastic Defend], [Data Source: Sysmon] |8.14.0 |211 @@ -2320,7 +2324,13 @@ and their rule type is `machine_learning`. |<> |Identifies an unexpected file being modified by dns.exe, the process responsible for Windows DNS Server services, which may indicate activity related to remote code execution or other forms of exploitation. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Lateral Movement], [Data Source: Elastic Endgame], [Use Case: Vulnerability], [Data Source: Elastic Defend], [Data Source: Sysmon] |8.14.0 |211 -|<> |Detects repeated high-confidence 'BLOCKED' actions coupled with specific violation codes such as 'MISCONDUCT', indicating persistent misuse or attempts to probe the model's ethical boundaries. 
|[Domain: LLM], [Data Source: AWS Bedrock], [Data Source: AWS S3], [Use Case: Policy Violation], [Mitre Atlas: T0051], [Mitre Atlas: T0054] |8.13.0 |4 +|<> |Detects repeated high-confidence 'BLOCKED' actions coupled with specific 'Content Filter' policy violation having codes such as 'MISCONDUCT', 'HATE', 'SEXUAL', INSULTS', 'PROMPT_ATTACK', 'VIOLENCE' indicating persistent misuse or attempts to probe the model's ethical boundaries. |[Domain: LLM], [Data Source: AWS Bedrock], [Data Source: AWS S3], [Use Case: Policy Violation], [Mitre Atlas: T0051], [Mitre Atlas: T0054] |8.13.0 |5 + +|<> |Detects repeated compliance violation 'BLOCKED' actions coupled with specific policy name such as 'sensitive_information_policy', indicating persistent misuse or attempts to probe the model's denied topics. |[Domain: LLM], [Data Source: AWS Bedrock], [Data Source: AWS S3], [Use Case: Policy Violation], [Mitre Atlas: T0051], [Mitre Atlas: T0054] |8.13.0 |1 + +|<> |Detects repeated compliance violation 'BLOCKED' actions coupled with specific policy name such as 'topic_policy', indicating persistent misuse or attempts to probe the model's denied topics. |[Domain: LLM], [Data Source: AWS Bedrock], [Data Source: AWS S3], [Use Case: Policy Violation], [Mitre Atlas: T0051], [Mitre Atlas: T0054] |8.13.0 |1 + +|<> |Detects repeated compliance violation 'BLOCKED' actions coupled with specific policy name such as 'word_policy', indicating persistent misuse or attempts to probe the model's denied topics. |[Domain: LLM], [Data Source: AWS Bedrock], [Data Source: AWS S3], [Use Case: Policy Violation], [Mitre Atlas: T0051], [Mitre Atlas: T0054] |8.13.0 |1 |<> |A machine learning job detected a user logging in at a time of day that is unusual for the user. This can be due to credentialed access via a compromised account when the user and the threat actor are in different time zones. In addition, unauthorized user activity often takes place during non-business hours. |[Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Rule Type: ML], [Rule Type: Machine Learning], [Tactic: Initial Access], [Resources: Investigation Guide] |None |105 @@ -2432,7 +2442,7 @@ and their rule type is `machine_learning`. |<> |Identifies a user being added to a privileged group in Active Directory. Privileged accounts and groups in Active Directory are those to which powerful rights, privileges, and permissions are granted that allow them to perform nearly any action in Active Directory and on domain-joined systems. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Resources: Investigation Guide], [Use Case: Active Directory Monitoring], [Data Source: Active Directory], [Data Source: System] |8.14.0 |211 -|<> |Identifies users being added to the admin group. This could be an indication of privilege escalation activity. |[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Jamf Protect] |8.12.0 |1 +|<> |Identifies users being added to the admin group. This could be an indication of privilege escalation activity. |[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Jamf Protect] |None |1 |<> |Detects when a user account has the servicePrincipalName attribute modified. Attackers can abuse write privileges over a user to configure Service Principle Names (SPNs) so that they can perform Kerberoasting. 
Administrators can also configure this for legitimate purposes, exposing the account to Kerberoasting. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Data Source: Active Directory], [Resources: Investigation Guide], [Use Case: Active Directory Monitoring], [Data Source: System] |8.14.0 |213 diff --git a/docs/detections/prebuilt-rules/rule-desc-index.asciidoc b/docs/detections/prebuilt-rules/rule-desc-index.asciidoc index 6d7cb231e3..856a3a3886 100644 --- a/docs/detections/prebuilt-rules/rule-desc-index.asciidoc +++ b/docs/detections/prebuilt-rules/rule-desc-index.asciidoc @@ -5,6 +5,7 @@ include::rule-details/aws-bedrock-detected-multiple-attempts-to-use-denied-model include::rule-details/aws-bedrock-detected-multiple-validation-exception-errors-by-a-single-user.asciidoc[] include::rule-details/aws-bedrock-guardrails-detected-multiple-policy-violations-within-a-single-blocked-request.asciidoc[] include::rule-details/aws-bedrock-guardrails-detected-multiple-violations-by-a-single-user-over-a-session.asciidoc[] +include::rule-details/aws-bedrock-invocations-without-guardrails-detected-by-a-single-user-over-a-session.asciidoc[] include::rule-details/aws-cli-command-with-custom-endpoint-url.asciidoc[] include::rule-details/aws-cloudtrail-log-created.asciidoc[] include::rule-details/aws-cloudtrail-log-deleted.asciidoc[] @@ -47,6 +48,7 @@ include::rule-details/aws-iam-customer-managed-policy-attached-to-role-by-rare-u include::rule-details/aws-iam-deactivation-of-mfa-device.asciidoc[] include::rule-details/aws-iam-group-creation.asciidoc[] include::rule-details/aws-iam-group-deletion.asciidoc[] +include::rule-details/aws-iam-login-profile-added-for-root.asciidoc[] include::rule-details/aws-iam-login-profile-added-to-user.asciidoc[] include::rule-details/aws-iam-password-recovery-requested.asciidoc[] include::rule-details/aws-iam-roles-anywhere-profile-creation.asciidoc[] @@ -1151,7 +1153,10 @@ include::rule-details/unusual-executable-file-creation-by-a-system-critical-proc include::rule-details/unusual-execution-via-microsoft-common-console-file.asciidoc[] include::rule-details/unusual-file-creation-alternate-data-stream.asciidoc[] include::rule-details/unusual-file-modification-by-dns-exe.asciidoc[] -include::rule-details/unusual-high-confidence-misconduct-blocks-detected.asciidoc[] +include::rule-details/unusual-high-confidence-content-filter-blocks-detected.asciidoc[] +include::rule-details/unusual-high-denied-sensitive-information-policy-blocks-detected.asciidoc[] +include::rule-details/unusual-high-denied-topic-blocks-detected.asciidoc[] +include::rule-details/unusual-high-word-policy-blocks-detected.asciidoc[] include::rule-details/unusual-hour-for-a-user-to-logon.asciidoc[] include::rule-details/unusual-instance-metadata-service-imds-api-request.asciidoc[] include::rule-details/unusual-interactive-shell-launched-from-system-user.asciidoc[] diff --git a/docs/detections/prebuilt-rules/rule-details/administrator-privileges-assigned-to-an-okta-group.asciidoc b/docs/detections/prebuilt-rules/rule-details/administrator-privileges-assigned-to-an-okta-group.asciidoc index 5a050d8a05..0e42d3e676 100644 --- a/docs/detections/prebuilt-rules/rule-details/administrator-privileges-assigned-to-an-okta-group.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/administrator-privileges-assigned-to-an-okta-group.asciidoc @@ -35,7 +35,7 @@ Detects when an administrator role is assigned to an Okta group. 
An adversary ma * Data Source: Okta * Tactic: Persistence -*Version*: 308 +*Version*: 409 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/administrator-role-assigned-to-an-okta-user.asciidoc b/docs/detections/prebuilt-rules/rule-details/administrator-role-assigned-to-an-okta-user.asciidoc index 9c2b9b498c..1ca69eb2a4 100644 --- a/docs/detections/prebuilt-rules/rule-details/administrator-role-assigned-to-an-okta-user.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/administrator-role-assigned-to-an-okta-user.asciidoc @@ -36,7 +36,7 @@ Identifies when an administrator role is assigned to an Okta user. An adversary * Use Case: Identity and Access Audit * Tactic: Persistence -*Version*: 308 +*Version*: 409 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-create-okta-api-token.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-create-okta-api-token.asciidoc index fcf6d35921..5c331a7041 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-create-okta-api-token.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-create-okta-api-token.asciidoc @@ -34,7 +34,7 @@ Detects attempts to create an Okta API token. An adversary may create an Okta AP * Data Source: Okta * Tactic: Persistence -*Version*: 308 +*Version*: 409 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-application.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-application.asciidoc index 27f313cdc5..d5c456a132 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-application.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-application.asciidoc @@ -35,7 +35,7 @@ Detects attempts to deactivate an Okta application. An adversary may attempt to * Data Source: Okta * Tactic: Impact -*Version*: 309 +*Version*: 410 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-network-zone.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-network-zone.asciidoc index a995ebcec7..5e54745d12 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-network-zone.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-network-zone.asciidoc @@ -36,7 +36,7 @@ Detects attempts to deactivate an Okta network zone. Okta network zones can be c * Use Case: Network Security Monitoring * Tactic: Defense Evasion -*Version*: 309 +*Version*: 410 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-policy-rule.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-policy-rule.asciidoc index 306aa1ba67..cf0a425b8b 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-policy-rule.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-policy-rule.asciidoc @@ -35,7 +35,7 @@ Detects attempts to deactivate a rule within an Okta policy. 
An adversary may at * Tactic: Defense Evasion * Data Source: Okta -*Version*: 310 +*Version*: 411 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-policy.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-policy.asciidoc index bf56a89a14..89cd0d5751 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-policy.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-deactivate-an-okta-policy.asciidoc @@ -35,7 +35,7 @@ Detects attempts to deactivate an Okta policy. An adversary may attempt to deact * Data Source: Okta * Tactic: Defense Evasion -*Version*: 309 +*Version*: 410 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-application.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-application.asciidoc index a58c8f8647..66158f7693 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-application.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-application.asciidoc @@ -34,7 +34,7 @@ Detects attempts to delete an Okta application. An adversary may attempt to modi * Data Source: Okta * Tactic: Impact -*Version*: 308 +*Version*: 409 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-network-zone.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-network-zone.asciidoc index 922ebba47c..e991aae67d 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-network-zone.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-network-zone.asciidoc @@ -36,7 +36,7 @@ Detects attempts to delete an Okta network zone. Okta network zones can be confi * Use Case: Network Security Monitoring * Tactic: Defense Evasion -*Version*: 309 +*Version*: 410 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-policy-rule.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-policy-rule.asciidoc index 733a355f93..a7fe2624ec 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-policy-rule.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-policy-rule.asciidoc @@ -35,7 +35,7 @@ Detects attempts to delete a rule within an Okta policy. An adversary may attemp * Data Source: Okta * Tactic: Defense Evasion -*Version*: 309 +*Version*: 410 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-policy.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-policy.asciidoc index 2887a47887..71bf6a9c3a 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-policy.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-delete-an-okta-policy.asciidoc @@ -35,7 +35,7 @@ Detects attempts to delete an Okta policy. 
An adversary may attempt to delete an * Data Source: Okta * Tactic: Defense Evasion -*Version*: 309 +*Version*: 410 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-application.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-application.asciidoc index 475e8744b7..6358914ab9 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-application.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-application.asciidoc @@ -35,7 +35,7 @@ Detects attempts to modify an Okta application. An adversary may attempt to modi * Data Source: Okta * Tactic: Impact -*Version*: 308 +*Version*: 409 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-network-zone.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-network-zone.asciidoc index 2aa8ad29d9..ebe8d28f68 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-network-zone.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-network-zone.asciidoc @@ -36,7 +36,7 @@ Detects attempts to modify an Okta network zone. Okta network zones can be confi * Use Case: Network Security Monitoring * Tactic: Defense Evasion -*Version*: 309 +*Version*: 410 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-policy-rule.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-policy-rule.asciidoc index 5353d3f6eb..b0855b4128 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-policy-rule.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-policy-rule.asciidoc @@ -35,7 +35,7 @@ Detects attempts to modify a rule within an Okta policy. An adversary may attemp * Tactic: Defense Evasion * Data Source: Okta -*Version*: 310 +*Version*: 411 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-policy.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-policy.asciidoc index d59cfe316a..d6b467cda7 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-policy.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-modify-an-okta-policy.asciidoc @@ -34,7 +34,7 @@ Detects attempts to modify an Okta policy. 
An adversary may attempt to modify an * Data Source: Okta * Tactic: Defense Evasion -*Version*: 309 +*Version*: 410 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-reset-mfa-factors-for-an-okta-user-account.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-reset-mfa-factors-for-an-okta-user-account.asciidoc index 67a2ccf9c5..7ff12ec7b7 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-reset-mfa-factors-for-an-okta-user-account.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-reset-mfa-factors-for-an-okta-user-account.asciidoc @@ -35,7 +35,7 @@ Detects attempts to reset an Okta user's enrolled multi-factor authentication (M * Use Case: Identity and Access Audit * Data Source: Okta -*Version*: 309 +*Version*: 410 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-revoke-okta-api-token.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-revoke-okta-api-token.asciidoc index 7f0a205470..cbbe72f824 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-revoke-okta-api-token.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-revoke-okta-api-token.asciidoc @@ -34,7 +34,7 @@ Identifies attempts to revoke an Okta API token. An adversary may attempt to rev * Data Source: Okta * Tactic: Impact -*Version*: 309 +*Version*: 410 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempted-bypass-of-okta-mfa.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempted-bypass-of-okta-mfa.asciidoc index 994d31bb27..8c252591e6 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempted-bypass-of-okta-mfa.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempted-bypass-of-okta-mfa.asciidoc @@ -35,7 +35,7 @@ Detects attempts to bypass Okta multi-factor authentication (MFA). An adversary * Use Case: Identity and Access Audit * Tactic: Credential Access -*Version*: 310 +*Version*: 411 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/attempts-to-brute-force-an-okta-user-account.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempts-to-brute-force-an-okta-user-account.asciidoc index 8af0cbe50a..162a6b7bf8 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempts-to-brute-force-an-okta-user-account.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempts-to-brute-force-an-okta-user-account.asciidoc @@ -34,7 +34,7 @@ Identifies when an Okta user account is locked out 3 times within a 3 hour windo * Tactic: Credential Access * Data Source: Okta -*Version*: 311 +*Version*: 412 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/aws-bedrock-invocations-without-guardrails-detected-by-a-single-user-over-a-session.asciidoc b/docs/detections/prebuilt-rules/rule-details/aws-bedrock-invocations-without-guardrails-detected-by-a-single-user-over-a-session.asciidoc new file mode 100644 index 0000000000..8c7b7270f5 --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/aws-bedrock-invocations-without-guardrails-detected-by-a-single-user-over-a-session.asciidoc @@ -0,0 +1,118 @@ +[[aws-bedrock-invocations-without-guardrails-detected-by-a-single-user-over-a-session]] +=== AWS Bedrock Invocations without Guardrails Detected by a Single User Over a Session + +Identifies multiple AWS Bedrock executions in a one minute time window without guardrails by the same user in the same account over a session. 
Multiple consecutive executions imply that a user may be intentionally attempting to bypass security controls by not routing the requests through the desired guardrail configuration in order to access sensitive information, or possibly to exploit a vulnerability in the system. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 10m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html +* https://atlas.mitre.org/techniques/AML.T0051 +* https://atlas.mitre.org/techniques/AML.T0054 +* https://www.elastic.co/security-labs/elastic-advances-llm-security + +*Tags*: + +* Domain: LLM +* Data Source: AWS Bedrock +* Data Source: AWS S3 +* Resources: Investigation Guide +* Use Case: Policy Violation +* Mitre Atlas: T0051 +* Mitre Atlas: T0054 + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Amazon Bedrock Invocations without Guardrails Detected by a Single User Over a Session* + + +Using Amazon Bedrock Guardrails during model invocation is critical for ensuring the safe, reliable, and ethical use of AI models. +Guardrails help manage risks associated with AI usage and ensure the output aligns with desired policies and standards. + + +*Possible investigation steps* + + +- Identify the user account that made multiple model invocations over a session without the desired guardrail configuration and whether it should perform this kind of action. +- Investigate the user activity that might indicate a potential brute force attack. +- Investigate other alerts associated with the user account during the past 48 hours. +- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day? +- Examine the account's prompts and responses in the last 24 hours. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours. + + +*False positive analysis* + + +- Verify that the user account that caused multiple invocations without guardrails over a session is not testing any new model deployments or updated compliance policies in Amazon Bedrock guardrails. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Identify if the attacker is moving laterally and compromising other Amazon Bedrock Services. + - Identify any regulatory or legal ramifications related to this activity. +- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access Bedrock, and ensure that the least privilege principle is being followed. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. 
+- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Setup + + + +*Setup* + + +This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation: + +https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws_bedrock.invocation-* +// create time window buckets of 1 minute +| eval time_window = date_trunc(1 minute, @timestamp) +| where gen_ai.guardrail_id is NULL +| KEEP @timestamp, time_window, gen_ai.guardrail_id , user.id +| stats model_invocation_without_guardrails = count() by user.id +| where model_invocation_without_guardrails > 5 +| sort model_invocation_without_guardrails desc + +---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/aws-iam-login-profile-added-for-root.asciidoc b/docs/detections/prebuilt-rules/rule-details/aws-iam-login-profile-added-for-root.asciidoc new file mode 100644 index 0000000000..57f039659b --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/aws-iam-login-profile-added-for-root.asciidoc @@ -0,0 +1,169 @@ +[[aws-iam-login-profile-added-for-root]] +=== AWS IAM Login Profile Added for Root + +Detects when an AWS IAM login profile is added to a root user account and is self-assigned. Adversaries, with temporary access to the root account, may add a login profile to the root user account to maintain access even if the original access key is rotated or disabled. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Cloud +* Data Source: AWS +* Data Source: Amazon Web Services +* Data Source: AWS IAM +* Use Case: Identity and Access Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Investigating AWS IAM Login Profile Added for Root* + + +This rule detects when a login profile is added to the AWS root account. Adding a login profile to the root account, especially if self-assigned, is highly suspicious as it might indicate an adversary trying to establish persistence in the environment. + + +*Possible Investigation Steps* + + +- **Identify the Source and Context of the Action**: + - Examine the `source.address` field to identify the IP address from which the request originated. + - Check the geographic location (`source.address`) to determine if the access is from an expected or unexpected region. + - Look at the `user_agent.original` field to identify the tool or browser used for this action. + - For example, a user agent like `Mozilla/5.0` might indicate interactive access, whereas `aws-cli` or SDKs suggest scripted activity. + +- **Confirm Root User and Request Details**: + - Validate the root user's identity through `aws.cloudtrail.user_identity.arn` and ensure this activity aligns with legitimate administrative actions. + - Review `aws.cloudtrail.user_identity.access_key_id` to identify if the action was performed using temporary or permanent credentials. This access key could be used to pivot into other actions. 
+ +- **Analyze the Login Profile Creation**: + - Review the `aws.cloudtrail.request_parameters` and `aws.cloudtrail.response_elements` fields for details of the created login profile. + - For example, confirm the `userName` of the profile and whether `passwordResetRequired` is set to `true`. + - Compare the `@timestamp` of this event with other recent actions by the root account to identify potential privilege escalation or abuse. + +- **Correlate with Other Events**: + - Investigate for related IAM activities, such as: + - `CreateAccessKey` or `AttachUserPolicy` events targeting the root account. + - Unusual data access, privilege escalation, or management console logins. + - Check for any anomalies involving the same `source.address` or `aws.cloudtrail.user_identity.access_key_id` in the environment. + +- **Evaluate Policy and Permissions**: + - Verify the current security policies for the root account: + - Ensure password policies enforce complexity and rotation requirements. + - Check if MFA is enforced on the root account. + - Assess the broader IAM configuration for deviations from least privilege principles. + + +*False Positive Analysis* + + +- **Routine Administrative Tasks**: Adding a login profile might be a legitimate action during certain administrative processes. Verify with the relevant AWS administrators if this event aligns with routine account maintenance or emergency recovery scenarios. + +- **Automation**: If the action is part of an approved automation process (e.g., account recovery workflows), consider excluding these activities from alerting using specific user agents, IP addresses, or session attributes. + + +*Response and Remediation* + + +- **Immediate Access Review**: + - Disable the newly created login profile (`aws iam delete-login-profile`) if it is determined to be unauthorized. + - Rotate or disable the credentials associated with the root account to prevent further abuse. + +- **Enhance Monitoring and Alerts**: + - Enable real-time monitoring and alerting for IAM actions involving the root account. + - Increase the logging verbosity for root account activities. + +- **Review and Update Security Policies**: + - Enforce MFA for all administrative actions, including root account usage. + - Restrict programmatic access to the root account by disabling access keys unless absolutely necessary. + +- **Conduct Post-Incident Analysis**: + - Investigate how the credentials for the root account were compromised or misused. + - Strengthen the security posture by implementing account-specific guardrails and continuous monitoring. + + +*Additional Resources* + + +- AWS documentation on https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateLoginProfile.html[Login Profile Management]. 
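*Related activity hunt (sketch)*


The "Correlate with Other Events" step above can be turned into a quick hunt. The following ES|QL query is a minimal sketch, not part of the shipped rule: it assumes the same `logs-aws.cloudtrail*` indices and CloudTrail/ECS field mappings used by the rule query below, and the `event.action` values listed are examples to adjust for your environment.

[source, js]
----------------------------------
from logs-aws.cloudtrail*
// example root-level IAM actions to correlate with the alert; adjust the list as needed
| where event.dataset == "aws.cloudtrail"
    and event.provider == "iam.amazonaws.com"
    and event.action in ("CreateLoginProfile", "CreateAccessKey", "AttachUserPolicy")
    and aws.cloudtrail.user_identity.type == "Root"
| keep @timestamp, event.action, aws.cloudtrail.user_identity.arn, aws.cloudtrail.user_identity.access_key_id, source.address, user_agent.original
| sort @timestamp desc
| limit 100
----------------------------------

Pivoting on `source.address` or `aws.cloudtrail.user_identity.access_key_id` in the results can help determine whether the login profile creation was an isolated action or part of a broader persistence attempt.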
+ + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws.cloudtrail* metadata _id, _version, _index +| where + // filter for CloudTrail logs from IAM + event.dataset == "aws.cloudtrail" + and event.provider == "iam.amazonaws.com" + + // filter for successful CreateLoginProfile API call + and event.action == "CreateLoginProfile" + and event.outcome == "success" + + // filter for Root member account + and aws.cloudtrail.user_identity.type == "Root" + + // filter for an access key existing which sources from AssumeRoot + and aws.cloudtrail.user_identity.access_key_id IS NOT NULL + + // filter on the request parameters not including UserName which assumes self-assignment + and NOT TO_LOWER(aws.cloudtrail.request_parameters) LIKE "*username*" +| keep + @timestamp, + aws.cloudtrail.request_parameters, + aws.cloudtrail.response_elements, + aws.cloudtrail.user_identity.type, + aws.cloudtrail.user_identity.arn, + aws.cloudtrail.user_identity.access_key_id, + cloud.account.id, + event.action, + source.address + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ diff --git a/docs/detections/prebuilt-rules/rule-details/aws-iam-user-created-access-keys-for-another-user.asciidoc b/docs/detections/prebuilt-rules/rule-details/aws-iam-user-created-access-keys-for-another-user.asciidoc index f0be0f2325..831634cb6b 100644 --- a/docs/detections/prebuilt-rules/rule-details/aws-iam-user-created-access-keys-for-another-user.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/aws-iam-user-created-access-keys-for-another-user.asciidoc @@ -35,7 +35,7 @@ An adversary with access to a set of compromised credentials may attempt to pers * Tactic: Persistence * Resources: Investigation Guide -*Version*: 4 +*Version*: 5 *Rule authors*: @@ -134,7 +134,7 @@ from logs-aws.cloudtrail-* metadata _id, _version, _index aws.cloudtrail.request_parameters, aws.cloudtrail.response_elements, aws.cloudtrail.user_identity.arn, - aws.cloudtrail.user_identity.type, + aws.cloudtrail.user_identity.type ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/first-occurrence-github-event-for-a-personal-access-token-pat.asciidoc b/docs/detections/prebuilt-rules/rule-details/first-occurrence-github-event-for-a-personal-access-token-pat.asciidoc index 98c72279e7..b6cfbf755c 100644 --- a/docs/detections/prebuilt-rules/rule-details/first-occurrence-github-event-for-a-personal-access-token-pat.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/first-occurrence-github-event-for-a-personal-access-token-pat.asciidoc @@ -30,7 +30,7 @@ Detects a first occurrence event for a personal access token (PAT) not seen in t * Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-github-repo-interaction-from-a-new-ip.asciidoc b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-github-repo-interaction-from-a-new-ip.asciidoc index 6a41471ead..1db5ee767c 100644 --- 
a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-github-repo-interaction-from-a-new-ip.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-github-repo-interaction-from-a-new-ip.asciidoc @@ -30,7 +30,7 @@ Detects an interaction with a private GitHub repository from a new IP address no * Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-github-user-interaction-with-private-repo.asciidoc b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-github-user-interaction-with-private-repo.asciidoc index 031f1ceb01..02cbdbd583 100644 --- a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-github-user-interaction-with-private-repo.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-github-user-interaction-with-private-repo.asciidoc @@ -30,7 +30,7 @@ Detects a new private repo interaction for a GitHub user not seen in the last 14 * Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-ip-address-for-github-personal-access-token-pat.asciidoc b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-ip-address-for-github-personal-access-token-pat.asciidoc index d552cc3c20..044ae2b60f 100644 --- a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-ip-address-for-github-personal-access-token-pat.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-ip-address-for-github-personal-access-token-pat.asciidoc @@ -30,7 +30,7 @@ Detects a new IP address used for a GitHub PAT not previously seen in the last 1 * Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-ip-address-for-github-user.asciidoc b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-ip-address-for-github-user.asciidoc index 1a64a5fdb0..2fe3a5da29 100644 --- a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-ip-address-for-github-user.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-ip-address-for-github-user.asciidoc @@ -30,7 +30,7 @@ Detects a new IP address used for a GitHub user not previously seen in the last * Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-okta-user-session-started-via-proxy.asciidoc b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-okta-user-session-started-via-proxy.asciidoc index a449ff9a68..8088deb9c2 100644 --- a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-okta-user-session-started-via-proxy.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-okta-user-session-started-via-proxy.asciidoc @@ -36,7 +36,7 @@ Identifies the first occurrence of an Okta user session started via a proxy. 
* Use Case: Identity and Access Audit * Data Source: Okta -*Version*: 104 +*Version*: 205 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-personal-access-token-pat-use-for-a-github-user.asciidoc b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-personal-access-token-pat-use-for-a-github-user.asciidoc index 3f502153f8..0c3c8094bd 100644 --- a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-personal-access-token-pat-use-for-a-github-user.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-personal-access-token-pat-use-for-a-github-user.asciidoc @@ -30,7 +30,7 @@ A new PAT was used for a GitHub user not previously seen in the last 14 days. * Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-private-repo-event-from-specific-github-personal-access-token-pat.asciidoc b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-private-repo-event-from-specific-github-personal-access-token-pat.asciidoc index 4583868c4f..069e7ac701 100644 --- a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-private-repo-event-from-specific-github-personal-access-token-pat.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-private-repo-event-from-specific-github-personal-access-token-pat.asciidoc @@ -30,7 +30,7 @@ Detects a new private repo interaction for a GitHub PAT not seen in the last 14 * Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-user-agent-for-a-github-personal-access-token-pat.asciidoc b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-user-agent-for-a-github-personal-access-token-pat.asciidoc index 946f59f56e..cb1afc58e1 100644 --- a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-user-agent-for-a-github-personal-access-token-pat.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-user-agent-for-a-github-personal-access-token-pat.asciidoc @@ -30,7 +30,7 @@ Detects a new user agent used for a GitHub PAT not previously seen in the last 1 * Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-user-agent-for-a-github-user.asciidoc b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-user-agent-for-a-github-user.asciidoc index a555d734c6..2d226583c7 100644 --- a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-user-agent-for-a-github-user.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-user-agent-for-a-github-user.asciidoc @@ -30,7 +30,7 @@ Detects a new user agent used for a GitHub user not previously seen in the last * Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/github-app-deleted.asciidoc b/docs/detections/prebuilt-rules/rule-details/github-app-deleted.asciidoc index a00a2e3974..10b959714f 100644 --- a/docs/detections/prebuilt-rules/rule-details/github-app-deleted.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/github-app-deleted.asciidoc @@ -28,7 +28,7 @@ Detects the deletion of a GitHub app either from a repo or an organization. 
* Tactic: Execution * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/github-owner-role-granted-to-user.asciidoc b/docs/detections/prebuilt-rules/rule-details/github-owner-role-granted-to-user.asciidoc index fc4402b4c3..1ad30d66e3 100644 --- a/docs/detections/prebuilt-rules/rule-details/github-owner-role-granted-to-user.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/github-owner-role-granted-to-user.asciidoc @@ -29,7 +29,7 @@ This rule detects when a member is granted the organization owner role of a GitH * Tactic: Persistence * Data Source: Github -*Version*: 105 +*Version*: 206 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/github-pat-access-revoked.asciidoc b/docs/detections/prebuilt-rules/rule-details/github-pat-access-revoked.asciidoc index 2549b2cebf..70a40598c4 100644 --- a/docs/detections/prebuilt-rules/rule-details/github-pat-access-revoked.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/github-pat-access-revoked.asciidoc @@ -30,7 +30,7 @@ Access to private GitHub organization resources was revoked for a PAT. * Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/github-protected-branch-settings-changed.asciidoc b/docs/detections/prebuilt-rules/rule-details/github-protected-branch-settings-changed.asciidoc index bc34fc02a0..24bf33c7f6 100644 --- a/docs/detections/prebuilt-rules/rule-details/github-protected-branch-settings-changed.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/github-protected-branch-settings-changed.asciidoc @@ -28,7 +28,7 @@ This rule detects setting modifications for protected branches of a GitHub repos * Tactic: Defense Evasion * Data Source: Github -*Version*: 105 +*Version*: 206 *Rule authors*: @@ -42,7 +42,7 @@ This rule detects setting modifications for protected branches of a GitHub repos [source, js] ---------------------------------- -configuration where event.dataset == "github.audit" +configuration where event.dataset == "github.audit" and github.category == "protected_branch" and event.type == "change" ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/github-repo-created.asciidoc b/docs/detections/prebuilt-rules/rule-details/github-repo-created.asciidoc index 8dc744c451..4e8520f509 100644 --- a/docs/detections/prebuilt-rules/rule-details/github-repo-created.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/github-repo-created.asciidoc @@ -30,7 +30,7 @@ A new GitHub repository was created. * Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/github-repository-deleted.asciidoc b/docs/detections/prebuilt-rules/rule-details/github-repository-deleted.asciidoc index 99f731d3ff..cfb13820b7 100644 --- a/docs/detections/prebuilt-rules/rule-details/github-repository-deleted.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/github-repository-deleted.asciidoc @@ -29,7 +29,7 @@ This rule detects when a GitHub repository is deleted within your organization. * Tactic: Impact * Data Source: Github -*Version*: 102 +*Version*: 203 *Rule authors*: @@ -43,7 +43,7 @@ This rule detects when a GitHub repository is deleted within your organization. 
[source, js] ---------------------------------- -configuration where event.module == "github" and event.action == "repo.destroy" +configuration where event.module == "github" and event.dataset == "github.audit" and event.action == "repo.destroy" ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/github-ueba-multiple-alerts-from-a-github-account.asciidoc b/docs/detections/prebuilt-rules/rule-details/github-ueba-multiple-alerts-from-a-github-account.asciidoc index 96696ab80c..087d00138f 100644 --- a/docs/detections/prebuilt-rules/rule-details/github-ueba-multiple-alerts-from-a-github-account.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/github-ueba-multiple-alerts-from-a-github-account.asciidoc @@ -30,7 +30,7 @@ This rule is part of the "GitHub UEBA - Unusual Activity from Account Pack", and * Rule Type: Higher-Order Rule * Data Source: Github -*Version*: 1 +*Version*: 101 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/github-user-blocked-from-organization.asciidoc b/docs/detections/prebuilt-rules/rule-details/github-user-blocked-from-organization.asciidoc index 8df7a900cc..a45d9d8de2 100644 --- a/docs/detections/prebuilt-rules/rule-details/github-user-blocked-from-organization.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/github-user-blocked-from-organization.asciidoc @@ -30,7 +30,7 @@ A GitHub user was blocked from access to an organization. * Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/high-number-of-cloned-github-repos-from-pat.asciidoc b/docs/detections/prebuilt-rules/rule-details/high-number-of-cloned-github-repos-from-pat.asciidoc index 9e96b22b07..8d9fbec782 100644 --- a/docs/detections/prebuilt-rules/rule-details/high-number-of-cloned-github-repos-from-pat.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/high-number-of-cloned-github-repos-from-pat.asciidoc @@ -29,7 +29,7 @@ Detects a high number of unique private repo clone events originating from a sin * Tactic: Execution * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: @@ -43,8 +43,8 @@ Detects a high number of unique private repo clone events originating from a sin [source, js] ---------------------------------- -event.dataset:"github.audit" and event.category:"configuration" and event.action:"git.clone" and -github.programmatic_access_type:("OAuth access token" or "Fine-grained personal access token") and +event.dataset:"github.audit" and event.category:"configuration" and event.action:"git.clone" and +github.programmatic_access_type:("OAuth access token" or "Fine-grained personal access token") and github.repository_public:false ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/high-number-of-okta-device-token-cookies-generated-for-authentication.asciidoc b/docs/detections/prebuilt-rules/rule-details/high-number-of-okta-device-token-cookies-generated-for-authentication.asciidoc index 20ae40f230..76efcca9fa 100644 --- a/docs/detections/prebuilt-rules/rule-details/high-number-of-okta-device-token-cookies-generated-for-authentication.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/high-number-of-okta-device-token-cookies-generated-for-authentication.asciidoc @@ -33,7 +33,7 @@ Detects when an Okta client address has a certain threshold of Okta user authent * Data Source: Okta * Tactic: Credential Access -*Version*: 103 +*Version*: 203 *Rule authors*: diff --git 
a/docs/detections/prebuilt-rules/rule-details/high-number-of-okta-user-password-reset-or-unlock-attempts.asciidoc b/docs/detections/prebuilt-rules/rule-details/high-number-of-okta-user-password-reset-or-unlock-attempts.asciidoc index a0912955d5..06b607bea3 100644 --- a/docs/detections/prebuilt-rules/rule-details/high-number-of-okta-user-password-reset-or-unlock-attempts.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/high-number-of-okta-user-password-reset-or-unlock-attempts.asciidoc @@ -34,7 +34,7 @@ Identifies a high number of Okta user password reset or account unlock attempts. * Data Source: Okta * Tactic: Defense Evasion -*Version*: 311 +*Version*: 412 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/member-removed-from-github-organization.asciidoc b/docs/detections/prebuilt-rules/rule-details/member-removed-from-github-organization.asciidoc index 0965920bab..a4691e1b45 100644 --- a/docs/detections/prebuilt-rules/rule-details/member-removed-from-github-organization.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/member-removed-from-github-organization.asciidoc @@ -30,7 +30,7 @@ A member was removed or their invitation to join was removed from a GitHub Organ * Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/mfa-deactivation-with-no-re-activation-for-okta-user-account.asciidoc b/docs/detections/prebuilt-rules/rule-details/mfa-deactivation-with-no-re-activation-for-okta-user-account.asciidoc index 6e56289f8e..3e72addebf 100644 --- a/docs/detections/prebuilt-rules/rule-details/mfa-deactivation-with-no-re-activation-for-okta-user-account.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/mfa-deactivation-with-no-re-activation-for-okta-user-account.asciidoc @@ -35,7 +35,7 @@ Detects multi-factor authentication (MFA) deactivation with no subsequent re-act * Data Source: Okta * Domain: Cloud -*Version*: 311 +*Version*: 412 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/modification-or-removal-of-an-okta-application-sign-on-policy.asciidoc b/docs/detections/prebuilt-rules/rule-details/modification-or-removal-of-an-okta-application-sign-on-policy.asciidoc index 26366939bf..388517d406 100644 --- a/docs/detections/prebuilt-rules/rule-details/modification-or-removal-of-an-okta-application-sign-on-policy.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/modification-or-removal-of-an-okta-application-sign-on-policy.asciidoc @@ -35,7 +35,7 @@ Detects attempts to modify or delete a sign on policy for an Okta application. 
A * Use Case: Identity and Access Audit * Data Source: Okta -*Version*: 309 +*Version*: 410 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/multiple-device-token-hashes-for-single-okta-session.asciidoc b/docs/detections/prebuilt-rules/rule-details/multiple-device-token-hashes-for-single-okta-session.asciidoc index c90b07bc7d..1d4632dbc4 100644 --- a/docs/detections/prebuilt-rules/rule-details/multiple-device-token-hashes-for-single-okta-session.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/multiple-device-token-hashes-for-single-okta-session.asciidoc @@ -34,7 +34,7 @@ This rule detects when a specific Okta actor has multiple device token hashes fo * Tactic: Credential Access * Domain: SaaS -*Version*: 204 +*Version*: 304 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/multiple-okta-sessions-detected-for-a-single-user.asciidoc b/docs/detections/prebuilt-rules/rule-details/multiple-okta-sessions-detected-for-a-single-user.asciidoc index a9bd80aa04..36e56b97a9 100644 --- a/docs/detections/prebuilt-rules/rule-details/multiple-okta-sessions-detected-for-a-single-user.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/multiple-okta-sessions-detected-for-a-single-user.asciidoc @@ -35,7 +35,7 @@ Detects when a user has started multiple Okta sessions with the same user accoun * Data Source: Okta * Tactic: Lateral Movement -*Version*: 105 +*Version*: 206 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-auth-events-with-same-device-token-hash-behind-a-proxy.asciidoc b/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-auth-events-with-same-device-token-hash-behind-a-proxy.asciidoc index 0e18df2950..c17e6ec824 100644 --- a/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-auth-events-with-same-device-token-hash-behind-a-proxy.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-auth-events-with-same-device-token-hash-behind-a-proxy.asciidoc @@ -35,7 +35,7 @@ Detects when Okta user authentication events are reported for multiple users wit * Data Source: Okta * Tactic: Credential Access -*Version*: 105 +*Version*: 206 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-client-address.asciidoc b/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-client-address.asciidoc index 59fc875b68..156fd78054 100644 --- a/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-client-address.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-client-address.asciidoc @@ -33,7 +33,7 @@ Detects when a certain threshold of Okta user authentication events are reported * Data Source: Okta * Tactic: Credential Access -*Version*: 103 +*Version*: 203 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-same-device-token-hash.asciidoc b/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-same-device-token-hash.asciidoc index c13f340cf9..b1d4dc3305 100644 --- a/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-same-device-token-hash.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-same-device-token-hash.asciidoc @@ -33,7 +33,7 @@ Detects when a high number of Okta user 
authentication events are reported for m * Data Source: Okta * Tactic: Credential Access -*Version*: 103 +*Version*: 203 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/new-github-app-installed.asciidoc b/docs/detections/prebuilt-rules/rule-details/new-github-app-installed.asciidoc index 0d2fb9f466..c82d537da4 100644 --- a/docs/detections/prebuilt-rules/rule-details/new-github-app-installed.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/new-github-app-installed.asciidoc @@ -28,7 +28,7 @@ This rule detects when a new GitHub App has been installed in your organization * Tactic: Execution * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/new-github-owner-added.asciidoc b/docs/detections/prebuilt-rules/rule-details/new-github-owner-added.asciidoc index fd383a384c..0cd9323fd4 100644 --- a/docs/detections/prebuilt-rules/rule-details/new-github-owner-added.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/new-github-owner-added.asciidoc @@ -29,7 +29,7 @@ Detects when a new member is added to a GitHub organization as an owner. This ro * Tactic: Persistence * Data Source: Github -*Version*: 105 +*Version*: 206 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/new-okta-authentication-behavior-detected.asciidoc b/docs/detections/prebuilt-rules/rule-details/new-okta-authentication-behavior-detected.asciidoc index bee838c809..9f075d916a 100644 --- a/docs/detections/prebuilt-rules/rule-details/new-okta-authentication-behavior-detected.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/new-okta-authentication-behavior-detected.asciidoc @@ -35,7 +35,7 @@ Detects events where Okta behavior detection has identified a new authentication * Tactic: Initial Access * Data Source: Okta -*Version*: 105 +*Version*: 206 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/new-okta-identity-provider-idp-added-by-admin.asciidoc b/docs/detections/prebuilt-rules/rule-details/new-okta-identity-provider-idp-added-by-admin.asciidoc index 3da062b9c9..3f93c509b5 100644 --- a/docs/detections/prebuilt-rules/rule-details/new-okta-identity-provider-idp-added-by-admin.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/new-okta-identity-provider-idp-added-by-admin.asciidoc @@ -35,7 +35,7 @@ Detects the creation of a new Identity Provider (IdP) by a Super Administrator o * Tactic: Persistence * Data Source: Okta -*Version*: 104 +*Version*: 205 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/new-user-added-to-github-organization.asciidoc b/docs/detections/prebuilt-rules/rule-details/new-user-added-to-github-organization.asciidoc index ac38b8c496..b1d0d5a8b7 100644 --- a/docs/detections/prebuilt-rules/rule-details/new-user-added-to-github-organization.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/new-user-added-to-github-organization.asciidoc @@ -30,7 +30,7 @@ A new user was added to a GitHub organization. 
* Rule Type: BBR * Data Source: Github -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/okta-brute-force-or-password-spraying-attack.asciidoc b/docs/detections/prebuilt-rules/rule-details/okta-brute-force-or-password-spraying-attack.asciidoc index 42d35b8cc3..234a27da7a 100644 --- a/docs/detections/prebuilt-rules/rule-details/okta-brute-force-or-password-spraying-attack.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/okta-brute-force-or-password-spraying-attack.asciidoc @@ -34,7 +34,7 @@ Identifies a high number of failed Okta user authentication attempts from a sing * Tactic: Credential Access * Data Source: Okta -*Version*: 311 +*Version*: 412 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/okta-fastpass-phishing-detection.asciidoc b/docs/detections/prebuilt-rules/rule-details/okta-fastpass-phishing-detection.asciidoc index 13e65046c8..a0abdaa8a4 100644 --- a/docs/detections/prebuilt-rules/rule-details/okta-fastpass-phishing-detection.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/okta-fastpass-phishing-detection.asciidoc @@ -35,7 +35,7 @@ Detects when Okta FastPass prevents a user from authenticating to a phishing web * Use Case: Identity and Access Audit * Data Source: Okta -*Version*: 206 +*Version*: 307 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/okta-sign-in-events-via-third-party-idp.asciidoc b/docs/detections/prebuilt-rules/rule-details/okta-sign-in-events-via-third-party-idp.asciidoc index 17a315cd39..edc7da7645 100644 --- a/docs/detections/prebuilt-rules/rule-details/okta-sign-in-events-via-third-party-idp.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/okta-sign-in-events-via-third-party-idp.asciidoc @@ -35,7 +35,7 @@ Detects sign-in events where authentication is carried out via a third-party Ide * Tactic: Initial Access * Data Source: Okta -*Version*: 105 +*Version*: 206 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/okta-threatinsight-threat-suspected-promotion.asciidoc b/docs/detections/prebuilt-rules/rule-details/okta-threatinsight-threat-suspected-promotion.asciidoc index 311b8bc4be..7106247850 100644 --- a/docs/detections/prebuilt-rules/rule-details/okta-threatinsight-threat-suspected-promotion.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/okta-threatinsight-threat-suspected-promotion.asciidoc @@ -34,7 +34,7 @@ Okta ThreatInsight is a feature that provides valuable debug data regarding auth * Use Case: Identity and Access Audit * Data Source: Okta -*Version*: 308 +*Version*: 409 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/okta-user-session-impersonation.asciidoc b/docs/detections/prebuilt-rules/rule-details/okta-user-session-impersonation.asciidoc index 8b96564187..a0aa9e092f 100644 --- a/docs/detections/prebuilt-rules/rule-details/okta-user-session-impersonation.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/okta-user-session-impersonation.asciidoc @@ -34,7 +34,7 @@ A user has initiated a session impersonation granting them access to the environ * Tactic: Credential Access * Data Source: Okta -*Version*: 310 +*Version*: 411 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/okta-user-sessions-started-from-different-geolocations.asciidoc b/docs/detections/prebuilt-rules/rule-details/okta-user-sessions-started-from-different-geolocations.asciidoc index 5868b3e4ba..49ca78f795 100644 --- 
a/docs/detections/prebuilt-rules/rule-details/okta-user-sessions-started-from-different-geolocations.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/okta-user-sessions-started-from-different-geolocations.asciidoc @@ -33,7 +33,7 @@ Detects when a specific Okta actor has multiple sessions started from different * Data Source: Okta * Tactic: Initial Access -*Version*: 203 +*Version*: 303 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/possible-consent-grant-attack-via-azure-registered-application.asciidoc b/docs/detections/prebuilt-rules/rule-details/possible-consent-grant-attack-via-azure-registered-application.asciidoc index 18ca9823e1..ba840514d1 100644 --- a/docs/detections/prebuilt-rules/rule-details/possible-consent-grant-attack-via-azure-registered-application.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/possible-consent-grant-attack-via-azure-registered-application.asciidoc @@ -36,7 +36,7 @@ Detects when a user grants permissions to an Azure-registered application or whe * Resources: Investigation Guide * Tactic: Initial Access -*Version*: 212 +*Version*: 213 *Rule authors*: @@ -116,8 +116,8 @@ The Azure Fleet integration, Filebeat module, or similarly structured data is re event.dataset:(azure.activitylogs or azure.auditlogs or o365.audit) and ( azure.activitylogs.operation_name:"Consent to application" or - azure.auditlogs.operation_name:"Consent to application" or - o365.audit.Operation:"Consent to application." + azure.auditlogs.operation_name:"Consent to application" or + event.action:"Consent to application." ) and event.outcome:(Success or success) diff --git a/docs/detections/prebuilt-rules/rule-details/possible-okta-dos-attack.asciidoc b/docs/detections/prebuilt-rules/rule-details/possible-okta-dos-attack.asciidoc index 0bdcebfa99..649a12bd82 100644 --- a/docs/detections/prebuilt-rules/rule-details/possible-okta-dos-attack.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/possible-okta-dos-attack.asciidoc @@ -34,7 +34,7 @@ Detects possible Denial of Service (DoS) attacks against an Okta organization. 
A * Data Source: Okta * Tactic: Impact -*Version*: 308 +*Version*: 409 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/potential-okta-mfa-bombing-via-push-notifications.asciidoc b/docs/detections/prebuilt-rules/rule-details/potential-okta-mfa-bombing-via-push-notifications.asciidoc index 7a24ecb0fb..cc7d31fa5f 100644 --- a/docs/detections/prebuilt-rules/rule-details/potential-okta-mfa-bombing-via-push-notifications.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/potential-okta-mfa-bombing-via-push-notifications.asciidoc @@ -35,7 +35,7 @@ Detects when an attacker abuses the Multi-Factor authentication mechanism by rep * Tactic: Credential Access * Data Source: Okta -*Version*: 106 +*Version*: 207 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/potential-sudo-token-manipulation-via-process-injection.asciidoc b/docs/detections/prebuilt-rules/rule-details/potential-sudo-token-manipulation-via-process-injection.asciidoc index d88fd80ba9..d5b374f14b 100644 --- a/docs/detections/prebuilt-rules/rule-details/potential-sudo-token-manipulation-via-process-injection.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/potential-sudo-token-manipulation-via-process-injection.asciidoc @@ -31,7 +31,7 @@ This rule detects potential sudo token manipulation attacks through process inje * Tactic: Privilege Escalation * Data Source: Elastic Defend -*Version*: 5 +*Version*: 7 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/potentially-successful-mfa-bombing-via-push-notifications.asciidoc b/docs/detections/prebuilt-rules/rule-details/potentially-successful-mfa-bombing-via-push-notifications.asciidoc index 58c5de2e0c..d09e010030 100644 --- a/docs/detections/prebuilt-rules/rule-details/potentially-successful-mfa-bombing-via-push-notifications.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/potentially-successful-mfa-bombing-via-push-notifications.asciidoc @@ -35,7 +35,7 @@ Detects when an attacker abuses the Multi-Factor authentication mechanism by rep * Tactic: Credential Access * Data Source: Okta -*Version*: 312 +*Version*: 413 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/privilege-escalation-via-suid-sgid.asciidoc b/docs/detections/prebuilt-rules/rule-details/privilege-escalation-via-suid-sgid.asciidoc index 98fd48205a..1a071912c1 100644 --- a/docs/detections/prebuilt-rules/rule-details/privilege-escalation-via-suid-sgid.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/privilege-escalation-via-suid-sgid.asciidoc @@ -33,7 +33,7 @@ Identifies instances where a process is executed with user/group ID 0 (root), an * Tactic: Persistence * Data Source: Elastic Defend -*Version*: 3 +*Version*: 5 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/stolen-credentials-used-to-login-to-okta-account-after-mfa-reset.asciidoc b/docs/detections/prebuilt-rules/rule-details/stolen-credentials-used-to-login-to-okta-account-after-mfa-reset.asciidoc index 12be2752fd..e6a0ebe3a3 100644 --- a/docs/detections/prebuilt-rules/rule-details/stolen-credentials-used-to-login-to-okta-account-after-mfa-reset.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/stolen-credentials-used-to-login-to-okta-account-after-mfa-reset.asciidoc @@ -41,7 +41,7 @@ Detects a sequence of suspicious activities on Windows hosts indicative of crede * Domain: Endpoint * Domain: Cloud -*Version*: 104 +*Version*: 205 *Rule authors*: diff --git 
a/docs/detections/prebuilt-rules/rule-details/successful-application-sso-from-rare-unknown-client-device.asciidoc b/docs/detections/prebuilt-rules/rule-details/successful-application-sso-from-rare-unknown-client-device.asciidoc index 40269ea4eb..4756768a96 100644 --- a/docs/detections/prebuilt-rules/rule-details/successful-application-sso-from-rare-unknown-client-device.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/successful-application-sso-from-rare-unknown-client-device.asciidoc @@ -32,7 +32,7 @@ Detects successful single sign-on (SSO) events to Okta applications from an unre * Use Case: Identity and Access Audit * Tactic: Initial Access -*Version*: 103 +*Version*: 204 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/suspicious-activity-reported-by-okta-user.asciidoc b/docs/detections/prebuilt-rules/rule-details/suspicious-activity-reported-by-okta-user.asciidoc index 659817c062..eea5b68c20 100644 --- a/docs/detections/prebuilt-rules/rule-details/suspicious-activity-reported-by-okta-user.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/suspicious-activity-reported-by-okta-user.asciidoc @@ -34,7 +34,7 @@ Detects when a user reports suspicious activity for their Okta account. These ev * Data Source: Okta * Tactic: Initial Access -*Version*: 308 +*Version*: 409 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/unauthorized-access-to-an-okta-application.asciidoc b/docs/detections/prebuilt-rules/rule-details/unauthorized-access-to-an-okta-application.asciidoc index 4ca38eaa05..9c12131b2d 100644 --- a/docs/detections/prebuilt-rules/rule-details/unauthorized-access-to-an-okta-application.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/unauthorized-access-to-an-okta-application.asciidoc @@ -34,7 +34,7 @@ Identifies unauthorized access attempts to Okta applications. 
* Use Case: Identity and Access Audit * Data Source: Okta -*Version*: 309 +*Version*: 410 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/unauthorized-scope-for-public-app-oauth2-token-grant-with-client-credentials.asciidoc b/docs/detections/prebuilt-rules/rule-details/unauthorized-scope-for-public-app-oauth2-token-grant-with-client-credentials.asciidoc index c3e28ec5d1..36e94c23fb 100644 --- a/docs/detections/prebuilt-rules/rule-details/unauthorized-scope-for-public-app-oauth2-token-grant-with-client-credentials.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/unauthorized-scope-for-public-app-oauth2-token-grant-with-client-credentials.asciidoc @@ -35,7 +35,7 @@ Identifies a failed OAuth 2.0 token grant attempt for a public client app using * Use Case: Identity and Access Audit * Tactic: Defense Evasion -*Version*: 104 +*Version*: 205 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/unusual-high-confidence-content-filter-blocks-detected.asciidoc b/docs/detections/prebuilt-rules/rule-details/unusual-high-confidence-content-filter-blocks-detected.asciidoc new file mode 100644 index 0000000000..6778dff88a --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/unusual-high-confidence-content-filter-blocks-detected.asciidoc @@ -0,0 +1,124 @@ +[[unusual-high-confidence-content-filter-blocks-detected]] +=== Unusual High Confidence Content Filter Blocks Detected + +Detects repeated high-confidence 'BLOCKED' actions coupled with specific 'Content Filter' policy violations with codes such as 'MISCONDUCT', 'HATE', 'SEXUAL', 'INSULTS', 'PROMPT_ATTACK', and 'VIOLENCE', indicating persistent misuse or attempts to probe the model's ethical boundaries. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 10m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html +* https://atlas.mitre.org/techniques/AML.T0051 +* https://atlas.mitre.org/techniques/AML.T0054 +* https://www.elastic.co/security-labs/elastic-advances-llm-security + +*Tags*: + +* Domain: LLM +* Data Source: AWS Bedrock +* Data Source: AWS S3 +* Use Case: Policy Violation +* Mitre Atlas: T0051 +* Mitre Atlas: T0054 + +*Version*: 5 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Amazon Bedrock Guardrail High Confidence Content Filter Blocks.* + + +Amazon Bedrock Guardrail is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications. + +It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices. + +Through Guardrail, organizations can enable Content filters for the Hate, Insults, Sexual, Violence, and Misconduct categories, along with Prompt Attack filters, +to prevent the model from generating content on specific, undesired subjects, and they can establish thresholds for harmful content categories. + + +*Possible investigation steps* + + +- Identify the user account whose prompts caused high confidence content filter blocks and whether it should perform this kind of action. +- Investigate other alerts associated with the user account during the past 48 hours. +- Consider the time of day.
If the user is a human (not a program or script), did the activity take place during a normal time of day? +- Examine the account's prompts and responses in the last 24 hours. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours. + + +*False positive analysis* + + +- Verify the user account that queried denied topics, is not testing any new model deployments or updated compliance policies in Amazon Bedrock guardrails. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Identify if the attacker is moving laterally and compromising other Amazon Bedrock Services. + - Identify any regulatory or legal ramifications related to this activity. +- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access bedrock and ensure that the least privilege principle is being followed. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Setup + + + +*Setup* + + +This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation: + +https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws_bedrock.invocation-* +| MV_EXPAND gen_ai.compliance.violation_code +| MV_EXPAND gen_ai.policy.confidence +| MV_EXPAND gen_ai.policy.name +| where gen_ai.policy.action == "BLOCKED" and gen_ai.policy.name == "content_policy" and gen_ai.policy.confidence LIKE "HIGH" and gen_ai.compliance.violation_code IN ("HATE", "MISCONDUCT", "SEXUAL", "INSULTS", "PROMPT_ATTACK", "VIOLENCE") +| keep user.id, gen_ai.compliance.violation_code +| stats block_count_per_violation = count() by user.id, gen_ai.compliance.violation_code +| SORT block_count_per_violation DESC +| keep user.id, gen_ai.compliance.violation_code, block_count_per_violation +| STATS violation_count = SUM(block_count_per_violation) by user.id +| WHERE violation_count > 5 +| SORT violation_count DESC + +---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/unusual-high-denied-sensitive-information-policy-blocks-detected.asciidoc b/docs/detections/prebuilt-rules/rule-details/unusual-high-denied-sensitive-information-policy-blocks-detected.asciidoc new file mode 100644 index 0000000000..802fc93d1e --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/unusual-high-denied-sensitive-information-policy-blocks-detected.asciidoc @@ -0,0 +1,119 @@ +[[unusual-high-denied-sensitive-information-policy-blocks-detected]] +=== Unusual High Denied Sensitive Information Policy Blocks Detected + +Detects repeated compliance violation 'BLOCKED' actions coupled with specific policy name such as 'sensitive_information_policy', indicating persistent misuse or attempts to probe the model's denied topics. 
+ +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 10m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html +* https://atlas.mitre.org/techniques/AML.T0051 +* https://atlas.mitre.org/techniques/AML.T0054 +* https://www.elastic.co/security-labs/elastic-advances-llm-security + +*Tags*: + +* Domain: LLM +* Data Source: AWS Bedrock +* Data Source: AWS S3 +* Use Case: Policy Violation +* Mitre Atlas: T0051 +* Mitre Atlas: T0054 + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Amazon Bedrock Guardrail High Sensitive Information Policy Blocks.* + + +Amazon Bedrock Guardrail is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications. + +It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices. + +Through Guardrail, organizations can define "sensitive information filters" to prevent the model from generating content on specific, undesired subjects, +and they can establish thresholds for harmful content categories. + + +*Possible investigation steps* + + +- Identify the user account that queried sensitive information and whether it should perform this kind of action. +- Investigate other alerts associated with the user account during the past 48 hours. +- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day? +- Examine the account's prompts and responses in the last 24 hours. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours. + + +*False positive analysis* + + +- Verify the user account that queried denied topics, is not testing any new model deployments or updated compliance policies in Amazon Bedrock guardrails. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Identify if the attacker is moving laterally and compromising other Amazon Bedrock Services. + - Identify any regulatory or legal ramifications related to this activity. +- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access bedrock and ensure that the least privilege principle is being followed. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Setup + + + +*Setup* + + +This rule requires that guardrails are configured in AWS Bedrock. 
For more information, see the AWS Bedrock documentation: + +https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws_bedrock.invocation-* +| MV_EXPAND gen_ai.policy.name +| where gen_ai.policy.action == "BLOCKED" and gen_ai.compliance.violation_detected == "true" and gen_ai.policy.name == "sensitive_information_policy" +| keep user.id +| stats sensitive_information_block = count() by user.id +| where sensitive_information_block > 5 +| sort sensitive_information_block desc + +---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/unusual-high-denied-topic-blocks-detected.asciidoc b/docs/detections/prebuilt-rules/rule-details/unusual-high-denied-topic-blocks-detected.asciidoc new file mode 100644 index 0000000000..ac012c808b --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/unusual-high-denied-topic-blocks-detected.asciidoc @@ -0,0 +1,119 @@ +[[unusual-high-denied-topic-blocks-detected]] +=== Unusual High Denied Topic Blocks Detected + +Detects repeated compliance violation 'BLOCKED' actions coupled with specific policy name such as 'topic_policy', indicating persistent misuse or attempts to probe the model's denied topics. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 10m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html +* https://atlas.mitre.org/techniques/AML.T0051 +* https://atlas.mitre.org/techniques/AML.T0054 +* https://www.elastic.co/security-labs/elastic-advances-llm-security + +*Tags*: + +* Domain: LLM +* Data Source: AWS Bedrock +* Data Source: AWS S3 +* Use Case: Policy Violation +* Mitre Atlas: T0051 +* Mitre Atlas: T0054 + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Amazon Bedrock Guardrail High Denied Topic Blocks.* + + +Amazon Bedrock Guardrail is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications. + +It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices. + +Through Guardrail, organizations can define "denied topics" to prevent the model from generating content on specific, undesired subjects, +and they can establish thresholds for harmful content categories, including hate speech, violence, or offensive language. + + +*Possible investigation steps* + + +- Identify the user account that queried denied topics and whether it should perform this kind of action. +- Investigate other alerts associated with the user account during the past 48 hours. +- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day? +- Examine the account's prompts and responses in the last 24 hours. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours. 
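+
+As a hedged aid for the investigation steps above (not part of the shipped rule), a scoped ES|QL search like the following can list one account's recent topic-policy blocks; it reuses the index pattern and `gen_ai.*` fields from the rule query below, and the `user.id` value shown is a hypothetical placeholder that must be replaced with the account under investigation.
+
+[source, js]
+----------------------------------
+from logs-aws_bedrock.invocation-*
+| MV_EXPAND gen_ai.policy.name
+// narrow to the implicated account; replace the placeholder id
+| where user.id == "example-user-id"
+    and gen_ai.policy.action == "BLOCKED"
+    and gen_ai.compliance.violation_detected == "true"
+    and gen_ai.policy.name == "topic_policy"
+| keep @timestamp, user.id, gen_ai.policy.name, gen_ai.policy.action
+| sort @timestamp desc
+----------------------------------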
+ + +*False positive analysis* + + +- Verify the user account that queried denied topics, is not testing any new model deployments or updated compliance policies in Amazon Bedrock guardrails. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Identify if the attacker is moving laterally and compromising other Amazon Bedrock Services. + - Identify any regulatory or legal ramifications related to this activity. +- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access bedrock and ensure that the least privilege principle is being followed. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Setup + + + +*Setup* + + +This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation: + +https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws_bedrock.invocation-* +| MV_EXPAND gen_ai.policy.name +| where gen_ai.policy.action == "BLOCKED" and gen_ai.compliance.violation_detected == "true" and gen_ai.policy.name == "topic_policy" +| keep user.id +| stats denied_topics = count() by user.id +| where denied_topics > 5 +| sort denied_topics desc + +---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/unusual-high-word-policy-blocks-detected.asciidoc b/docs/detections/prebuilt-rules/rule-details/unusual-high-word-policy-blocks-detected.asciidoc new file mode 100644 index 0000000000..6495885511 --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/unusual-high-word-policy-blocks-detected.asciidoc @@ -0,0 +1,119 @@ +[[unusual-high-word-policy-blocks-detected]] +=== Unusual High Word Policy Blocks Detected + +Detects repeated compliance violation 'BLOCKED' actions coupled with specific policy name such as 'word_policy', indicating persistent misuse or attempts to probe the model's denied topics. 
+ +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 10m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html +* https://atlas.mitre.org/techniques/AML.T0051 +* https://atlas.mitre.org/techniques/AML.T0054 +* https://www.elastic.co/security-labs/elastic-advances-llm-security + +*Tags*: + +* Domain: LLM +* Data Source: AWS Bedrock +* Data Source: AWS S3 +* Use Case: Policy Violation +* Mitre Atlas: T0051 +* Mitre Atlas: T0054 + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Amazon Bedrock Guardrail High Word Policy Blocks.* + + +Amazon Bedrock Guardrail is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications. + +It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices. + +Through Guardrail, organizations can define "word filters" to prevent the model from generating content on profanity, undesired subjects, +and they can establish thresholds for harmful content categories, including hate speech, violence, or offensive language. + + +*Possible investigation steps* + + +- Identify the user account whose prompts contained profanity and whether it should perform this kind of action. +- Investigate other alerts associated with the user account during the past 48 hours. +- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day? +- Examine the account's prompts and responses in the last 24 hours. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours. + + +*False positive analysis* + + +- Verify the user account that queried denied topics, is not testing any new model deployments or updated compliance policies in Amazon Bedrock guardrails. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Identify if the attacker is moving laterally and compromising other Amazon Bedrock Services. + - Identify any regulatory or legal ramifications related to this activity. +- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access bedrock and ensure that the least privilege principle is being followed. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Setup + + + +*Setup* + + +This rule requires that guardrails are configured in AWS Bedrock. 
For more information, see the AWS Bedrock documentation: + +https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws_bedrock.invocation-* +| MV_EXPAND gen_ai.policy.name +| where gen_ai.policy.action == "BLOCKED" and gen_ai.compliance.violation_detected == "true" and gen_ai.policy.name == "word_policy" +| keep user.id +| stats profanity_words= count() by user.id +| where profanity_words > 5 +| sort profanity_words desc + +---------------------------------- diff --git a/docs/index.asciidoc b/docs/index.asciidoc index 954366e5f9..9fb963f404 100644 --- a/docs/index.asciidoc +++ b/docs/index.asciidoc @@ -105,3 +105,5 @@ include::detections/prebuilt-rules/downloadable-packages/8-15-9/prebuilt-rules-8 include::detections/prebuilt-rules/downloadable-packages/8-15-10/prebuilt-rules-8-15-10-appendix.asciidoc[] include::detections/prebuilt-rules/downloadable-packages/8-15-11/prebuilt-rules-8-15-11-appendix.asciidoc[] + +include::detections/prebuilt-rules/downloadable-packages/8-15-12/prebuilt-rules-8-15-12-appendix.asciidoc[]