
aws_cloudwatch_log_group does not destroy although log says it did #14057

Closed
ghost opened this issue Jul 6, 2020 · 5 comments
Labels
bug: Addresses a defect in current functionality.
service/cloudwatch: Issues and PRs that pertain to the cloudwatch service.
service/eks: Issues and PRs that pertain to the eks service.

Comments

@ghost

ghost commented Jul 6, 2020

This issue was originally opened by @ivanfarkas2 as hashicorp/terraform#25482. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.12.28
+ provider.aws v2.69.0
+ provider.http v1.2.0
+ provider.random v2.2.1

Terraform Configuration Files

resource "aws_eks_cluster" "cluster" {
  name                      = var.cluster_name
  role_arn                  = aws_iam_role.cluster.arn # required
  enabled_cluster_log_types = var.eks_enabled_log_types

  vpc_config {
    subnet_ids              = local.subnet_ids
    security_group_ids      = [aws_security_group.cluster_sg.id]
    endpoint_private_access = var.endpoint_private_access
    endpoint_public_access  = var.endpoint_public_access
  }

  depends_on = [
    aws_iam_role.cluster,
    aws_iam_role_policy_attachment.role_policies,
    aws_cloudwatch_log_group.cluster
  ]
}

resource "aws_cloudwatch_log_group" "cluster" {
  name              = "/aws/eks/${var.cluster_name}/cluster"
  retention_in_days = var.log_retention_in_days
}

Debug Output

Crash Output

N/A

Expected Behavior

terraform destroy should have removed the aws_cloudwatch_log_group resource (/aws/eks/ceres-eks-dev/cluster)

Actual Behavior

terraform destroy did not remove the aws_cloudwatch_log_group resource (/aws/eks/ceres-eks-dev/cluster), although the log says it did. Instead, the retention time changed from 7 days (1 week) to Never expire. Brilliant!

module.eks_cluster.aws_cloudwatch_log_group.cluster: Destroying... [id=/aws/eks/ceres-eks-dev/cluster]
module.eks_cluster.aws_cloudwatch_log_group.cluster: Destruction complete after 0s

After terraform apply: (screenshot: Apply)

After terraform destroy: (screenshot: Destroy)

Log streams

  Log stream Last event time
  kube-apiserver-audit-247a42f203b491348288e6636f5a13fa 7/5/2020, 8:31:29 PM
  kube-apiserver-audit-f50ba417c48ce8301763d936ead3d413 7/5/2020, 8:31:28 PM
  kube-scheduler-f50ba417c48ce8301763d936ead3d413 7/5/2020, 8:31:28 PM
  kube-apiserver-247a42f203b491348288e6636f5a13fa 7/5/2020, 8:31:28 PM
  kube-apiserver-f50ba417c48ce8301763d936ead3d413 7/5/2020, 8:31:28 PM
  kube-scheduler-247a42f203b491348288e6636f5a13fa 7/5/2020, 8:30:40 PM

Steps to Reproduce

  1. terraform init
  2. terraform apply
  3. terraform destroy

Additional Context

N/A

References

hashicorp/terraform#14750 (cloudwatch log group not destroyed) seems somewhat related.

@ghost ghost added the service/cloudwatchlogs, service/cloudwatch, and service/eks labels Jul 6, 2020
@github-actions github-actions bot added the needs-triage label Jul 6, 2020
@breathingdust breathingdust added the bug label and removed the needs-triage label Jul 6, 2020
@johnthedev97
Contributor

I had a similar problem, but it was not a bug. In my case, Terraform was destroying the CloudWatch log group, but AWS (the eks.amazonaws.com principal) was recreating it, I believe because some logs were delivered after the log group deletion.

You can verify this by looking at the expiry set on the log group: if AWS EKS created the log group, the expiry will be set to "never" instead of the log retention days you specified in Terraform.

@johnthedev97
Contributor

I see in your screenshot that it is actually set to "never", so I strongly believe this is the case. You can probably double-check CloudTrail for confirmation.

@ivanfarkas2

Yes. That's a bug, as was acknowledged 17 days ago.

@bflad
Contributor

bflad commented Jul 23, 2020

Hi folks 👋 This is likely not an issue with the Terraform AWS Provider aws_cloudwatch_log_group resource failing to delete the CloudWatch Log Group. We extensively test this behavior and verify it against the CloudWatch Logs API daily.

As mentioned above, certain AWS services will (re-)create a CloudWatch Log Group automatically when it is not present. A good clue to this behavior is a log group with no retention period. Since CloudWatch Logs delivery is eventually consistent, deleting a resource that emits logs, such as an EKS cluster, does not fully guarantee that the CloudWatch Log Group will not be re-created by lingering log entries.

As an example, these IAM permissions are present by default in the EKS managed IAM policy (e.g. AmazonEKSServiceRolePolicy):

        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:*:*:log-group:/aws/eks/*"
        },

To prevent this automatic behavior, update your IAM permissions for the service in question so that the logs:CreateLogGroup action is not allowed. Some services let you attach custom policies (where the extra actions/statements can be omitted); otherwise, a Deny statement can usually be placed in a new policy and attached to the role in question. For additional questions about how to prevent this behavior with specific AWS services, we suggest checking the relevant AWS documentation or opening an AWS Support case.
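
For illustration only, a minimal Terraform sketch of that Deny approach could look like the following. The resource names are hypothetical, it reuses aws_iam_role.cluster and var.cluster_name from the configuration above, and whether it helps depends on which role actually performs the CreateLogGroup call (CloudTrail, as suggested earlier, shows the calling principal):

# Hypothetical sketch: deny re-creation of the EKS cluster log group.
# Adjust the role reference and the ARN pattern to your own setup.
resource "aws_iam_policy" "deny_eks_log_group_creation" {
  name = "${var.cluster_name}-deny-log-group-creation"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Deny"
        Action   = "logs:CreateLogGroup"
        Resource = "arn:aws:logs:*:*:log-group:/aws/eks/${var.cluster_name}/*"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "deny_eks_log_group_creation" {
  role       = aws_iam_role.cluster.name
  policy_arn = aws_iam_policy.deny_eks_log_group_creation.arn
}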

Hope this helps.

@bflad bflad closed this as completed Jul 23, 2020
@ghost
Author

ghost commented Aug 22, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Aug 22, 2020