Enforce model defined in tfstate #97
This boils down to an automatic reconciler, which is mentioned in issue #84. The idea of auto reconciliation sounds nice, but I'd need to put some thought into how it would actually work. In the meantime, I can think of a workaround that might help. Here's how it works:
Personally, I have used an env like the following to force trigger builds:

```yaml
kind: Terraform
metadata:
  name: my-tfo-resource
spec:
  ...
  env:
  - name: _REVISION
    value: "10" # a counter or random string would work
```

If you have a setup like above, you should be able to write a cron or an infinite loop to change the "_REVISION":

```bash
while true; do
  kubectl patch terraform my-tfo-resource --type json -p '[
    {
      "op": "replace",
      "path": "/spec/env/0",
      "value": {"name":"_REVISION","value":"'$RANDOM'"}
    }
  ]'
  sleep 600
done
```

Every 10 minutes, this script will update the terraform, which will auto-reconcile.
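If keeping a shell loop running somewhere is inconvenient, the same patch can be wrapped in an in-cluster CronJob. This is only a rough sketch, not something shipped with the operator: the CronJob name, ServiceAccount, and image are illustrative, and the ServiceAccount is assumed to already have permission to patch Terraform resources.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: tfo-force-reconcile
spec:
  schedule: "*/10 * * * *"               # every 10 minutes
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: tf-reconciler    # assumed SA with patch rights on terraform resources
          restartPolicy: OnFailure
          containers:
          - name: patch
            image: bitnami/kubectl:latest      # any image that ships kubectl works
            command:
            - /bin/sh
            - -c
            # Bump _REVISION (first entry under spec.env) so the operator starts a new run.
            - >
              kubectl patch terraform my-tfo-resource --type json -p
              '[{"op":"replace","path":"/spec/env/0","value":{"name":"_REVISION","value":"'"$(date +%s)"'"}}]'
```
|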
Thanks @isaaguilar for the workaround. I will give it a try. |
This workaround works quite well. Below are some extracts of my configuration:
It also requires specific roles to interact with the terraform operator:
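The exact rules depend on the install, but a minimal sketch of the RBAC involved might look like the following. The API group tf.isaaguilar.dev is an assumption here; verify it against the installed CRD.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tf-reconciler
rules:
# Group/resource names are assumptions; check: kubectl api-resources | grep -i terraform
- apiGroups: ["tf.isaaguilar.dev"]
  resources: ["terraforms"]
  verbs: ["get", "list", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tf-reconciler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tf-reconciler
subjects:
- kind: ServiceAccount
  name: tf-reconciler    # the ServiceAccount used by the patching loop or CronJob
```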
|
Hello @isaaguilar, after running this workaround for a few weeks, we've hit a limitation: new ConfigMaps and Secrets are generated on each run and kept forever. See the sample below.
Is it possible to keep only the last xxx executions? |
A few hours ago I released v0.8.2, which changes this behavior when keepLatestPodsOnly is set:

```yaml
kind: Terraform
metadata:
  name: my-tfo-resource
spec:
  ...
  keepLatestPodsOnly: true
```

That should clear out old resources and keep only the latest. The ones that got created before will need to be manually cleared, unfortunately.
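For that one-time cleanup, something along these lines could work. It is only a sketch and assumes the generated ConfigMaps/Secrets carry the resource's status.podNamePrefix in their names; spot-check the list before deleting anything.

```bash
# Assumption: old ConfigMaps/Secrets are named after status.podNamePrefix.
PREFIX=$(kubectl get terraform my-tfo-resource -o jsonpath='{.status.podNamePrefix}')

# Review what would be removed...
kubectl get configmaps,secrets -o name | grep "$PREFIX"

# ...then delete.
kubectl get configmaps,secrets -o name | grep "$PREFIX" | xargs -r kubectl delete
```
|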
Thanks! I've installed the latest version and it works better.
I've noticed that the operator is killed due to Out Of Memory, but everything seems fine in the log.

```yaml
- containerID: containerd://8ebf83d8d5d36bf0828c4f9262fe188d98a1356cffea470c920236a2428443d4
  image: docker.io/isaaguilar/terraform-operator:v0.8.2
  imageID: docker.io/isaaguilar/terraform-operator@sha256:319a86bad4bb657dc06f51f5f094639f37bceca2b0dd3255e5d1354d601270b2
  lastState:
    terminated:
      containerID: containerd://8ebf83d8d5d36bf0828c4f9262fe188d98a1356cffea470c920236a2428443d4
      exitCode: 137
      finishedAt: "2022-06-10T08:32:41Z"
      reason: OOMKilled
      startedAt: "2022-06-10T08:31:56Z"
  name: terraform-operator
  ready: false
  restartCount: 167
  started: false
```

I will try to increase allocated memory, and see :)
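For reference, one way to bump the operator's memory is to patch its Deployment; the namespace and deployment name below are assumptions, so adjust them to your install.

```bash
# Namespace/deployment name are assumptions; adjust to your install.
kubectl -n tf-system set resources deployment/terraform-operator \
  --requests=memory=128Mi --limits=memory=256Mi
```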
|
I'd be interested in knowing how much memory was allocated, and the total number of 'tf' resources:

```bash
# total tf
kubectl get tf --all-namespaces | wc -l
```

Maybe also some metrics on the total number of pods, since the operator has a watch on pod events as well.
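A quick way to get that rough pod count (sketch; the header line adds one to the total):

```bash
# total pods across all namespaces (includes the header line)
kubectl get pods --all-namespaces | wc -l
```
|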
Allocated memory was the default value (128M). I've increased it to 256M, and now the tf operator seems fine. As for the number of pods:
|
I'm facing another issue that makes the workaround fail: the YAML associated with the Terraform resource is too big, see the message below.

As all generations are kept forever, the YAML keeps increasing:

```yaml
status:
  exported: "false"
  lastCompletedGeneration: 0
  phase: running
  podNamePrefix: tf-harbor-internet-46ja6izd
  stages:
  - generation: 1
    interruptible: false
    podType: setup
    reason: TF_RESOURCE_CREATED
    startTime: "2022-05-20T11:09:48Z"
    state: failed
    stopTime: "2022-05-20T11:09:51Z"
  ...
  ...
  ...
  - generation: 9897
    interruptible: true
    podType: post
    reason: ""
```
|
Thanks @o-orand. I knew this would soon be an issue, and I haven't thought of a good way to handle it yet. I figured using an existing option might help. Other ideas, and possibly one I'll investigate (after the kids go back to school 😅), include using the PVC to store runner status and terraform logs. This data will be formatted to be fed into a tfo dashboard. More on this to come. For an immediate fix, perhaps we should keep only the last n generation statuses, in case someone is using the generation status feature for some reason. I'll continue forming ideas. |
@isaaguilar - Checking up on this thread, is the above workaround still the only way for periodic reconciliation? Thanks |
As a terraform-operator user,
In order to ensure tfstate is always in sync with the underlying infrastructure, and to reduce manual operations,
I need a mechanism to automatically and frequently execute the terraform workflow.
Use case samples: