
Crash due to missing relation #65

Open
simskij opened this issue Apr 26, 2022 · 6 comments

@simskij
Member

simskij commented Apr 26, 2022

Bug Description

When Alertmanager is related to another charm (Prometheus in this example) and the remote charm is removed, the next update-status event causes alertmanager to go into an error state.

To Reproduce

  1. juju deploy cos-lite --channel edge --trust
  2. juju remove-application prometheus --force
  3. wait

Environment

Relevant log output

$ juju status                        
Model  Controller  Cloud/Region        Version  SLA          Timestamp
cos    uk8s        microk8s/localhost  2.9.28   unsupported  14:41:46+02:00

App           Version  Status   Scale  Charm               Channel  Rev  Address         Exposed  Message
alertmanager           waiting      1  alertmanager-k8s    edge      14  10.152.183.203  no       installing agent
exporter               active       1  node-exporter-k8s              7  10.152.183.53   no       
grafana                active       1  grafana-k8s         edge      36  10.152.183.181  no       
kube-metrics           active       1  kube-metrics-k8s              10  10.152.183.87   no       
metrics                active       1  kube-state-metrics             0  10.152.183.202  no       
prometheus             active       1  prometheus-k8s      edge      30  10.152.183.16   no       
zinc                   active       1  zinc-k8s                       5  10.152.183.97   no       

Unit             Workload  Agent  Address     Ports  Message
alertmanager/0*  error     idle   10.1.14.40         hook failed: "update-status"
exporter/0*      active    idle   10.1.14.20         
grafana/0*       active    idle   10.1.14.50         
kube-metrics/0*  active    idle   10.1.14.54         
metrics/0*       active    idle   10.1.14.52         
prometheus/0*    active    idle   10.1.14.15         
zinc/0*          active    idle   10.1.14.47 


$ juju debug-log

unit-alertmanager-0: 14:40:15 INFO juju.worker.uniter awaiting error resolution for "update-status" hook
unit-alertmanager-0: 14:40:54 INFO juju.worker.uniter awaiting error resolution for "update-status" hook
unit-alertmanager-0: 14:40:54 INFO unit.alertmanager/0.juju-log alertmanager 0.21.0 is up and running (uptime: 2022-04-25T12:23:48.939Z); cluster mode: disabled, with 0 peers
unit-alertmanager-0: 14:40:54 ERROR unit.alertmanager/0.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-alertmanager-0/charm/venv/ops/model.py", line 1545, in _run
    result = run(args, **kwargs)
  File "/usr/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '('/var/lib/juju/tools/unit-alertmanager-0/network-get', 'alerting', '-r', '17', '--format=json')' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./src/charm.py", line 491, in <module>
    main(AlertmanagerCharm, use_juju_for_storage=True)
  File "/var/lib/juju/agents/unit-alertmanager-0/charm/venv/ops/main.py", line 431, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-alertmanager-0/charm/venv/ops/main.py", line 142, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-alertmanager-0/charm/venv/ops/framework.py", line 283, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-alertmanager-0/charm/venv/ops/framework.py", line 743, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-alertmanager-0/charm/venv/ops/framework.py", line 790, in _reemit
    custom_handler(event)
  File "./src/charm.py", line 458, in _on_update_status
    self._common_exit_hook()
  File "./src/charm.py", line 380, in _common_exit_hook
    self.alertmanager_provider.update_relation_data()
  File "/var/lib/juju/agents/unit-alertmanager-0/charm/lib/charms/alertmanager_k8s/v0/alertmanager_dispatch.py", line 282, in update_relation_data
    relation.data[self.charm.unit].update(self._generate_relation_data(relation))
  File "/var/lib/juju/agents/unit-alertmanager-0/charm/lib/charms/alertmanager_k8s/v0/alertmanager_dispatch.py", line 262, in _generate_relation_data
    self.charm.model.get_binding(relation).network.bind_address, self.api_port
  File "/var/lib/juju/agents/unit-alertmanager-0/charm/venv/ops/model.py", line 556, in network
    self._network = Network(self._backend.network_get(self.name, self._relation_id))
  File "/var/lib/juju/agents/unit-alertmanager-0/charm/venv/ops/model.py", line 1801, in network_get
    return self._run(*cmd, return_output=True, use_json=True)
  File "/var/lib/juju/agents/unit-alertmanager-0/charm/venv/ops/model.py", line 1547, in _run
    raise ModelError(e.stderr)
ops.model.ModelError: b'ERROR relation 17 not found (not found)\n'

Additional context

No response

@sed-i
Contributor

sed-i commented Apr 26, 2022

Mitigation fix 1

@@ -279,7 +280,15 @@ class AlertmanagerProvider(RelationManagerBase):
             # a single consumer charm's unit may be related to multiple providers
             if self.name in self.charm.model.relations:
                 for relation in self.charm.model.relations[self.name]:
-                    relation.data[self.charm.unit].update(self._generate_relation_data(relation))
+                    # Sometimes there is a dangling relation, for which we get the following error:
+                    # ops.model.ModelError: b'ERROR relation 17 not found (not found)\n'
+                    # when trying to `network-get alerting`. Suppressing the ModelError in this
+                    # case, with the expectation that Juju would resolve the dangling relation
+                    # eventually.
+                    with contextlib.suppress(ModelError):
+                        relation.data[self.charm.unit].update(
+                            self._generate_relation_data(relation)
+                        )
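
Note: the diff above assumes contextlib and ModelError are available in alertmanager_dispatch.py; if they are not already imported there, something like this would also be needed at the top of the module:

import contextlib

from ops.model import ModelError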

Mitigation fix 2

The following Provider code (alertmanager side) updates relation data with prometheus, so prometheus knows the (public) IP address of alertmanager:

def _generate_relation_data(self, relation: Relation):
    """Helper function to generate relation data in the correct format."""
    public_address = "{}:{}".format(
        self.charm.model.get_binding(relation).network.bind_address, self.api_port
    )
    return {"public_address": public_address}

Iirc, bind_address is known to be broken..? @rbarry82 @simskij @mmanciop

In that case I could use the (private) socket.getfqdn().

HOWEVER, the relation.data[self.charm.unit].update(...) bit is probably going to raise anyway, because the relation is gone.
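
For reference, a getfqdn-based variant of the helper would look roughly like this (sketch only, not tested; and per the point above, the subsequent relation-data update would likely still raise because the relation is gone):

import socket

def _generate_relation_data(self, relation: Relation):
    """Helper function to generate relation data in the correct format."""
    # Use the unit's FQDN rather than the relation binding's bind_address.
    return {"public_address": "{}:{}".format(socket.getfqdn(), self.api_port)}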

Root cause

Could this be a refcount issue due to the --force used? @rbarry82

@rbarry82
Contributor

Mitigation:

Don't use --force.

bind_address is not known to be broken. It asks Juju directly. See here. Arguably OF should treat subprocess.CalledProcessError in that case as None like it does if it cannot find one, but see the commit message here again. Force "takes out all the stops" and we should expect bad behavior like this.

If anything, this could be a regression in that Juju commit.

Either way, using contextlib.suppress is not the right way to cover it. What happens to the relation data then? try/except, and if there's a ModelError, treat it the same as getting None from get_binding and return an empty string as a guardrail. This line will do the right thing anyway, but do we want to rely on that as the only safeguard unless we write a test for it?
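
A rough sketch of that suggestion, reusing the names from the existing helper (illustrative only; assumes a module-level logger and that ModelError is imported from ops.model):

def _generate_relation_data(self, relation: Relation):
    """Helper function to generate relation data in the correct format."""
    try:
        bind_address = self.charm.model.get_binding(relation).network.bind_address or ""
    except ModelError as e:
        # Treat a failed network-get the same as having no binding: fall back
        # to an empty address instead of erroring out the hook.
        logger.warning("network-get failed for relation %s: %s", relation.name, e)
        bind_address = ""
    return {"public_address": "{}:{}".format(bind_address, self.api_port)}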

@sed-i
Contributor

sed-i commented Apr 26, 2022

(Thanks @rbarry82, will follow up shortly after the following)

After suppressing ModelError in alertmanager, I get something similar in prom:

controller-0: 17:29:59 ERROR juju.worker.caasapplicationprovisioner.runner exited "prom": Operation cannot be fulfilled on pods "prom-0": the object has been modified; please apply your changes to the latest version and try again
controller-0: 17:30:36 ERROR juju.worker.caasapplicationprovisioner.runner exited "prom": application "prom" not found
controller-0: 17:30:39 ERROR juju.worker.caasapplicationprovisioner.runner exited "prom": failed to watch for changes to application "prom": application "prom" not found
# ...
controller-0: 17:32:46 ERROR juju.worker.storageprovisioner failed to set status: cannot set status: filesystem not found
unit-prom-0: 17:32:46 ERROR unit.prom/0.juju-log alertmanager:4: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-prom-0/charm/venv/ops/model.py", line 1598, in _run
    result = run(args, **kwargs)
  File "/usr/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '('/var/lib/juju/tools/unit-prom-0/network-get', 'ingress', '--format=json')' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./src/charm.py", line 374, in <module>
    main(PrometheusCharm)
  File "/var/lib/juju/agents/unit-prom-0/charm/venv/ops/main.py", line 419, in main
    charm = charm_class(framework)
  File "./src/charm.py", line 58, in __init__
    self.ingress = IngressPerUnitRequirer(self, endpoint="ingress", port=self._port)
  File "/var/lib/juju/agents/unit-prom-0/charm/lib/charms/traefik_k8s/v0/ingress_per_unit.py", line 353, in __init__
    self.auto_data = self._complete_request(host or "", port)
  File "/var/lib/juju/agents/unit-prom-0/charm/lib/charms/traefik_k8s/v0/ingress_per_unit.py", line 376, in _complete_request
    host = str(binding.network.bind_address)
  File "/var/lib/juju/agents/unit-prom-0/charm/venv/ops/model.py", line 558, in network
    self._network = Network(self._backend.network_get(self.name, self._relation_id))
  File "/var/lib/juju/agents/unit-prom-0/charm/venv/ops/model.py", line 1862, in network_get
    return self._run(*cmd, return_output=True, use_json=True)
  File "/var/lib/juju/agents/unit-prom-0/charm/venv/ops/model.py", line 1600, in _run
    raise ModelError(e.stderr)
ops.model.ModelError: b'ERROR relation 4 not found (not found)\n'
unit-prom-0: 17:32:46 ERROR juju.worker.uniter.operation hook "alertmanager-relation-departed" (via hook dispatching script: dispatch) failed: exit status 1

@rbarry82
Contributor

Mitigation:

Don't use --force. Or open a bug against OF to handle this gracefully. This is frankly not for individual charms to handle.

@sed-i
Contributor

sed-i commented Apr 27, 2022

try/except, and if there's a ModelError, treat it the same as getting None from get binding and return an empty string as a guardrail.

Not much sense in updating relation data if the relation is gone.
But I gave it a try anyway:

try:
    relation.data[self.charm.unit].update(
        self._generate_relation_data(relation)
    )
except ModelError:
    relation.data[self.charm.unit].pop("public_address")

which gives the good old ops.model.ModelError: b'ERROR permission denied\n'.

@rbarry82
Contributor

I guess I should clarify:

contextlib.suppress may as well be an empty catch {} in Java or On Error Resume Next in VB6.

It has a limited number of "real" use cases. From what we've seen: the libjuju websocket case, where an unexpected/unsynchronized shutdown may remove the socket out from under you (or the same with Pebble), or a truly stateless application which is unaware of whether or not it is holding leadership while other instances may attempt to write.

A ModelError is an actual exception, in that you would be hard-pressed to find a case where it should happen during regular operation, and it should not be suppressed. It should be explicitly handled with log.warning|error in the except block, and the other side (which assumes an empty .get() is something to ignore rather than another "we should never encounter a relation databag which does not have this set" scenario) should instead return an empty string or something obvious.

Same question:

            address = relation.data[unit].get("public_address")
            if address:
                alertmanagers.append(address)

If you inverted it to if not address:, how many times should that happen? Probably never. There should always be a public_address key if my cursory read of this is correct.
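
The consumer side could also make that invariant explicit instead of silently skipping (sketch, assuming a module-level logger):

address = relation.data[unit].get("public_address")
if not address:
    # Should essentially never happen; log it instead of silently ignoring it.
    logger.warning("no public_address in %s databag for relation %s", unit.name, relation.name)
else:
    alertmanagers.append(address)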
