Remote Executors #84
@gaby This isn't about push notifications; it's about being able to deploy multiple instances of Uptime Kuma that all feed back to a central instance. The idea is that I could deploy copies of Uptime Kuma into AWS EU-West-1, US-East-2, and APAC-West-1, but have them all report their latency back to an instance in EU-West-2 that shows the actual data, rather than just being able to push notifications from anywhere on the planet to my browser/mobile device. I'd rather keep this open, if people are happy to do so.
Yes, we should keep this open. Don't forget to give a 👍 to your post.
Very much bike-shedding, but I couldn't resist: given that Kuma means bear, I propose we call remote executors (at least in the many-to-one case) "Cub" instances, with the primary being the "Mother" instance.
I like this idea a lot, coming from the updown.io service. But this kind of setup does come with some complex caveats, which I also run into from time to time with updown.io and others:
I do like the idea a lot, as I know it from updown.io, but I think it's a challenge to get something like this developed properly, let alone deployed properly.
I agree it would be nice to have, but I think Kuma is supposed to be lightweight, not enterprise-ready. Not many people would use it.
I'm talking about something optional here, not required: Uptime Kuma would continue to run as a single instance by default, and the "scaling" ability would sit behind a configuration/settings flag. At the moment I'm just deploying multiple Uptime Kuma installations, but until #898 is in a better state it's very difficult to determine in Prometheus which datacentre/cloud region the metrics and alerts are coming from, and it requires custom configuration of each installation.
It would be nice to have the option of a second Docker container in a remote location handling the same checks, to prevent false positives; this could be scalable and report back with a voting system. I'd love to replace Uptime Robot at our office, but just one server handling the checks is not enough for our infrastructure monitoring. I use Uptime Kuma for personal servers and for a large Dutch Minecraft server with a complex server infrastructure. So far I am very happy with this platform, and it has a lot of potential.
I'm coming back to this comment because incident management and reporting are soon going to be added to Uptime Kuma, which to me moves it firmly into the enterprise space as far as functionality is concerned; I know very few people who run incident management on their home labs or similar! With that in mind, the concept of "remote nodes" becomes even more important: the last thing you want is to declare an incident that your entire infrastructure is down just because one instance of Uptime Kuma lost its connectivity. I realise that most people will be using additional tooling, but this is a small thing that would appear to make it a lot easier to monitor disparate systems and consolidate the results in one place.
It could use something like the Paxos distributed consensus algorithm. It would be cool to have this option. For example, I check the operation of a service from several servers, but one of them has a network problem; that server will report "there was a problem." If the check ran from two or more places and only one server reported a problem, I would conclude that the problem is on the checking server, not on the service being checked.
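A simple majority vote across independent checkers would already cover the false-positive case described above; full Paxos is likely overkill when all checkers report to a single primary. A minimal sketch of the voting idea, with hypothetical names and not actual Uptime Kuma code:

```typescript
// Minimal sketch of quorum voting over independent checkers:
// the target is only declared down when a majority of the
// checkers that reported agree it is down.
type Vote = { checker: string; up: boolean };

function quorumStatus(votes: Vote[]): "up" | "down" | "inconclusive" {
  if (votes.length === 0) return "inconclusive";
  const down = votes.filter((v) => !v.up).length;
  // A single dissenting checker is treated as a checker-side problem.
  return down * 2 > votes.length ? "down" : "up";
}

// Example: two checkers see the service up, one has a network problem.
console.log(
  quorumStatus([
    { checker: "eu-west", up: true },
    { checker: "us-east", up: true },
    { checker: "apac", up: false },
  ])
); // "up" -- the single dissenting report is outvoted
```

The design choice here is deliberate: ties and lone dissenters resolve to "up", matching the commenter's reasoning that one failing report out of several more likely indicates a problem at the checking server.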
@hanisirfan @jmiaje @officiallymarky add your 👍 to the initial issue if you want this feature and please don't spam everyone that is subscribed to this issue with unhelpful comments. That makes it take longer. This applies to GitHub and other places in general!
One of the major benefits of something like this (which is how I came across this issue) is supporting multiple hosts with different setups, e.g. a primary/master uptime-kuma instance on a VPS outside the main infrastructure (especially for self-hosting). The primary is then able to consolidate the information without exposing the services publicly, including consolidating maintenance windows via a single node. This would go hand in hand with @MikeBishop's suggestion of mirroring a monitor from a slave instance.
I could do this now with something like Home Assistant, but I would prefer to have this option!
I have added another method to replicator-kuma and documented the different methods at https://github.com/Sid-Sun/replicator-kuma. I haven't yet tried upgrades, but I've been using it in snapshot mode for a few months without issues so far.
Any news on this? I'm not into enterprise or anything, but I would love to run multiple instances. My view on this might be a bit different but also a bit simpler: multiple nodes that all show the same data and also monitor each other, more or less like a CDN. Right now there is no redundancy except what you build by hand, which is possible but a bit clunky.
I don't think it is going to happen.
I've been asking and waiting since 2021; I also don't think it's ever going to happen.
I think the basic use case behind this feature (monitoring one thing from multiple vantage points) can also be addressed via RIPE Atlas. I plan on implementing a monitor for this when I have time again. About the redundancy aspect:
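On the RIPE Atlas suggestion above: a minimal sketch of what pulling results might look like against the public RIPE Atlas REST API (`/api/v2/measurements/{id}/results/`). The measurement ID is a placeholder, and the result fields assumed here follow the documented ping result format, so treat this as an illustration under those assumptions:

```typescript
// Hypothetical sketch: fetch the results of a RIPE Atlas ping
// measurement and decide reachability by majority vote across probes.
const MEASUREMENT_ID = 12345678; // placeholder, not a real measurement

interface AtlasPingResult {
  prb_id: number; // probe that ran the check
  avg: number;    // average RTT in ms; assumed negative when all packets were lost
}

async function checkViaAtlas(): Promise<void> {
  const url = `https://atlas.ripe.net/api/v2/measurements/${MEASUREMENT_ID}/results/?format=json`;
  const res = await fetch(url);
  const results = (await res.json()) as AtlasPingResult[];

  const up = results.filter((r) => r.avg > 0).length;
  const total = results.length;
  // Majority vote: only report down when most probes agree.
  console.log(up * 2 >= total ? "UP" : "DOWN", `(${up}/${total} probes reachable)`);
}

checkViaAtlas().catch(console.error);
```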
Given the engineering resources we currently have, I would classify this as out of scope.
Also interested in this. I monitor several networks, and it would be nice to have a VM with a Kuma instance watching all of the local devices there and reporting back to my main mothership. That would help avoid manually visiting several different Kuma instances to check up on every network.
If this is out of scope, could you add a list of workarounds in a wiki and close this? I've seen:
Are there any other ideas or workarounds that I missed?
Among other things, there are benefits to doing this for remotely monitoring services on a locked-down system, and for distinguishing between services that are "unreachable from this specific monitor" and services that are "down". That is rather important to me, as the internet connection between my Uptime Kuma proxy instance and the remote server it monitors is shaky, meaning my monitor is sometimes unavailable. I'd rather move the monitor onto the frontend server, but then it couldn't reach the services it tests because of firewalls, so the solution for me would be a remote executor that sends data to a frontend. Hope this issue moves forward :)
I think this is the most-requested feature :))
I found this repo, https://github.com/CodeSpaceCZ/Uptime-Status, but it doesn't work.
I changed to the openstatus project (also open source), as they have this feature natively. For simpler, single-check use cases, Uptime Kuma is still a perfect solution.
Oh yeah, openstatus is cool, but I can't manage to self-host it. I even offered to pay the owner to set it up for me, but I just got ghosted.
I was just thinking about this, as I'd like to see something similar. If it helps move it forward, here's my idea for how this could be achieved: add a new type of monitor, "Remote Uptime Kuma". When adding it, you would register a remote instance of Uptime Kuma (much as you add a new Docker host to Uptime Kuma today) and then select the monitor you want to watch. That could be one of the monitors on the remote instance, or a "group" monitor, which means it could even be a top-level group. This preserves the main architecture of Uptime Kuma and just uses a new monitor type to implement the remote monitoring capability. Thoughts?
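To make the suggestion concrete, here is a rough sketch of what such a monitor type's check could look like. Everything below is an assumption for illustration: the status-page heartbeat endpoint and the response shape may differ between Uptime Kuma versions, and the config type is made up.

```typescript
// Hypothetical "Remote Uptime Kuma" monitor check. Assumes the remote
// instance has a public status page whose heartbeat data is readable
// over HTTP; the exact endpoint/shape is an assumption, not a stable API.
interface RemoteKumaConfig {
  baseUrl: string;        // e.g. "https://kuma-eu.example.com" (placeholder)
  statusPageSlug: string; // slug of a status page on the remote instance
}

async function checkRemoteKuma(cfg: RemoteKumaConfig): Promise<"up" | "down"> {
  const res = await fetch(
    `${cfg.baseUrl}/api/status-page/heartbeat/${cfg.statusPageSlug}`
  );
  if (!res.ok) return "down"; // remote instance itself unreachable

  // Assumed shape: { heartbeatList: { [monitorId]: { status: number }[] } }
  const data = await res.json();
  const lists = Object.values<any>(data.heartbeatList ?? {});
  // Report up only if the latest beat of every monitor in the group is up.
  const allUp = lists.every((beats) => beats.at(-1)?.status === 1);
  return allUp ? "up" : "down";
}
```

One nice property of this shape is the one mentioned above: the central instance only needs outbound HTTP to each remote, and the remotes never have to expose the monitored services themselves.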
OK, so this is quite possibly taking the project way past what it was ever intended to be, but stay with me on this one...
It would be amazing if I could launch "remote executors" and install them on various devices/virtual instances but have them report back to a central Uptime-Kuma instance.
This is a feature that some of the more advanced site-monitoring tools have, and would allow me to spin up instances across the globe to see what the response time was from each region etc.
As a rough outline, I'd probably be looking for the agents to either post their results back to the "primary" setup via HTTPS, or just load the results onto an MQTT Queue and have the "primary" read those results and log them.
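To make that outline concrete, a minimal sketch of the HTTPS variant follows. The `/api/remote-results` endpoint, the payload shape, and the hostnames are all placeholders invented for illustration, not an existing Uptime Kuma API; an MQTT variant would look the same with a `publish()` call in place of the POST.

```typescript
// Hypothetical remote-executor loop: measure latency to a target and
// POST the result to a central "primary" instance over HTTPS.
const PRIMARY = "https://kuma-primary.example.com"; // placeholder
const REGION = "eu-west-1";

async function runCheck(targetUrl: string): Promise<void> {
  const started = Date.now();
  let up = false;
  try {
    const res = await fetch(targetUrl, { signal: AbortSignal.timeout(10_000) });
    up = res.ok;
  } catch {
    up = false; // timeout or network error counts as down from this region
  }
  const latencyMs = Date.now() - started;

  // Report the observation; the primary logs it tagged with the region.
  await fetch(`${PRIMARY}/api/remote-results`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ region: REGION, target: targetUrl, up, latencyMs }),
  });
}

// Run the check once a minute.
setInterval(() => runCheck("https://service.example.com"), 60_000);
```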
Having it as a lightweight app would also mean that I could deploy onto a Raspberry Pi or similar if I didn't want to use virtual machines in a cloud-provider.