Looking for a new maintainer #96
Comments
Completely different approach. Shepherd is a shell script; watchtower is written in Go.
Yes, I intentionally did not start off from watchtower when starting shepherd, because I realized it could be implemented with much less complexity. I do not think there is much benefit in fusing the two projects. With that said, if someone wants to implement swarm support as a new feature of watchtower, just go ahead. Ah, and thanks for the thanks ;)
What @tigerblue77 said; a lot of respect and thanks for your work @djmaze. Personally, I don't have any interest in swarm, nor any bandwidth to also take on maintaining shepherd. Good luck with your future projects! 🎉
Hey @djmaze - since we are still using Docker Swarm and shepherd for some time (I think, at least), I can see myself helping with project maintainership (but would really rather not do that all alone, so if someone else wants to help, too, please comment)! Do you already have thoughts on the technical details of passing on ownership? Would you prefer to keep the repository? Or should we create a GitHub organization etc. for the project?
Really looking forward to hearing what approach you have moved on to. Have you moved on from swarm to k8's? Or still a swarm based infra? Or something else? I guess I'll just have to be patient and wait for the blog unless you can share any more details! :-) |
Having watched this for three weeks, I will step up, as I use it frequently in my homelab setup. 👍🏼
@martadinata666 @moschlar If you want, we can add it to the containrrr organization with you as the maintainers, since this aligns well with the goals of the "organization" (maintaining abandoned containerization tools). |
Sorry for the late response. It sounds sensible to move this project to the containrrr organization, as this project is so closely related to watchtower. Also, I appreciate the offers for helping with / taking over maintainership. It would feel best to me to hand over maintenance to people who have already demonstrated their abilities by contributing code, so @moschlar looks like a good fit in this respect. I currently think it would be best to move to the new organization and make 1 or 2 more people contributors, but leave me in for now as a contributor so I can have some oversight (and maybe give some "senior" advice), at least in the beginning. So my preference would be to move to the containrrr organization. I am doing this kind of thing for the first time, so if people have (best practice) ideas for handling this process, feel free to comment!
@dazinator No, I am still using (and loving) Docker swarm, as I think Kubernetes is much too complicated for most use cases. I believe k8s only makes sense for medium to large enterprises who can afford to set up big clusters with dedicated devops teams and who need fine-grained permissions / policies etc. For my private and small business use cases, swarm makes so much more sense because it keeps complexity to a minimum.

Instead of using shepherd, my new approach is using real infrastructure-as-code. The swarm stack yamls are checked into their own git repositories – one repository per swarm cluster. And I try to always reference specific image versions. Most images nowadays use semantic versioning. I use Renovate on those repositories, which will automatically create PRs for image updates. I can then merge the updates and deploy the stacks with the new versions. This feels like a much cleaner and safer approach to me. (In any case, I strongly advise not to use the …)

(I also use a selfmade tool for improving the swarm deployment workflow. Unfortunately there is no real documentation yet.)
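The approach described above could look roughly like this; the service name, image tag, and file layout here are illustrative assumptions, not taken from the actual repositories:

```yaml
# Hypothetical stack file (e.g. stack.yml) checked into a per-cluster
# git repository. The image tag is pinned to an exact semantic version
# so that Renovate can detect it and open a PR when a newer version
# is released; merging the PR and redeploying the stack applies it.
version: "3.8"
services:
  web:
    image: nginx:1.25.3   # pinned version, never a floating tag
    deploy:
      replicas: 2
```

After merging a Renovate PR, the stack would be redeployed with something like `docker stack deploy -c stack.yml mystack`, so every running version is traceable to a git commit.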
@djmaze I have invited you to the organization. You should be able to make the transfer when you are ready. @moschlar and @martadinata666 have invites as well, but I won't be able to set their roles until the repo is transferred (or you can do it of course, since you should be the admin of the repo after the transfer as well). |
Now it's just the issue of how to create a logotype that reflects all the mixed metaphors of …
Great agreement, thanks everyone! |
I can give it a go. I imagine a cluster of flying cargo containers with wings, shepherded by a... whale with wings? 😂 Or something. |
Or just something like https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md with a shepherd instead of the stork. 😁 |
I know it makes me a bit of an a-hole, but as hinted by the name of the repo, it's a crane not a stork 😂 Suggestion is good though! I'll see what I can do. |
@piksel Really sorry for letting this sit for so long. The invitation is now expired, can you send a new one? I follow up ASAP then. |
@djmaze You should be able to transfer using https://github.com/djmaze/shepherd/transfer and selecting the containrrr organization.
It is done! 🎊 |
@moschlar and @djmaze are now the maintainers, and @martadinata666 is set to triage for now. either of the maintainers can grant higher permissions when appropriate. |
@piksel Thanks! I guess ideally the "official" docker image should also be moved to the containrrr organization.
It should be possible to do that directly using GitHub Actions, no? I can take a stab at creating a workflow for it.
Yeah, should be possible as well. Please make sure to port not only the image build but also the shellcheck run from the drone config. Cool! |
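A rough sketch of such a workflow, covering both jobs mentioned above. The file name, action versions, and image tag are assumptions for illustration; the actual workflow was contributed separately:

```yaml
# Hypothetical .github/workflows/ci.yml: runs shellcheck on the main
# script and builds the docker image, mirroring the old Drone config.
name: CI
on: [push, pull_request]

jobs:
  shellcheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # shellcheck is preinstalled on the ubuntu runners
      - run: shellcheck shepherd

  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v5
        with:
          push: false   # a release workflow would push to containrrr/shepherd
          tags: containrrr/shepherd:latest
```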
Closing this, as the migration has now happened.
Removing new maintainer text as issue containrrr#96 has now been completed and closed.
[Full Changelog](0.7.0...1.8.0)

**Breaking changes:**

- The docker image registry location has been changed to the containrrr organisation: `containrrr/shepherd`

**Implemented enhancements:**

- armhf support [\#108](#108)
- Switch to official docker image v24 [\#107](#107) ([djmaze](https://github.com/djmaze))
- Restrict runtime of "docker service update" using "timeout" [\#98](#98) ([fooflington](https://github.com/fooflington))
- Add example for usage with swarm-cronjob [\#89](#89) ([djmaze](https://github.com/djmaze))

**Fixed bugs:**

- Can't update some services: no such manifest [\#105](#105)
- Service gets stuck when calling "docker service update" and won't progress [\#97](#97)
- fix: docker service update with `--detach=false` hangs on services wi… [\#104](#104) ([AliRezaBeitari](https://github.com/AliRezaBeitari))
- Fix defunc VERBOSE handling [\#91](#91) ([sebthom](https://github.com/sebthom))

**Closed issues:**

- How does it determine if there is an update or not? [\#111](#111)
- Looking for a new maintainer [\#96](#96)
- New OCI manifest issue [\#92](#92)
- Run service update at a fixed time [\#88](#88)
- docker swarm 20.10.12 | "docker service update" requires exactly 1 argument. [\#83](#83)
- Error updating service, does not exist or it is not available when using a duplicate registry [\#78](#78)

**Merged pull requests:**

- Add apprise type and additional error notification [\#118](#118) ([andyloree](https://github.com/andyloree))
- Rename image in docs [\#114](#114) ([moschlar](https://github.com/moschlar))
- Fix release workflow [\#113](#113) ([moschlar](https://github.com/moschlar))
- Update README.md [\#103](#103) ([andrew-dixon](https://github.com/andrew-dixon))
- ci: add basic github actions for build/release [\#101](#101) ([piksel](https://github.com/piksel))
- correct misleading description of WITH\_NO\_RESOLVE\_IMAGE [\#100](#100) ([alex-vg](https://github.com/alex-vg))
- Move example configs to their own folder [\#99](#99) ([djmaze](https://github.com/djmaze))
- Add documentation about `REGISTRIES_FILE` [\#94](#94) ([tito](https://github.com/tito))
- Minor refactoring [\#90](#90) ([sebthom](https://github.com/sebthom))

Closes: #117
Personally, I haven't been using shepherd for a long time now. (I am pursuing a different, infrastructure-as-code based approach which I might blog about in the near future.)
That's a bad precondition for maintaining a project. So if there is anyone who is still actively using it (and preferably already demonstrated their ability to contribute to this project), please step up and show your interest here.