
far-away-unfinished-wip-idea-like-thing: varnish instead of service in kubernetes/openshift #12

ibotty opened this issue Dec 24, 2015 · 6 comments


ibotty commented Dec 24, 2015

Adeel asked me half a moon ago on Hacker News to share my thoughts regarding zipnish in kubernetes. So here they come.

Kubernetes Services vs Zipnish

Services in kubernetes/openshift do in part what varnish/zipnish does:

  • provide a stable endpoint for other parts of the cluster to connect to,
  • distribute traffic between (potentially many) workers.

Now, services can be (and in the future will be) implemented using iptables (on linux) without any userspace component, so it might be hard to beat them in performance numbers. But of course zipnish, being varnish under the hood, has many great advantages!

So I hereby propose thinking about vaksr: Varnish as kubernetes service replacement!

Scope

Afaict, it is feasible for varnish/zipnish to be a drop-in replacement for kubernetes services without extensive changes to either kubernetes or varnish/zipnish. Drop-in with regard to containers accessing and containers serving, not with regard to deployment. I sketch some ideas below to make deployment possible entirely within kubernetes.

Design points

Varnish could (and arguably should, when seen as a replacement) run on each node, at least when not caching. Does zipnish handle multiple varnish nodes? When caching, should there be a smart load balancer in front that directs requests to the right instance so that the hit rate is optimal? This might be way out of scope though.

Should there be one varnish instance per service, or should one instance handle multiple services? The diagram in the presentation suggests the latter. (Can Zipnish handle multiple varnish instances handling different services?) Using one instance for many services might complicate things as a drop-in replacement, because kubernetes accesses services via IP address, not necessarily via DNS, so varnish cannot count on a hostname being set in the request! That might be overcome by using one IP address per service and routing all of them to the varnish instance.

Varnish should be deployed as a kubernetes replication controller (or daemon set, when using one per host). This controller will manage a pod consisting of the following containers:

  • varnish, doing what varnish is doing,
  • zipnish, doing what zipnish is doing, and
  • a small process watching kubernetes for interesting events and adjusting varnish's config (a sketch follows below).

That pod will be exposed by (one or many, see above) kubernetes services so that other containers can consume it using kubernetes means (environment variables, DNS).
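To make the watcher a bit more concrete, here is a rough sketch of what it could do for a single service, assuming the official kubernetes Python client and a varnishadm that the container can reach; the service name, VCL layout, and naming scheme are all made up, not an existing zipnish component:

```python
# Hypothetical sketch of the "watcher" container: regenerate varnish's backend
# list whenever the endpoints of a kubernetes service change.  Assumes the
# official kubernetes Python client and a reachable varnishadm.
import subprocess
import tempfile

from kubernetes import client, config, watch


def render_vcl(endpoints):
    """Build a VCL with one backend per (ip, port) endpoint, pooled round-robin."""
    backends = "\n".join(
        'backend be_%d { .host = "%s"; .port = "%s"; }' % (i, ip, port)
        for i, (ip, port) in enumerate(endpoints))
    adds = "\n".join(
        "    pool.add_backend(be_%d);" % i for i in range(len(endpoints)))
    return ("vcl 4.0;\n"
            "import directors;\n\n"
            "%s\n\n"
            "sub vcl_init {\n"
            "    new pool = directors.round_robin();\n"
            "%s\n"
            "}\n\n"
            "sub vcl_recv {\n"
            "    set req.backend_hint = pool.backend();\n"
            "}\n" % (backends, adds))


def reload_varnish(vcl_text, generation):
    """Load the freshly rendered VCL and switch varnish over to it."""
    with tempfile.NamedTemporaryFile("w", suffix=".vcl", delete=False) as f:
        f.write(vcl_text)
        path = f.name
    name = "vaksr-%d" % generation
    subprocess.check_call(["varnishadm", "vcl.load", name, path])
    subprocess.check_call(["varnishadm", "vcl.use", name])


def main():
    config.load_incluster_config()  # the watcher runs inside the cluster
    v1 = client.CoreV1Api()
    generation = 0
    for event in watch.Watch().stream(v1.list_namespaced_endpoints, namespace="default"):
        ep = event["object"]
        if ep.metadata.name != "my-backend":  # hypothetical service to front
            continue
        addrs = [(a.ip, str(p.port))
                 for s in (ep.subsets or [])
                 for a in (s.addresses or [])
                 for p in (s.ports or [])]
        if addrs:
            generation += 1
            reload_varnish(render_vcl(addrs), generation)


if __name__ == "__main__":
    main()
```

In a real setup the watcher would of course track all relevant services and probably debounce reloads; the sketch is only meant to show the shape of the loop.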

Deploying vaksr in kubernetes

Every kubernetes object has labels. We could use the org.varnish-cache.vaksr namespace to carry information on whether to cache, how much memory to use (not possible when using multiple services per varnish instance, right?), a pod's priority (compared to other pods), etc.
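For illustration only, such labels might be read like this (a sketch assuming the kubernetes Python client's service objects; the key names and defaults are invented, not an agreed-upon schema):

```python
# Hypothetical per-service configuration read from labels in the
# org.varnish-cache.vaksr namespace; key names and defaults are made up.
def vaksr_config(service):
    """Extract vaksr settings from a kubernetes service object's labels."""
    labels = service.metadata.labels or {}
    return {
        "cache": labels.get("org.varnish-cache.vaksr/cache", "false") == "true",
        "memory": labels.get("org.varnish-cache.vaksr/memory", "256m"),
        "priority": int(labels.get("org.varnish-cache.vaksr/priority", "0")),
    }
```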

Some software could watch for new (kube) services to appear, record how they are defined, and rewire them to route to varnish. (The routing is defined through so-called selectors; the watcher would record the old selectors in labels of the replaced service. That part should be pretty straightforward.) When using only one varnish, that code could be part of the container that changes varnish's config. It would need way more privileges though!
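A rough sketch of that rewiring step, again assuming the kubernetes Python client; the varnish selector, annotation key, and service name are invented, and I store the old selector in an annotation rather than a label because label values are quite restricted:

```python
# Hypothetical sketch of rewiring an existing kubernetes service to the varnish
# pods while remembering its original selector.  The old selector is stored in
# an annotation here, since label values cannot hold it verbatim.
import json

from kubernetes import client, config

VARNISH_SELECTOR = {"app": "vaksr-varnish"}  # made-up label on the varnish pods


def rewire_service(v1, name, namespace="default"):
    svc = v1.read_namespaced_service(name, namespace)
    old_selector = svc.spec.selector or {}
    patch = {
        "metadata": {"annotations": {
            "org.varnish-cache.vaksr/original-selector": json.dumps(old_selector),
        }},
        "spec": {"selector": VARNISH_SELECTOR},
    }
    v1.patch_namespaced_service(name, namespace, patch)


if __name__ == "__main__":
    config.load_incluster_config()  # needs enough privilege to patch services
    rewire_service(client.CoreV1Api(), "my-backend")  # hypothetical service name
```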

Is vaksr a good code name?

I mean, vaccines are good, right?

adeelshahid self-assigned this Dec 25, 2015
adeelshahid (Contributor) commented Dec 25, 2015

Hi Tobias @ibotty,

Thank you for writing back 😄 It's 6:30am on a Christmas morning and yes, I am programming, but I'm on vacation right now. I'll come back to this properly in the new year.

In short, you can install Zipkin alongside each Varnish instance but feed the data into a single MySQL server. The UI can be installed anywhere that can connect to MySQL to view it.

For the name stuff I would refer you to @perbu.

Adeel


perbu commented Jan 4, 2016

I don't really have an opinion on the name. I agree vaccines are good.

cassiussa commented:

> Some software could watch for new (kube) services to appear, record how they are defined, and rewire them to route to varnish. (The routing is defined through so-called selectors; the watcher would record the old selectors in labels of the replaced service. That part should be pretty straightforward.) When using only one varnish, that code could be part of the container that changes varnish's config. It would need way more privileges though!

I'm also interested in exploring this further; it's something I've been toying with in my head recently. If Zipnish (why not Zarnish?) is targeted at microservice architectures, it would seem to me that something along these lines would be important, especially autodetection of services and reconfiguring them to route to Varnish. Doing it manually would seem prohibitive.


ibotty commented Feb 8, 2016

Are you interested enough that you would also want to invest time in coding it up? If so, let's discuss. I am always a little time-constrained, but motivation is always higher with another person working on it as well.

BTW: now that kubernetes has DaemonSets, it might make sense to support not one varnish, but one daemonset with an individual selector that makes sense for the cluster.

cassiussa commented:

Unfortunately I can't commit the time needed these days to be of much assistance with developing this. I meant more that the functionality you spoke of would likely be something the organization I'm employed by would consider implementing, if it were available, as we move more towards microservices.


ibotty commented Feb 12, 2016

Would the organization you are employed by consider funding part of the development? I can't commit much company time to this, because we don't have any immediate need for this.
