Consider more efficient ways of supporting additional cloud platforms #719
Comments
One random idea I had was to teach … Obviously, the downside there is that users can no longer just grab a URL to the cloud image they need, and it would elevate …
One thing we could start doing is not producing every artifact for every build of every stream. E.g. for …
There's no really good answer here that helps with the problem and doesn't degrade the user experience (i.e. by making users perform an extra step or use some specific tool to reconstruct the image). The only thing I can think of that doesn't take away from the UX but does help with long-term storage costs is to initially produce all the images, but later remove all non-essential images (images that can be easily reconstructed) from object storage. The other side of that is that we never store non-essential images at all and instead use a smart server that serves web requests for them and reconstructs them server-side. No option is without drawbacks :(
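The "reconstruct server-side" idea above could be sketched roughly as follows: store only one base image per unique disk format, and derive each platform's artifact from it on request. This is a minimal sketch, not the project's actual tooling — the base image filename, the mapping, and the helper are all hypothetical, and the per-platform images also differ by a stamped platform ID, which the sketch does not model. It only assumes `qemu-img convert` with its real `-O` output drivers (`vpc` is qemu's VHD driver).

```python
# Hypothetical sketch of on-demand reconstruction: given a requested
# platform, return the qemu-img command that would rebuild its image
# from a single stored base qcow2.
# NOTE: real per-platform images also differ by a stamped platform ID,
# and Azure additionally requires a fixed-size VHD; both are omitted here.

# Platform -> artifact format, taken from the list in the issue body.
PLATFORM_FORMAT = {
    "aliyun": "qcow2", "ibmcloud": "qcow2", "openstack": "qcow2",
    "exoscale": "qcow2", "azure": "vhd", "azurestack": "vhd",
    "vultr": "raw", "metal": "raw",
}

# qemu-img's driver name for each output format ("vpc" produces VHDs).
QEMU_DRIVER = {"qcow2": "qcow2", "vhd": "vpc", "raw": "raw"}

def reconstruct_command(platform: str, base_image: str = "base.qcow2") -> list[str]:
    """Build the qemu-img invocation that regenerates a platform image."""
    fmt = PLATFORM_FORMAT[platform]
    output = f"{platform}.{fmt}"
    return ["qemu-img", "convert", "-f", "qcow2",
            "-O", QEMU_DRIVER[fmt], base_image, output]

print(reconstruct_command("azure"))
```

A smart server could run the returned command on a cache miss and then serve the resulting file, so the URL a user fetches never changes.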
Describe the enhancement
Right now, we're shipping a separate image for each cloud platform we support. Sometimes that means shipping the same disk format with just different platform IDs stamped in. For example:

- `aliyun`, `ibmcloud`, `openstack`, and `exoscale` all use the same QCOW2 format
- `azure` and `azurestack` use the same VHD format
- `vultr` and `metal` are in raw format

There is now a request to add another cloud platform which looks like it might require a VHD and an OVA.
More cloud images mean more storage and longer pipelines. It seems highly inefficient to store multiple versions of e.g. a qcow2 that differ by essentially just a few bits.
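To make the inefficiency concrete, here is a rough back-of-the-envelope sketch using the platform-to-format mapping above. The 2.5 GiB image size is an illustrative placeholder, not a measured number:

```python
# Rough sketch of per-build storage duplication, assuming each platform's
# image is about the same size as the shared disk format it is built from.
# The platform -> format mapping comes from the issue text; the size is a
# hypothetical placeholder, not a real measurement.

PLATFORM_FORMAT = {
    "aliyun": "qcow2", "ibmcloud": "qcow2", "openstack": "qcow2",
    "exoscale": "qcow2", "azure": "vhd", "azurestack": "vhd",
    "vultr": "raw", "metal": "raw",
}

IMAGE_SIZE_GIB = 2.5  # hypothetical size of one cloud image

def duplicated_storage_gib(platform_format: dict[str, str]) -> float:
    """GiB stored per build beyond one copy per unique disk format."""
    unique_formats = set(platform_format.values())
    duplicates = len(platform_format) - len(unique_formats)
    return duplicates * IMAGE_SIZE_GIB

print(duplicated_storage_gib(PLATFORM_FORMAT))  # 8 platforms, 3 formats -> 12.5
```

Under these assumptions, 5 of the 8 images per build are near-duplicates, and the waste multiplies across every build of every stream.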
Let's brainstorm whether there are ways to improve the situation here without sacrificing the UX too much.