Repeatable signed urls for the same expiry #65

Closed
fourseven opened this issue Mar 5, 2015 · 10 comments

Comments

@fourseven

Hi - one thing we've noticed when V4 signatures are active is that calling Fog::Storage::AWS::File.url (on an instance, not the class) with the same expiry time results in a new, unique signature/url each time.

With V2 the output is the same for an identical expiry. Because of this, we can't give users the ability to cache the files served with a V4 signature.

Would you have any suggestions for getting the same URL when it's called multiple times with the same expiry param on V4 signed urls?

geemus added a commit that referenced this issue Mar 7, 2015
@geemus
Member

geemus commented Mar 7, 2015

@fourseven hmm. Looks like v4 was doing a relative expiry (relative to now) instead of an absolute one, which I think is probably just a mistake, but I'm not sure. Could you try that branch and see if you have better luck? (i.e. I don't think it will blow up, but we should make sure the resulting urls work as expected too.) Thanks!

@fcheung Could you also eyeball that change and let me know if I missed something or if that change was important for some reason I don't realize? (I think you probably made the change when you graciously brought v4 in.) Thanks!

@fcheung
Contributor

fcheung commented Mar 7, 2015

For AWS signature v4, expires is the number of seconds the url is valid for (see the note on x-amz-expires at http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html), whereas in v2 expires is the actual date. The date wrangling I added was to avoid breaking backwards compatibility with previous versions of fog.

Now that I think about it, only performing that conversion if expires is a date, and doing nothing if it's an integer, might be the best of both worlds.
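
A minimal sketch of that conversion (the helper name and structure are mine, not fog's actual code): treat an absolute Time/DateTime, as v2 callers pass, as a deadline and turn it into seconds remaining, while passing an integer duration straight through for SigV4's X-Amz-Expires.

```ruby
require 'date'

# Hypothetical helper, for illustration only.
def normalize_expires(expires, now = Time.now)
  if expires.is_a?(Time) || expires.is_a?(DateTime)
    # Absolute timestamp (v2-style): convert to the number of seconds remaining.
    (expires.to_time - now).to_i
  else
    # Already a duration in seconds: pass it through untouched.
    expires.to_i
  end
end

normalize_expires(Time.now + 3600)  # => 3600 (give or take rounding)
normalize_expires(86_400)           # => 86400
```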

@fcheung
Contributor

fcheung commented Mar 7, 2015

Also, the v4 signature has the current date as one of the signature params, so you'd have to fiddle with that too to get repeatable urls.
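
To make that concrete, here is a rough illustration with fog's file URL API (credentials and bucket/key names are placeholders); because the signing timestamp feeds into the v4 signature, two calls with an identical expiry still produce different URLs:

```ruby
require 'fog/aws'

storage = Fog::Storage.new(
  provider:              'AWS',
  aws_access_key_id:     'AKIA...',   # placeholder credentials
  aws_secret_access_key: 'SECRET'
)
file   = storage.directories.get('my-bucket').files.get('image.png')
expiry = Time.now + 24 * 60 * 60      # one expiry object reused for both calls

url_a = file.url(expiry)
sleep 1
url_b = file.url(expiry)

url_a == url_b  # => true under v2 signing, false under v4 (the signing date changed)
```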

@fourseven
Author

Interesting, so I'm not sure it's even possible with V4 in that case. Say I set expires to a static value, 25 hours (in seconds): it wouldn't work for subsequent requests because the date (datetime) could have changed, and therefore changed the URL.

Does that sound right?

@fcheung
Contributor

fcheung commented Mar 9, 2015

Yes, that sounds right. We could allow you to specify the date too. Note that the max expires is 1 week, though.

@geemus
Member

geemus commented Mar 9, 2015

Thanks for the clarifications.

@fourseven I guess you might need to store the generated value (instead of regenerating). And/or opt in to just using v2 in these cases.
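
A minimal sketch of the store-the-generated-value idea (the Hash store and helper name are illustrative; a real app would more likely use Rails.cache, Redis, or similar):

```ruby
# Sign each (key, expiry) pair once and reuse the result on later calls.
SIGNED_URLS = {}

def cached_url(file, expires_at)
  SIGNED_URLS[[file.key, expires_at.to_i]] ||= file.url(expires_at)
end
```

Note the expiry passed in has to be stable as well (a fixed boundary rather than `Time.now + n`), otherwise every call produces a new cache key anyway.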

@geemus geemus closed this as completed Mar 9, 2015
@fourseven
Author

@geemus - I'm worried that if the date is old then the signature won't pass. We have moved back to v2 urls (since finding this); v4 was a lot more expensive because there was no way to avoid the S3 GET request cost.

Thanks for the insight though, and hopefully amazon doesn't remove v2 anytime soon!

@geemus
Member

geemus commented Mar 9, 2015

Hmm. Good point; they have supported old stuff for a long time in my experience. Which GET request is unavoidable?

@fourseven
Author

If the URL is the same, then a browser won't ask the server for the file again as long as the Cache-Control header is set to a (large) time in the future. With different params in the URL the cache is effectively busted, forcing the browser to ask S3 for the file again, which is a GetObject request (and has a cost associated with it).

We have a lot of media on 24h expiries so that clients only need to pull down images once a day (while still keeping them somewhat restricted). Having repeatable urls has saved 50% of our S3 costs at peak (per day).
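
For reference, the repeatable-URL pattern described above can be sketched like this with v2 signing: pin the expiry to a fixed daily boundary so that every URL generated during a given day is signed with the same expiry (the helper is illustrative, not something fog provides):

```ruby
# Expire at midnight UTC the day after tomorrow, so URLs generated at any point
# today share one expiry (and one signature) while staying valid for at least 24h.
def daily_expiry(now = Time.now.utc)
  Time.utc(now.year, now.month, now.day) + 2 * 24 * 60 * 60
end

# With file being a Fog::Storage::AWS::File as in the earlier snippet:
# file.url(daily_expiry)  # => the same URL all day under v2 signing
```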

@fcheung
Contributor

fcheung commented Mar 9, 2015

Well, S3 has no way of telling whether this is a freshly generated link or an old one, so as long as expires is less than the maximum (1 week), I would have thought you should be OK.
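
Following that reasoning, repeatable v4 URLs would need the signing date pinned as well as the expiry (the "fiddling" mentioned earlier). The sketch below only shows the inputs a signer would have to hold constant; the names and approach are illustrative rather than anything fog exposed:

```ruby
now          = Time.now.utc
signing_time = Time.utc(now.year, now.month, now.day)  # pinned to 00:00 UTC today
expires_in   = 2 * 24 * 60 * 60                        # 48 hours, well under the 604800s (1 week) cap

# A SigV4 signer fed these fixed inputs (plus the same key and credentials) would
# emit identical URLs all day, and S3 would accept them: it validates the signature
# and that signing_time + expires_in hasn't passed, not how recently the URL was made.
```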
