
default_url_options host is not used with restore_cached_data #100

Closed
Chocksy opened this issue Oct 27, 2016 · 8 comments
@Chocksy
Contributor

Chocksy commented Oct 27, 2016

When using the restore_cached_data plugin, the default_url_options host does not seem to be used. The extract_metadata method in store_dimensions raises an error.

The error occurs because we define the host attribute without HTTPS, but extract_metadata appears to request the file over HTTPS anyway:

```
Errno::ECONNREFUSED - Connection refused - connect(2) for "localhost" port 443:
  /Users/razvanciocanel/.rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:879:in `block in connect'
  /Users/razvanciocanel/.rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/timeout.rb:74:in `timeout'
  /Users/razvanciocanel/.rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:878:in `connect'
  /Users/razvanciocanel/.rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:863:in `do_start'
  /Users/razvanciocanel/.rvm/rubies/ruby-2.2.1/lib/ruby/2.2.0/net/http.rb:852:in `start'
  down (2.3.6) lib/down.rb:109:in `block in open'
```

The only way we can make this work is by setting the host attribute on Shrine::Storage::S3, but that raises a warning that it is deprecated. Setting default_url_options alone is not enough.

@janko
Member

janko commented Oct 28, 2016

What is the S3 URL without any URL options? To me this error looks like it's trying to connect to HTTPS on localhost, not Amazon S3. Could you post your Shrine::Storage::S3 setup here?

The restore_cached_data plugin technically doesn't know about URLs, it only calls #open on the storage, and then the storage will return an IO object representing that file. For S3 and other remote storages, this #open is implemented by opening an IO via HTTP/HTTPS using #url (and default_url_options won't be applied here, as you noticed, because #url is called on storage directly, it doesn't go through Shrine::UploadedFile). For filesystem or database storages #open will be implemented differently.

For that reason, restore_cached_data plugin cannot apply default_url_options plugin. For Amazon S3 storage, the #url should return a presigned Amazon S3 URL (with the query parameters so that it works for private files as well), and this is what should be used by #open.
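A minimal sketch may make this flow concrete. Note that `FakeS3Storage` and its method bodies below are hypothetical stand-ins, not Shrine's actual classes: the point is only that the storage builds its own URL inside #open, so uploader-level plugins such as default_url_options are never consulted there.

```ruby
# Hypothetical illustration of the #open/#url flow described above.
class FakeS3Storage
  # The storage builds its own URL; default_url_options is an
  # uploader-level plugin and never comes into play here.
  def url(id, **options)
    "https://s3.amazonaws.com/bucket/#{id}"
  end

  # A real remote storage would do something like Down.open(url(id));
  # here we just return the URL it would open.
  def open(id)
    url(id)
  end
end

storage = FakeS3Storage.new
puts storage.open("cache/abc123")
# => https://s3.amazonaws.com/bucket/cache/abc123
# The host comes straight from the storage, regardless of any
# default_url_options configuration on the uploader.
```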

Note that if you need Shrine::Storage::S3 to use a different URI host than s3.amazonaws.com, not just as a CDN optimization but because going through the default s3.amazonaws.com wouldn't work, you can use :endpoint which will get forwarded to Aws::S3::Client:

```ruby
Shrine::Storage::S3.new(endpoint: "http://my-different-host.com", **s3_options)
```

@Chocksy
Contributor Author

Chocksy commented Oct 28, 2016

Yes, we are using localhost with the fake-s3 gem (https://github.com/jubos/fake-s3).
Our configuration looks like this:

```ruby
require 'shrine'
require 'shrine/storage/file_system'
require 'shrine/storage/s3'
require 'shrine/plugins/activerecord'

s3_options  = {} # populated earlier in our setup; initialized here so the snippet stands alone
url_options = {}

s3_options[:region] ||= 'us-east-1'
s3_options[:bucket] ||= 'bucket'
s3_options[:access_key_id] ||= '123'
s3_options[:secret_access_key] ||= 'abc'
s3_options[:endpoint] = ENV['AWS_ENDPOINT'] || 'http://localhost:10001'
# this was commented out as it raises a deprecation warning
# s3_options[:host] = "#{s3_options[:endpoint]}/#{s3_options[:bucket]}/"
s3_options[:force_path_style] = true
url_options[:host] = "#{s3_options[:endpoint]}/#{s3_options[:bucket]}/"

Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: 'shrine/cache', upload_options: { acl: 'public-read' }, **s3_options),
  avatar_store: Shrine::Storage::S3.new(prefix: 'avatars', upload_options: { acl: 'public-read' }, **s3_options)
}

Shrine.plugin :logging, logger: Rails.logger
Shrine.plugin :activerecord
Shrine.plugin :backgrounding
Shrine.plugin :default_url_options, avatar_store: { public: true }, **url_options
Shrine.plugin :restore_cached_data

Shrine::Attacher.promote { |data| ShrineUploadJob.perform_in(30, data) }
Shrine::Attacher.delete { |data| ShrineDeleteJob.perform_async(data) }
```

This allows us to use a fake S3 bucket locally without having to worry about real Amazon buckets during development. I saw that restore_cached_data does not deal with the URL itself, but the URL that reaches the extract_metadata method does not have HTTPS, so I wonder whether this has anything to do with the shrine gem or with the fastimage gem, which extracts metadata from the file to verify it's the right one after upload.

@janko
Member

janko commented Oct 28, 2016

@Chocksy Thanks for the details! I'm trying to reproduce it now, but without success so far. Here is my attempt (a self-contained script); maybe you can spot anything that differs from your setup, and try to reproduce the bug using this script:

```ruby
require "shrine"
require "shrine/storage/s3"

s3_options = {
  region:            'us-east-1',
  bucket:            'bucket',
  access_key_id:     '123',
  secret_access_key: 'abc',
  endpoint:          'http://localhost:10001',
  force_path_style:  true,
}

Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: 'shrine/cache', upload_options: { acl: 'public-read' }, **s3_options),
  store: Shrine::Storage::S3.new(prefix: 'shrine/store', upload_options: { acl: 'public-read' }, **s3_options),
}

Shrine.plugin :restore_cached_data
Shrine.plugin :determine_mime_type

class ImageUploader < Shrine
end

uploader = ImageUploader.new(:cache)
cached_file = uploader.upload(File.open(__FILE__))
cached_file.metadata["mime_type"] = "fake/mime"

class User
  attr_accessor :avatar_data
  include ImageUploader[:avatar]
end

user = User.new
user.avatar = cached_file.to_json # restore_cached_data kicks in and corrects MIME type
puts user.avatar.mime_type # outputs the correct "text/x-ruby"
```

When I was thinking about the possible reasons for your error, the only thing that came to mind was that Shrine::Storage::S3#open always sends a ssl_ca_cert parameter, which is only applicable to HTTPS endpoints. However, the above script still works even with the :ssl_ca_cert being passed in, so I discarded this hunch. But maybe you could try modifying that code to not send :ssl_ca_cert for the Fake S3 endpoint (which is HTTP), and see if it helps.
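The modification suggested above could be sketched roughly like this. The helper name `down_open_options` is purely illustrative (Shrine's real code is structured differently); it just shows the idea of forwarding TLS-related options only when the endpoint actually uses HTTPS:

```ruby
require "uri"

# Hypothetical sketch: only pass :ssl_ca_cert when the URL scheme is
# https, since the option is meaningless for a plain-HTTP Fake S3 endpoint.
def down_open_options(url, ssl_ca_cert: nil)
  options = {}
  options[:ssl_ca_cert] = ssl_ca_cert if ssl_ca_cert && URI.parse(url).scheme == "https"
  options
end

down_open_options("http://localhost:10001/bucket/key", ssl_ca_cert: "/path/to/ca.pem")
# => {}  (no TLS options for the HTTP endpoint)
down_open_options("https://s3.amazonaws.com/bucket/key", ssl_ca_cert: "/path/to/ca.pem")
# => { ssl_ca_cert: "/path/to/ca.pem" }
```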

@janko
Member

janko commented Oct 28, 2016

What would be most useful is to add a puts statement in Shrine::Storage::S3#open, and see for which URL the Down.open call raises an error.

@Chocksy
Contributor Author

Chocksy commented Oct 28, 2016

It seems the URL looks like this:

```
https://localhost/bucket/shrine/cache/9f1cbd1650f1c690e31e0b565cb34237?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=123%2F20161028%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20161028T205458Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=60f935012ffe1bae58ab7aa2767b3fc4d1811baeaff47d17611c7908374f0512
```

I'm not sure why it would add https there though.

@Chocksy
Contributor Author

Chocksy commented Oct 28, 2016

I need the #url method to call `object(id).presigned_url(:get, secure: false)` in order to get what I want.

The options in the S3#url method don't seem to be configurable from outside, though.

@janko
Member

janko commented Oct 29, 2016

@Chocksy Thank you for diving into this and pinpointing the problem. You're right that you cannot configure S3#url options at the storage level, but you shouldn't need to; aws-sdk shouldn't generate an HTTPS URL when you've passed an HTTP endpoint.

This is coincidentally something for which I submitted a fix almost a year ago (aws/aws-sdk-ruby#1027), which appears to have been included since version 2.2.25 of the aws-sdk gem. I'm betting you're running an older version of aws-sdk; let me know if upgrading fixes it.

@Chocksy
Contributor Author

Chocksy commented Oct 31, 2016

@janko Yep, that seems to be the issue. Thank you for your help. I'll close this now.

So, as a summary: in order for the shrine gem to work with Fake S3 or other HTTP endpoints, you need aws-sdk version 2.2.25 or later.
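For anyone landing here later, that constraint could be expressed in the Gemfile like so (a sketch, assuming the v2-era aws-sdk gem used in this thread):

```ruby
# Gemfile — require the release that includes the HTTP-endpoint
# presigned URL fix (aws/aws-sdk-ruby#1027)
gem "aws-sdk", ">= 2.2.25"
```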

Chocksy closed this as completed Oct 31, 2016