Support for node.js aws sdk? #127

Closed
chicagobuss opened this issue Mar 1, 2016 · 6 comments

Comments

@chicagobuss

I finally got s3proxy working with both boto and Spark, and it's really quite cool.

I then tried the Node.js AWS SDK, and it didn't work regardless of whether I enabled authentication.

With auth turned off entirely:

$ node test_download.js
{ [CredentialsError: Missing credentials in config]
  message: [Getter/Setter],
  code: 'CredentialsError',
  time: Tue Mar 01 2016 14:29:56 GMT-0600 (CST),
  originalError:
   { message: 'Could not load credentials from any providers',
     code: 'CredentialsError',
     time: Tue Mar 01 2016 14:29:56 GMT-0600 (CST),
     originalError: { message: 'Unexpected token <' } } }
CredentialsError: Missing credentials in config
    at Object.parse (native)
    at /home/jbuss/test_node_with_local_s3proxy/node_modules/aws-sdk/lib/metadata_service.js:115:38
    at IncomingMessage.<anonymous> (/home/jbuss/test_node_with_local_s3proxy/node_modules/aws-sdk/lib/metadata_service.js:74:45)
    at IncomingMessage.emit (events.js:117:20)
    at _stream_readable.js:944:16
    at process._tickDomainCallback (node.js:492:13)

With auth enabled:

$ node test_download.js
{ [AccessDenied: AWS authentication requires a valid Date or x-amz-date header]
  message: 'AWS authentication requires a valid Date or x-amz-date header',
  code: 'AccessDenied',
  region: null,
  time: Tue Mar 01 2016 14:33:07 GMT-0600 (CST),
  requestId: null,
  extendedRequestId: null,
  statusCode: 403,
  retryable: false,
  retryDelay: 15.363339916802943 }
AccessDenied: AWS authentication requires a valid Date or x-amz-date header
    at Request.extractError (/home/jbuss/test_node_with_local_s3proxy/node_modules/aws-sdk/lib/services/s3.js:327:35)
    at Request.callListeners (/home/jbuss/test_node_with_local_s3proxy/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
    at Request.emit (/home/jbuss/test_node_with_local_s3proxy/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
    at Request.emit (/home/jbuss/test_node_with_local_s3proxy/node_modules/aws-sdk/lib/request.js:596:14)
    at Request.transition (/home/jbuss/test_node_with_local_s3proxy/node_modules/aws-sdk/lib/request.js:21:10)
    at AcceptorStateMachine.runTo (/home/jbuss/test_node_with_local_s3proxy/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /home/jbuss/test_node_with_local_s3proxy/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/home/jbuss/test_node_with_local_s3proxy/node_modules/aws-sdk/lib/request.js:37:9)
    at Request.<anonymous> (/home/jbuss/test_node_with_local_s3proxy/node_modules/aws-sdk/lib/request.js:598:12)
    at Request.callListeners (/home/jbuss/test_node_with_local_s3proxy/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
@gaul
Owner

gaul commented Mar 1, 2016

I successfully tested S3Proxy master with the latest AWS SDK and added an example here:

https://github.com/andrewgaul/s3proxy/wiki/Client-compatibility-list#aws-sdk-for-javascript-in-nodejs

Can you share your test_download.js and run S3Proxy again with trace logging:

java -Djclouds.regions=us-east-1 -DLOG_LEVEL=trace -jar target/s3proxy

Note that I had to set jclouds.regions to work around aws/aws-sdk-js#919.

@gaul
Owner

gaul commented Mar 1, 2016

Perhaps your issue is related to using S3Proxy 1.4.0, which does not support AWSv4 signatures? You can either upgrade to master or specify signatureVersion: 'v2' in the AWS.S3 constructor.
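
For reference, a minimal sketch of the v2 workaround (the endpoint and credentials are the placeholder values quoted later in this thread):

var AWS = require('aws-sdk');

// Force Signature Version 2 signing so that S3Proxy 1.4.0, which lacks
// AWSv4 support, can authenticate the request.
var s3 = new AWS.S3({
  endpoint: 'http://127.0.0.1:7000',
  accessKeyId: 'superaccess',      // placeholder credentials
  secretAccessKey: 'supersecret',
  s3ForcePathStyle: true,
  signatureVersion: 'v2'
});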

@chicagobuss
Author

I built from master. Here's my test_download.js:

var AWS = require('aws-sdk');
var fs = require('fs');

AWS.config.update({ s3ForcePathStyle: true });

var filename = 'out.csv';

var ep = new AWS.Endpoint('http://127.0.0.1:7000');
var s3 = new AWS.S3({endpoint: ep, params: {Bucket: 'dabucket', Key: filename}});

var download_params = { Bucket: 'dabucket', Key: filename };
var outStream = fs.createWriteStream(filename);

s3.getObject(download_params, function(err, data) {
  if (err === null) {
    // getObject returns a response object; the file contents are in data.Body.
    outStream.write(data.Body);
  } else {
    console.log(err);
  }
  outStream.end();
});

I'll try the jclouds.regions thing and let you know if that works.
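
(For what it's worth, the metadata_service.js frames in the first stack trace suggest the SDK was falling back to the EC2 instance metadata service because it found no configured credentials; the JavaScript SDK insists on resolving credentials even when the server does not validate them, so passing dummy accessKeyId/secretAccessKey values should sidestep that CredentialsError when auth is disabled.)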

@chicagobuss
Author

No luck with that. I'm using the following proxy config (which works great with boto, by the way; I can upload and download huge files without issue):

# Local proxy settings
s3proxy.authorization=aws-v2
s3proxy.identity=superaccess
s3proxy.credential=supersecret
s3proxy.endpoint=http://127.0.0.1:7000

# Jclouds settings - https://github.com/jclouds/jclouds-site/blob/master/guides/aws-s3.md
jclouds.provider=s3
jclouds.endpoint=https://storage.googleapis.com
jclouds.s3.virtual-host-buckets=false
jclouds.strip-expect-header=true
jclouds.identity=GOOGLEDEVELOPERACCESSKEY
jclouds.credential=GOOGLEDEVELOPERSECRETKEY
jclouds.regions=us-east-1

The TRACE-level logging from s3proxy all looks like this:

Caused by: org.jclouds.aws.AWSResponseException: request POST https://storage.googleapis.com/cigtest/out.csv?uploads HTTP/1.1 failed with code 403, error: AWSError{requestId='null', requestToken='null', code='SignatureDoesNotMatch', message='The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.', stringSigned='POST

@gaul
Owner

gaul commented Mar 2, 2016

GCS raises this error since its service does not support multipart uploads:

http://stackoverflow.com/a/27830881/2800111

Can you configure your application to use single-part uploads? (There's a sketch of one approach at the end of this comment.) Unfortunately, this is not something that S3Proxy can work around.

You can also try using GCS via its native google-cloud-storage API, although this has another issue in the underlying jclouds library:

https://issues.apache.org/jira/browse/JCLOUDS-912
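
To make the single-part route concrete, here is a minimal sketch (not from the original thread; it reuses the endpoint and credentials quoted above, and the bucket/key names are placeholders). s3.putObject issues one plain PUT, so S3Proxy never initiates a multipart upload against GCS:

var AWS = require('aws-sdk');
var fs = require('fs');

var s3 = new AWS.S3({
  endpoint: 'http://127.0.0.1:7000',
  accessKeyId: 'superaccess',      // placeholder credentials from this thread
  secretAccessKey: 'supersecret',
  s3ForcePathStyle: true
});

// putObject performs a single PUT request. Unlike upload(), it needs a Body
// whose length is known up front (a Buffer, a string, or a plain file
// stream), so it cannot consume a gzip pipe of unknown length.
s3.putObject({
  Bucket: 'dabucket',   // placeholder bucket
  Key: 'out.csv',       // placeholder key
  Body: fs.createReadStream('out.csv')
}, function(err, data) {
  console.log(err, data);
});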

@chicagobuss
Author

Thanks, we had good luck with setting partSize to a very large value. That's good enough for us for now. I'd like to try your native GCS implementation idea soon, too, but are you saying s3proxy won't do multipart uploads to GCS at all?

For completeness' sake, here's our working upload script:

var AWS = require('aws-sdk');
var fs = require('fs');
var zlib = require('zlib');

var s3obj = new AWS.S3({
  endpoint: 'http://127.0.0.1:7000',
  accessKeyId: 'superaccess',
  secretAccessKey: 'supersecret',
  s3ForcePathStyle: true,
  params: {Bucket: 'dabucket', Key: 'dafatkey'},
  ManagedUpload: {maxTotalParts: 1}  // attempt to cap the uploader at one part
});

// Gzip the file on the fly; upload() accepts a stream of unknown length.
var body = fs.createReadStream('bigfile').pipe(zlib.createGzip());
// A 10 GiB partSize keeps the whole upload in a single part, avoiding the
// multipart-initiation request that GCS rejects.
var options = {partSize: 10 * 1024 * 1024 * 1024, queueSize: 1};
var params = {Body: body};
s3obj.upload(params, options).
  // on('httpUploadProgress', function(evt) { console.log(evt); }).
  send(function(err, data) { console.log(err, data); });
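
(If I understand the SDK's managed uploader correctly, it only switches to the multipart API once the stream spills into a second partSize chunk; when everything fits in one part it falls back to a single PUT, which GCS accepts. That would explain why the oversized partSize is enough.)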
