
Error: [object Object] #46

Open

dkebler opened this issue Nov 3, 2015 · 14 comments

Comments

dkebler commented Nov 3, 2015

I have an S3 bucket in website mode with URL navigation. I think I followed your readme closely, but this is what comes up:

Error: [object Object]

Here is the website endpoint (you can check the actual source from your browser):
http://images.healthwrights.org.s3-website-us-east-1.amazonaws.com/

It is in us-east-1, and I followed the REST API URL you indicated for that region (no trailing /).

Is there a typo somewhere, a setup issue, or a bug?


<div id="listing"></div>

<!-- add jquery - if you already have it just ignore this line -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>

<!-- the JS variables for the listing -->
<script type="text/javascript">
  var S3BL_IGNORE_PATH = false;
  // var BUCKET_NAME = 'images.healthwrights.org';
  var BUCKET_URL = 'https://images.healthwrights.org.s3.amazonaws.com';
  // var S3B_ROOT_DIR = 'SUBDIR_L1/SUBDIR_L2/';
</script>

<!-- the JS to the do the listing -->
<script src="https://rgrp.github.io/s3-bucket-listing/list.js"></script>
@namebrandon

Ever get this figured out? I'm seeing the same problems and trying the same options in the script as you are: applied the policies, made index.html public, etc. I can't quite figure out what is missing.

Edit: Got it working. This is what my file ended up looking like:

<html>
<head>
</head>
<body>
  <div id="listing"></div>

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
<script type="text/javascript">
  var S3BL_IGNORE_PATH = true;
  var BUCKET_NAME = 'bucketname';
  //var BUCKET_URL = 'http://bucketname.s3-website-us-east-1.amazonaws.com'; 
  var S3B_ROOT_DIR = '';
</script>
<script src="https://rgrp.github.io/s3-bucket-listing/list.js"></script>
</body>
</html>

jimbru commented Jul 3, 2016

I had the same issue. Check your console; there's probably a cross-domain policy error. If so, setting a CORS policy on your S3 bucket should solve the issue.

draeath commented Jul 31, 2016

I'm having the same problem, and I do have a CORS policy. Can you share yours?

jimbru commented Aug 23, 2016

@draeath Looks something like this?

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

@improved-broccoli

I had the same issue and finally changed my bucket policy.
First, I used the bucket policy JSON document given in this project's README, but it produced an error in the browser console. So I ended up checking the "List" and "View" checkboxes for "Everyone" in the S3 bucket console, and it finally works!

@timtribers

Ditto, same issue here.
I have permissions set (policy and console) and a CORS policy in place.
Can you give some basic debug steps to work out where the issue lies?
E.g., is there a URL I should be able to hit in my browser to check that S3 is returning something?

Also, does the S3 bucket have to be set to website hosting, or can that be left disabled? (I have tried both, and it makes no difference; I still get the same error.)

@timtribers

Doh - turns out I was specifying the website endpoint in the bucket URL, not the REST endpoint (so it should be https://BUCKET.s3-eu-west-1.amazonaws.com, not https://BUCKET.s3-website-eu-west-1.amazonaws.com).

Also, I have now confirmed that the S3 bucket does not have to be set to website hosting (if implementing using method 1 in the readme).
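
For anyone else landing here, a minimal sketch of the corrected snippet (BUCKET and the region are placeholders; substitute your own):

<div id="listing"></div>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
<script type="text/javascript">
  // REST endpoint: note there is no "-website" in the host name
  var BUCKET_URL = 'https://BUCKET.s3-eu-west-1.amazonaws.com';
</script>
<script src="https://rgrp.github.io/s3-bucket-listing/list.js"></script>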

draeath commented Mar 6, 2017

Yep, as long as the HTML and JS get returned from an HTTP GET, it should be able to run the API calls necessary to build the index. (You could actually host these files elsewhere.)

@jmukhtar

Hi, I am getting the same error.

In the console, I am getting 403 Forbidden when calling http://BUCKET_URL/?delimiter=/

We have not made the bucket public, but we have a bucket policy allowing access from specific IP addresses.

Any idea what the reason could be? I can successfully get to index.html and it downloads list.js, but after that it shows the forbidden error.

@creativekindle

I've tried a few different approaches, and I keep ending up with the same 403 issue/output:

Error: [object Object]

I have CORS and Bucket Policy set per documentation. I attempted the above solution.

I also attempted this solution (issue 80).

var S3BL_IGNORE_PATH = true;
var BUCKET_URL = 'http://BUCKET.s3.us-east-1.amazonaws.com';

My bucket is also structured just as in issue 80: xx.xxxx.xxx

In my simplest attempt per documentation:

var BUCKET_URL = 'http://xx.xxxx.xxx.s3.us-east-1.amazonaws.com';

...results in a 403 in console.

jmukhtar commented Mar 6, 2019

You need a policy that allows s3:ListBucket on "arn:aws:s3:::<bucket_name>" and s3:GetObject on "arn:aws:s3:::<bucket_name>/*".

@creativekindle

Son of a gun. That would make sense. Thanks! So, two quick things in case anyone runs into the same issue. My final parameters are:

var S3BL_IGNORE_PATH = true;
var BUCKET_URL = 'http://xx.xxxxxx.xx.s3.us-east-1.amazonaws.com';

And with bucket policy set to:

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::xx.xxxxxx.xx/*"
        },
        {
            "Sid": "AllowPublicList",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::xx.xxxxxx.xx"
        }
    ]
}

jaboutboul commented Nov 7, 2019

I was getting this error, and I'm now getting another issue after updating my config.

I'm using option #4, as I do not want to have to use the -website form in the links. My config is below, and my index.html is in the root of the bucket. I'd like to be able to access the file using the virtual-host style, just http://bucketname.s3.amazonaws.com from the root, just as the author set it up in his example; however, when I hit that page it just shows me the XML, but when I append /index.html to the end it works.

Public access is on. The bucket policy is configured, as well as CORS. See below.

index.html

<html>
<head>
  <title>My File Listing Generator</title>
</head>
<body>
  <!---<div id="navigation"></div>--->
  <div id="listing"></div>

<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script>
<script type="text/javascript">
  var S3BL_IGNORE_PATH = true;
  var BUCKET_NAME = 'mybucket';
  // var BUCKET_URL = 'https://mybucket.s3.amazonaws.com';
  var S3B_ROOT_DIR = '';
  // var S3B_SORT = 'DEFAULT';
  var EXCLUDE_FILE = 'index.html';  // change to array to exclude multiple files
  // var AUTO_TITLE = true;
  // var S3_REGION = 's3'; // for us-east-1
</script>
<script type="text/javascript" src="https://rufuspollock.github.io/s3-bucket-listing/list.js"></script>

</body>
</html>

Bucket Policy
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::s1ftp/*"
        },
        {
            "Sid": "AllowPublicList",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::s1ftp"
        }
    ]
}

CORS
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration>

MrDOS (Contributor) commented Jun 16, 2020

I ran into this, too. I think in most cases, the CORS error we're all seeing in the debug console is a red herring. The real problem occurs due to using a real DNS name (a multi-level name) as the name of your bucket, configuring s3-bucket-listing to access your bucket via a subdomain of amazonaws.com instead of a subdirectory, and accessing the bucket via HTTPS.

tl;dr:

  • Always set your BUCKET_URL to https://s3.region.amazonaws.com/bucket-name (e.g., for a bucket named foo.example.com in us-east-1, use https://s3.us-east-1.amazonaws.com/foo.example.com).
  • Explicitly set BUCKET_WEBSITE_URL to the URL at which people will reach your s3-bucket-listing installation (e.g., https://foo.example.com).
  • Leave BUCKET_NAME and S3_REGION unset.

Say we have a bucket named foo.example.com in the us-east-1 AWS region, and we access s3-bucket-listing at https://foo.example.com. On page load, s3-bucket-listing will try to figure out the bucket URL in a couple of different ways. Notably, if you've set S3_REGION, it will throw out any manually-configured BUCKET_URL you might have provided and attempt to reconstruct it:

if (typeof S3_REGION != 'undefined') {
  var BUCKET_URL = location.protocol + '//' + location.hostname + '.' + S3_REGION + '.amazonaws.com'; // e.g. just 's3' for us-east-1 region
  var BUCKET_WEBSITE_URL = location.protocol + '//' + location.hostname;
}

Or perhaps you've manually specified BUCKET_URL without any other configuration parameters, but you've done so in the subdomain style: http://foo.example.com.s3.amazonaws.com, or http://foo.example.com.s3.us-east-1.amazonaws.com, or https://foo.example.com.s3.amazonaws.com – it doesn't really matter, as long as it's a subdomain. For now, we'll assume it's https://foo.example.com.s3.us-east-1.amazonaws.com.

The first thing the browser tries to do is make an OPTIONS request to https://foo.example.com.s3.us-east-1.amazonaws.com?delimiter=/. The browser makes this request to check whether your CORS configuration is valid, but it's this request itself which fails – not any subsequent requests which may or may not be blocked by the CORS configuration. Why does it fail? Because there's no HTTPS certificate for https://foo.example.com.s3.us-east-1.amazonaws.com. Go there, see for yourself: you'll get an SSL warning in your browser. Amazon serves a wildcard certificate covering *.s3.us-east-1.amazonaws.com, yes, but wildcard certificates only cover one subdomain level: anything at *.*.s3.us-east-1.amazonaws.com or “lower” won't be covered, and requests will fail (unless you explicitly permit access). When you make the request manually, the browser can show you an error message and offer to make an exception, but AJAX requests will just fail outright.

So why doesn't a non-HTTPS request work? Because you can't make AJAX requests to a non-HTTPS URL from an HTTPS page. The request fails before it even hits the network.

If the name of your bucket contains multiple full-stop-separated parts – i.e., its AWS subdomain is more than one level below the leftmost s3 part – your HTTPS requests will fail, and s3-bucket-listing will display this error message. Your easiest solution, as noted at the top of this comment, is to use subdirectory-style access to your bucket: https://s3.us-east-1.amazonaws.com/foo.example.com. Set the BUCKET_URL, and don't set any other configuration options (BUCKET_NAME or S3_REGION) which will cause s3-bucket-listing to modify or discard your explicitly-provided bucket URL. And, you'll also want to manually set BUCKET_WEBSITE_URL so that the first entry in the breadcrumb navigation is the “pretty” URL you want your visitors to see, not the full S3 URL.
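
Putting that together, here's a minimal configuration sketch for the running example (a bucket named foo.example.com in us-east-1, served at https://foo.example.com); adjust the names for your own setup:

<div id="listing"></div>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script>
<script type="text/javascript">
  // Path-style REST URL, so the request stays under Amazon's *.s3.us-east-1.amazonaws.com certificate
  var BUCKET_URL = 'https://s3.us-east-1.amazonaws.com/foo.example.com';
  // The "pretty" URL visitors use; shown as the first entry in the breadcrumb navigation
  var BUCKET_WEBSITE_URL = 'https://foo.example.com';
  // Deliberately leave BUCKET_NAME and S3_REGION unset so list.js doesn't rebuild BUCKET_URL
</script>
<script src="https://rufuspollock.github.io/s3-bucket-listing/list.js"></script>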
