Return HTTP 413 (Request Entity Too Large) when http.max_content_length exceeded #2902
Comments
+1 - This is causing me trouble at the moment (I have a specific list of indexes to search); specifically, the Pyes Python library raises a NoServerAvailable exception when this happens, which is not very helpful!
Looks like a dupe of #2137 (or am I missing something?). You need to do some extra work to achieve this with Netty 3 IIRC, so maybe Netty 4 will help here.
+1 - Version 0.90.10:
org.elasticsearch.common.netty.handler.codec.frame.TooLongFrameException: HTTP content length exceeded 104857600 bytes.
Update: I increased the limit in /etc/elasticsearch/elasticsearch.yml and it works again:
# Set a custom allowed content length:
# http.max_content_length: 500mb
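For anyone verifying that an override like the one above actually took effect, here is a minimal sketch (Python with the requests library; the localhost address and the exact shape of the nodes-info response are assumptions) that reads the setting back from the cluster:

import requests

# Nodes info API with the "settings" metric; only explicitly set values are returned,
# so a missing key is taken to mean the default (100mb) still applies.
info = requests.get("http://localhost:9200/_nodes/settings").json()
for node_id, node in info["nodes"].items():
    limit = node["settings"].get("http", {}).get("max_content_length", "default (100mb)")
    print(node_id, limit)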
An exception with the same name can also be caused by a large header, and that case should be fixed differently: #5665
Is there a workaround for this? We are regularly getting this error on some larger documents.
@wflanagan Did you increase http.max_content_length? The issue here is about how the connection is terminated without returning HTTP status code 413 when the configured max content length is exceeded (by the way, this is an underlying issue with Netty).
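To make the difference concrete, here is a minimal sketch (Python with the requests library; the index name and the oversized payload are made up) of what a client sees today versus what it could do if the server returned a 413:

import requests

# Illustrative document assumed to exceed the configured http.max_content_length.
doc = {"message": "large message " * 10000000}

try:
    resp = requests.put("http://localhost:9200/testindex/testtype/1", json=doc)
    if resp.status_code == 413:
        # Desired behavior: an explicit signal the client can act on.
        print("413 Request Entity Too Large: shrink the document or raise http.max_content_length")
    else:
        resp.raise_for_status()
except requests.exceptions.ConnectionError:
    # Current behavior: the connection is simply dropped, so the client cannot
    # distinguish "payload too large" from a crashed or unreachable node.
    print("connection dropped; cause is ambiguous without the Elasticsearch logs")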
I can confirm that the problem still exists, and it's painful for the clients, since there's no good way to properly handle the situation or propagate the error to the user (see the linked issue for the Ruby client). I've decreased the limit when launching Elasticsearch:
$ ./tmp/builds/elasticsearch-2.4.0-SNAPSHOT/bin/elasticsearch -D es.http.max_content_length=1kb
When I try to index a document via curl, I get back an empty reply:
$ curl -v -X POST localhost:9200/test/test/1 -d @/Users/karmi/Contracts/Elasticsearch/Projects/BuildSystem/API/test/fixtures/builds_elasticsearch.json
* Hostname was NOT found in DNS cache
* Trying ::1...
* Connected to localhost (::1) port 9200 (#0)
> POST /test/test/1 HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost:9200
> Accept: */*
> Content-Length: 75680
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
This is the log output from Elasticsearch:
I don't know what options we have for handling the situation when somebody sends a request that is too big, but I think we should try hard here to be correct and return the HTTP 413 status code.
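To illustrate what a correct 413 would enable, here is a hedged sketch (Python with the requests library; the bulk URL, the NDJSON content type, and the batch-splitting strategy are assumptions for illustration, not anything Elasticsearch prescribes) of how a client could recover instead of guessing from a dropped connection:

import requests

def index_batch(url, docs):
    # docs is a list of JSON-encoded document strings; build a newline-delimited bulk body.
    body = "".join('{"index":{}}\n' + d + "\n" for d in docs)
    resp = requests.post(url, data=body, headers={"Content-Type": "application/x-ndjson"})
    if resp.status_code == 413 and len(docs) > 1:
        # The server told us the payload is too large: retry each half until the batches fit.
        mid = len(docs) // 2
        index_batch(url, docs[:mid])
        index_batch(url, docs[mid:])
    else:
        # Raises for any other error, including a single document that is itself too large.
        resp.raise_for_status()

With today's connection-drop behavior, a helper like this never gets to see the status code and can only surface an ambiguous network error.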
Netty has added the ability to respond with a 413. See netty/netty#2211
With the upgrade to Netty 4, this is now handled correctly:
Note that if you send an
Closed by #19526
+1
Currently, Elasticsearch drops the connection if http.max_content_length is exceeded. While this is acceptable behavior per RFC 2616 (http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.14), it's not particularly friendly to client libraries.
Depending on the library being used, it can be difficult to determine the exact size of the HTTP request prior to actually sending it. Additionally, when the connection is simply closed, it leaves the underlying cause of the problem somewhat ambiguous without also inspecting the elasticsearch logs.
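For comparison, the client-side workaround this forces looks roughly like the sketch below (Python; the hard-coded limit is an assumption that has to mirror the server's http.max_content_length, since the server never reports it when rejecting a request):

import json

# Must be kept in sync manually with the server's http.max_content_length (default 100mb);
# nothing in the dropped connection tells the client what the actual limit is.
MAX_CONTENT_LENGTH = 100 * 1024 * 1024

def body_fits(doc):
    # The limit applies to the encoded bytes, so the document has to be serialized
    # up front just to measure it, duplicating work the HTTP library does again on send.
    return len(json.dumps(doc).encode("utf-8")) < MAX_CONTENT_LENGTH

A 413 response would make this guesswork unnecessary.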
Proposed change: add an option to return HTTP 413 when http.max_content_length is exceeded instead of just dropping the connection.
Steps to repro the current behavior:
curl -XPUT 'http://localhost:9200/testindex/'
curl -v -XPUT 'http://localhost:9200/testindex/testtype/1' -d '{
"message": "large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message large message"
}'
Expected: HTTP status code 413 (Request Entity Too Large)
Actual: Dropped connection client-side, and a TooLongFrameException in elasticsearch log
Here's the output from curl: