
ProvisionedThroughputExceededException in kinesis client library #4

Closed
rantav opened this issue Sep 14, 2014 · 11 comments

@rantav

rantav commented Sep 14, 2014

Sometimes I see these errors in the logs.
They don't happen a lot, but they do happen.
I suppose they mean that the kinesis client reads data too fast.
I was under the impression that the client library is supposed to take care of reading the data at the right pace. Am I wrong?

Please advise....

```
ERROR [2014-09-12 13:52:33,150] com.amazonaws.services.kinesis.clientlibrary.lib.worker.ProcessTask: ShardId shardId-000000000002: Caught exception:
! com.amazonaws.services.kinesis.model.ProvisionedThroughputExceededException: Rate exceeded for stream gateway-filtered under account 671587110562. (Service: AmazonKinesis; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: 0b75a25f-3a84-11e4-a1f8-cf42abe98da9)
! at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:820) ~[gateway-packager-8601271.jar:1.0-BETA-SNAPSHOT]
! at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:439) ~[gateway-packager-8601271.jar:1.0-BETA-SNAPSHOT]
! at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:245) ~[gateway-packager-8601271.jar:1.0-BETA-SNAPSHOT]
! at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2128) ~[gateway-packager-8601271.jar:1.0-BETA-SNAPSHOT]
! at com.amazonaws.services.kinesis.AmazonKinesisClient.getRecords(AmazonKinesisClient.java:590) ~[gateway-packager-8601271.jar:1.0-BETA-SNAPSHOT]
! at com.amazonaws.services.kinesis.clientlibrary.proxies.KinesisProxy.get(KinesisProxy.java:135) ~[gateway-packager-8601271.jar:1.0-BETA-SNAPSHOT]
! at com.amazonaws.services.kinesis.clientlibrary.proxies.MetricsCollectingKinesisProxyDecorator.get(MetricsCollectingKinesisProxyDecorator.java:72) ~[gateway-packager-8601271.jar:1.0-BETA-SNAPSHOT]
! at com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisDataFetcher.getRecords(KinesisDataFetcher.java:69) ~[gateway-packager-8601271.jar:1.0-BETA-SNAPSHOT]
! at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ProcessTask.getRecords(ProcessTask.java:186) ~[gateway-packager-8601271.jar:1.0-BETA-SNAPSHOT]
! at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ProcessTask.call(ProcessTask.java:96) ~[gateway-packager-8601271.jar:1.0-BETA-SNAPSHOT]
! at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:48) [gateway-packager-8601271.jar:1.0-BETA-SNAPSHOT]
! at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:23) [gateway-packager-8601271.jar:1.0-BETA-SNAPSHOT]
! at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_65]
! at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_65]
! at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65]
! at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
```
@kumarumesh
Contributor

Hi Rantav,

Thanks for reporting this.

As you said, if you see this exception only occasionally, and your application (which uses the KCL to process a Kinesis stream) doesn't fall behind in processing the stream data, then it can be considered a benign exception and ignored.

We recently updated our documentation to reflect this; for details see the section titled "Read Throttling" at http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-additional-considerations.html

Let me know if you have further questions.

Thanks
Umesh

@rantav
Author

rantav commented Sep 18, 2014

Thank you @kumarumesh. In that case, I'd appreciate it if you could lower the log level of this message to INFO, or at least WARN. I regard ERROR as something that needs to be taken care of immediately, not something benign.

@arisha84

Hi Rantav,

I got this error as well.
Notice that there is a limit of 5 GetRecords requests per second per shard and 1,000 puts per second per shard.
You should take this into account when you determine the number of shards in your stream.

This is very important and can cause you a lot of problems!

For example, if you read less than 1 MB/sec from each shard (the max is 2 MB/sec) but do it in 7 GetRecords requests per second, you will get ProvisionedThroughputExceededException and probably won't know why. It may cause your applications to fall behind on the stream.

To solve this you need to do some fine tuning of your KinesisClientLibConfiguration: you can control the maxRecords fetched in each read and the idleTimeBetweenReadsInMillis.
Let's say you have 7 applications working on a stream: if idleTimeBetweenReadsInMillis is 1000 (1 second), each shard sees about 7 GetRecords requests per second and you'll probably get a lot of these exceptions, so you'll want to increase the idle time to 2 seconds. You also want to read as many records as possible in each getRecords call, so I recommend setting maxRecords to 10000 (the maximum allowed). A minimal sketch of such a configuration is shown below.
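For illustration, here is a minimal sketch of how those two settings might be applied when building a KinesisClientLibConfiguration. The application name, worker id, and credentials provider are placeholders (only the stream name is taken from the log above), and the exact values should be tuned to your own read pattern:

```java
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;

public class ConsumerConfig {
    static KinesisClientLibConfiguration buildConfig() {
        return new KinesisClientLibConfiguration(
                "my-consumer-app",           // application (lease table) name -- placeholder
                "gateway-filtered",          // stream name from the log above
                new DefaultAWSCredentialsProviderChain(),
                "worker-1")                  // worker id -- placeholder
            // Read as many records as possible per GetRecords call (10,000 is the API maximum).
            .withMaxRecords(10000)
            // Poll each shard less often so multiple workers stay under 5 GetRecords calls/sec/shard.
            .withIdleTimeBetweenReadsInMillis(2000);
    }
}
```

With 7 workers polling every 2 seconds, each shard sees roughly 3.5 GetRecords calls per second, comfortably under the 5 calls/sec limit.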

Hope this helps,
Ari.

@NathanChristie

I also support lowering this to the warning level.

@f-ganz

f-ganz commented Sep 15, 2015

+1

@preetpuri

I agree this should be lowered to the WARN level.

@SmiddyPence

+1 For a Warn

@zackhsi

zackhsi commented Jan 12, 2016

+1

@pfifer
Contributor

pfifer commented Oct 4, 2016

Thanks for reporting this. We'll look at handling the throttling exception and reporting it at a lower logging level.

@pfifer pfifer added this to the Release 1.7.4 milestone Jan 23, 2017
pfifer added a commit to pfifer/amazon-kinesis-client that referenced this issue Feb 27, 2017
* Fixed an issue building JavaDoc for Java 8.
  * [Issue awslabs#18](awslabs#18)
  * [PR awslabs#141](awslabs#141)
* Reduce Throttling Messages to WARN, unless throttling occurs 6 times consecutively.
  * [Issue awslabs#4](awslabs#4)
  * [PR awslabs#140](awslabs#140)
* Fixed two bugs occurring in requestShutdown.
  * Fixed a bug that prevented the worker from shutting down, via requestShutdown, when no leases were held.
    * [Issue awslabs#128](awslabs#128)
  * Fixed a bug that could trigger a NullPointerException if leases changed during requestShutdown.
    * [Issue awslabs#129](awslabs#129)
  * [PR awslabs#139](awslabs#139)
* Upgraded the AWS SDK Version to 1.11.91
  * [PR awslabs#138](awslabs#138)
* Use an executor returned from `ExecutorService.newFixedThreadPool` instead of constructing it by hand.
  * [PR awslabs#135](awslabs#135)
* Correctly initialize DynamoDB client, when endpoint is explicitly set.
  * [PR awslabs#142](awslabs#142)
pfifer added a commit that referenced this issue Feb 27, 2017
@pfifer
Contributor

pfifer commented Feb 27, 2017

This has been fixed in the latest release. It will now be a warning unless throttling occurs 6 times consecutively. Additionally, the message is now reported from the ThrottleReporter, in case you want to filter it out completely (one way to do that is sketched below).
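For anyone who wants to silence these messages entirely, one option is to raise the level for that logger in your logging configuration. A minimal sketch using the log4j 1.x API; the logger name below is an assumption (the comment above calls the class "ThrottleReporter", and the exact fully-qualified class name may differ by KCL version), so verify it against the release you are running:

```java
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class SilenceThrottlingWarnings {
    public static void main(String[] args) {
        // Hypothetical logger name -- check the actual class/package of the
        // throttle reporter in your KCL version before relying on this.
        Logger.getLogger(
                "com.amazonaws.services.kinesis.clientlibrary.lib.worker.ThrottlingReporter")
              .setLevel(Level.ERROR); // drop the WARN-level throttling messages
        // ... then start the KCL Worker as usual ...
    }
}
```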

Feel free to reopen if you have any other questions or concerns.

@pfifer pfifer closed this as completed Feb 27, 2017
@kirill-kozlov

Hey, is it possible to migrate this functionality to the 2.x version as well?
There is an open request for it: #539
