Remaining duration of <negativenumber> insufficient to load cluster #216

Open
cjxkb210 opened this issue Aug 12, 2020 · 0 comments

cjxkb210 commented Aug 12, 2020

Setup

Runtime: Node.js 10.x and 12.x
Timeout: 2 minutes 30 seconds

Error Details

2020-08-12T13:48:12.102+01:00
START RequestId: 1a5aa337-9f84-5f13-9486-f49a78bcbb46 Version: $LATEST

2020-08-12T13:48:12.115+01:00
info: Failing Batch 48e2e469-2fbd-4e98-99c6-9b028831a612 due to "Remaining duration of -89145 insufficient to load cluster"

2020-08-12T13:48:12.458+01:00
END RequestId: 1a5aa337-9f84-5f13-9486-f49a78bcbb46
REPORT RequestId: 1a5aa337-9f84-5f13-9486-f49a78bcbb46 Duration: 355.01 ms Billed Duration: 400 ms Memory Size: 256 MB Max Memory Used: 103 MB

I confirmed via the batches table that the "Remaining duration of -89145 insufficient to load cluster" error pertained to the same request, 1a5aa337-9f84-5f13-9486-f49a78bcbb46.

Expected Behaviour

Given that the Lambda timeout (2 minutes 30 seconds) was well above the actual execution time of the request (~355 ms), the function should have had ample time to execute.

Based on the log statements above, it appears that context.getRemainingTimeInMillis() returned -89145 roughly 13 ms after the request was invoked.
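
For illustration only, a guard of roughly this shape (the threshold and names below are my assumptions, not the library's actual code) would fail a batch whenever getRemainingTimeInMillis() comes back negative, regardless of how generous the configured timeout is:

```javascript
// Illustrative sketch only: the threshold value and identifiers here are
// assumptions, not the library's actual code.
const MIN_REMAINING_MS_TO_LOAD_CLUSTER = 10000;

function assertEnoughTimeToLoadCluster(context) {
  const remaining = context.getRemainingTimeInMillis();
  if (remaining < MIN_REMAINING_MS_TO_LOAD_CLUSTER) {
    // A negative value such as -89145 always trips this branch,
    // no matter what the function's configured timeout is.
    throw new Error(`Remaining duration of ${remaining} insufficient to load cluster`);
  }
}
```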

This occurred 3 times out of 48 batches in my test environment.
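
As a repro aid, a minimal handler along these lines (a sketch, not project code) should confirm whether the negative value is coming straight from the Lambda context:

```javascript
// Repro sketch: log what the Lambda context reports immediately on entry.
// With a 2 minute 30 second timeout this should be close to 150000 ms, never negative.
exports.handler = async (event, context) => {
  console.info(`getRemainingTimeInMillis() at entry: ${context.getRemainingTimeInMillis()} ms`);
  // ... normal batch processing would follow here
};
```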
