What
We are using https://github.com/animir/node-rate-limiter-flexible with ioredis to implement rate limiting across our web endpoints. Yesterday, we observed a series of errors that caused our server to raise a global uncaughtException and crash completely.
Within our endpoint, we got the first error:
TypeError: pipeline[functionName] is not a function
at /app/node_modules/.pnpm/ioredis@5.4.1/node_modules/ioredis/built/autoPipelining.js:155:31
at new Promise (<anonymous>)
at executeWithAutoPipelining (/app/node_modules/.pnpm/ioredis@5.4.1/node_modules/ioredis/built/autoPipelining.js:144:33)
at EventEmitter.rlflxIncr (/app/node_modules/.pnpm/ioredis@5.4.1/node_modules/ioredis/built/utils/Commander.js:114:63)
at RateLimiterRedis._upsert (/app/node_modules/.pnpm/rate-limiter-flexible@5.0.3/node_modules/rate-limiter-flexible/lib/RateLimiterRedis.js:108:28)
at /app/node_modules/.pnpm/rate-limiter-flexible@5.0.3/node_modules/rate-limiter-flexible/lib/RateLimiterStoreAbstract.js:203:12
at new Promise (<anonymous>)
at RateLimiterRedis.consume (/app/node_modules/.pnpm/rate-limiter-flexible@5.0.3/node_modules/rate-limiter-flexible/lib/RateLimiterStoreAbstract.js:195:12)
at d.checkRateLimit (/app/web/.next/server/chunks/6702.js:1:2410)
at d.rateLimitRequest (/app/web/.next/server/chunks/6702.js:1:2040)
I noticed that rlflxIncr is a function defined within the node-rate-limiter-flexible library: https://github.com/animir/node-rate-limiter-flexible/blob/master/lib/RateLimiterRedis.js#L35, but I cannot tell whether there is a correlation.
About 0.7 ms later, I see two exceptions being raised, both with the following stack trace:
TypeError: results[i] is not iterable (cannot read property undefined)
at /app/node_modules/.pnpm/ioredis@5.4.1/node_modules/ioredis/built/autoPipelining.js:64:25
at tryCatcher (/app/node_modules/.pnpm/standard-as-callback@2.1.0/node_modules/standard-as-callback/built/utils.js:12:23)
at /app/node_modules/.pnpm/standard-as-callback@2.1.0/node_modules/standard-as-callback/built/index.js:22:53
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
followed by
Signal: uncaughtException
Context / Versions
ioredis@5.4.1
rate-limiter-flexible@5.0.3
Redis v7 (managed ElastiCache Redis)
We ran this setup for about two months without errors, so I expect this to be an edge case. It happened roughly 20 seconds after the server started, so it may be an issue at startup rather than during ongoing operation.
We create our Redis client like this:
import Redis, { RedisOptions } from "ioredis";
// env (validated environment variables) and logger come from our own application modules.

const defaultRedisOptions: Partial<RedisOptions> = {
  maxRetriesPerRequest: null,
  enableAutoPipelining: env.REDIS_ENABLE_AUTO_PIPELINING === "true",
};

export const redisQueueRetryOptions: Partial<RedisOptions> = {
  retryStrategy: (times: number) => {
    // Retries forever. Waits at least 1s and at most 20s between retries.
    logger.warn(`Connection to redis lost. Retry attempt: ${times}`);
    return Math.max(Math.min(Math.exp(times), 20000), 1000);
  },
  reconnectOnError: (err) => {
    // Reconnects on READONLY errors and auto-retries the command.
    logger.warn(`Redis connection error: ${err.message}`);
    return err.message.includes("READONLY") ? 2 : false;
  },
};

export const createNewRedisInstance = (
  additionalOptions: Partial<RedisOptions> = {},
) => {
  const instance = env.REDIS_CONNECTION_STRING
    ? new Redis(env.REDIS_CONNECTION_STRING, {
        ...defaultRedisOptions,
        ...additionalOptions,
      })
    : env.REDIS_HOST
      ? new Redis({
          host: String(env.REDIS_HOST),
          port: Number(env.REDIS_PORT),
          password: String(env.REDIS_AUTH),
          ...defaultRedisOptions,
          ...additionalOptions,
        })
      : null;

  instance?.on("error", (error) => {
    logger.error("Redis error", error);
  });

  return instance;
};
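The instance returned by createNewRedisInstance is then handed to rate-limiter-flexible as the store client. A minimal sketch of that wiring; the keyPrefix/points/duration values below are illustrative placeholders, not our production settings:

import { RateLimiterRedis } from "rate-limiter-flexible";

// Sketch only: option values are placeholders, not our real configuration.
const redis = createNewRedisInstance();

const rateLimiter = redis
  ? new RateLimiterRedis({
      storeClient: redis, // the ioredis instance created above
      keyPrefix: "rate-limit", // placeholder prefix
      points: 100, // placeholder: allowed requests ...
      duration: 60, // ... per 60-second window
    })
  : null;

The checkRateLimit / rateLimitRequest frames in the traces above sit on top of this and end up calling consume() on such an instance.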
Assumptions
I assume that we fail to send the rlflxIncr command to Redis, but ioredis still registers a callback for it in the auto-pipeline. When the pipeline results are processed, that callback tries to read the result array at a position that was never filled, because Redis never received the command. The error then escapes the core request loop and surfaces as an uncaught exception.
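To make that concrete: rlflxIncr is a custom command that rate-limiter-flexible registers on the client (via ioredis's defineCommand, as far as I can tell), so with auto-pipelining enabled it ends up being invoked as pipeline.rlflxIncr(...). The snippet below is not ioredis source code, just a simplified model of how a callback that was counted for a command that never made it into the pipeline would fail on the results array; the [err, value] pair layout matches what pipeline.exec() returns:

// Illustrative model only, not the ioredis implementation.
type ExecResult = [Error | null, unknown];

function distributeResults(results: ExecResult[], callbackCount: number) {
  for (let i = 0; i < callbackCount; i++) {
    // If callbackCount > results.length, results[i] is undefined and this
    // destructuring throws "results[i] is not iterable".
    const [err, value] = results[i];
    console.log(i, err, value);
  }
}

// Two callbacks were registered, but only one command reached Redis, e.g.
// because pipeline.rlflxIncr(...) threw "pipeline[functionName] is not a function":
distributeResults([[null, "OK"]], 2);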
What I expect to happen
No errors should be raised or thrown within the core application loop; a failure inside the rate limiter should stay contained to the request that triggered it.
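On our side we could at least keep such failures contained with the rejection handling that rate-limiter-flexible documents for consume(): a rejection is an Error for store problems and a RateLimiterRes when the limit is hit. A sketch, reusing the checkRateLimit name from the traces and the rateLimiter and logger from above; the fail-open behaviour is illustrative, not a recommendation:

async function checkRateLimit(key: string): Promise<"ok" | "limited"> {
  if (!rateLimiter) {
    // No Redis configured (createNewRedisInstance returned null): skip limiting.
    return "ok";
  }
  try {
    // consume() resolves when the request is within the limit.
    await rateLimiter.consume(key);
    return "ok";
  } catch (rejection) {
    if (rejection instanceof Error) {
      // Store/driver failure (like the TypeErrors above): log it and fail
      // open so the error never escapes the request handler.
      logger.error("Rate limiter store error", rejection);
      return "ok";
    }
    // Otherwise it is a RateLimiterRes: the caller exceeded the limit.
    return "limited";
  }
}

That said, the two later TypeErrors escaped the consume() promise chain entirely and came up as uncaughtException, which this kind of handling cannot catch. rate-limiter-flexible also has an insuranceLimiter option that, if I read the docs correctly, falls back to another limiter when the Redis store errors out.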