Edge Limiting is sharing the hits across windows #1406

Open
SixPathsSage opened this issue Jul 18, 2023 · 3 comments
SixPathsSage commented Jul 18, 2023

I have configured the edge limiting policy (fixed window limiters) for a service that exposes 2 REST APIs, each with a different rate limit. But hitting the first API consumes the remaining limit for both APIs.

Version: 3.8.0

Steps To Reproduce
  1. Configuration:
"policy_chain": [
  {
    "name": "rate_limit",
    "version": "builtin",
    "configuration": {
      "limits_exceeded_error": {
        "status_code": 429,
        "error_handling": "exit"
      },
      "configuration_error": {
        "status_code": 500,
        "error_handling": "exit"
      },
      "fixed_window_limiters": [
        {
          "window": 60,
          "condition": {
            "combine_op": "and",
            "operations": [
              {
                "op": "==",
                "right": "/first-endpoint",
                "left_type": "liquid",
                "left": "{{uri}}",
                "right_type": "plain"
              }
            ]
          },
          "key": {
            "scope": "service",
            "name": "{{ jwt.sub }}",
            "name_type": "liquid"
          },
          "count": 10
        },
        {
          "window": 60,
          "condition": {
            "combine_op": "and",
            "operations": [
              {
                "op": "==",
                "right": "/second-endpoint",
                "left_type": "liquid",
                "left": "{{uri}}",
                "right_type": "plain"
              }
            ]
          },
          "key": {
            "scope": "service",
            "name": "{{ jwt.sub }}",
            "name_type": "liquid"
          },
          "count": 20
        }
      ]
    }
  }
]
  2. Hit both endpoints sequentially 21 times within one minute.
Current Result

Endpoint           Value Configured   Hits Allowed
/first-endpoint    10                 5
/second-endpoint   20                 15

The 6th hit of /first-endpoint fails with 429 Too Many Requests, and the 16th hit of /second-endpoint fails with 429 Too Many Requests.

Expected Result

Endpoint           Value Configured   Hits Allowed
/first-endpoint    10                 10
/second-endpoint   20                 20

The 11th hit of /first-endpoint should fail with 429 Too Many Requests, and the 21st hit of /second-endpoint should fail with 429 Too Many Requests.

Limit reducing pattern

Hits   /first-endpoint (limit after hit)   /second-endpoint (limit after hit)
1      8                                   18
2      6                                   16
3      4                                   14
4      2                                   12
5      0                                   10
6      too many requests                   9
7      too many requests                   8
8      too many requests                   7
9      too many requests                   6
10     too many requests                   5
11     too many requests                   4
12     too many requests                   3
13     too many requests                   2
14     too many requests                   1
15     too many requests                   0
16     too many requests                   too many requests

odkq commented Oct 30, 2023

I reproduced the issue with the attached configuration edge-limiting-1406.json and the following test program:

import json
import requests
from sys import exit

url = 'http://localhost:8080/{}-endpoint?user_key=foo'
results = {}

# Hit both endpoints 21 times, print the aggregated number
# of hits allowed (200) and rejected (429)
for i in range(21):
    for endpoint in ['first', 'second']:
        r = requests.get(url.format(endpoint))
        if endpoint not in results:
            results[endpoint] = {}
        if r.status_code not in results[endpoint]:
            results[endpoint][r.status_code] = 0
        results[endpoint][r.status_code] += 1

print(json.dumps(results, indent=2))

# 'or', not 'and': fail if either endpoint deviates from its limit;
# .get() avoids a KeyError when a status code never occurred
if results['first'].get(200, 0) != 10 or results['second'].get(200, 0) != 20:
    print("Test FAILED")
    exit(1)
else:
    print("Test PASSED")
    exit(0)

Launching with:

$ docker run -d --publish 8080:8080 --rm --volume $(pwd)/edge-limiting-1406.json:/edge-limiting-1406.json --env THREESCALE_CONFIG_FILE=/edge-limiting-1406.json quay.io/3scale/apicast:latest
f15b7bb4f1bf6a50877b5cc38f2480a3b2f7bef63497f1fdd09ce0ef1d646f95
$ ./test.py
{
  "first": {
    "200": 5,
    "429": 16
  },
  "second": {
    "200": 15,
    "429": 6
  }
}
Test FAILED

Looking at the code at https://github.com/3scale/lua-resty-limit-traffic/blob/count-increments/lib/resty/limit/count.lua#L47, it looks like a single shared count is incremented for a given key, even when the limits for each path differ. The key is jwt.sub (the subscriber id), which is effectively the same for the whole session.
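
As a simplified illustration of why a shared key collapses the two limiters into one counter, here is a Python model (not the actual Lua implementation; it assumes a rejected hit rolls back its increment):

# Simplified in-memory model of a fixed-window counter shared by
# two limiters; NOT the real count.lua, just an illustration.
counters = {}

def allow(key, limit):
    # Both limiters increment the same counter when their keys are equal
    counters[key] = counters.get(key, 0) + 1
    if counters[key] > limit:
        counters[key] -= 1  # assumed rollback on rejection
        return False
    return True

# "{{ jwt.sub }}" renders to the same value for both limiters
key = "service_1_subscriber"
for hit in range(1, 17):
    first = allow(key, 10)   # /first-endpoint limiter, count 10
    second = allow(key, 20)  # /second-endpoint limiter, count 20
    print(hit, first, second)

This model rejects /first-endpoint from hit 6 and /second-endpoint from hit 16, matching the limit reducing pattern reported above.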

If we use a different key for each path, as in edge-limiting-1406-differentkeys.json, the test passes (a sketch of the key change follows the test output below):

$ docker run -d --publish 8080:8080 --rm --volume $(pwd)/edge-limiting-1406-differentkeys.json:/edge-limiting-1406.json --env THREESCALE_CONFIG_FILE=/edge-limiting-1406.json quay.io/3scale/apicast:latest
3006a2208e5098368009faf66a83e9d8a93b3210d2b1d1b633f0efa01eddc653
$ ./test.py
{
  "first": {
    "200": 10,
    "429": 11
  },
  "second": {
    "200": 20,
    "429": 1
  }
}
Test PASSED
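
The attachment is not inlined in the thread, but one plausible way to give each limiter its own key (an assumption about what edge-limiting-1406-differentkeys.json does, not its verified contents) is to make the liquid name path-dependent:

"key": {
  "scope": "service",
  "name": "{{ jwt.sub }}-{{ uri }}",
  "name_type": "liquid"
}

With a distinct key per path, each fixed window limiter maintains its own counter, which is consistent with the passing test output above.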

This comment mentions the flexibility of using different limits on the same key (for example on different paths or for different groups of users).

IMHO, this may be a feature: using the same key for both paths effectively makes them a single window limiter, even though each path declares a different limit.

tkan145 (Contributor) commented Oct 31, 2023

@odkq The behavior is expected. It's especially useful when you have multiple gateways.

When using the openresty shdict, the limit is applied per gateway, so depending on which gateway a request is routed to, a user may exceed the limit. When used with Redis and sharing the same key, the limit is shared across all gateways.
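
For reference, if I read the policy options correctly, the rate_limit policy can point at a shared Redis instance via its redis_url setting, so the counters live outside any single gateway (the URL below is a placeholder):

"configuration": {
  "redis_url": "redis://backend-redis.example.com:6379/1",
  "fixed_window_limiters": [ ... ]
}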

odkq commented Nov 1, 2023

So these are per-user limits, not a mechanism to avoid call congestion (which may be configured using other nginx/openresty mechanisms?), so sharing the hit count between gateways makes complete sense. Thank you @tkan145!
