THREESCALE-10582 fix integration of upstream connection policy with camel policy #1443

Merged
eguzki merged 9 commits into master from THREESCALE-10582-upstream-policy-with-camel-policy on Feb 8, 2024

Conversation


@eguzki eguzki commented Feb 5, 2024

What

Fixes: https://issues.redhat.com/browse/THREESCALE-10582

Upstream timeouts don't work with the Camel Service. In fact, the upstream connection policy does not work whenever https_proxy is used (configured either via env vars or via policy).

This PR fixes the integration of the upstream connection policy with every https_proxy use case.

We considered adding connection options to the proxy policy and the camel policy instead, mainly because the "Upstream connection" policy refers to "upstream", which can be confusing when a proxy is being used: the connection to the upstream backend is no longer made by APIcast; instead, APIcast creates a connection to the proxy. So the upstream connection options should, ideally, only apply to connections to the "upstream" backend.

We decided to apply the upstream connection policy to any "upstream" connection APIcast initiates, whether to a proxy or to the actual upstream backend. Implementation-wise this is easier: there is no need to add extra connection parameters to the proxy policies. Furthermore, if users are already combining the upstream connection policy with http_proxy, their configuration still applies; with new connection parameters in the proxy policies, that use case would break. Additionally, if connection options were added to the policies as optional params, we would also need new env vars for the use case where proxies are configured via env vars. That is too complex just to tie the "upstream" concept to the actual backend (api_backend in the service configuration). Instead, the upstream connection policy applies to any "upstream" connection APIcast makes, regardless of whether it is a proxy or the upstream backend.
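For illustration, this is the shape of a policy chain that combines both policies. The timeout values are the ones used in the verification steps below; the camel policy configuration is elided here (see the full patches below for a complete example):

          {
            "name": "apicast.policy.upstream_connection",
            "configuration": {
              "connect_timeout": 1,
              "send_timeout": 1,
              "read_timeout": 1
            }
          },
          {
            "name": "apicast.policy.camel",
            "configuration": { ... }
          }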

Verification Steps

Upstream connection policy integration with https_proxy (camel proxy)

  • Build docker image from this git branch
make runtime-image IMAGE_NAME=apicast-test
  • Camel proxy dev environment setup
cd dev-environments/camel-proxy
make certs
  • Testing https_proxy use case: APIcast --> camel proxy --> upstream (TLS)

This environment uses the real endpoint https://echo-api.3scale.net:443 as api_backend. My round-trip latency to it is ~400 ms:

❯ curl -i -w "tcp:%{time_total}\n" https://echo-api.3scale.net:443 2>/dev/null
HTTP/1.1 200 OK
content-type: application/json
x-3scale-echo-api: echo-api/1.0.3
vary: Origin
x-content-type-options: nosniff
content-length: 524
x-envoy-upstream-service-time: 0
date: Mon, 05 Feb 2024 14:50:56 GMT
server: envoy

{
  "method": "GET",
  "path": "/",
  "args": "",
  "body": "",
  "headers": {
    "HTTP_VERSION": "HTTP/1.1",
    "HTTP_HOST": "echo-api.3scale.net",
    "HTTP_USER_AGENT": "curl/7.81.0",
    "HTTP_ACCEPT": "*/*",
    "HTTP_X_FORWARDED_FOR": "81.61.128.254",
    "HTTP_X_FORWARDED_PROTO": "https",
    "HTTP_X_ENVOY_EXTERNAL_ADDRESS": "81.61.128.254",
    "HTTP_X_REQUEST_ID": "a78c1edb-b0bf-42a2-8c44-e1ae9d4dba49",
    "HTTP_X_ENVOY_EXPECTED_RQ_TIMEOUT_MS": "15000"
  },
  "uuid": "b48cdf4d-1419-4aaf-871e-10cd9c67f68e"
}tcp:0.398028

Let's start with the timeouts set to 1 second; the request should be accepted.

patch <<EOF
diff --git a/dev-environments/camel-proxy/apicast-config.json b/dev-environments/camel-proxy/apicast-config.json
index 91201afa..8f92f029 100644
--- a/dev-environments/camel-proxy/apicast-config.json
+++ b/dev-environments/camel-proxy/apicast-config.json
@@ -44,6 +44,14 @@
           "host": "backend"
         },
         "policy_chain": [
+          {
+            "name": "apicast.policy.upstream_connection",
+            "configuration": {
+              "connect_timeout": 1,
+              "send_timeout": 1,
+              "read_timeout": 1
+            }
+          },
           {
             "name": "apicast.policy.camel",
             "configuration": {
EOF

Run environment

make gateway IMAGE_NAME=apicast-test

The request should be accepted (200 OK), as the connection timeouts should not be exceeded.

curl --resolve https-proxy.example.com:8080:127.0.0.1 -v "http://https-proxy.example.com:8080/?user_key=123"

Now, let's lower the timeouts to 100 ms, which should be exceeded because the upstream is far away.

Stop the gateway

CTRL-C

Restore apicast-config.json file

git checkout apicast-config.json

Apply 100ms timeouts

patch <<EOF
diff --git a/dev-environments/camel-proxy/apicast-config.json b/dev-environments/camel-proxy/apicast-config.json
index 91201afa..8f92f029 100644
--- a/dev-environments/camel-proxy/apicast-config.json
+++ b/dev-environments/camel-proxy/apicast-config.json
@@ -44,6 +44,14 @@
           "host": "backend"
         },
         "policy_chain": [
+          {
+            "name": "apicast.policy.upstream_connection",
+            "configuration": {
+              "connect_timeout": 0.1,
+              "send_timeout": 0.1,
+              "read_timeout": 0.1
+            }
+          },
           {
             "name": "apicast.policy.camel",
             "configuration": {
EOF

Run environment

make gateway IMAGE_NAME=apicast-test

The request should fail (502 Bad Gateway), as the connection timeouts will be exceeded.

curl --resolve https-proxy.example.com:8080:127.0.0.1 -v "http://https-proxy.example.com:8080/?user_key=123"

The logs should show the following line

[error] 19#19: *2 lua tcp socket read timed out
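If the gateway is not running in the foreground, the same line can be pulled from the container logs (assuming the compose service is named gateway):

docker compose logs gateway | grep "timed out"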

Clean the env before starting the next step

make clean

Upstream connection policy integration with https_proxy via the proxy policy (tinyproxy)

  • Build docker image from this git branch
cd ${APICAST_PROJECT} && git checkout THREESCALE-10582-upstream-policy-with-camel-policy

make runtime-image IMAGE_NAME=apicast-test
  • https_proxy dev environment setup
cd dev-environments/https-proxy-upstream-tlsv1.3
make certs
  • Testing https_proxy use case: APIcast --> tiny proxy --> upstream (TLS)

Timeouts set to 0.1 sec

patch <<EOF
diff --git a/dev-environments/https-proxy-upstream-tlsv1.3/apicast-config.json b/dev-environments/https-proxy-upstream-tlsv1.3/apicast-config.json
index 5227c5aa..34a2ed23 100644
--- a/dev-environments/https-proxy-upstream-tlsv1.3/apicast-config.json
+++ b/dev-environments/https-proxy-upstream-tlsv1.3/apicast-config.json
@@ -11,6 +11,14 @@
           "host": "backend"
         },
         "policy_chain": [
+          {
+            "name": "apicast.policy.upstream_connection",
+            "configuration": {
+              "connect_timeout": 0.1,
+              "send_timeout": 0.1,
+              "read_timeout": 0.1
+            }
+          },
           {
             "name": "apicast.policy.http_proxy",
             "configuration": {
EOF

Run environment

make gateway IMAGE_NAME=apicast-test

The request should be accepted (200 OK), as the connection timeouts should not be exceeded.

curl --resolve get.example.com:8080:127.0.0.1 -i "http://get.example.com:8080/?user_key=123" 

Now, let's simulate some network latency in the docker containers using traffic control (tc). We will add 200 ms of latency to the container running socat between the proxy and the upstream backend, the service called example.com.

Stop the gateway

CTRL-C

We are going to modify the container's network configuration, so the NET_ADMIN capability is needed.

patch <<EOF
diff --git a/dev-environments/https-proxy-upstream-tlsv1.3/docker-compose.yml b/dev-environments/https-proxy-upstream-tlsv1.3/docker-compose.yml
index 9fa735f7..0f802e8b 100644
--- a/dev-environments/https-proxy-upstream-tlsv1.3/docker-compose.yml
+++ b/dev-environments/https-proxy-upstream-tlsv1.3/docker-compose.yml
@@ -39,6 +39,8 @@ services:
     restart: unless-stopped
     volumes:
       - ./cert/example.com.pem:/etc/pki/example.com.pem
+    cap_add:
+      - NET_ADMIN
   two.upstream:
     image: kennethreitz/httpbin
     expose:
EOF

Run environment with the new config

make gateway IMAGE_NAME=apicast-test

Install the tc (traffic control) command

docker compose exec example.com apk add iproute2-tc

Add 200 ms of latency to the outbound traffic of the example.com service.

docker compose exec example.com tc qdisc add dev eth0 root netem delay 200ms
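Optionally, verify that the qdisc is in place; the output should list a netem entry with delay 200ms on eth0:

docker compose exec example.com tc qdisc show dev eth0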

The request should be rejected (503 Service Temporarily Unavailable), as the connection timeouts will be exceeded.

curl --resolve get.example.com:8080:127.0.0.1 -i "http://get.example.com:8080/?user_key=123" 

The logs should show the following line

[error] 19#19: *2 lua tcp socket read timed out
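To remove the simulated latency without recreating the container, delete the qdisc, or simply tear everything down as in the previous step:

docker compose exec example.com tc qdisc del dev eth0 root

make clean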

@@ -202,10 +202,8 @@ ETag: foobar
--- error_code: 200
--- error_log env
proxy request: CONNECT test-upstream.lvh.me:$TEST_NGINX_RANDOM_PORT HTTP/1.1
--- user_files fixture=tls.pl eval
--- error_log env
eguzki (Member Author) commented:

There cannot be two --- error_log directives; only one of them gets applied.
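Test::Nginx accepts multiple expected patterns under a single directive, one per line, so the fix is to merge the duplicated blocks into one. A sketch (the second line is a placeholder for whatever the removed block expected):

--- error_log env
proxy request: CONNECT test-upstream.lvh.me:$TEST_NGINX_RANDOM_PORT HTTP/1.1
<other expected log pattern, one per line>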

@@ -1,6 +1,8 @@
use lib 't';
use Test::APIcast::Blackbox 'no_plan';

require("http_proxy.pl");
eguzki (Member Author) commented:

One comment not related to this PR: I am thinking about running an http proxy (tinyproxy, for example) in the docker compose env we run for the e2e tests, and getting rid of this custom, perl-implemented proxy we are using. Currently, in addition to the apicast dev container, there is a redis one.

Same can be done for the camel proxy.

For backend and upstream, it is very handy to do some asserts, so "outsourcing" the upstream may not be convenient.

wdyt?
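A minimal sketch of what that docker-compose addition could look like (the service name, image and port are assumptions, not a tested setup):

  tinyproxy:
    # image name is an assumption; any maintained tinyproxy image would do
    image: docker.io/kalaksi/tinyproxy:latest
    expose:
      - "8888"  # tinyproxy's default listen port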

tkan145 (Contributor) commented:

I think there are a few tests that watch the output of this proxy module, i.e. headers. So we will need to rewrite a few tests, but on the flip side we (maybe) get more reliable tests, e.g. avoiding the case where e2e tests run before the proxy module is ready.

We will need at least 4 containers for both the dev and CI images:

  • A normal proxy
  • A camel proxy
  • A camel proxy supporting HTTPS
  • A proxy that accepts BasicAuth

I had a similar problem with #1414, where I needed 4 more containers just to be able to test redis sentinel.

As long as it works well and is easy to maintain, I really don't mind adding additional pods.

eguzki (Member Author) commented:

Looks like we have a TODO list. We can implement it step by step and remove the perl-based proxy once it is fully replaced by other proxies.

PS: the dev and CI images are currently the same one.

@eguzki eguzki force-pushed the THREESCALE-10582-upstream-policy-with-camel-policy branch from a955607 to 2673754 on February 5, 2024 14:20
@eguzki eguzki marked this pull request as ready for review February 5, 2024 15:48
@eguzki eguzki requested a review from a team as a code owner February 5, 2024 15:48
@eguzki eguzki requested a review from tkan145 February 5, 2024 17:18
Review threads on CHANGELOG.md, gateway/src/resty/resolver/http.lua and t/apicast-policy-upstream-connection.t were resolved.
@eguzki eguzki force-pushed the THREESCALE-10582-upstream-policy-with-camel-policy branch from 54d26f7 to b705c0f on February 6, 2024 10:09
@eguzki eguzki requested a review from tkan145 February 6, 2024 12:55
@eguzki (Member Author) commented Feb 6, 2024

All comments addressed.

@tkan145 (Contributor) commented Feb 8, 2024

LGTM!

@eguzki eguzki merged commit 60f977f into master Feb 8, 2024
13 checks passed
@eguzki eguzki deleted the THREESCALE-10582-upstream-policy-with-camel-policy branch February 8, 2024 09:19