This project uses Red Hat build of Quarkus 3.8.x, the Supersonic Subatomic Java Framework. More specifically, the project is implemented using Red Hat build of Apache Camel 4.4.x for Quarkus.
This Camel proxy service can be used as the proxy configured in the Red Hat 3scale APIcast Camel Service policy.
The Camel proxy service uses the OAuth2 client credentials flow to retrieve an access token from Red Hat build of Keycloak, and then sets it in the `Authorization` HTTP header before proxying the request to the upstream backend.
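At a high level, the request handling can be sketched as follows (pseudocode, not the project's actual route definition; in the real application, token acquisition and refresh are delegated to the Quarkus OpenID Connect Client extension listed at the end of this document):

```
on each proxied HTTP request:
    if there is no valid cached access token:
        POST to the Keycloak token endpoint with
            grant_type=client_credentials,
            client_id=threescale-camel-service,
            client_secret=<secret>
        cache the returned access_token until it expires
    set header: Authorization = "Bearer " + access_token
    forward the request to the upstream backend
    return the backend response to the caller
```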
- Maven 3.8.1+
- JDK 21 installed with `JAVA_HOME` configured appropriately
- A running Red Hat build of Keycloak instance. The following must be configured:
  - A confidential client with the following characteristics:
    - Client ID: `threescale-camel-service`
    - Client Protocol: `openid-connect`
    - Access type: `confidential`
    - OpenID Connect flow: service account (client credentials)
  - Replace the client secret in:
    - the `quarkus.oidc-client.credentials.secret` property in the `application.yml` file
    - the `quarkus.oidc-client.credentials.secret` property of the `threescale-camel-service-secret` in the `openshift.yml` file
  - Replace the OIDC authorization server URL in:
    - the `quarkus.oidc-client.auth-server-url` property in the `application.yml` file
    - the `quarkus.oidc-client.auth-server-url` property of the `threescale-camel-service-secret` in the `openshift.yml` file
- A running Red Hat OpenShift cluster
- A running Red Hat 3scale API Management platform
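For reference, the two `quarkus.oidc-client.*` properties mentioned above live in `application.yml`. A hedged sketch of that fragment (the exact YAML layout and the placeholder values are assumptions; only the property names come from this document):

```yaml
quarkus:
  oidc-client:
    # Replace with your Keycloak realm URL
    auth-server-url: https://<keycloak host>/realms/<realm>
    client-id: threescale-camel-service
    credentials:
      # Replace with the client secret of the confidential client
      secret: <client secret>
```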
Generate a keystore holding a self-signed certificate for the service's TLS endpoint; the certificate's subject alternative names cover the Kubernetes service DNS names used later:
keytool -genkey -keypass P@ssw0rd -storepass P@ssw0rd -alias threescale-camel-service -keyalg RSA \
-dname "CN=threescale-camel-service" \
-validity 3600 -keystore ./tls-keys/keystore.p12 -v \
-ext san=DNS:threescale-camel-service.svc,DNS:threescale-camel-service.svc.cluster.local,DNS:threescale-camel-service.camel-quarkus.svc,DNS:threescale-camel-service.camel-quarkus.svc.cluster.local,DNS:threescale-camel-service.ceq-services-jvm.svc,DNS:threescale-camel-service.ceq-services-jvm.svc.cluster.local,DNS:threescale-camel-service.ceq-services-native.svc,DNS:threescale-camel-service.ceq-services-native.svc.cluster.local
You can run your application in dev mode that enables live coding using:
./mvnw quarkus:dev
NOTE: Quarkus now ships with a Dev UI, which is available in dev mode only at http://localhost:8080/q/dev/.
Execute the following command line:
./mvnw package
It produces the `quarkus-run.jar` file in the `target/quarkus-app/` directory. Be aware that it’s not an über-jar as the dependencies are copied into the `target/quarkus-app/lib/` directory.

The application is now runnable using:
java -Dquarkus.kubernetes-config.enabled=false -jar target/quarkus-app/quarkus-run.jar
OPTIONAL: Creating a native executable
You can create a native executable using:
./mvnw package -Pnative
Or, if you don't have GraalVM installed, you can run the native executable build in a container using:
./mvnw package -Pnative -Dquarkus.native.container-build=true
You can then execute your native executable with:
./target/threescale-camel-service-1.0.0-SNAPSHOT-runner
If you want to learn more about building native executables, please consult https://quarkus.io/guides/maven-tooling.
Running Jaeger locally
Jaeger is a distributed tracing system for observability (open tracing).

💡 A simple way of starting a Jaeger tracing server is with `docker` or `podman`:

- Start the Jaeger tracing server:
podman run --rm -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 -e COLLECTOR_OTLP_ENABLED=true \
  -p 6831:6831/udp -p 6832:6832/udp \
  -p 5778:5778 -p 16686:16686 -p 4317:4317 -p 4318:4318 -p 14250:14250 -p 14268:14268 -p 14269:14269 -p 9411:9411 \
  quay.io/jaegertracing/all-in-one:latest
- While the server is running, browse to http://localhost:16686 to view tracing events.
Test locally
printf "GET http://localhost:8080/q/health HTTP/1.1\nHost: localhost\nAccept: */*\n\n" | ncat --no-shutdown --ssl localhost 9443
- Access to a Red Hat OpenShift cluster v4
- User has self-provisioner privilege or has access to a working OpenShift project
- OPTIONAL: Jaeger, a distributed tracing system for observability (open tracing).
- Login to the OpenShift cluster
oc login ...
- Create an OpenShift project or use your existing OpenShift project. For instance, to create `ceq-services-jvm`:
oc new-project ceq-services-jvm --display-name="Red Hat build of Apache Camel for Quarkus Apps - JVM Mode"
- Create secret containing the keystore
oc create secret generic threescale-camel-service-keystore-secret \
  --from-file=keystore.p12=./tls-keys/keystore.p12
- Adjust the `quarkus.otel.exporter.otlp.traces.endpoint` property of the `threescale-camel-service-secret` in the `openshift.yml` file according to your OpenShift environment and where you installed the Jaeger server.
- Package and deploy to OpenShift:
./mvnw clean package -Dquarkus.openshift.deploy=true -Dquarkus.container-image.group=ceq-services-jvm
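For reference, the `threescale-camel-service-secret` adjusted above plausibly takes the shape of the following sketch; the Kubernetes `Secret` layout and the placeholder values are assumptions, only the property names and the secret name come from this document:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: threescale-camel-service-secret
stringData:
  quarkus.oidc-client.auth-server-url: https://<keycloak host>/realms/<realm>
  quarkus.oidc-client.credentials.secret: <client secret>
  quarkus.otel.exporter.otlp.traces.endpoint: http://<jaeger collector host>:4317
```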
⚠️ Pre-requisites
- For native compilation, a Linux X86_64 operating system or an OCI (Open Container Initiative) compatible container runtime, such as Podman or Docker is required.
- Login to the OpenShift cluster
oc login ...
- Create an OpenShift project or use your existing OpenShift project. For instance, to create `ceq-services-native`:
oc new-project ceq-services-native --display-name="Red Hat build of Apache Camel for Quarkus Apps - Native Mode"
- Create secret containing the keystore
oc create secret generic threescale-camel-service-keystore-secret \
  --from-file=keystore.p12=./tls-keys/keystore.p12
- Adjust the `quarkus.otel.exporter.otlp.traces.endpoint` property of the `threescale-camel-service-secret` in the `openshift.yml` file according to your OpenShift environment and where you installed the Jaeger server.
- Package and deploy to OpenShift:
- Using podman to build the native binary:
./mvnw clean package -Pnative \
  -Dquarkus.openshift.deploy=true \
  -Dquarkus.native.container-runtime=podman \
  -Dquarkus.native.builder-image=registry.access.redhat.com/quarkus/mandrel-21-jdk17-rhel8:latest \
  -Dquarkus.container-image.group=ceq-services-native
- Using docker to build the native binary:
./mvnw clean package -Pnative \
  -Dquarkus.openshift.deploy=true \
  -Dquarkus.native.container-runtime=docker \
  -Dquarkus.native.builder-image=registry.access.redhat.com/quarkus/mandrel-21-jdk17-rhel8:latest \
  -Dquarkus.container-image.group=ceq-services-native
- An API Product configured in Red Hat 3scale API Management. For instance, the sample `Echo API` can be used.
- Add and configure the APICast Camel Service policy on the API Product
- Beware of the following note:

NOTE: You cannot use `curl` (or any other HTTP client) to test the Camel HTTP proxy directly because the proxy does not support HTTP tunneling using the `CONNECT` method. When using HTTP tunneling with `CONNECT`, the transport is end-to-end encrypted, which does not allow the Camel HTTP proxy to mediate the payload. You may test this with 3scale, which implements this as if proxying via HTTP but establishes a new TLS session towards the Camel application. If you need to perform integration tests against the Camel application, you need to use a custom HTTP client. You can use something like:

printf "GET https://<backend url> HTTP/1.1\nHost: <backend host>\nAccept: */*\n\n" | ncat --no-shutdown --ssl <camel proxy app host> <camel proxy app port>
Below is a screenshot of the Camel Service policy configuration:
Below is a sample test where you can notice the `Authorization` HTTP header added and populated with the retrieved OpenID Connect access token (`HTTP_AUTHORIZATION` header in the `Echo API` response):
http -v 'https://echo-api.apps.cluster-l5mt5.l5mt5.sandbox1873.opentlc.com:443/demo' user_key:fb61a7d34e82c83b029216a3ca2e24e6
GET /demo HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: echo-api.apps.cluster-l5mt5.l5mt5.sandbox1873.opentlc.com:443
User-Agent: HTTPie/3.2.1
user_key: fb61a7d34e82c83b029216a3ca2e24e6
HTTP/1.1 200 OK
cache-control: private
content-type: application/json
date: Wed, 10 Aug 2022 13:03:05 GMT
server: envoy
set-cookie: d0df2ffbd348521e2eef0bdddd2b78c1=83e0250661bd946000044c7d5d01a9a8; path=/; HttpOnly; Secure; SameSite=None
transfer-encoding: chunked
vary: Origin
x-3scale-echo-api: echo-api/1.0.3
x-content-type-options: nosniff
x-envoy-upstream-service-time: 1
{
"args": "",
"body": "",
"headers": {
"CONTENT_LENGTH": "0",
"HTTP_ACCEPT": "*/*,*/*",
"HTTP_ACCEPT_ENCODING": "gzip, deflate,gzip, deflate",
"HTTP_AUTHORIZATION": "Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJYbHBmQzVaT3dZUVZVdlRHREppRmxMT3lVTXhFZkFLSUdiMDdxcEN6dlBBIn0.eyJleHAiOjE2NjAxMzY3NzcsImlhdCI6MTY2MDEzNjQ3NywianRpIjoiNTUxMWJkNjgtNTk4ZS00YTU2LTk5YTUtYTdjZDk2NjFlMzhlIiwiaXNzIjoiaHR0cHM6Ly9zc28uYXBwcy5jbHVzdGVyLWw1bXQ1Lmw1bXQ1LnNhbmRib3gxODczLm9wZW50bGMuY29tL2F1dGgvcmVhbG1zL29wZW5zaGlmdC1jbHVzdGVyIiwiYXVkIjoiYWNjb3VudCIsInN1YiI6Ijc1MzljOGVkLTgxNWQtNGFiMC04N2FiLTNlYTFjNDYzZTc3MyIsInR5cCI6IkJlYXJlciIsImF6cCI6InRocmVlc2NhbGUtY2FtZWwtc2VydmljZSIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJkZWZhdWx0LXJvbGVzLW9wZW5zaGlmdC1jbHVzdGVyIiwib2ZmbGluZV9hY2Nlc3MiLCJ1bWFfYXV0aG9yaXphdGlvbiJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoiZW1haWwgcHJvZmlsZSIsImNsaWVudElkIjoidGhyZWVzY2FsZS1jYW1lbC1zZXJ2aWNlIiwiZW1haWxfdmVyaWZpZWQiOmZhbHNlLCJjbGllbnRIb3N0IjoiMy43NC42Ny4yNDgiLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJzZXJ2aWNlLWFjY291bnQtdGhyZWVzY2FsZS1jYW1lbC1zZXJ2aWNlIiwiY2xpZW50QWRkcmVzcyI6IjMuNzQuNjcuMjQ4In0.R2r5ByPw8HwcDAZzWOxpRzFKQu7dhp6aPJkT1j-UAAhVMYsqQRzWhb0nBN1Pd7svyy1pZqI_brmKSkprkCcOH8evgokDqTsTwW8DGtrBNCEEaigSwuRGnctWK2nhifjQBg3hbLxN5PO_VUXmn5bLvk6N0WKvAyFcgM-EMQXwrBEw80MjM6EnOuSAyY0vYyAK2_D_UNgtMy0sCVFH0_sMLjQBWn1ppoRkLxCSdFyF7RmPhLUVDtC7mfZ5jGYgVIzMR6rW2FvUIukycPGjsEWL9PiyIub_2ocOvgXxGMggV_rIjeI3j6jEQ-BGTLfWzeVOPTBkk2vcucyd9QgZBCPe3A",
"HTTP_FORWARDED": "for=92.169.228.162;host=echo-api.apps.cluster-l5mt5.l5mt5.sandbox1873.opentlc.com;proto=https, for=92.169.228.162;host=echo-api.apps.cluster-l5mt5.l5mt5.sandbox1873.opentlc.com;proto=https",
"HTTP_HOST": "echo-api.3scale.net",
"HTTP_UBER_TRACE_ID": "3571087cfb4b59b3:f187ff011680923e:3571087cfb4b59b3:1, 3571087cfb4b59b3:a125cfd990a3369e:e1ebfb5bbe241762:1",
"HTTP_USER_AGENT": "HTTPie/3.2.1,HTTPie/3.2.1",
"HTTP_USER_KEY": "fb61a7d34e82c83b029216a3ca2e24e6, fb61a7d34e82c83b029216a3ca2e24e6",
"HTTP_VERSION": "HTTP/1.1",
"HTTP_X_ENVOY_EXPECTED_RQ_TIMEOUT_MS": "15000",
"HTTP_X_ENVOY_EXTERNAL_ADDRESS": "3.74.67.248",
"HTTP_X_FORWARDED_FOR": "92.169.228.162,92.169.228.162,3.74.67.248",
"HTTP_X_FORWARDED_HOST": "echo-api.apps.cluster-l5mt5.l5mt5.sandbox1873.opentlc.com, echo-api.apps.cluster-l5mt5.l5mt5.sandbox1873.opentlc.com",
"HTTP_X_FORWARDED_PORT": "443, 443",
"HTTP_X_FORWARDED_PROTO": "https",
"HTTP_X_REQUEST_ID": "5dea4e96-5768-4ec2-a96a-de3e0559d80c"
},
"method": "GET",
"path": "/demo",
"uuid": "98621a69-0fa8-4bf6-8c11-7e9ae140f9fd"
}
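The `HTTP_AUTHORIZATION` bearer value is a JWT, so its claims can be inspected by base64url-decoding the second dot-separated segment. A minimal sketch, using a shortened hypothetical token (not the one above, whose claims are longer):

```shell
# A JWT is <header>.<payload>.<signature>, each segment base64url-encoded.
token='eyJhbGciOiJIUzI1NiJ9.eyJhenAiOiJ0aHJlZXNjYWxlLWNhbWVsLXNlcnZpY2UifQ.c2ln'
payload=$(printf '%s' "$token" | cut -d. -f2)
# Restore the base64 padding stripped by the JWT encoding
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
# Map the base64url alphabet back to standard base64, then decode
printf '%s' "$payload" | tr '_-' '/+' | base64 -d
# -> {"azp":"threescale-camel-service"}
```

This only decodes the claims; it does not verify the token's signature.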
If you enabled OpenTracing on your Red Hat 3scale API Management platform and used the same Jaeger collector, you can observe spans similar to the following:
- OpenShift (guide): Generate OpenShift resources from annotations
- OpenID Connect Client (guide): Get and refresh access tokens from OpenID Connect providers
- Camel MicroProfile Health (guide): Expose Camel health checks via MicroProfile Health
- Camel MicroProfile Metrics (guide): Expose metrics from Camel routes
- Camel Bean Validator (guide): Validate the message body using the Java Bean Validation API
- YAML Configuration (guide): Use YAML to configure your Quarkus application
- RESTEasy JAX-RS (guide): REST endpoint framework implementing JAX-RS and more
- Camel Bean (guide): Invoke methods of Java beans
- Kubernetes Config (guide): Read runtime configuration from Kubernetes ConfigMaps and Secrets
- Camel OpenTelemetry (guide): Distributed tracing using OpenTelemetry
- Camel Netty HTTP (guide): Netty HTTP server and client using the Netty 4.x
Configure your application with YAML
The Quarkus application configuration is located in the `src/main/resources/application.yml` file.