Security and Authz


General Application Architecture w.r.t Security

The Cryostat 3.0 application as a whole consists of:

  • Cryostat Deployment
    • Service + Route → Auth container
    • Cryostat Pod
      • Auth Proxy container instance
      • Cryostat container instance
      • cryostat-db container instance
        • PersistentVolumeClaim for Postgres Database data
      • cryostat-storage container instance
        • PersistentVolumeClaim for SeaweedFS data
      • Grafana container instance
      • jfr-datasource container instance
  • (optional) Cryostat Report Generator Deployment
    • Service (no Route) → Pods
    • Cryostat Report Generator Pod(s)
      • cryostat-report container instance
  • Operator Pod
    • cryostat-operator instance, containing various controllers

The Routes are configured with TLS re-encryption, so all connections from outside the cluster use HTTPS/WSS with the OpenShift cluster's TLS certificate presented externally. Internally, Service connections between Cryostat components use HTTPS with cert-manager (described in more detail below) to ensure that connections are private even within the cluster namespace. Each Auth Proxy container is either an oauth2-proxy configured with htpasswd Basic authentication, or an openshift-oauth-proxy delegating to the cluster's internal authentication/authorization server, with optional htpasswd authentication.

Scenario

(Figure: network topology of Cryostat, its containers, and target applications)

In this scenario, the Cryostat Operator is installed into its own namespace, where it runs with its own privileged serviceaccount. Cryostat CR objects are created to request that the Operator create Cryostat instances. The CR has a field for a list of namespace names that the associated Cryostat instance should be deployed across. When a Cryostat instance is created, it is supplied with an environment variable informing it which namespaces should be monitored. Each Cryostat instance is deployed into its own separate install namespace and runs with its own lower-privileged serviceaccount. Using these privileges it performs an Endpoints query to discover target applications across each of the listed namespaces. Cryostat will only automatically discover those target applications (potentially including itself) that are located within these namespaces: it queries the k8s/OpenShift API server for Endpoints objects within each namespace, then filters them for ports with either the name jfr-jmx or the number 9091. Other applications, within these namespaces or otherwise, may be registered via the Custom Targets API or the Discovery Plugin API (ex. using the Cryostat Agent), but Cryostat will not be aware that these applications may be in other namespaces.
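To make the discovery step concrete, here is a minimal sketch of the same Endpoints query using the fabric8 KubernetesClient. This is an illustration, not Cryostat's actual implementation: the namespace names stand in for whatever list the CR supplies, and the port filter mirrors the jfr-jmx / 9091 rule described above.

```java
import io.fabric8.kubernetes.api.model.Endpoints;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import java.util.List;

public class DiscoverySketch {
    public static void main(String[] args) {
        // Stand-in for the namespace list supplied by the Cryostat CR / env var.
        List<String> targetNamespaces = List.of("team-a", "team-b");
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            for (String ns : targetNamespaces) {
                // Query the API server for Endpoints objects in each monitored namespace...
                for (Endpoints ep : client.endpoints().inNamespace(ns).list().getItems()) {
                    // ...and keep only ports named "jfr-jmx" or numbered 9091.
                    ep.getSubsets().forEach(subset -> subset.getPorts().stream()
                        .filter(p -> "jfr-jmx".equals(p.getName())
                            || Integer.valueOf(9091).equals(p.getPort()))
                        .forEach(p -> subset.getAddresses().forEach(addr ->
                            System.out.printf("candidate target: %s:%d in %s%n",
                                addr.getIp(), p.getPort(), ns))));
                }
            }
        }
    }
}
```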

With this setup, the target applications are not able to assume the privileges associated with the serviceaccounts of the Cryostat Operator or of each Cryostat instance. Each Cryostat instance can discover and become aware of target JVM applications across any of the namespaces that that particular instance is monitoring. The separated namespaces also ease administration and access management: cluster administrators can assign roles to users that allow them to work on projects within namespaces, and assign other roles to other users that allow them to access Cryostat instances that may have visibility into those namespaces.

Flow of JFR Data

Cryostat traditionally connects to other JVM applications within its cluster using remote JMX, with cluster-internal URLs so that no traffic leaves the cluster. Cryostat supports connecting to target JVMs that have JMX auth credentials enabled ("Basic"-style authentication). When a connection attempt to a target fails due to a SecurityException, Cryostat responds to the requesting client with an HTTP 427 status code and the header X-JMX-Authenticate: Basic. The client is expected to create a Stored Credential object via the Cryostat API before retrying the request, which results in the required target credentials being stored in an encrypted database table. When deployed in OpenShift, the requests are already encrypted using OpenShift TLS re-encryption as mentioned above, so the credentials are never transmitted in cleartext. The table is encrypted with a passphrase either provided by the user at deployment time, or generated by the Operator if none is specified. It is also possible to configure Cryostat to trust SSL certificates used by target JVMs by adding the certificate to a Secret and linking that to the Cryostat CR, which adds the certificate to the SSL trust store used by Cryostat. The Operator also uses cert-manager to generate a self-signed CA and provides Cryostat's auth proxy with certificates as a mounted volume.
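A client-side sketch of the 427 flow follows, using java.net.http from the JDK. The hostname, API paths, target ID, and matchExpression are placeholders for illustration; consult the API documentation for your Cryostat release for the real endpoints and form fields.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class JmxCredentialFlow {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        // Placeholder host, path, and target ID for illustration only.
        URI recordings = URI.create("https://cryostat.example.com/api/v3/targets/1/recordings");
        HttpRequest list = HttpRequest.newBuilder(recordings).GET().build();
        HttpResponse<String> resp = http.send(list, HttpResponse.BodyHandlers.ofString());

        if (resp.statusCode() == 427) {
            // The server signals that the target requires JMX credentials.
            System.out.println("challenge: "
                + resp.headers().firstValue("X-JMX-Authenticate").orElse(""));

            // Create a Stored Credential, then retry. The matchExpression is illustrative.
            String form = "matchExpression="
                + URLEncoder.encode("target.alias == 'my-app'", StandardCharsets.UTF_8)
                + "&username=" + URLEncoder.encode("jmxuser", StandardCharsets.UTF_8)
                + "&password=" + URLEncoder.encode("jmxpass", StandardCharsets.UTF_8);
            HttpRequest store = HttpRequest.newBuilder(
                    URI.create("https://cryostat.example.com/api/v3/credentials"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
            http.send(store, HttpResponse.BodyHandlers.ofString());

            resp = http.send(list, HttpResponse.BodyHandlers.ofString());
        }
        System.out.println(resp.statusCode());
    }
}
```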

In more recent releases, JVM applications may optionally be instrumented with the Cryostat Agent, which uses the local JDK Instrumentation API to hook into the target application. The Cryostat Agent then exposes a JDK HTTP(S) webserver, generates credentials to secure it, and looks up its supplied configuration to locate the Cryostat server instance it should register with. Once registered, the Agent creates a Stored Credential object on the server corresponding to itself, then clears its generated password from memory, retaining only the hash. From this point on, the Agent and Cryostat server communicate with each other using Basic authentication bidirectionally, with TLS on each webserver where configured.
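The generate-publish-hash-clear lifecycle described above can be illustrated in plain Java. This is a sketch, not the Agent's actual code; the password length, encoding, and SHA-256 hash are assumptions made for the example.

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;
import java.util.HexFormat;

public class AgentCredentialSketch {
    public static void main(String[] args) throws Exception {
        // Generate a random password to secure the Agent's embedded webserver.
        byte[] password = new byte[24];
        new SecureRandom().nextBytes(password);

        // The Agent would publish this (here Base64-encoded) to the Cryostat server
        // as a Stored Credential at registration time (not shown). Real code would
        // avoid long-lived String copies of the secret.
        String published = Base64.getUrlEncoder().withoutPadding().encodeToString(password);
        System.out.println("publish to server: " + published);

        // Retain only a hash locally, for verifying incoming Basic auth later.
        byte[] hash = MessageDigest.getInstance("SHA-256").digest(password);

        // Clear the plaintext from memory; only the hash remains.
        Arrays.fill(password, (byte) 0);
        System.out.println("retained hash: " + HexFormat.of().formatHex(hash));
    }
}
```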

Cryostat and the associated Operator will only monitor the OpenShift namespace(s) they are configured to observe (see Scenario above), and can only initiate connections to target JVMs within those namespaces - this is enforced by OpenShift's networking setup. This way, end user administrators and developers can be sure which of the JVMs they run are visible to Cryostat, and thus which JVMs' data they should be mindful of.

Once Cryostat has established a JMX or HTTP(S) connection to a target application, its primary purpose is to enable JFR recordings on the target JVM and expose them to the end user. These recordings can be transferred from the target JVM back to Cryostat over the JMX/HTTP(S) connection. Cryostat does this for four purposes (a sketch of case 3 follows the list):

  1. to generate Automated Rules Reports of the JFR contents, served to clients over HTTPS. These may be generated by the Cryostat container itself or by cryostat-reports sidecar container(s) depending on the configuration.
  2. to stream JFR file contents into the cryostat-storage container "archives", which saves them in an OpenShift PersistentVolumeClaim
  3. to stream a snapshot of the JFR contents over HTTPS in response to a requesting client's GET request
  4. to upload a snapshot of the JFR contents using HTTPS POST to the jfr-datasource

("archived" JFR copies can also be streamed back out to clients over HTTPS, or POSTed to jfr-datasource, and Automated Rules Reports can also be made of them)

Here, "the client" may be an end user's browser when using Cryostat's web interface, an end user driving a direct HTTP(S) client (ex. HTTPie or curl), or an OpenShift Operator controller acting as an automated client. All of these cases are handled identically by Cryostat.

jfr-datasource receives file uploads by POST request from the Cryostat container. Cryostat and jfr-datasource run together within the same Pod and communicate over the local loopback network interface, so the file contents never travel across the network outside of the Pod. These files are held in transient storage by the jfr-datasource container, and the parsed JFR data contents are held in memory to be made available for querying by the Grafana dashboard container, which also runs within the same Pod and communicates over the local loopback interface.
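For illustration, a hand-rolled multipart upload over loopback, similar in shape to what Cryostat performs against jfr-datasource, might look like the following. The port, path, and form field name are assumptions for the sketch, not confirmed jfr-datasource API details.

```java
import java.io.ByteArrayOutputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

public class DatasourceUploadSketch {
    public static void main(String[] args) throws Exception {
        // Loopback address: Cryostat and jfr-datasource share a Pod network namespace.
        // Port, path, and the "file" field name are placeholders for illustration.
        URI datasource = URI.create("http://localhost:8080/load");
        String boundary = "----example-" + UUID.randomUUID();

        // Assemble a multipart/form-data body containing the JFR file.
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        body.write(("--" + boundary + "\r\n"
            + "Content-Disposition: form-data; name=\"file\"; filename=\"recording.jfr\"\r\n"
            + "Content-Type: application/octet-stream\r\n\r\n").getBytes(StandardCharsets.UTF_8));
        body.write(Files.readAllBytes(Path.of("recording.jfr")));
        body.write(("\r\n--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8));

        HttpRequest req = HttpRequest.newBuilder(datasource)
            .header("Content-Type", "multipart/form-data; boundary=" + boundary)
            .POST(HttpRequest.BodyPublishers.ofByteArray(body.toByteArray()))
            .build();
        HttpResponse<String> resp = HttpClient.newHttpClient()
            .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + ": " + resp.body());
    }
}
```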

Cryostat Authz Specifics

When deployed in OpenShift, the Cryostat Service is fronted by an instance of the OpenShift OAuth Proxy. This proxy accepts Authorization: Bearer abcd1234 headers from CLI clients, or sends interactive clients through the OAuth login flow to gain an authorization token and cookie. These tokens are the ones provided by OpenShift OAuth itself, i.e. they correspond to the user's account on that OpenShift instance/cluster. On each HTTPS request, the OAuth Proxy instance in front of Cryostat receives the token and sends its own request to the internal OpenShift OAuth server to validate it. If OpenShift OAuth validates the token, the request is accepted. If OpenShift OAuth does not validate the token, or the user does not provide one, the request is rejected with a 401. The default RBAC configuration requires clients to pass a create pods/exec access check in the Cryostat instance's installation namespace.
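A user can check ahead of time whether their own token would pass that RBAC gate by asking the cluster directly with a SelfSubjectAccessReview. Below is a sketch using the fabric8 client; the namespace name is a placeholder for the actual Cryostat install namespace.

```java
import io.fabric8.kubernetes.api.model.authorization.v1.SelfSubjectAccessReview;
import io.fabric8.kubernetes.api.model.authorization.v1.SelfSubjectAccessReviewBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class AccessCheckSketch {
    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Mirror the proxy's default check: create on pods/exec in the install namespace.
            SelfSubjectAccessReview review = new SelfSubjectAccessReviewBuilder()
                .withNewSpec()
                    .withNewResourceAttributes()
                        .withNamespace("cryostat-install-namespace") // placeholder
                        .withVerb("create")
                        .withResource("pods")
                        .withSubresource("exec")
                    .endResourceAttributes()
                .endSpec()
                .build();
            review = client.authorization().v1().selfSubjectAccessReview().create(review);
            System.out.println("allowed: " + review.getStatus().getAllowed());
        }
    }
}
```

If allowed is true for the current token's user, requests through the OAuth Proxy with that token should pass the default RBAC configuration.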

When deployed outside of OpenShift, the Cryostat Service is instead fronted by an instance of OAuth2 Proxy. This behaves very similarly to the OpenShift OAuth Proxy except without the integration to the cluster's internal OAuth server. Instead, users are able to configure an htpasswd file to define Authorization: Basic base64(user:pass)-style authentication. In this mode there is no RBAC, users either have an account and may access Cryostat or they have no account.