Hyades, named after the star cluster closest to Earth, is an incubating project for decoupling responsibilities from Dependency-Track's monolithic API server into separate, scalable™ services. We're using Apache Kafka (or Kafka-compatible brokers like Redpanda) for communication between the API server and the Hyades services.
If you're interested in the technical background of this project, please refer to WTF.md.
The main objectives of Hyades are:
- Enable Dependency-Track to handle portfolios spanning hundreds of thousands of projects
- Improve resilience of Dependency-Track, providing more confidence when relying on it in critical workflows
- Improve deployment and configuration management experience for containerized / cloud native tech stacks
Beyond separating responsibilities, the API server has been modified to allow for high availability (active-active) deployments. Various "hot paths", such as the processing of uploaded BOMs, have been optimized in the existing code. Further optimization is an ongoing effort.
Hyades is already a superset of Dependency-Track: changes up to Dependency-Track v4.11.3 have been ported, and features made possible by the new architecture have been implemented on top. Where possible, improvements made in Hyades are, or will be, backported to Dependency-Track v4.x.
Generally, Hyades can do everything Dependency-Track can do. On top of that, it is capable of:
- Evaluating policies defined in the Common Expression Language (CEL)
- Verifying the integrity of components, based on hashes consumed from BOMs and remote repositories
Rough overview of the architecture:
Except for the mirror service (which is not actively involved in event processing), all services can be scaled up and down, to and from multiple instances. Despite being written in Java, all services except the API server can optionally be deployed as self-contained native binaries, offering a lower resource footprint.
To read more about the individual services, refer to their respective README.md files.
Yes! And all you need to kick the tires is Docker Compose!
```shell
docker compose --profile demo up -d --pull always
```
This will launch all required services, and expose the following endpoints:
| Service | URL |
|---|---|
| API Server | http://localhost:8080 |
| Frontend | http://localhost:8081 |
| Redpanda Console | http://localhost:28080 |
| PostgreSQL | localhost:5432 |
| Redpanda Kafka API | localhost:9092 |
Simply navigate to the frontend to get started!
The initial admin credentials are `admin` / `admin`.
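To quickly check from the command line that the API server came up, you can query its version endpoint. This is a hedged example: it assumes Hyades exposes the same unauthenticated `/api/version` endpoint as stock Dependency-Track.

```shell
# Query the API server's version endpoint (assumes the stock
# Dependency-Track /api/version endpoint is available and unauthenticated).
curl -s http://localhost:8080/api/version
```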
The recommended way to deploy Hyades is via Helm.
The chart is maintained in the [`DependencyTrack/helm-charts`](https://github.com/DependencyTrack/helm-charts) repository.
```shell
$ helm repo add dependency-track https://dependencytrack.github.io/helm-charts
$ helm search repo dependency-track -o json | jq -r '.[].name'
dependency-track/dependency-track
dependency-track/hyades
```
The chart does not include:
- a database
- a Kafka-compatible broker
Helm charts to deploy Kafka brokers to Kubernetes are provided by both Strimzi and Redpanda.
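As a rough sketch (not part of the Hyades chart itself), a Redpanda broker could be installed with Redpanda's own chart. The repository URL and chart name below are assumptions worth verifying against Redpanda's documentation, and the chart defaults will likely need tuning for local use:

```shell
# Add Redpanda's Helm repository and install a broker with default values.
helm repo add redpanda https://charts.redpanda.com
helm install redpanda redpanda/redpanda -n redpanda --create-namespace
```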
Deploying to a local Minikube cluster is a great way to get started.
Note
To allow for frictionless testing, we will use the `values-minikube.yaml` configuration template. This template includes PostgreSQL and Redpanda deployments. Both are configured for minimal resource footprint, which can lead to suboptimal performance.
- Start a local Minikube cluster, exposing `NodePort`s for the API server (`30080`) and frontend (`30081`):
  ```shell
  minikube start --ports 30080:30080,30081:30081
  ```
- Download the example `values-minikube.yaml` configuration template:
  ```shell
  curl -O https://raw.githubusercontent.com/DependencyTrack/helm-charts/main/charts/hyades/values-minikube.yaml
  ```
- Make adjustments to `values-minikube.yaml` as needed
  - Refer to the chart's documentation for details on available values (or print the defaults with `helm show values`, as shown below)
  - Refer to the configuration reference for details on available application options
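  For a quick look at what the chart exposes without leaving the terminal, Helm can print the chart's default values directly:
  ```shell
  # Print the default values of the Hyades chart added earlier.
  helm show values dependency-track/hyades
  ```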
- Deploy Hyades:
  ```shell
  helm install hyades dependency-track/hyades \
    -n hyades --create-namespace \
    -f ./values-minikube.yaml
  ```
- Wait a moment for all deployments to become ready:
  ```shell
  kubectl -n hyades rollout status deployment \
    --selector 'app.kubernetes.io/instance=hyades' \
    --watch --timeout 3m
  ```
- Visit http://localhost:30081 in your browser to access the frontend
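When you're done experimenting, one way to clean up is to remove the Helm release and the Minikube cluster again (release and namespace names taken from the `helm install` command above):

```shell
# Remove the Helm release and its namespace, then delete the Minikube cluster.
helm -n hyades uninstall hyades
kubectl delete namespace hyades
minikube delete
```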
A basic metrics monitoring stack is provided, consisting of Prometheus and Grafana.
To start both services, run:
```shell
docker compose --profile monitoring up -d
```
The services will be available locally at the following locations:
- Prometheus: http://localhost:9090
- Grafana: http://localhost:3000
Prometheus is configured to scrape metrics from the following services in 5-second intervals:
- Redpanda Broker
- API Server
- Notification Publisher
- Repository Meta Analyzer
- Vulnerability Analyzer
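To confirm that these targets are actually being scraped, one option is to query the `up` metric through Prometheus' HTTP query API; a value of `1` means the target's last scrape succeeded. The job names in the output depend on the provided Prometheus configuration:

```shell
# List the scrape state of all configured targets via Prometheus' instant-query API.
curl -s 'http://localhost:9090/api/v1/query?query=up' \
  | jq -r '.data.result[] | "\(.metric.job): \(.value[1])"'
```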
The Grafana instance will be automatically provisioned to use Prometheus as data source. Additionally, dashboards for the following services are automatically set up:
- Redpanda Broker
- API Server
- Vulnerability Analyzer
The provided `docker-compose.yml` includes an instance of Redpanda Console to aid with gaining insight into what's happening in the message broker. Among many other things, it can be used to inspect messages inside any given topic.
The console is exposed at http://127.0.0.1:28080 and does not require authentication. It's intended for local use only.
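If you prefer the command line over the console, Redpanda's `rpk` CLI inside the broker container can list topics as well. The Compose service name `redpanda` below is an assumption and may differ in the provided `docker-compose.yml`:

```shell
# List all Kafka topics known to the broker
# (assumes the Compose service running Redpanda is named "redpanda").
docker compose exec redpanda rpk topic list
```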
Refer to the Configuration documentation.
- JDK 21+
- Maven
- Docker
To build all modules of the project:
```shell
mvn clean install -DskipTests
```
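Since the services are built with Quarkus, they can also be compiled into the native binaries mentioned earlier. The following is only a sketch: it assumes a container-based native build via Quarkus' standard `native` profile and uses the `vulnerability-analyzer` module as an example; exact flags may differ per module.

```shell
# Build a native executable for a single module, delegating the GraalVM
# compilation to a build container so no local GraalVM install is needed.
mvn -pl vulnerability-analyzer package -Dnative -Dquarkus.native.container-build=true -DskipTests
```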
Running the Hyades services locally requires both a Kafka broker and a database server to be present. Containers for Redpanda and PostgreSQL can be launched using Docker Compose:
```shell
docker compose up -d
```
To launch individual services, execute the `quarkus:dev` Maven goal for the respective module:
```shell
mvn -pl vulnerability-analyzer quarkus:dev
```
Make sure you've built the project at least once, otherwise the above command will fail.
Note
If you're unfamiliar with Quarkus' Dev Mode, you can read more about it in the Quarkus documentation.
To execute the unit tests for all Hyades modules:
```shell
mvn clean verify
```
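Tests can also be scoped to a single module using Maven's `-pl` flag, for example (assuming the project has been built at least once so sibling modules are available locally):

```shell
# Run the tests of the vulnerability-analyzer module only.
mvn -pl vulnerability-analyzer clean verify
```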
Note
End-to-end tests are based on container images. The tags of those images are currently hardcoded. For the Hyades services, the tags are set to `latest`. If you want to test local changes, you'll have to first:
- Build container images locally (see the sketch below)
- Update the tags in `AbstractE2ET`
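A sketch of the local-image workflow, assuming the modules use Quarkus' container-image extension; the properties are standard Quarkus options, but the exact image names and tags expected by `AbstractE2ET` should be checked in the test code:

```shell
# Build container images for all modules locally and tag them as "latest",
# so the hardcoded tags in the end-to-end tests pick them up.
mvn clean package -DskipTests \
  -Dquarkus.container-image.build=true \
  -Dquarkus.container-image.tag=latest
```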
To execute end-to-end tests as part of the build:
```shell
mvn clean verify -Pe2e-all
```
To execute only the end-to-end tests:
```shell
mvn -pl e2e clean verify -Pe2e-all
```