Contributions to Morpheus fall into the following three categories.
- To report a bug, request a new feature, or report a problem with documentation, file an issue describing in detail the problem or new feature. The Morpheus team evaluates and triages issues, and schedules them for a release. If you believe the issue needs priority attention, comment on the issue to notify the team.
- To propose and implement a new feature, file a new feature request issue. Describe the intended feature and discuss the design and implementation with the team and community. Once the team agrees that the plan looks good, go ahead and implement it, using the code contributions guide below.
- To implement a feature or bug-fix for an existing outstanding issue, follow the code contributions guide below. If you need more context on a particular issue, ask in a comment.
As contributors and maintainers to this project, you are expected to abide by Morpheus' code of conduct. More information can be found at: Contributor Code of Conduct.
- Find an issue to work on. The best way is to search for issues with the good first issue label.
- Comment on the issue stating that you are going to work on it.
- Code! Make sure to update unit tests! Ensure the license headers are set properly.
- When done, create your pull request.
- Wait for other developers to review your code and update code as needed.
- Once reviewed and approved, a Morpheus developer will merge your pull request.
Remember, if you are unsure about anything, don't hesitate to comment on issues and ask for clarifications!
Once you have gotten your feet wet and are more comfortable with the code, you can review the prioritized issues for our next release in our project boards.
Pro Tip: Always review the release board with the highest number for issues to work on. This is where Morpheus developers also focus their efforts.
Review the unassigned issues, and find an issue to which you are comfortable contributing. Start with Step 2 above, commenting on the issue to let others know you are working on it. If you have any questions related to the implementation of the issue, ask them in the issue instead of the PR.
The following instructions are for developers who are getting started with the Morpheus repository. The Morpheus development environment is flexible (Docker, Conda and bare metal workflows) but has a high number of dependencies that can be difficult to set up. These instructions outline the steps for setting up a development environment inside a Docker container or on a host machine with Conda.
All of the following instructions assume several variables have been set:
- `MORPHEUS_ROOT`: The Morpheus repository has been checked out at a location specified by this variable. Any non-absolute paths are relative to `MORPHEUS_ROOT`.
- `PYTHON_VER`: The desired Python version. Minimum required is `3.10`.
- `RAPIDS_VER`: The desired RAPIDS version for all RAPIDS libraries including cuDF and RMM. If in doubt use `23.06`.
- `TRITONCLIENT_VERSION`: The desired Triton client version. If in doubt use `22.10`.
- `CUDA_VER`: The desired CUDA version to use. If in doubt use `11.8`.
```bash
export PYTHON_VER=3.10
export RAPIDS_VER=23.06
export TRITONCLIENT_VERSION=22.10
export CUDA_VER=11.8
export MORPHEUS_ROOT=$(pwd)/morpheus
git clone https://github.com/nv-morpheus/Morpheus.git $MORPHEUS_ROOT
cd $MORPHEUS_ROOT
```
Ensure all submodules are checked out:
```bash
git submodule update --init --recursive
```
The large model and data files in this repo are stored using Git Large File Storage (LFS). These files will be required for running the training/validation scripts and example pipelines for the Morpheus pre-trained models.
By default, only those files stored in LFS strictly needed for running Morpheus are included when the Morpheus repository is cloned. Additional datasets can be downloaded using the `scripts/fetch_data.py` script. Refer to the Git LFS section of the getting_started.md guide for details.
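For example, to pull down the additional LFS-backed data sets up front (the same `fetch all` invocation is used later in this guide when testing the build):

```bash
# Download all of the LFS-tracked model and data files
./scripts/fetch_data.py fetch all
```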
This workflow utilizes a Docker container to set up most dependencies ensuring a consistent environment.
- Ensure all requirements from getting_started.md are met.
- Build the development container:

  ```bash
  ./docker/build_container_dev.sh
  ```
  - The container tag will default to `morpheus:YYMMDD` where `YYMMDD` is the current 2 digit year, month and day respectively. The tag can be overridden by setting `DOCKER_IMAGE_TAG`. For example, `DOCKER_IMAGE_TAG=my_tag ./docker/build_container_dev.sh` would build the container `morpheus:my_tag`.
  - To build the container with a debugging version of CPython installed, update the Docker target as follows:

    ```bash
    DOCKER_TARGET=development_pydbg ./docker/build_container_dev.sh
    ```
  - Note: When debugging Python code, you just need to add `ci/conda/recipes/python-dbg/source` to your debugger's source path.
  - Once created, you will be able to introspect Python objects from within GDB. For example, if we were to break within a generator setup call and examine its `PyFrame_Object` `f`, it might be similar to:

    ```
    #4  0x000056498ce685f4 in gen_send_ex (gen=0x7f3ecc07ad40, arg=<optimized out>, exc=<optimized out>, closing=<optimized out>) at Objects/genobject.c:222
    (gdb) pyo f
    object address  : 0x7f3eb3888750
    object refcount : 1
    object type     : 0x56498cf99c00
    object type name: frame
    object repr     : <frame at 0x7f3eb3888750, file '/workspace/morpheus/pipeline/pipeline.py', line 902, code join
    ```
  - Note: Now when running the container, Conda should list your Python version as `pyxxx_dbg_morpheus`.

    ```
    (morpheus) user@host:/workspace# conda list | grep python
    python     3.8.13     py3.8.13_dbg_morpheus     local
    ```
  - Note: This does not build any Morpheus or MRC code and defers building the code until the entire repo can be mounted into a running container. This allows for faster incremental builds during development.
- Run the development container:

  ```bash
  ./docker/run_container_dev.sh
  ```

  - The container tag follows the same rules as `build_container_dev.sh` and will default to the current `YYMMDD`. Specify the desired tag with `DOCKER_IMAGE_TAG`, for example, `DOCKER_IMAGE_TAG=my_tag ./docker/run_container_dev.sh`.
  - This will automatically mount the current working directory to `/workspace`.
  - Some of the validation tests require launching a Triton Docker container within the Morpheus container. To enable this, you will need to grant the Morpheus container access to your host OS's Docker socket file with:

    ```bash
    DOCKER_EXTRA_ARGS="-v /var/run/docker.sock:/var/run/docker.sock" ./docker/run_container_dev.sh
    ```

    Then, once the container is started, you will need to install some extra packages to enable launching Docker containers:

    ```bash
    ./external/utilities/docker/install_docker.sh

    # Install utils for checking output
    apt install -y jq bc
    ```
- Compile Morpheus:

  ```bash
  ./scripts/compile.sh
  ```

  This script will run both the CMake configure step with default options and the CMake build.
- Install Morpheus:

  ```bash
  pip install -e /workspace
  ```

  Once Morpheus has been built, it can be installed into the current virtual environment.
- Run Morpheus:

  ```bash
  morpheus run pipeline-nlp ...
  ```

  At this point, Morpheus can be fully used. Any changes to Python code will not require a rebuild. Changes to C++ code will require calling `./scripts/compile.sh`. Installing Morpheus is only required once per virtual environment.
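As a quick sanity check that the CLI is on your path (a minimal sketch; it relies only on the CLI's standard `--help` output, and the exact stages to chain after `pipeline-nlp` depend on your use case):

```bash
# Verify the CLI is installed and list the options and stages for the NLP pipeline
morpheus --help
morpheus run pipeline-nlp --help
```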
If a Conda environment on the host machine is preferred over Docker, it is relatively easy to install the necessary dependencies (In reality, the Docker workflow creates a Conda environment inside the container).
Note: These instructions assume the user is using `mamba` instead of `conda`, since its improved solver speed is very helpful when working with a large number of dependencies. If you are not familiar with `mamba`, you can install it with `conda install -n base -c conda-forge mamba` (make sure to only install into the base environment). `mamba` is a drop-in replacement for `conda`, and all Conda commands are compatible between the two.
- Pascal architecture GPU or better
- NVIDIA driver `520.61.05` or higher
- CUDA 11.8
- `conda` and `mamba`
- Refer to the Getting Started Guide if `conda` is not already installed.
- Install `mamba`:

  ```bash
  conda activate base
  conda install -c conda-forge mamba
  ```

  - Note: `mamba` should only be installed once in the base environment.
- Set up env variables and clone the repo:

  ```bash
  export PYTHON_VER=3.10
  export RAPIDS_VER=23.06
  export CUDA_VER=11.8
  export MORPHEUS_ROOT=$(pwd)/morpheus
  git clone https://github.com/nv-morpheus/Morpheus.git $MORPHEUS_ROOT
  cd $MORPHEUS_ROOT
  ```
- Ensure all submodules are checked out:

  ```bash
  git submodule update --init --recursive
  ```
- Create the Morpheus Conda environment:

  ```bash
  mamba env create -f ./docker/conda/environments/cuda${CUDA_VER}_dev.yml
  conda activate morpheus
  ```

  This creates a new environment named `morpheus` and activates that environment.
Build Morpheus
./scripts/compile.sh
This script will run both CMake Configure with default options and CMake build.
- Install Morpheus:

  ```bash
  pip install -e ${MORPHEUS_ROOT}
  ```

  Once Morpheus has been built, it can be installed into the current virtual environment.
- Test the build (Note: some tests will be skipped)

  Some of the tests rely on external data sets. Fetch them with:

  ```bash
  MORPHEUS_ROOT=${PWD}
  git lfs install
  git lfs update
  ./scripts/fetch_data.py fetch all
  ```

  This fetches the data sets needed. Then run:

  ```bash
  pytest
  ```
- Optional: Run full end-to-end tests

  - Our end-to-end tests require the camouflage testing framework. Install camouflage with:

    ```bash
    npm install -g camouflage-server
    ```

    Run all tests:

    ```bash
    pytest --run_slow
    ```
- Optional: Install cuML

  - Many users may wish to install cuML. Due to the complex dependency structure and versioning requirements, we need to specify exact versions of each package. The command to accomplish this is:

    ```bash
    mamba install -c rapidsai -c nvidia -c conda-forge cuml=23.06
    ```
- Run Morpheus:

  ```bash
  morpheus run pipeline-nlp ...
  ```

  At this point, Morpheus can be fully used. Any changes to Python code will not require a rebuild. Changes to C++ code will require calling `./scripts/compile.sh`. Installing Morpheus is only required once per virtual environment.
Launching a full production Kafka cluster is outside the scope of this project; however, if a quick cluster is needed for testing or development, one can be launched via Docker Compose. The following commands outline that process. Refer to this guide for more in-depth information:
- Install `docker-compose-plugin` if not already installed:

  ```bash
  apt-get update
  apt-get install docker-compose-plugin
  ```
- Clone the `kafka-docker` repo from the Morpheus repo root:

  ```bash
  git clone https://github.com/wurstmeister/kafka-docker.git
  ```
- Change directory to `kafka-docker`:

  ```bash
  cd kafka-docker
  ```
- Export the IP address of your Docker `bridge` network:

  ```bash
  export KAFKA_ADVERTISED_HOST_NAME=$(docker network inspect bridge | jq -r '.[0].IPAM.Config[0].Gateway')
  ```
- Update `kafka-docker/docker-compose.yml`, performing two changes:

  - Update the `ports` entry to:

    ```yaml
    ports:
      - "0.0.0.0::9092"
    ```

    This will prevent the containers from attempting to map IPv6 ports.

  - Change the value of `KAFKA_ADVERTISED_HOST_NAME` to match the value of the `KAFKA_ADVERTISED_HOST_NAME` environment variable from the previous step. For example, the line should be similar to:

    ```yaml
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 172.17.0.1
    ```

    Which should match the value of `$KAFKA_ADVERTISED_HOST_NAME` from the previous step:

    ```bash
    $ echo $KAFKA_ADVERTISED_HOST_NAME
    "172.17.0.1"
    ```
- Launch Kafka with 3 instances:

  ```bash
  docker compose up -d --scale kafka=3
  ```

  In practice, 3 instances have been shown to work well. Use as many instances as required. Keep in mind each instance takes about 1 GB of memory.
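  If you want to confirm the brokers came up, standard Docker Compose commands work here (shown as an illustration; `kafka` is the service name referenced by the `--scale` flag above):

  ```bash
  # List the running services and their mapped ports
  docker compose ps

  # Tail the broker logs
  docker compose logs -f kafka
  ```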
- Launch the Kafka shell

  - To configure the cluster, you will need to launch into a container that has the Kafka shell.
  - You can do this with:

    ```bash
    ./start-kafka-shell.sh $KAFKA_ADVERTISED_HOST_NAME
    ```

  - However, this makes it difficult to load data into the cluster. Instead, you can manually launch the Kafka shell by running:

    ```bash
    # Change to the morpheus root to make it easier for mounting volumes
    cd ${MORPHEUS_ROOT}

    # Run the Kafka shell Docker container
    docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock \
        -e HOST_IP=$KAFKA_ADVERTISED_HOST_NAME -e ZK=$2 \
        -v $PWD:/workspace wurstmeister/kafka /bin/bash
    ```

    Note the `-v $PWD:/workspace`. This will make anything in your current directory available in `/workspace`.

  - Once the Kafka shell has been launched, you can begin configuring the cluster. All of the following commands require the argument `--bootstrap-server`. To simplify things, set the `BOOTSTRAP_SERVER` and `MY_TOPIC` variables:

    ```bash
    export BOOTSTRAP_SERVER=$(broker-list.sh)
    export MY_TOPIC="your_topic_here"
    ```
- Create the topic:

  ```bash
  # Create the topic
  kafka-topics.sh --bootstrap-server ${BOOTSTRAP_SERVER} --create --topic ${MY_TOPIC}

  # Change the number of partitions
  kafka-topics.sh --bootstrap-server ${BOOTSTRAP_SERVER} --alter --topic ${MY_TOPIC} --partitions 3

  # Refer to the topic info
  kafka-topics.sh --bootstrap-server ${BOOTSTRAP_SERVER} --describe --topic=${MY_TOPIC}
  ```

  Note: If you are using `to-kafka`, ensure your output topic is also created.
- Generate input messages

  - In order for Morpheus to read from Kafka, messages need to be published to the cluster. You can use the `kafka-console-producer.sh` script to load data:

    ```bash
    kafka-console-producer.sh --bootstrap-server ${BOOTSTRAP_SERVER} --topic ${MY_TOPIC} < ${FILE_TO_LOAD}
    ```

    Note: In order for this to work, your input file must be accessible from the current directory the Kafka shell was launched from.

  - You can view the messages with:

    ```bash
    kafka-console-consumer.sh --bootstrap-server ${BOOTSTRAP_SERVER} --topic ${MY_TOPIC}
    ```

    Note: This will consume messages.
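    To replay messages already in the topic rather than only new ones, the standard `--from-beginning` flag of `kafka-console-consumer.sh` can be added (illustrative; not specific to Morpheus):

    ```bash
    kafka-console-consumer.sh --bootstrap-server ${BOOTSTRAP_SERVER} --topic ${MY_TOPIC} --from-beginning
    ```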
To verify that all pipelines are working correctly, validation scripts have been added at `${MORPHEUS_ROOT}/scripts/validation`. There are scripts for each of the main workflows: Anomalous Behavior Profiling (ABP), Humans-as-Machines-Machines-as-Humans (HAMMAH), Phishing Detection (Phishing), and Sensitive Information Detection (SID).
To run all of the validation workflow scripts, use the following commands:
```bash
# Run validation scripts
./scripts/validation/val-run-all.sh
```
At the end of each workflow, a section will print the different inference workloads that were run and the validation error percentage for each. For example:
```
===ERRORS===
PyTorch      : 3/314 (0.96 %)
Triton(ONNX) : Skipped
Triton(TRT)  : Skipped
TensorRT     : Skipped
Complete!
```
This indicates that only 3 out of 314 rows did not match the validation dataset. If instead the error lines display empty values, similar to `:/ ( %)`, or show very high percentages, then the workflow did not complete successfully.
Due to the large number of dependencies, it's common to run into build issues. The following are some common issues, tips, and suggestions:
- Issues with the build cache
  - To avoid rebuilding every compilation unit for all dependencies after each change, a fair amount of the build is cached. By default, the cache is located at `${MORPHEUS_ROOT}/.cache`. The cache contains compiled object files, source repositories, ccache files, clangd files, and even the cuDF build.
  - The entire cache folder can be deleted at any time and will be re-downloaded/recreated on the next build (see the example commands after this list).
- Message indicating `git apply ...` failed
  - Many of the dependencies require small patches to make them work. These patches must be applied once and only once. If this error displays, try deleting the offending package from the `build/_deps/<offending_package>` directory or from `.cache/cpm/<offending_package>`.
  - If all else fails, delete the entire `build/` directory and `.cache/` directory.
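As a concrete illustration of the cleanup steps above (a minimal sketch; `<offending_package>` is a placeholder for whichever dependency the error message names):

```bash
# Remove a single dependency so it is re-downloaded and its patch re-applied on the next build
# (<offending_package> is a placeholder; substitute the directory name from the error message)
rm -rf ${MORPHEUS_ROOT}/build/_deps/<offending_package>
rm -rf ${MORPHEUS_ROOT}/.cache/cpm/<offending_package>

# If all else fails, remove the entire build and cache directories
rm -rf ${MORPHEUS_ROOT}/build ${MORPHEUS_ROOT}/.cache
```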
Morpheus is licensed under the Apache v2.0 license. All new source files including CMake and other build scripts should contain the Apache v2.0 license header. Any edits to existing source code should update the date range of the copyright to the current year. The format for the license header is:
```
/*
 * SPDX-FileCopyrightText: Copyright (c) <year>, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-License-Identifier: Apache-2.0
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
```
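The header above uses C/C++ block-comment syntax. For files that take `#` comments (for example Python, CMake, and shell scripts), the same text can be carried over with the comment syntax swapped. An illustrative rendering (check existing files in the repo for the exact convention used there):

```
# SPDX-FileCopyrightText: Copyright (c) <year>, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```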
Third-party code included in the source tree (that is not pulled in as an external dependency) must be compatible with the Apache v2.0 license and should retain the original license along with a URL to the source. If this code is modified, it should contain both the Apache v2.0 license followed by the original license of the code and the URL to the original code.
Ex:
```
/**
 * SPDX-FileCopyrightText: Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-License-Identifier: Apache-2.0
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
//
// Original Source: https://github.com/org/other_project
//
// Original License:
// ...
```
Portions adopted from