The APM Agent is open source and we love to receive contributions from our community — you!
There are many ways to contribute, from writing tutorials or blog posts, improving the documentation, and submitting bug reports and feature requests, to writing code.
You can get in touch with us through Discuss. Feedback and ideas are always welcome.
Please note that this repository is covered by the Elastic Community Code of Conduct.
If you have a bugfix or new feature that you would like to contribute, please find or open an issue about it first. Talk about what you would like to do. It may be that somebody is already working on it, or that there are particular issues that you should know about before implementing the change.
Generally, we require that you test any code you are adding or modifying. Once your changes are ready to submit for review:
- Sign the Contributor License Agreement

  Please make sure you have signed our Contributor License Agreement. We are not asking you to assign copyright to us, but to give us the right to distribute your code without restriction. We ask this of all contributors in order to assure our users of the origin and continuing existence of the code. You only need to sign the CLA once.
- Code style

  This project uses several tools to maintain a consistent code style:

  - the automatic code formatter black
  - sorting of imports via isort
  - flake8
  - a license header check via a custom script

  The easiest way to make sure your pull request adheres to the code style is to install pre-commit:

  ```bash
  pip install pre-commit  # or "brew install pre-commit" if you use Homebrew
  pre-commit install
  ```
- Test your changes

  Run the test suite to make sure that nothing is broken. See testing for details. (Note: only unit tests are expected to be run before submitting a PR.)
- Rebase your changes

  Update your local repository with the most recent code from the main repo, and rebase your branch on top of the latest master branch. When we merge your PR, we will squash all of your commits into a single commit on the master branch.
- Submit a pull request

  Push your local changes to your forked copy of the repository and submit a pull request. In the pull request, choose a title which sums up the changes that you have made, and in the body provide more details about what your changes do. Also mention the number of the issue where discussion has taken place, e.g. "Closes #123".
- Be patient

  We might not be able to review your code as fast as we would like to, but we'll do our best to give it the attention it deserves. Your effort is much appreciated!
To run local unit tests, you can install the relevant requirements files and then run `make test` from the project root:

```bash
pip install -r tests/requirements/reqs-flask-1.1.txt
make test
```
Pytest will automatically discover all the tests and skip the ones for which dependencies are not met.
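Skipping like this typically relies on pytest's `importorskip` helper, which is the same pattern the instrumentation walkthrough below uses. As a minimal, illustrative sketch with a made-up module name:

```python
import pytest

# If "somedriver" is not installed in the current environment, every test in
# this module is reported as skipped instead of failing with an ImportError.
somedriver = pytest.importorskip("somedriver")


def test_somedriver_is_available():
    assert somedriver is not None
```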
If you want to go above and beyond and run the full test suite, you need to install several databases (Elasticsearch, PostgreSQL, MySQL, Cassandra, Redis). This can be quite a hassle, so we recommend using our dockerized test setup. See Running tests for detailed instructions.
This project uses pytest for all of its testing needs. Note that pytest can be a bit confusing at first, due to its dynamic discovery features. In particular, fixtures can be hard to discover, because they do not need to be imported to be used. For example, whenever a test has `elasticapm_client` as an argument, that is a fixture which is defined here.
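To illustrate how such a fixture is used, here is a minimal sketch; the test body and the `events` attribute are illustrative assumptions about the test client rather than a documented API:

```python
# pytest matches the argument name "elasticapm_client" to the fixture of the
# same name and injects it automatically; no import is needed in the test file.
def test_transaction_is_recorded(elasticapm_client):
    elasticapm_client.begin_transaction("test")
    elasticapm_client.end_transaction("my-view", "success")
    # The test client keeps captured events in memory (assumed attribute),
    # so tests can assert on what the agent recorded.
    assert elasticapm_client.events
```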
For tests that require external dependencies like databases, or for testing different versions of the same library, we use a matrix build that leverages Docker and docker-compose.
The setup requires a little bit of boilerplate to get started. In this example, we will create an instrumentation for the "foo" database by instrumenting its Python driver, `foodriver`. A sketch that ties the individual steps together follows the list.
- Mark your tests with a pytest marker that describes the new instrumentation at the top of your tests file:

  ```python
  pytestmark = pytest.mark.foo
  ```

  `pytestmark` can also be a list if you need to define more than one mark (e.g. to mark tests as integration tests):

  ```python
  pytestmark = [pytest.mark.foo, pytest.mark.integrationtest]
  ```
- Make sure to use `pytest.importorskip` to import any dependencies that are only required by your tests:

  ```python
  foodriver = pytest.importorskip("foodriver")
  ```
- Create one or more requirements files in `tests/requirements` that list the dependencies that are to be installed specifically for your tests. To only test the newest version of the library, create a file `tests/requirements/reqs-foo-newest.txt` and add something like this to it:

  ```
  foodriver
  -r reqs-base.txt
  ```

  This tells the matrix runner to install the newest version of `foodriver`, as well as the base requirements needed to run the test suite. To test more than one version of the library, create additional `reqs-foo-X.Y.txt` files with specific versions of your instrumented package.
- Create a file `tests/scripts/envs/foo.sh`. Here you can define environment variables that are required to run your tests. As a minimum, you'll have to set the `PYTEST_MARKER` variable to the same value you used above for the pytest marker, e.g.

  ```bash
  export PYTEST_MARKER="-m foo"
  ```
- Add entries in `.ci/jenkins_framework.yml` (for pull requests) and `.ci/jenkins_framework_full.yml` (for nightly builds). Generally, we only test the newest version of an instrumentation with every pull request:

  ```yaml
  - foo-newest
  ```

  To test other versions in the nightly build, add them to `.ci/jenkins_framework_full.yml`.
- OPTIONAL: If you need a real service to test against (e.g. an actual foo database), add an entry in `tests/docker-compose.yml` under `services`:

  ```yaml
  foo:
    image: foobase:latest
  ```

  You'll also have to add a `DOCKER_DEPS` environment variable to `tests/scripts/envs/foo.sh` which tells the matrix to spin up the given docker-compose service before running your tests. You may also need to add things like hostname configuration here.

  ```bash
  DOCKER_DEPS="foo"
  FOO_CONNECTION_URL="http://foo:4711"
  ```
- OPTIONAL: If your instrumented package does not support all Python versions we test with, you can exclude certain combinations by adding them to `.ci/jenkins_exclude.yml`:

  ```yaml
  - PYTHON_VERSION: python-3.5  # foo doesn't support Python 3.5
    FRAMEWORK: foo-newest
  ```
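Putting the pieces together, here is a hedged sketch of what a test module for the made-up `foodriver` package could look like. The marker, the `importorskip` call, and the `FOO_CONNECTION_URL` variable come from the steps above; the file name and the `foodriver.connect`/`query` calls are placeholders, not a real API:

```python
# Hypothetical tests/instrumentation/foo_tests.py
import os

import pytest

# Skip the whole module if the (made-up) driver isn't installed in this matrix cell.
foodriver = pytest.importorskip("foodriver")

# Selected by the matrix runner via PYTEST_MARKER="-m foo".
pytestmark = [pytest.mark.foo, pytest.mark.integrationtest]


def test_foo_query_is_traced(elasticapm_client):
    # FOO_CONNECTION_URL is exported in tests/scripts/envs/foo.sh (see above).
    conn = foodriver.connect(os.environ.get("FOO_CONNECTION_URL", "http://localhost:4711"))
    elasticapm_client.begin_transaction("test")
    conn.query("SELECT 1")  # placeholder call on the made-up driver
    elasticapm_client.end_transaction("test-foo", "success")
    # Assertions on the spans captured by elasticapm_client would go here.
```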
All feature development and most bug fixes hit the master branch first. Pull requests should be reviewed by someone with commit access. Once approved, the author of the pull request, or reviewer if the author does not have commit access, should "Squash and merge".
If you have commit access, the process is as follows:
- Update the version in
elasticapm/version.py
according to the scale of the change. (major, minor or patch) - Update
CHANGELOG.asciidoc
. Rename theUnreleased
section to the correct version (vX.X.X
), and nest under the appropriate sub-heading, e.g.,Python Agent version 5.x
. - For Majors: Add a new row to the EOL table in
docs/upgrading.asciidoc
. The EOL date is the release date plus 18 months. - For Majors: Add the new major version to
conf.yaml
in the elastic/docs repo. - Commit changes with message
update CHANGELOG and bump version to X.Y.Z
whereX.Y.Z
is the version inelasticapm/version.py
- Tag the commit with
git tag -a vX.Y.Z
, for examplegit tag -a v1.2.3
. Copy the changelog for the release to the tag message, removing any leading#
. - Reset the current major branch (
1.x
,2.x
etc) to point to the current master, e.g.git branch -f 1.x master
- Push commits and tags upstream with
git push upstream master && git push upstream --tags
(and optionally to your own fork as well) - Update major branch, e.g.
1.x
on upstream withgit push upstream 1.x
- After tests pass, Jenkins will automatically build a source package, as well as binary wheels.
To upload them to PyPI, go to Jenkins
and look for the build with the correct tag name (
vX.Y.Z
). Once the build is done, a dialog will be shown. Note that you need to be logged in to trigger an upload. - Create a Github release targeting the new tag. Copy the changelog into the body of the release.