The AppDynamics Ansible Collection installs and configures AppDynamics agents. All supported agents are downloaded automatically from the AppDynamics download portal onto the Ansible control node, which makes it easy to acquire and upgrade agents declaratively.
Additionally, this AppDynamics Ansible Collection supports auto-instrumentation of JBoss (WildFly) and Tomcat, on Linux only.
Refer to the role variables below for a description of available deployment options.
We built this AppDynamics Ansible Collection to support an (immutable) infrastructure-as-code deployment methodology. This means the collection will NOT preserve any manual configuration changes on the target servers; it will overwrite any local or pre-existing configuration with the variables defined in the playbook. We therefore strongly recommend that you convert any custom agent configuration (that this collection does not support) into an Ansible role, to ensure consistent deployments and configurations across your estate.
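For example, a site-specific customisation could be captured as a small wrapper role that runs after the collection, so it is reapplied on every deployment. This is only a sketch: the role name, template name, and destination path below are hypothetical, not part of the collection.

```yaml
# roles/my_agent_config/tasks/main.yml -- hypothetical wrapper role that
# reapplies site-specific agent settings after each collection run, so the
# customisation survives the collection's overwrite-on-deploy behaviour.
- name: Apply custom agent configuration
  template:
    src: custom-agent-config.xml.j2                       # your own template (not part of the collection)
    dest: /opt/appdynamics/java/conf/custom-config.xml    # hypothetical path; adjust to your agent install
    owner: appdynamics
    group: appdynamics
    mode: "0644"
  become: yes
```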
- Requires Ansible >= 2.9.0
- Supports most Debian and RHEL-based Linux distributions, and Windows.
- Windows requires PowerShell >= 5.0.
- Network/firewall access on the Ansible control node to download AppDynamics agents from https://download-files.appdynamics.com and https://download.appdynamics.com.
- `jq` is required on the Ansible control node. We recommend installing it manually (since it requires `sudo` access and is a one-time task), or running the `install_jq.yaml` playbook in the `playbooks` folder. For example: `ansible-playbook install_jq.yaml --ask-become-pass -e 'ansible_python_interpreter=/usr/bin/python'`
- `lxml` is required if you need to enable enhanced agent logging (e.g. DEBUG, TRACE). We recommend installing it manually on the Ansible control node using `pip3 install lxml`, or running the `install_lxml.yaml` playbook in the `playbooks` folder.
The agent binaries and the installation process for the Machine and DB agents depend on the OS type (Windows or Linux). This AppDynamics collection abstracts those OS differences, so you only need to provide `agent_type`, without necessarily specifying your OS type.
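Because the roles resolve OS differences per host, a single play can target a mixed Windows/Linux inventory. A minimal sketch (the host group and version number are illustrative):

```yaml
---
- hosts: all   # may contain both Windows and Linux hosts
  tasks:
    - include_role:
        name: appdynamics.agents.machine
      vars:
        agent_type: machine    # the role picks the right binaries for each host's OS
        agent_version: 21.1.0  # illustrative version
```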
Install the AppDynamics Collection from Ansible Galaxy on your Ansible control node:
ansible-galaxy collection install appdynamics.agents
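For repeatable installs across control nodes, the collection can also be pinned in a `requirements.yml` file; the version constraint below is illustrative:

```yaml
# requirements.yml
collections:
  - name: appdynamics.agents
    version: ">=1.0.0"   # illustrative constraint; pin to the release you have tested
```

and installed with `ansible-galaxy collection install -r requirements.yml`.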
Example playbooks for each agent type are provided in the collection's `playbooks` folder. You can either reference the example playbooks in the collection installation folder, or access the examples in the GitHub repository.
The `var/playbooks/controller.yaml` file is meant to contain constant variables such as `enable_ssl`, `controller_port`, etc. You can either include it in the playbook, as shown in the Java example below, or override the variables in the playbooks, whichever works best for you.
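As a sketch, such a shared controller variables file might look like the following; the variable names are taken from the examples in this README, and every value is a placeholder:

```yaml
# controller.yaml -- shared controller settings included by the playbooks
controller_host_name: "mycompany.saas.appdynamics.com"   # placeholder value
controller_account_name: "customer1"                     # better kept in Ansible Vault
controller_account_access_key: "changeme"                # better kept in Ansible Vault
enable_ssl: "true"
controller_port: "443"
```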
This role features:
- java-agent installation for Windows/Linux
Example 1: Install the java-agent without any application instrumentation.
```yaml
---
- hosts: all
  tasks:
    - name: Include variables for the controller settings
      include_vars: vars/controller.yaml
    - include_role:
        name: appdynamics.agents.java
      vars:
        agent_version: 21.1.0
        agent_type: java8
        application_name: "IoT_API" # agent default application
        tier_name: "java_tier"      # agent default tier
```
Example 2: Install the java-agent, update the Java startup script, and restart the application.
```yaml
---
- hosts: single-java-host
  tasks:
    - name: Include variables for the controller settings
      include_vars: vars/controller.yaml
    - import_role:
        name: appdynamics.agents.java
        public: yes
      vars:
        agent_version: 21.1.0
        agent_type: java8
        application_name: "BIGFLY" # agent default application
        tier_name: "java_tier"     # agent default tier
    - name: Edit startup script with new java startup variables
      lineinfile:
        path: /opt/application/startAll.sh
        # Line to search/match against
        regexp: '^(.*)(-jar.*$)'
        # Line to replace with
        line: '\1 -javaagent:{{ java_agent_dest_folder_linux }}/javaagent.jar -Dappdynamics.agent.nodeName=application-1 \2'
        backup: yes
        backrefs: yes
        state: present
      notify: RestartingApp
    - name: Allow appuser write to appd logs folder
      user:
        name: appuser
        groups:
          - appdynamics
        append: yes
      become: yes
  handlers:
    - name: RestartingApp
      # shell (not command) is required because of the && operator
      shell: '/opt/application/stopAll.sh && /opt/application/startAll.sh'
      args:
        chdir: '/opt/application/'
```
This role features:
- java-agent installation for Linux
- instrumentation of Jboss/Wildfly
- automatic application restart (if a systemd service is present)
- java agent start verification
Example 1: Install java-agent and instrument one or more applications.
```yaml
---
- hosts: all
  tasks:
    - name: Include variables for the controller settings
      include_vars: vars/controller.yaml
    - include_role:
        name: appdynamics.agents.java
        # expose java role variables to the following instrumentation tasks
        public: yes
      vars:
        agent_version: 21.1.0
        agent_type: java8
    - include_role:
        name: appdynamics.agents.instrument_jboss
      vars:
        # instrument jboss:
        application_name: "IoT_API2"
        tier_name: "Jboss"
        jboss_service: wildfly
        app_user: wildfly
        restart_app: yes
        jboss_config: /opt/wildfly/bin/standalone.sh
```
Example 2: To ensure all instrumented applications have access to the java-agent logs directory, this role creates an `appdynamics` functional user/group to own the java-agent directory, and then adds the applications' PID users to the `appdynamics` group.
In some cases, when the application PID user is not local on the Linux host (i.e. it comes from an external source), it cannot be added to the `appdynamics` group. In that case, you can let the application user own the java-agent directory instead.
```yaml
---
- hosts: all
  tasks:
    - name: Include variables for the controller settings
      include_vars: vars/controller.yaml
    - include_role:
        name: appdynamics.agents.java
        # expose java role variables to the following instrumentation tasks
        public: yes
      vars:
        agent_version: 21.1.0
        agent_type: java8
        # single app mode: skip appdynamics user creation and let the app user (wildfly in this case) own the java-agent directory
        create_appdynamics_user: no
        agent_dir_permission:
          user: wildfly
          group: wildfly
    - include_role:
        name: appdynamics.agents.instrument_jboss
      vars:
        # instrument jboss:
        application_name: "IoT_API2"
        tier_name: "Jboss"
        jboss_service: wildfly
        app_user: wildfly
        restart_app: yes
        jboss_config: /opt/wildfly/bin/standalone.sh
```
This role features:
- java-agent installation for Linux
- instrumentation of Apache Tomcat
- automatic application restart (if a systemd service is present)
- java agent start verification
Example 1: Install java-agent and instrument one or more applications.
```yaml
---
- hosts: all
  tasks:
    - name: Include variables for the controller settings
      include_vars: vars/controller.yaml
    - include_role:
        name: appdynamics.agents.java
        # expose java role variables to the following instrumentation tasks
        public: yes
      vars:
        agent_version: 21.1.0
        agent_type: java8
    - include_role:
        name: appdynamics.agents.instrument_tomcat
      vars:
        # instrument tomcat:
        tomcat_service: tomcat9
        application_name: "IoT_API22"
        tier_name: "Tomcat"
        app_user: tomcat
        restart_app: yes
        tomcat_config: /usr/share/tomcat9/bin/setenv.sh
```
Example 2: To ensure all instrumented applications have access to the java-agent logs directory, this role creates an `appdynamics` functional user/group to own the java-agent directory, and then adds the applications' PID users to the `appdynamics` group.
In some cases, when the application PID user is not local on the Linux host (i.e. it comes from an external source), it cannot be added to the `appdynamics` group. In that case, you can let the application user own the java-agent directory instead.
```yaml
---
- hosts: all
  tasks:
    - name: Include variables for the controller settings
      include_vars: vars/controller.yaml
    - include_role:
        name: appdynamics.agents.java
        # expose java role variables to the following instrumentation tasks
        public: yes
      vars:
        agent_version: 21.1.0
        agent_type: java8
        # single app mode: skip appdynamics user creation and let the app user (tomcat in this case) own the java-agent directory
        create_appdynamics_user: no
        agent_dir_permission:
          user: tomcat
          group: tomcat
    - include_role:
        name: appdynamics.agents.instrument_tomcat
      vars:
        # instrument tomcat:
        tomcat_service: tomcat9
        application_name: "IoT_API22"
        tier_name: "Tomcat"
        app_user: tomcat
        restart_app: yes
        tomcat_config: /usr/share/tomcat9/bin/setenv.sh
```
In the playbook below, the parameters are initialised directly in the YAML file rather than included from `var/playbooks/controller.yaml`.
Example 1: Install the .NET agent and instrument standalone applications.
```yaml
---
- hosts: windows
  tasks:
    - include_role:
        name: appdynamics.agents.dotnet
      vars:
        agent_version: 20.8.0
        agent_type: dotnet
        application_name: 'IoT_API'
        # Your controller details
        controller_account_access_key: "123456" # Please add this to your Vault
        controller_global_analytics_account_name: "customer1_GUID" # Please add this to your Vault
        controller_host_name: "fieldlab.saas.appdynamics.com"
        controller_account_name: "customer1" # Please add this to your Vault
        enable_ssl: "true"
        controller_port: "443"
        enable_proxy: "true" # use quotes please
        proxy_host: "10.0.1.3"
        proxy_port: "80"
        monitor_all_IIS_apps: "false" # Enable automatic instrumentation of all IIS applications
        runtime_reinstrumentation: "true" # Runtime reinstrumentation works for .NET Framework 4.5.2 and greater
        # Define standalone executable applications to monitor
        standalone_applications:
          - tier: login
            executable: login.exe
          - tier: tmw
            executable: tmw.exe
            command-line: "-x"
          - tier: mso
            executable: mso.exe
```
| Variable | Description | Required | Default |
|---|---|---|---|
| `monitor_all_IIS_apps` | Enable automatic instrumentation of all IIS applications | N | no |
| `runtime_reinstrumentation` | Runtime re-instrumentation works for .NET Framework 4.5.2 and greater. Note: make sure you test this first in a non-production environment | N | no |
| `dotnet_machine_agent` | YAML map that describes the .NET machine agent settings. See `roles/dotnet/defaults/main.yml` for an example | N | |
| `standalone_applications` | List of standalone services to be instrumented with the .NET agent. See `roles/dotnet/defaults/main.yml` for an example | N | |
| `logFileFolderAccessPermissions` | List of users who require write access to the agent's log directory (i.e. the user who runs IIS). See `roles/dotnet/defaults/main.yml` for an example | N | |
| `restart_app` | Set to `yes` to automatically restart IIS | N | no |
In the playbook below, the parameters for communicating with the controller are included from `vars/controller.yaml`.
Example 1: Install the .NET Core agent on a Linux host and set the environment variables needed to start the application with the agent.
```yaml
---
- hosts: netcore_lin
  tasks:
    - name: Include variables for the controller settings
      include_vars: vars/controller.yaml
    - include_role:
        name: appdynamics.agents.dotnetcore
        public: yes
      vars:
        # Define agent type and version
        agent_version: 21.5.0
        agent_type: dotnetcore
        application_name: "BIGCOMPANY"
        tier_name: "dotnet"
        # Directory permissions for the agent. These can also be set at host level in the inventory
        agent_dir_permission: # defaults to root:root if not specified
          user: "centos"  # This user must pre-exist. It is recommended to use the PID owner of your netcore app
          group: "centos" # This group must pre-exist
    - name: Change application startup script
      blockinfile:
        path: /opt/Gateway/StartGatewayApi.sh
        backup: yes
        insertbefore: BOF
        marker: "# {mark} appd instrumentation"
        block: |
          export APPDYNAMICS_AGENT_APPLICATION_NAME="BIGCOMPANY"
          export APPDYNAMICS_AGENT_TIER_NAME="dotnet-gateway"
          export APPDYNAMICS_AGENT_REUSE_NODE_NAME=true
          export APPDYNAMICS_AGENT_REUSE_NODE_NAME_PREFIX="dotnet-gw"
          export CORECLR_PROFILER={57e1aa68-2229-41aa-9931-a6e93bbc64d8}
          export CORECLR_ENABLE_PROFILING=1
          export CORECLR_PROFILER_PATH={{ dotnet_core_agent_dest_folder_linux }}/libappdprofiler.so
      notify: RestartingApp
  handlers:
    - name: RestartingApp
      shell: '{{ item }}'
      args:
        chdir: '/opt/Gateway/'
      with_items:
        - '/opt/Gateway/StopGatewayApi.sh'
        - '/opt/Gateway/StartGatewayApi.sh'
```
Example 1: Install the database agent on a Linux host.
```yaml
---
- hosts: linux
  tasks:
    - include_role:
        name: appdynamics.agents.db
      vars:
        agent_version: 20.9.0
        agent_type: db
        controller_account_access_key: "b0248ceb-c954-4a37-97b5-207e90418cb4" # Please add this to your Vault
        controller_host_name: "ansible-20100nosshcont-bum4wzwa.appd-cx.com" # Your AppDynamics controller
        controller_account_name: "customer1" # Please add this to your Vault
        enable_ssl: "false"
        controller_port: "8090"
        db_agent_name: "ProdDBAgent"
```
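The examples in this README mark the controller secrets with "Please add this to your Vault". One way to do that is to keep them in an encrypted vars file created with `ansible-vault create vault.yaml` and reference them from the playbook. This is only a sketch; the file name and `vault_controller_access_key` variable name are hypothetical:

```yaml
---
- hosts: linux
  vars_files:
    - vault.yaml   # encrypted at rest; defines vault_controller_access_key (hypothetical name)
  tasks:
    - include_role:
        name: appdynamics.agents.db
      vars:
        agent_version: 20.9.0
        agent_type: db
        controller_account_access_key: "{{ vault_controller_access_key }}"
```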
Example 1: Install the machine agent on hosts that communicate with the controller through a proxy.
```yaml
---
- hosts: all
  tasks:
    - include_role:
        name: appdynamics.agents.machine
      vars:
        # Define agent type and version
        agent_version: 20.9.0
        agent_type: machine
        machine_hierarchy: "AppName|Owners|Environment|" # Make sure it ends with a |
        sim_enabled: "true"
        # Analytics settings
        analytics_event_endpoint: "https://fra-ana-api.saas.appdynamics.com:443"
        enable_analytics_agent: "true"
        # Your controller details
        controller_account_access_key: "123key" # Please add this to your Vault
        controller_host_name: "fieldlab.saas.appdynamics.com" # Your AppDynamics controller
        controller_account_name: "customer1" # Please add this to your Vault
        enable_ssl: "false"
        controller_port: "8090"
        controller_global_analytics_account_name: 'customer1_e52eb4e7-25d2-41c4-a5bc-9685502317f2' # Please add this to your Vault
        # Config properties docs: https://docs.appdynamics.com/display/latest/Machine+Agent+Configuration+Properties
        # Can be used to configure the proxy for the agent
        java_system_properties: "-Dappdynamics.http.proxyHost=10.0.4.2 -Dappdynamics.http.proxyPort=9090" # mind the space between each property
```
The logger role allows you to change the agent log level for already deployed agents (either one agent type at a time or multiple types, depending on the value of the `agents_to_set_loggers_for` list).
`init_and_validate_agent_variables` should be set to `false` when using the logger role after the agents are already deployed, to skip unnecessary common-role processing.
```yaml
- hosts: all
  tasks:
    - include_role:
        name: appdynamics.agents.logger
      vars:
        init_and_validate_agent_variables: false # skip agent type variable init and validation
        agents_to_set_loggers_for: ['db', 'java', 'machine']
        agent_log_level: "info"
        agent_loggers: ['com.appdynamics', 'com', 'com.singularity', 'com.singularity.BusinessTransactions', 'com.singularity.ee.agent.dbagent.collector.server.connection.wmi.NativeClient']
```
Check Agent Type/Roles for specific variable support.
Here are a few ways you can pitch in:
- Report bugs or issues.
- Fix bugs and submit pull requests.
- Write, clarify or fix documentation.
- Refactor code.