# redis-collectd-plugin

💥 Project status: I am no longer working with Redis + collectd, so I am not actively maintaining this project. High-quality PRs are welcome. Please contact me if you're interested in helping to maintain the project.

A Redis plugin for collectd using collectd's Python plugin.

You can capture any kind of Redis metrics like:

- Memory used
- Commands processed per second
- Number of connected clients and slaves
- Number of blocked clients
- Number of keys stored (per database)
- Uptime
- Changes since last save
- Replication delay (per slave)

## Install

  1. Place redis_info.py in /opt/collectd/lib/collectd/plugins/python (assuming you have collectd installed to /opt/collectd).
  2. Configure the plugin (see below).
  3. Restart collectd.

## Configuration

Add the following to your collectd config, or use the included redis.conf as a full example. Note that you will have to adjust the cmdstat entries depending on your Redis version; see below.

    # Configure the redis_info-collectd-plugin

    <LoadPlugin python>
      Globals true
    </LoadPlugin>

    <Plugin python>
      ModulePath "/opt/collectd/lib/collectd/plugins/python"
      Import "redis_info"

      <Module redis_info>
        Host "localhost"
        Port 6379
        # Un-comment to use AUTH
        #Auth "1234"
        # Cluster mode expected by default
        #Cluster false
        Verbose false
        #Instance "instance_1"
        # Redis metrics to collect (prefix with Redis_)
        Redis_db0_keys "gauge"
        Redis_uptime_in_seconds "gauge"
        Redis_uptime_in_days "gauge"
        Redis_lru_clock "counter"
        Redis_connected_clients "gauge"
        Redis_connected_slaves "gauge"
        Redis_blocked_clients "gauge"
        Redis_evicted_keys "gauge"
        Redis_used_memory "bytes"
        Redis_used_memory_peak "bytes"
        Redis_changes_since_last_save "gauge"
        Redis_instantaneous_ops_per_sec "gauge"
        Redis_rdb_bgsave_in_progress "gauge"
        Redis_total_connections_received "counter"
        Redis_total_commands_processed "counter"
        Redis_keyspace_hits "derive"
        Redis_keyspace_misses "derive"
        #Redis_master_repl_offset "gauge"
        #Redis_master_last_io_seconds_ago "gauge"
        #Redis_slave_repl_offset "gauge"
        Redis_cmdstat_info_calls "counter"
        Redis_cmdstat_info_usec "counter"
        Redis_cmdstat_info_usec_per_call "gauge"
      </Module>
    </Plugin>
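
Each of the Redis_* lines above names an INFO field (prefixed with Redis_) and the collectd data source type to report it as. The minimal Python sketch below illustrates that mapping on hand-written sample data; it is not the plugin's actual code, and the sample_info values are invented purely for illustration.

    # Minimal sketch of how "Redis_<field> <type>" config entries map INFO
    # fields to collectd values. Not the plugin's real code; sample_info is
    # made up purely for illustration.
    configured_metrics = {       # config keys with the "Redis_" prefix stripped
        "used_memory": "bytes",
        "connected_clients": "gauge",
        "total_commands_processed": "counter",
        "cmdstat_info_calls": "counter",
    }

    sample_info = {              # stand-in for what "redis-cli info" might report
        "used_memory": 1048576,
        "connected_clients": 7,
        "total_commands_processed": 123456,
        # no cmdstat_info_calls here, e.g. because INFO was never called
    }

    for field, value_type in configured_metrics.items():
        if field in sample_info:
            print(f"dispatch type={value_type} type_instance={field} "
                  f"value={sample_info[field]}")
        else:
            # this situation produces the "Info key not found" log message
            # described below
            print(f"skip {field}: not present in INFO output")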

Use the command below to see which keys are present or missing:

    redis-cli -h redis-host info commandstats

For example, certain entries will not show up because the corresponding commands were never used. Also, if you enable verbose logging and see:

    ... collectd[6139]: redis_info plugin: Info key not found: cmdstat_del_calls, Instance: redis-server.tld.example.org:6379

It means the given Redis server does not return that value; comment it out of the config to avoid filling the logs with not-so-useful data, not to mention that you may trigger log lines being dropped.
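
If you want to check this up front, a small script along the following lines can compare the cmdstat entries you have configured against what the server actually reports. It is only a rough helper sketch: it shells out to redis-cli as above, and the host name and the configured key list are placeholders to adapt to your setup.

    # Rough helper sketch: compare configured cmdstat metrics against what
    # "redis-cli info commandstats" reports. REDIS_HOST and the configured
    # list below are placeholders; adapt them to your setup.
    import subprocess

    REDIS_HOST = "redis-host"
    configured = ["cmdstat_info_calls", "cmdstat_del_calls", "cmdstat_get_calls"]

    output = subprocess.run(
        ["redis-cli", "-h", REDIS_HOST, "info", "commandstats"],
        capture_output=True, text=True, check=True,
    ).stdout

    # INFO commandstats lines look like:
    #   cmdstat_get:calls=42,usec=1234,usec_per_call=29.38
    available = {line.split(":", 1)[0] for line in output.splitlines()
                 if line.startswith("cmdstat_")}

    for key in configured:
        base = key                      # strip the _calls/_usec/_usec_per_call suffix
        for suffix in ("_usec_per_call", "_usec", "_calls"):
            if base.endswith(suffix):
                base = base[:-len(suffix)]
                break
        status = "present" if base in available else "missing (comment it out)"
        print(key, "->", status)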

## Multiple Redis instances

You can monitor multiple Redis instances, each with its own configuration, from the same machine by repeating the <Module> section, for example:

    <Plugin python>
      ModulePath "/opt/collectd_plugins"
      Import "redis_info"

      <Module redis_info>
        Host "127.0.0.1"
        Port 9100
        Verbose true
        Instance "instance_9100"
        Redis_uptime_in_seconds "gauge"
        Redis_used_memory "bytes"
        Redis_used_memory_peak "bytes"
      </Module>

      <Module redis_info>
        Host "127.0.0.1"
        Port 9101
        Verbose true
        Instance "instance_9101"
        Redis_uptime_in_seconds "gauge"
        Redis_used_memory "bytes"
        Redis_used_memory_peak "bytes"
        Redis_master_repl_offset "gauge"
      </Module>

      <Module redis_info>
        Host "127.0.0.1"
        Port 9102
        Verbose true
        Instance "instance_9102"
        Redis_uptime_in_seconds "gauge"
        Redis_used_memory "bytes"
        Redis_used_memory_peak "bytes"
        Redis_slave_repl_offset "gauge"

        # note: these entries are not present in the sections above
        Redis_cmdstat_info_calls "counter"
        Redis_cmdstat_info_usec "counter"
        Redis_cmdstat_info_usec_per_call "gauge"
      </Module>
    </Plugin>

These three Redis instances listen on different ports, so each gets a different plugin_instance built from Host and Port:

"plugin_instance" => "127.0.0.1:9100",
"plugin_instance" => "127.0.0.1:9101",
"plugin_instance" => "127.0.0.1:9102",

These values will be part of the metric name emitted by collectd, e.g. collectd.redis_info.127.0.0.1:9100.bytes.used_memory

If you want to set a static value for the plugin instance, use the Instance configuration option:

    ...
      <Module redis_info>
        Host "127.0.0.1"
        Port 9102
        Instance "redis-prod"
      </Module>
    ...

This will result in metric names like: collectd.redis_info.redis-prod.bytes.used_memory

Instance can be empty; in this case the metric name will not contain any reference to the host/port. If it is omitted entirely, the host:port value is added to the metric name.
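
The naming behaviour described above boils down to roughly the following logic (a sketch for clarity, not the plugin's actual implementation):

    # Sketch of the plugin_instance naming rules described above;
    # not the plugin's actual implementation.
    def plugin_instance(host, port, instance=None):
        if instance is None:            # Instance option omitted
            return f"{host}:{port}"     # e.g. "127.0.0.1:9100"
        return instance                 # "" drops the host/port reference entirely

    print(plugin_instance("127.0.0.1", 9100))                # 127.0.0.1:9100
    print(plugin_instance("127.0.0.1", 9102, "redis-prod"))  # redis-prod
    print(plugin_instance("127.0.0.1", 9102, ""))            # (empty)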

## Multiple data source types

You can send multiple data source types for the same key by listing it more than once in the Module section:

    ...
      <Module redis_info>
        Host "localhost"
        Port 6379

        Redis_total_net_input_bytes "bytes"
        Redis_total_net_output_bytes "bytes"
        Redis_total_net_input_bytes "derive"
        Redis_total_net_output_bytes "derive"
      </Module>
    ...
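
Collecting the same field under two types is useful because they answer different questions: bytes records the absolute running total, while derive lets collectd turn successive readings into a per-second rate. A small illustration of what the derive type works out from two samples (the readings are made up):

    # Illustration of what a "derive" data source yields from a counter such
    # as total_net_input_bytes. The sample readings are made up.
    t0, bytes0 = 0, 1_000_000      # first reading:  total bytes at t=0s
    t1, bytes1 = 10, 1_250_000     # second reading: total bytes at t=10s

    rate = (bytes1 - bytes0) / (t1 - t0)
    print(f"absolute total (bytes type): {bytes1}")
    print(f"network input rate (derive type): {rate:.0f} bytes/sec")  # 25000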

## Graph examples

These graphs were created using collectd's rrdtool plugin, drraw, and Graphite with Grafana.

(Screenshots: clients connected, commands per second, db0 keys, memory used, and command stats in Grafana.)

## Requirements

- collectd 4.9+

## Devel workflow with Docker & Docker Compose

You can start hacking right away by using the provided Docker Compose manifest. No development packages or libraries need to be installed on the development host; just Docker and Docker Compose and you are good to go!

The Compose manifest launches a Redis server container based on the redis[:latest] image (4.x as of Dec '17) and a collectd+Python runtime container from the pataquets/collectd-python image. Both containers share the same network interface so they connect via localhost (not a best practice in production, but fair enough for development). The collectd container also mounts the following from the Docker host's git repo directory:

- The Python plugin file.
- The redis.conf config file.
- An additional collectd conf file that sends all collectd readings to stdout (using the CSV plugin).

Just hack, change the configs, and test by running:

    $ docker-compose up

Stop by CTRL+C'ing. Rinse and repeat.