Elasticsearch exporter integration #347

Merged 10 commits on Jan 22, 2021
1 change: 1 addition & 0 deletions .github/depcheck.yml
@@ -2,6 +2,7 @@
go_modules:
- github.com/cortexproject/cortex
- github.com/grafana/loki
- github.com/justwatchcom/elasticsearch_exporter
- github.com/oliver006/redis_exporter
- github.com/prometheus/memcached_exporter
- github.com/prometheus/statsd_exporter
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -4,6 +4,7 @@ can be found at [#317](https://github.com/grafana/agent/issues/317).

# Master (unreleased)

- [FEATURE] Added [Elasticsearch exporter](https://github.com/justwatchcom/elasticsearch_exporter) integration. (@colega)
- [BUGFIX] `agentctl config-check` will now work correctly when the supplied config file contains integrations. (@hoenn)

# v0.11.0 (2021-01-20)
3 changes: 3 additions & 0 deletions cmd/agent/agent-integrations-config.yaml
@@ -40,6 +40,9 @@ integrations:
enabled: true
consul_exporter:
enabled: true
elasticsearch_exporter:
enabled: true
address: http://localhost:9200
prometheus_remote_write:
- url: http://localhost:9009/api/prom/push

94 changes: 94 additions & 0 deletions docs/configuration-reference.md
@@ -1985,6 +1985,9 @@ redis_exporter: <redis_exporter_config>
# Controls the dnsmasq_exporter integration
dnsmasq_exporter: <dnsmasq_exporter_config>

# Controls the elasticsearch_exporter integration
elasticsearch_exporter: <elasticsearch_exporter_config>

# Controls the memcached_exporter integration
memcached_exporter: <memcached_exporter_config>

@@ -2866,6 +2869,97 @@ Full reference of options:
[leases_path: <string> | default = "/var/lib/misc/dnsmasq.leases"]
```

### elasticsearch_exporter_config

The `elasticsearch_exporter_config` block configures the `elasticsearch_exporter` integration,
which is an embedded version of
[`elasticsearch_exporter`](https://github.com/justwatchcom/elasticsearch_exporter). This allows for
the collection of metrics from Elasticsearch servers.

Note that currently, an Agent can only collect metrics from a single Elasticsearch server.
However, the exporter can collect metrics from all nodes of the cluster through that single configured server.

Full reference of options:

```yaml
# Enables the elasticsearch_exporter integration, allowing the Agent to automatically
# collect metrics from the configured Elasticsearch server address.
[enabled: <boolean> | default = false]

# Automatically collect metrics from this integration. If disabled,
# the elasticsearch_exporter integration will be run but not scraped and thus not
# remote-written. Metrics for the integration will be exposed at
# /integrations/elasticsearch_exporter/metrics and can be scraped by an external
# process.
[scrape_integration: <boolean> | default = <integrations_config.scrape_integrations>]

# How often should the metrics be collected? Defaults to
# prometheus.global.scrape_interval.
[scrape_interval: <duration> | default = <global_config.scrape_interval>]

# The timeout before considering the scrape a failure. Defaults to
# prometheus.global.scrape_timeout.
[scrape_timeout: <duration> | default = <global_config.scrape_timeout>]

# Allows for relabeling labels on the target.
relabel_configs:
[- <relabel_config> ... ]

# Relabel metrics coming from the integration, allowing you to drop series
# from the integration that you don't care about.
metric_relabel_configs:
[ - <relabel_config> ... ]

# Monitor the exporter itself and include those metrics in the results.
[include_exporter_metrics: <bool> | default = false]

#
# Exporter-specific configuration options
#

# HTTP API address of an Elasticsearch node.
[ address: <string> | default = "http://localhost:9200" ]

# Timeout for trying to get stats from Elasticsearch.
[ timeout: <duration> | default = "5s" ]

# Export stats for all nodes in the cluster. If set, this option overrides `node`.
[ all: <boolean> ]

# Name of the node whose metrics should be exposed.
[ node: <string> ]

# Export stats for indices in the cluster.
[ indices: <boolean> ]

# Export stats for settings of all indices of the cluster.
[ indices_settings: <boolean> ]

# Export stats for cluster settings.
[ cluster_settings: <boolean> ]

# Export stats for shards in the cluster (implies indices).
[ shards: <boolean> ]

# Export stats for the cluster snapshots.
[ snapshots: <boolean> ]

# Cluster info update interval for the cluster label.
[ clusterinfo_interval: <duration> | default = "5m" ]

# Path to PEM file that contains trusted Certificate Authorities for the Elasticsearch connection.
[ ca: <string> ]

# Path to PEM file that contains the private key for client auth when connecting to Elasticsearch.
[ client_private_key: <string> ]

# Path to PEM file that contains the corresponding cert for the private key to connect to Elasticsearch.
[ client_cert: <string> ]

# Skip SSL verification when connecting to Elasticsearch.
[ ssl_skip_verify: <boolean> ]
```
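
As a usage sketch (mirroring the example added to `cmd/agent/agent-integrations-config.yaml` in this PR, and assuming an Elasticsearch node reachable at `http://localhost:9200`), enabling the integration looks roughly like:

```yaml
integrations:
  elasticsearch_exporter:
    enabled: true
    address: http://localhost:9200
  prometheus_remote_write:
    - url: http://localhost:9009/api/prom/push
```

The collected metrics are exposed at `/integrations/elasticsearch_exporter/metrics` and, when scraping is enabled, remote-written like any other integration.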

### memcached_exporter_config

The `memcached_exporter_config` block configures the `memcached_exporter`
15 changes: 15 additions & 0 deletions example/docker-compose/docker-compose.integrations.yaml
@@ -95,3 +95,18 @@ services:
image: consul
ports:
- "8500:8500"

elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
environment:
- node.name=elasticsearch
- cluster.name=es-grafana-agent-cluster
- discovery.type=single-node
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
ports:
- "9200:9200"

volumes:
elasticsearch_data:
driver: local
1 change: 1 addition & 0 deletions go.mod
@@ -18,6 +18,7 @@ require (
github.com/hashicorp/yamux v0.0.0-20190923154419-df201c70410d // indirect
github.com/joshdk/go-junit v0.0.0-20200702055522-6efcf4050909 // indirect
github.com/jsternberg/zap-logfmt v1.2.0
github.com/justwatchcom/elasticsearch_exporter v1.1.0
github.com/miekg/dns v1.1.31
github.com/ncabatoff/process-exporter v0.7.5
github.com/oklog/run v1.1.0
2 changes: 2 additions & 0 deletions go.sum
@@ -1243,6 +1243,8 @@ github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfV
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
github.com/justwatchcom/elasticsearch_exporter v1.1.0 h1:a4VsY/WL2eLUFROJVAKVYS1wBVibjsXDnvXZXGlWPv0=
github.com/justwatchcom/elasticsearch_exporter v1.1.0/go.mod h1:LGKG9gqz9UugCJRoKgfoPIVGEbR3bQYFFHNnc0TOwVA=
github.com/jwilder/encoding v0.0.0-20170811194829-b4e1701a28ef/go.mod h1:Ct9fl0F6iIOGgxJ5npU/IUOhOhqlVrGjyIZc8/MagT0=
github.com/k0kubun/go-ansi v0.0.0-20180517002512-3bf9e2903213/go.mod h1:vNUNkEQ1e29fT/6vq2aBdFsgNPmy8qMdSay1npru+Sw=
github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0/go.mod h1:1NbS8ALrpOvjt0rHPNLyCIeMtbizbir8U//inJ+zuB8=
62 changes: 46 additions & 16 deletions pkg/integrations/collector_integration.go
@@ -12,21 +12,52 @@ import (
"github.com/prometheus/common/version"
)

// CollectorIntegration is an integration exposing metrics from a Prometheus
// collector.
// CollectorIntegration is an integration exposing metrics from one or more Prometheus collectors.
type CollectorIntegration struct {
name string
c prometheus.Collector
cs []prometheus.Collector
includeExporterMetrics bool
runner func(context.Context) error
}

// NewCollectorIntegration creates a basic integration that exposes metrics
// from a prometheus.Collector.
func NewCollectorIntegration(name string, c prometheus.Collector, includeExporterMetrics bool) *CollectorIntegration {
return &CollectorIntegration{
name: name,
c: c,
includeExporterMetrics: includeExporterMetrics,
// NewCollectorIntegration creates a basic integration that exposes metrics from one or more prometheus.Collectors.
func NewCollectorIntegration(name string, configs ...CollectorIntegrationConfig) *CollectorIntegration {
i := &CollectorIntegration{
name: name,
runner: func(ctx context.Context) error {
// We don't need to do anything by default, so we can just wait for the context to finish.
<-ctx.Done()
return ctx.Err()
},
}
for _, configure := range configs {
configure(i)
}
return i
}

// CollectorIntegrationConfig defines constructor configuration for NewCollectorIntegration
type CollectorIntegrationConfig func(integration *CollectorIntegration)

// WithCollectors adds collectors to the CollectorIntegration being created.
func WithCollectors(cs ...prometheus.Collector) CollectorIntegrationConfig {
return func(i *CollectorIntegration) {
i.cs = append(i.cs, cs...)
}
}

// WithRunner replaces the runner of the CollectorIntegration.
// The runner function should keep running until the provided context is done.
func WithRunner(runner func(context.Context) error) CollectorIntegrationConfig {
return func(i *CollectorIntegration) {
i.runner = runner
}
}

// WithExporterMetricsIncluded exposes the exporter's own metrics in the results when included is true.
func WithExporterMetricsIncluded(included bool) CollectorIntegrationConfig {
return func(i *CollectorIntegration) {
i.includeExporterMetrics = included
}
}

Expand All @@ -45,8 +76,10 @@ func (i *CollectorIntegration) RegisterRoutes(r *mux.Router) error {

func (i *CollectorIntegration) handler() (http.Handler, error) {
r := prometheus.NewRegistry()
if err := r.Register(i.c); err != nil {
return nil, fmt.Errorf("couldn't register %s: %w", i.name, err)
for _, c := range i.cs {
if err := r.Register(c); err != nil {
return nil, fmt.Errorf("couldn't register %s: %w", i.name, err)
}
}

// Register <integration name>_build_info metrics, generally useful for
@@ -81,8 +114,5 @@ func (i *CollectorIntegration) ScrapeConfigs() []config.ScrapeConfig {

// Run satisfies Integration.Run.
func (i *CollectorIntegration) Run(ctx context.Context) error {
// We don't need to do anything here, so we can just wait for the context to
// finish.
<-ctx.Done()
return ctx.Err()
return i.runner(ctx)
}
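
Taken together, these changes turn integration construction into a small functional-options builder. Below is a hedged sketch of how a downstream exporter package might use it; `buildIntegration`, `exporterCollector`, and `runBackgroundLoop` are illustrative placeholder names, and the import path is assumed to be the agent's usual `github.com/grafana/agent/pkg/integrations`:

```go
package example

import (
	"context"

	"github.com/grafana/agent/pkg/integrations"
	"github.com/prometheus/client_golang/prometheus"
)

// buildIntegration shows how an exporter package could assemble a
// CollectorIntegration with the new options. exporterCollector is any
// prometheus.Collector; runBackgroundLoop is optional background work that
// should stop once its context is cancelled.
func buildIntegration(
	name string,
	exporterCollector prometheus.Collector,
	runBackgroundLoop func(ctx context.Context) error,
) *integrations.CollectorIntegration {
	return integrations.NewCollectorIntegration(
		name,
		// Register one or more collectors on the integration's registry.
		integrations.WithCollectors(exporterCollector),
		// Replace the default "wait for ctx.Done()" runner with real work.
		integrations.WithRunner(runBackgroundLoop),
		// Also expose the exporter's own metrics in the results.
		integrations.WithExporterMetricsIncluded(true),
	)
}
```

Integrations that only need collectors, such as `consul_exporter` and `dnsmasq_exporter` below, can simply pass `integrations.WithCollectors(e)` and keep the default runner.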
2 changes: 1 addition & 1 deletion pkg/integrations/consul_exporter/consul_exporter.go
@@ -88,5 +88,5 @@ func New(log log.Logger, c *Config) (integrations.Integration, error) {
return nil, err
}

return integrations.NewCollectorIntegration(c.Name(), e, false), nil
return integrations.NewCollectorIntegration(c.Name(), integrations.WithCollectors(e)), nil
}
2 changes: 1 addition & 1 deletion pkg/integrations/dnsmasq_exporter/dnsmasq_exporter.go
@@ -57,5 +57,5 @@ func New(log log.Logger, c *Config) (integrations.Integration, error) {
SingleInflight: true,
}, c.DnsmasqAddress, c.LeasesPath)

return integrations.NewCollectorIntegration(c.Name(), exporter, false), nil
return integrations.NewCollectorIntegration(c.Name(), integrations.WithCollectors(exporter)), nil
}