chore: some hardening
palmerabollo committed Mar 31, 2020
1 parent 0455720 commit 4339ddf
Showing 4 changed files with 18 additions and 21 deletions.
5 changes: 3 additions & 2 deletions Dockerfile
@@ -1,6 +1,7 @@
-FROM golang:1.13.8-alpine3.11 as build
+FROM golang:1.14.1-alpine3.11 as build

-# Get prebuild libkafka
+# Get prebuild libkafka.
+# XXX stop using the edgecommunity channel once librdkafka 1.3.0 is officially published
RUN echo "@edge http://dl-cdn.alpinelinux.org/alpine/edge/main" >> /etc/apk/repositories && \
    echo "@edgecommunity http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && \
    apk add --no-cache alpine-sdk 'librdkafka@edgecommunity>=1.3.0' 'librdkafka-dev@edgecommunity>=1.3.0'
10 changes: 3 additions & 7 deletions README.md
@@ -38,7 +38,7 @@ The Avro-JSON serialization is the same. See the [Avro schema](./schemas/metric.

### prometheus-kafka-adapter

-There is a docker image `telefonica/prometheus-kafka-adapter:1.5.1` [available on Docker Hub](https://hub.docker.com/r/telefonica/prometheus-kafka-adapter/).
+There is a docker image `telefonica/prometheus-kafka-adapter:1.6.0` [available on Docker Hub](https://hub.docker.com/r/telefonica/prometheus-kafka-adapter/).

Prometheus-kafka-adapter listens for metrics coming from Prometheus and sends them to Kafka. This behaviour can be configured with the following environment variables:

@@ -60,9 +60,7 @@ To connect to Kafka over SSL define the following additonal environment variable
- `KAFKA_SSL_CLIENT_KEY_PASS`: Kafka SSL client certificate key password (optional), defaults to `""`
- `KAFKA_SSL_CA_CERT_FILE`: Kafka SSL broker CA certificate file, defaults to `""`

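As an editorial illustration (not part of this commit), here is a minimal docker-compose sketch wiring these SSL settings together. The broker address, certificate paths, and the non-SSL variable name are assumptions; the client cert/key variable names follow the pattern of the two shown above and are likewise assumptions. Only port 8080 comes from this README.

```yaml
# Hypothetical docker-compose.yml; broker address and certificate paths are placeholders.
version: "3"
services:
  prometheus-kafka-adapter:
    image: telefonica/prometheus-kafka-adapter:1.6.0
    environment:
      KAFKA_BROKER_LIST: "kafka:9093"                # assumed variable name and SSL listener address
      KAFKA_SSL_CLIENT_CERT_FILE: /certs/client.pem  # assumed name, by analogy with the vars above
      KAFKA_SSL_CLIENT_KEY_FILE: /certs/client.key   # assumed name, by analogy with the vars above
      KAFKA_SSL_CLIENT_KEY_PASS: ""                  # optional, defaults to ""
      KAFKA_SSL_CA_CERT_FILE: /certs/ca.pem
    volumes:
      - ./certs:/certs:ro                            # mount certificates read-only
    ports:
      - "8080:8080"                                  # the adapter's /receive endpoint
```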
-When deployed in a K8s Cluster using Helm and using a Kafka external to the cluster, it might be necessary to define the kafka hostname resolution locally (this fills the /etc/hosts of the container).
-
-Use a custom values.yaml file with section 'hostAliases' (as mentioned in default values.yaml).
+When deployed in a Kubernetes cluster using Helm and using a Kafka external to the cluster, it might be necessary to define the kafka hostname resolution locally (this fills the /etc/hosts of the container). Use a custom values.yaml file with section `hostAliases` (as mentioned in default values.yaml).

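For illustration only (not part of this commit), a hedged sketch of such a custom values.yaml; the IP and hostname are placeholders, and it assumes the chart forwards `hostAliases` into the pod spec as its default values.yaml indicates:

```yaml
# Hypothetical custom-values.yaml; IP and hostname are placeholders.
hostAliases:
  - ip: "10.0.0.10"                # address of the external Kafka broker
    hostnames:
      - "kafka.internal.example"   # hostname the adapter uses to reach Kafka
```

It would be applied with something like `helm install prometheus-kafka-adapter ./helm/prometheus-kafka-adapter -f custom-values.yaml`.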
### prometheus

@@ -73,9 +71,7 @@ remote_write:
- url: "http://prometheus-kafka-adapter:8080/receive"
```
-When deployed in a K8s Cluster using Helm and using an external Prometheus, it might be necessary to expose prometheus-kafka-adapter input port as a node port.
-Use a custom values.yaml file to set service.type: NodePort and service.nodeport:<PortNumber> (see comments in default values.yaml)
+When deployed in a Kubernetes cluster using Helm and using an external Prometheus, it might be necessary to expose prometheus-kafka-adapter input port as a node port. Use a custom values.yaml file to set `service.type: NodePort` and `service.nodeport: <PortNumber>` (see comments in default values.yaml)

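Likewise, an illustrative (not from this commit) values.yaml fragment for the NodePort setup; the port number is a placeholder and the `service.nodeport` key is copied verbatim from the README wording above:

```yaml
# Hypothetical custom-values.yaml; the port number is a placeholder.
service:
  type: NodePort
  nodeport: 30080   # must fall within the cluster's NodePort range (default 30000-32767)
```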
## development

2 changes: 1 addition & 1 deletion helm/prometheus-kafka-adapter/values.yaml
@@ -6,7 +6,7 @@ replicaCount: 1

image:
  repository: telefonica/prometheus-kafka-adapter
-  tag: 1.4.1
+  tag: 1.6.0
  pullPolicy: IfNotPresent

imagePullSecrets: []
22 changes: 11 additions & 11 deletions main.go
@@ -29,20 +29,20 @@ func main() {
	log.Info("creating kafka producer")

	kafkaConfig := kafka.ConfigMap{
-		"bootstrap.servers": kafkaBrokerList,
-		"compression.codec": kafkaCompression,
-		"batch.num.messages": kafkaBatchNumMessages,
-		"go.batch.producer": true, // Enable batch producer (for increased performance).
-		"go.delivery.reports": false, // per-message delivery reports to the Events() channel
+		"bootstrap.servers":   kafkaBrokerList,
+		"compression.codec":   kafkaCompression,
+		"batch.num.messages":  kafkaBatchNumMessages,
+		"go.batch.producer":   true,  // Enable batch producer (for increased performance).
+		"go.delivery.reports": false, // per-message delivery reports to the Events() channel
	}

	if kafkaSslClientCertFile != "" && kafkaSslClientKeyFile != "" && kafkaSslCACertFile != "" {
-		kafkaSslValidation = true
-		kafkaConfig["security.protocol"] = "ssl"
-		kafkaConfig["ssl.ca.location"] = kafkaSslCACertFile // CA certificate file for verifying the broker's certificate.
-		kafkaConfig["ssl.certificate.location"] = kafkaSslClientCertFile // Client's certificate
-		kafkaConfig["ssl.key.location"] = kafkaSslClientKeyFile // Client's key
-		kafkaConfig["ssl.key.password"] = kafkaSslClientKeyPass // Key password, if any.
+		kafkaSslValidation = true
+		kafkaConfig["security.protocol"] = "ssl"
+		kafkaConfig["ssl.ca.location"] = kafkaSslCACertFile              // CA certificate file for verifying the broker's certificate.
+		kafkaConfig["ssl.certificate.location"] = kafkaSslClientCertFile // Client's certificate
+		kafkaConfig["ssl.key.location"] = kafkaSslClientKeyFile          // Client's key
+		kafkaConfig["ssl.key.password"] = kafkaSslClientKeyPass          // Key password, if any.
	}

	producer, err := kafka.NewProducer(&kafkaConfig)
