diff --git a/publish/app-logging-ocp-4.2/app-logging-ocp-4.2.md b/publish/app-logging-ocp-4.2/app-logging-ocp-4.2.md index 2b0736d..caf6ff7 100644 --- a/publish/app-logging-ocp-4.2/app-logging-ocp-4.2.md +++ b/publish/app-logging-ocp-4.2/app-logging-ocp-4.2.md @@ -1,9 +1,9 @@ --- permalink: /guides/app-logging-ocp-4-2/ layout: guide-markdown -title: Application Logging with Elasticsearch, Fluentd, and Kibana +title: Application Logging on Red Hat OpenShift Container Platform (RHOCP) 4.3 with Elasticsearch, Fluentd, and Kibana duration: 30 minutes -releasedate: 2020-02-19 +releasedate: 2020-03-26 description: Learn how to do application logging with Elasticsearch, Fluentd, and Kibana. tags: ['logging', 'Elasticsearch', 'Fluentd', 'Kibana'] guide-category: basic @@ -32,7 +32,7 @@ guide-category: basic // --> -**The following guide has been tested with Red Hat OpenShift Container Platform (RHOCP) 4.2/Kabanero 0.3.0.** +**The following guide has been tested with Red Hat OpenShift Container Platform (RHOCP) 4.2/Kabanero 0.3.0 and RHOCP 4.3/Kabanero 0.6.0.** Pod processes running in Kubernetes frequently produce logs. To effectively manage this log data and ensure no loss of log data occurs when a pod terminates, a log aggregation tool should be deployed on the Kubernetes cluster. Log aggregation tools help users persist, search, and visualize the log data that is gathered from the pods across the cluster. Log aggregation tools in the market today include: EFK, LogDNA, Splunk, Datadog, IBM Operations Analytics, etc. When considering log aggregation tools, enterprises will make choices that are inclusive of their journey to cloud, both new cloud native applications running in Kubernetes and their existing traditional IT choices. 
@@ -46,7 +46,7 @@ One choice for application logging with log aggregation, based on open source, i ## Install cluster logging -To install the cluster logging component, follow the OpenShift guide [Deploying cluster logging](https://docs.openshift.com/container-platform/4.2/logging/cluster-logging-deploying.html) +To install the cluster logging component, follow the OpenShift guide [Deploying cluster logging](https://docs.openshift.com/container-platform/4.3/logging/cluster-logging-deploying.html). After the installation completes without any error, you can see the following pods that are running in the *openshift-logging* namespace. The exact number of pods running for each of the EFK components can vary depending on the configuration specified in the ClusterLogging Custom Resource (CR). @@ -74,7 +74,7 @@ kibana kibana-openshift-logging.apps.host.kabanero.com kibana @@ -120,9 +120,9 @@ See the Kibana dashboard page by using the routes URL OperatorHub. Search for Grafana Operator and install it. Choose prometheus-operator under __A specific namespace on the cluster__ and subscribe. +. Go to the OpenShift Container Platform web console and click Operators > OperatorHub. Search for Grafana Operator and install it. For __A specific namespace on the cluster__, choose prometheus-operator, and subscribe. -. Click on Overview and create a Grafana Data Source instance. +. Click Overview and create a Grafana Data Source instance. -. Inside the Grafana Data Source YAML file, make sure *metadata.namespace* is prometheus-operator. Set *spec.datasources.url* to the url of the target datasource. For example, inside [hotspot file=0]`grafana_datasource.yaml` file, the Prometheus service is *prometheus-operated* on port *9090*, so the url is set to __'http://prometheus-operated:9090'__. +. In the Grafana Data Source YAML file, make sure *metadata.namespace* is prometheus-operator. Set *spec.datasources.url* to the URL of the target datasource. 
For example, inside the [hotspot file=0]`grafana_datasource.yaml` file, the Prometheus service is *prometheus-operated* on port *9090*, so the URL is set to __'http://prometheus-operated:9090'__. + [role="code_command hotspot file=0", subs="quotes"] ---- @@ -169,9 +169,9 @@ Refer to the `grafana_datasource.yaml` file ---- + -. Click on Overview and create a Grafana instance. +. Click Overview and create a Grafana instance. -. Inside the Grafana YAML file, make sure *metadata.namespace* is prometheus-operator. You can define the match expression to select which Dashboards you are interested in under *spec.dashboardLabelSelector.matchExpressions*. For example, inside [hotspot file=1]`grafana.yaml` file, the Grafana will discover dashboards with app labels having a value of *grafana*. +. In the Grafana YAML file, make sure *metadata.namespace* is prometheus-operator. You can define the match expression to select which dashboards you are interested in under *spec.dashboardLabelSelector.matchExpressions*. For example, in the [hotspot file=1]`grafana.yaml` file, Grafana discovers dashboards whose *app* label has the value *grafana*. + [role="code_command hotspot file=1", subs="quotes"] ---- Refer to the `grafana.yaml` file ---- + -. Click on Overview and create a Grafana Dashboard instance. +. Click Overview and create a Grafana Dashboard instance. . Copy [hotspot file=2]`grafana_dashboard.yaml` to Grafana Dashboard YAML file to check the Data Source is connected and Prometheus endpoints are discoverable. @@ -190,7 +190,7 @@ Apply `grafana_dashboard.yaml` file to check ---- + -. Click on Networking > Routes and go to Grafana's location to see the template dashboard. You can now consume all the application metrics gathered by Prometheus on the Grafana dashboard. +. Click Networking > Routes and go to Grafana's location to see the sample dashboard. You can now consume all the application metrics gathered by Prometheus on the Grafana dashboard. 
image::/img/guide/template_grafana_dashboard.png[link="/img/guide/template_grafana_dashboard.png" alt="Template Dashboard"] @@ -198,3 +198,79 @@ image::/img/guide/template_grafana_dashboard.png[link="/img/guide/template_grafa . When importing your own Grafana dashboard, your dashboard should be configured under *spec.json* in Grafana Dashboard YAML file. Make sure under *"__inputs"*, the name matches with your Grafana Data Source's *spec.datasources*. For example, inside [hotspot file=2]`grafana_dashboard.yaml` file, *name* is set to "Prometheus". + +## Configure the Prometheus Operator to detect service monitors in other namespaces + +By default, the Prometheus Operator watches only the namespace it resides in. To have it detect service monitors that are created in other namespaces, you must apply the following configuration changes. + +. In your monitoring namespace, in this case `prometheus-operator`, edit the OperatorGroup to add your application's namespace, for example `myapp`, to the list of targeted namespaces to be watched. This change updates the *olm.targetNamespaces* variable that the Prometheus Operator uses for detecting namespaces so that it includes your `myapp` namespace. ++ +[role="command"] +---- +oc edit operatorgroup +---- ++ + ++ +[source,role="no_copy"] +---- +spec: +  targetNamespaces: +  - prometheus-operator +  - myapp +---- ++ + +. Because the `prometheus-operator` namespace's OperatorGroup now targets more than one namespace, the operators in this namespace must have the *MultiNamespace* installMode set to *true*. When the Prometheus Operator is installed through OLM, the *MultiNamespace* installMode is set to *false* by default, which disables monitoring of more than one namespace, so change it to *true*. 
++ +[role="command"] +---- +oc edit csv prometheusoperator.0.32.0 +---- ++ + ++ +[source,role="no_copy"] +---- +spec: +  installModes: +  - supported: true +    type: OwnNamespace +  - supported: true +    type: SingleNamespace +  - supported: true # this line should be true +    type: MultiNamespace +  - supported: false +    type: AllNamespaces +---- ++ + +. The same applies to the Grafana Operator: its *MultiNamespace* installMode must also be set to *true*. Edit the operator using: ++ +[role="command"] +---- +oc edit csv grafana-operator.v2.0.0 +---- ++ + +. Edit the Prometheus instance to add the *serviceMonitorNamespaceSelector* definition. The empty selector *{}* allows Prometheus to scrape service monitors from *all* namespaces: ++ +[role="command"] +---- +oc edit prometheuses.monitoring.coreos.com prometheus +---- ++ + ++ +[source,role="no_copy"] +---- +spec: +  serviceMonitorNamespaceSelector: {} +---- ++ + +. Restart the Prometheus Operator and Grafana Operator pods for the changes to take effect. + +## Installation complete + +You now have the Prometheus and Grafana stack installed and configured to monitor your applications. Import custom dashboards and visit the Grafana route to see your metrics visualized. \ No newline at end of file
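For reference, a service monitor that the reconfigured Prometheus Operator could now detect in the `myapp` namespace might look like the following sketch. The name, labels, and port are illustrative assumptions; they do not come from this guide.

```yaml
# Hypothetical ServiceMonitor in the watched application namespace (myapp).
# With the OperatorGroup, installMode, and serviceMonitorNamespaceSelector
# changes above, the Prometheus instance in prometheus-operator can discover it.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-monitor        # assumed name
  namespace: myapp
  labels:
    k8s-app: myapp-monitor
spec:
  selector:
    matchLabels:
      app: myapp             # must match the labels on your application's Service
  endpoints:
  - port: metrics            # assumed named port on the Service that serves /metrics
    interval: 30s
```

Note that *serviceMonitorNamespaceSelector: {}* only widens the namespaces that are searched; the Prometheus CR's *serviceMonitorSelector* must still match the monitor's labels for it to be scraped.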