| copyright | lastupdated |
|---|---|
| | 2018-01-10 |
{:shortdesc: .shortdesc}
{:new_window: target="_blank"}
{:codeblock: .codeblock}
{:screen: .screen}
{:pre: .pre}
This tutorial walks you through creating a cluster, configuring the cluster to send logs to the {{site.data.keyword.loganalysisshort}} service, deploying an application to the cluster, and then using Kibana to view and analyze logs. {:shortdesc}
- Create a Kubernetes cluster.
- Provision the {{site.data.keyword.loganalysisshort}} service.
- Create logging configurations in the cluster.
- View, search, and analyze logs in Kibana.
## Prerequisites {: #prereq}
- A container registry with a namespace configured
- IBM Cloud Developer Tools - a script to install Docker, kubectl, Helm, the bx CLI, and the required plug-ins
- A basic understanding of Kubernetes
## Create a Kubernetes cluster {: #step1}
- Create a Kubernetes cluster from the {{site.data.keyword.Bluemix}} catalog. Create a Pay-As-You-Go cluster. Log forwarding is not enabled for the free cluster. {:tip}
- For convenience, use the name `mycluster` to be consistent with this tutorial.
- Check the status of your cluster and worker nodes and wait for them to be ready.
  NOTE: Do not proceed until your workers are ready. This might take up to one hour.
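To watch the provisioning progress from the command line (a sketch that assumes the IBM Cloud container-service plug-in is installed and you are logged in), you can poll the cluster and worker status until the workers report that they are ready:

```
bx cs clusters
bx cs workers mycluster
```
{: pre}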
In this step, you'll configure kubectl to point to your newly created cluster going forward. kubectl is a command line tool that you use to interact with a Kubernetes cluster.
- Use `bx login` to log in interactively. Provide the account under which the cluster is created. You can reconfirm the details by running the `bx target` command.
- When the cluster is ready, retrieve the cluster configuration:
  ```
  bx cs cluster-config <cluster-name>
  ```
  {: pre}
- Copy and paste the export command to set the KUBECONFIG environment variable as directed. To verify whether the KUBECONFIG environment variable is set properly, run the following command:
  ```
  echo $KUBECONFIG
  ```
  {: pre}
- Check that the `kubectl` command is correctly configured:
  ```
  kubectl cluster-info
  ```
  {: pre}
- Helm helps you manage Kubernetes applications through Helm charts, which help define, install, and upgrade even the most complex Kubernetes applications. After your cluster workers are ready, run the command below to initialize Helm in your cluster.
  ```
  helm init
  ```
  {: pre}
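As a quick sanity check (a sketch that assumes the export command above was run in the current shell), you can confirm that both kubectl and Helm can reach the cluster:

```
kubectl get nodes    # should list your worker node(s)
helm version         # with Helm 2 this reports both the client and the Tiller server versions
```
{: pre}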
## Forward logs to the {{site.data.keyword.loganalysisshort}} service {: #forwardlogs}
When an application is deployed, logs are collected automatically by the {{site.data.keyword.containershort}}. To forward these logs to the {{site.data.keyword.loganalysisshort}} service, you must create one or more logging configurations in your cluster that define:
- Where logs are to be forwarded. You can forward logs to the account domain or to a space domain.
- What logs are forwarded to the {{site.data.keyword.loganalysisshort}} service for analysis.
### Forward container logs {: #containerlogs}
- From the IBM Cloud Dashboard, select the region, org and space where you want to create your Log Analysis service.
- From the Catalog, select and create a Log Analysis service. If you're unable to create the service, check for an existing instance of the Log Analysis service in your space.
- Ensure that the API key owner for your cluster (`bx cs api-key-info mycluster`) has `Developer` and `Manager` Cloud Foundry access to the org and space where the Log Analysis service is created. Grant user permissions.
- Run the following command to send container log files to the {{site.data.keyword.loganalysisshort}} service:
  ```
  bx cs logging-config-create mycluster --logsource container --namespace default --type ibm --hostname IngestionHost --port 9091 --org OrgName --space SpaceName
  ```
  {: codeblock}

  where:
  - `mycluster` is the name of your cluster.
  - `IngestionHost` is the hostname of the logging service in the region where the {{site.data.keyword.loganalysisshort}} service is provisioned. For a list of endpoints, see Endpoints.
  - `OrgName` and `SpaceName` are the org and space where the {{site.data.keyword.loganalysisshort}} service is provisioned.
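For illustration only, a filled-in command might look like the following; the ingestion hostname is an assumption for the US South region and the org and space values are placeholders, so always take the actual hostname from the Endpoints list. The `logging-config-get` subcommand, assuming it is available in your version of the container-service plug-in, lists the logging configurations defined for the cluster.

```
# Hypothetical values: confirm the ingestion hostname for your region before running
bx cs logging-config-create mycluster --logsource container --namespace default --type ibm --hostname ingest.logging.ng.bluemix.net --port 9091 --org myorg --space dev

# List the logging configurations of the cluster
bx cs logging-config-get mycluster
```
{: pre}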
## Create a starter application {: #create_application}
The `bx dev` tooling greatly cuts down on development time by generating application starters with all the necessary boilerplate, build, and configuration code so that you can start coding business logic faster.
- Start the `bx dev` wizard:
  ```
  bx dev create
  ```
  {: pre}
- Select `Backend Service / Web App` > `Node` > `Web App - Express.js Basic` to create a Node.js starter application.
- Enter a name (`mynodestarter`) and a unique hostname (`username-mynodestarter`) for your project.
- Select `n` to skip adding services. This generates a starter application complete with the code and all the necessary configuration files for local development and deployment to the cloud on Cloud Foundry or Kubernetes. For an overview of the files generated, see Project Contents Documentation.
You can build and run the application as you normally would, using `mvn` for local Java development or `npm` for Node development. You can also build a Docker image and run the application in a container to ensure consistent execution locally and on the cloud. Use the following steps to build your Docker image.
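For example, a minimal local run of the Node.js starter without Docker (assuming Node.js and npm are installed and the generated `package.json` contains the usual `start` script) looks like this:

```
npm install
npm start
```
{: pre}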
- Ensure your local Docker engine is started:
  ```
  docker ps
  ```
  {: pre}
- Change to the generated project directory:
  ```
  cd <project name>
  ```
  {: pre}
- Edit the file `server/server.js` and add the following code to the bottom of the file. This outputs a random log message of varying severity every second. If `logger` is not already defined in the generated file, see the sketch after this list.
  ```javascript
  // Emit a random log message of varying severity every second
  setInterval(() => {
    var randomInt = Math.floor(Math.random() * 10);
    if (randomInt < 5)
      logger.info('Cheese is Gouda.');
    else if (randomInt >= 5 && randomInt < 8)
      logger.warn('Cheese is quite smelly.');
    else if (randomInt == 8)
      logger.fatal('Cheese was breeding ground for listeria.');
    else
      logger.error('Cheese is too ripe!');
  }, 1000);
  ```
  {: codeblock}
- Build the application:
  ```
  bx dev build
  ```
  {: pre}
  This might take a few minutes to run as all the application dependencies are downloaded and a Docker image, which contains your application and all the required environment, is built.
- Run the container:
  ```
  bx dev run
  ```
  {: pre}
  This uses your local Docker engine to run the Docker image that you built in the previous step.
- After your container starts, go to http://localhost:3000/.
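The logging snippet above assumes that the generated `server/server.js` already defines a `logger`. If it does not, the following minimal sketch (an assumption, not part of the generated starter) shows one way to create a log4js logger near the top of the file; the category name and level are illustrative.

```javascript
// Minimal sketch: create a log4js logger if the starter does not already provide one.
// Add the dependency first if needed: npm install log4js
var log4js = require('log4js');
var logger = log4js.getLogger('mynodestarter'); // illustrative category name
logger.level = 'debug'; // on older log4js versions, use logger.setLevel('DEBUG') instead
```
{: codeblock}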
## Deploy the application to the cluster {: #deploy}
In this section, we first push the Docker image to the IBM Cloud private container registry, and then create a Kubernetes deployment pointing to that image.
- Find your namespace by listing all the namespaces in the registry:
  ```
  bx cr namespaces
  ```
  {: pre}
  If you have a namespace, make note of the name for use later. If you don't have one, create it:
  ```
  bx cr namespace-add <name>
  ```
  {: pre}
- Find the container registry information by running:
  ```
  bx cr info
  ```
  {: pre}
- Deploy to your Kubernetes cluster:
  ```
  bx dev deploy -t container
  ```
  {: pre}
- When prompted, enter your cluster name.
- Next, enter your image name in the following format (see the example after this list): `<registry_url>/<namespace>/<projectname>`
- Wait a few minutes for your application to be deployed.
- Visit the URL displayed to access the application at `http://ip-address:portnumber/`.
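For illustration, with the US South registry URL (an assumption; take the real value from the `bx cr info` output), a hypothetical namespace `mynamespace`, and the project created earlier, the image name would look like this:

```
registry.ng.bluemix.net/mynamespace/mynodestarter
```
{: pre}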
## View log data in Kibana {: #step8}
The application generates some log data every time you visit its URL. Because of the logging configuration you created earlier, this data is forwarded to the Log Analysis service and is available through Kibana.
From the IBM Cloud Dashboard, select your Log Analysis instance and click Launch.
For more information about other search fields that are relevant to Kubernetes clusters, see Searching logs.
### Filter and search logs {: #step8}
- In the filtering menu on the left, you can filter down to only see messages from the container you are interested in by expanding `kubernetes.container_name_str` and clicking on the container name.
- Click on the add button next to message to only see the log messages.
- Adjust the displayed interval by navigating to the upper right and clicking on Last 15 minutes. Adjust the value to Last 24 hours.
- Next to the interval setting is the auto-refresh setting. By default it is switched off, but you can change it.
- Below the configuration is the search field. Here you can enter and define search queries. To filter for all logs reported as app errors and containing one of the defined log levels, enter the following (a further example follows this list):
  ```
  message:(WARN|INFO|ERROR|FATAL)
  ```
  {: codeblock}
- Store the search criteria for future use by clicking Save in the configuration bar. Use `mylogs` as the name.
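Building on the field names shown above, a query can also be scoped to a single container and to selected log levels. The container name below is illustrative; substitute the name used by your own deployment:

```
kubernetes.container_name_str:mynodestarter AND message:(ERROR|FATAL)
```
{: codeblock}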
For more information, see Filtering logs in Kibana.
Now that you have a query defined, in this section you will use it as the foundation for a chart, a visualization of that data. You will first create visualizations and then use them to compose a dashboard.
- Click on Visualize in the left navigation bar.
- In the list of offered visualizations, locate Pie chart and click on it.
- Select the query mylogs that you saved earlier.
- On the next screen, under Select buckets type, select Split Slices, then for Aggregation choose Filters. Add four filters with the values INFO, WARN, ERROR, and FATAL.
- Click on Options (to the right of Data) and activate Donut as the view option. Finally, click on the play icon to apply all changes to the chart. You should now see a donut chart.
- Adjust the displayed interval by navigating to the upper right and clicking on Last 15 minutes. Adjust the value to Last 24 hours.
- Save the visualization as DonutLogs.
Next, create another visualization for Metric.
- Click on New and pick Metric from the list of offered visualizations and click on the link beginning with [logstash-].
- On the next screen, expand Metric so that you can enter a custom label. Enter Log Entries within 24 hours as the label and click on the play icon to update the shown metric.
- Save the visualization as LogCount24.
Once you have added visualizations, they can be used to compose a dashboard. A dashboard is used to display all important metrics and to help indicate the health of your apps and services.
- Click on Dashboard in the left navigation panel, then on Add to start placing existing visualizations onto the empty dashboard.
- Add the log count on the left and the donut chart on the right. Change the size of each component and move them as desired.
- Click on the arrow in the lower-left corner of a component to switch it to a table layout, which also shows additional information about the underlying request, response, and execution statistics.
- Save the dashboard for future use.
Do you want to learn more? Here are some ideas of what you can do next:
- Deploy another application to the cluster or use an app deployed in a Cloud Foundry environment. The Log Analysis dashboard (Kibana) will show the combined logs of all apps.
- Filter by a single app.
- Add a saved search and metric only for specific error level.
- Build a dashboard for all your apps.
- Documentation for IBM Cloud Log Analysis
- IBM Cloud Log Collection API
- Kibana User Guide: Discovering Your Data
- Kibana User Guide: Visualizing Your Data
- Kibana User Guide: Putting it all Together with Dashboards