Add chart for distribution of logs in ClickHouse #131

Merged 1 commit on Sep 3, 2021
78 changes: 36 additions & 42 deletions .github/workflows/codeql.yaml
@@ -42,60 +42,54 @@ on:
- 'mkdocs.yml'
- 'README.md'
schedule:
- cron: '0 20 * * 0'
- cron: '21 16 * * 3'

jobs:
codeql:
name: CodeQL
runs-on: ubuntu-20.04
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write

strategy:
fail-fast: false
matrix:
# Override automatic language detection by changing the below list
# Supported options are ['csharp', 'cpp', 'go', 'java', 'javascript', 'python']
language: ['go', 'javascript']
# Learn more...
# https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#overriding-automatic-language-detection
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
# Learn more:
# https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed

steps:
- name: Checkout repository
uses: actions/checkout@v2
with:
# We must fetch at least the immediate parents so that if this is
# a pull request then we can checkout the head.
fetch-depth: 2
- name: Checkout repository
uses: actions/checkout@v2

# If this run was triggered by a pull request event, then checkout
# the head of the pull request instead of the merge commit.
- run: git checkout HEAD^2
if: ${{ github.event_name == 'pull_request' }}
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# queries: ./path/to/local/query, your-org/your-repo/queries@main

# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# queries: ./path/to/local/query, your-org/your-repo/queries@main
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v1

# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v1
# ℹ️ Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl

# ℹ️ Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
# ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
# and modify them (or add more) to build your code if your project
# uses a compiled language

# ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
# and modify them (or add more) to build your code if your project
# uses a compiled language
#- run: |
# make bootstrap
# make release

#- run: |
# make bootstrap
# make release

- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -22,6 +22,7 @@ NOTE: As semantic versioning states all 0.y.z releases can contain breaking chan
- [#126](https://github.com/kobsio/kobs/pull/126): Show request details when getting logs from ClickHouse.
- [#127](https://github.com/kobsio/kobs/pull/127): Allow `ILIKE` queries for ClickHouse logs, using the new `=~` operator.
- [#128](https://github.com/kobsio/kobs/pull/128): Allow users to specify dashboards within a Team or Application via the new `inline` property.
- [#131](https://github.com/kobsio/kobs/pull/131): Add chart which shows the distribution of the log lines in the selected time range for the ClickHouse plugin.

### Fixed

Binary file modified docs/plugins/assets/clickhouse-logs.png
1 change: 1 addition & 0 deletions docs/plugins/clickhouse.md
@@ -16,6 +16,7 @@ The following options can be used for a panel with the ClickHouse plugin:
| Field | Type | Description | Required |
| ----- | ---- | ----------- | -------- |
| type | string | Set the type for which you want to use the ClickHouse instance. This must be `sql` or `logs` | Yes |
| showChart | boolean | If this is `true`, the chart with the distribution of the documents over the selected time range will be shown. This option is only available when the type is `logs`. | No |
| queries | [[]Query](#query) | A list of queries, which can be selected by the user. | Yes |

### Query
10 changes: 6 additions & 4 deletions plugins/clickhouse/clickhouse.go
@@ -145,7 +145,7 @@ func (router *Router) getLogs(w http.ResponseWriter, r *http.Request) {
render.JSON(w, r, data)
}

func (router *Router) getLogsCount(w http.ResponseWriter, r *http.Request) {
func (router *Router) getLogsStats(w http.ResponseWriter, r *http.Request) {
name := chi.URLParam(r, "name")
query := r.URL.Query().Get("query")
timeStart := r.URL.Query().Get("timeStart")
@@ -171,16 +171,18 @@ func (router *Router) getLogsCount(w http.ResponseWriter, r *http.Request) {
return
}

count, err := i.GetLogsCount(r.Context(), query, parsedTimeStart, parsedTimeEnd)
count, buckets, err := i.GetLogsStats(r.Context(), query, parsedTimeStart, parsedTimeEnd)
if err != nil {
errresponse.Render(w, r, err, http.StatusBadRequest, "Could not get logs count")
return
}

data := struct {
Count int64 `json:"count"`
Count int64 `json:"count"`
Buckets []instance.Bucket `json:"buckets"`
}{
count,
buckets,
}

render.JSON(w, r, data)
@@ -219,7 +221,7 @@ func Register(clusters *clusters.Clusters, plugins *plugin.Plugins, config Confi

router.Get("/sql/{name}", router.getSQL)
router.Get("/logs/documents/{name}", router.getLogs)
router.Get("/logs/count/{name}", router.getLogsCount)
router.Get("/logs/stats/{name}", router.getLogsStats)

return router
}
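
For anyone who wants to poke at the new route directly, a minimal client sketch follows. The base URL, port, plugin prefix, instance name, and the example query string are assumptions (they depend on how kobs mounts the plugin router and how the ClickHouse instance is configured); the response struct simply mirrors the anonymous struct rendered by getLogsStats together with the Bucket type from structs.go.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"time"
)

// statsResponse mirrors the JSON rendered by getLogsStats: a total document
// count plus one bucket per non-empty interval.
type statsResponse struct {
	Count   int64 `json:"count"`
	Buckets []struct {
		Interval string `json:"interval"`
		Count    int64  `json:"count"`
	} `json:"buckets"`
}

func main() {
	// NOTE: base URL, port, and the "clickhouse" instance name are assumptions;
	// adjust them to wherever the ClickHouse plugin router is mounted.
	base := "http://localhost:15220/api/plugins/clickhouse"

	end := time.Now().Unix()
	start := end - 3600

	params := url.Values{}
	params.Set("query", "namespace='kube-system'") // hypothetical example query
	params.Set("timeStart", fmt.Sprintf("%d", start))
	params.Set("timeEnd", fmt.Sprintf("%d", end))

	resp, err := http.Get(fmt.Sprintf("%s/logs/stats/%s?%s", base, "clickhouse", params.Encode()))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var stats statsResponse
	if err := json.NewDecoder(resp.Body).Decode(&stats); err != nil {
		panic(err)
	}

	fmt.Printf("%d documents in %d buckets\n", stats.Count, len(stats.Buckets))
}
```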
46 changes: 38 additions & 8 deletions plugins/clickhouse/pkg/instance/instance.go
@@ -95,7 +95,7 @@ func (i *Instance) GetLogs(ctx context.Context, query string, limit, offset, tim
// timestamp of a row is within the selected query range and the parsed query. We also order all the results by the
// timestamp field and limit the results / use an offset for pagination.
sqlQuery := fmt.Sprintf("SELECT %s FROM %s.logs WHERE timestamp >= ? AND timestamp <= ? %s ORDER BY timestamp DESC LIMIT %d OFFSET %d SETTINGS skip_unavailable_shards = 1", defaultColumns, i.database, conditions, limit, offset)
log.WithFields(logrus.Fields{"query": sqlQuery}).Tracef("sql query")
log.WithFields(logrus.Fields{"query": sqlQuery, "timeStart": timeStart, "timeEnd": timeEnd}).Tracef("sql query")
rows, err := i.client.QueryContext(ctx, sqlQuery, time.Unix(timeStart, 0), time.Unix(timeEnd, 0))
if err != nil {
return nil, nil, 0, offset, err
@@ -152,35 +152,65 @@ func (i *Instance) GetLogs(ctx context.Context, query string, limit, offset, tim
return documents, fields, time.Now().Sub(queryStartTime).Milliseconds(), offset + limit, nil
}

// GetLogsCount returns the number of documents, which could be returned by the user provided query.
func (i *Instance) GetLogsCount(ctx context.Context, query string, timeStart, timeEnd int64) (int64, error) {
// GetLogsStats returns the number of documents, which could be returned by the user provided query and the distribution
// of the logs over the selected time range.
func (i *Instance) GetLogsStats(ctx context.Context, query string, timeStart, timeEnd int64) (int64, []Bucket, error) {
var count int64
var buckets []Bucket

conditions := ""
if query != "" {
parsedQuery, err := parseLogsQuery(query)
if err != nil {
return 0, err
return 0, nil, err
}

conditions = fmt.Sprintf("AND %s", parsedQuery)
}

sqlQueryCount := fmt.Sprintf("SELECT count(*) FROM %s.logs WHERE timestamp >= ? AND timestamp <= ? %s SETTINGS skip_unavailable_shards = 1", i.database, conditions)
log.WithFields(logrus.Fields{"query": sqlQueryCount}).Tracef("sql count query")
log.WithFields(logrus.Fields{"query": sqlQueryCount, "timeStart": timeStart, "timeEnd": timeEnd}).Tracef("sql count query")
rowsCount, err := i.client.QueryContext(ctx, sqlQueryCount, time.Unix(timeStart, 0), time.Unix(timeEnd, 0))
if err != nil {
return 0, err
return 0, nil, err
}
defer rowsCount.Close()

for rowsCount.Next() {
if err := rowsCount.Scan(&count); err != nil {
return 0, err
return 0, nil, err
}
}

return count, nil
interval := (timeEnd - timeStart) / 30
sqlQueryBuckets := fmt.Sprintf("SELECT toStartOfInterval(timestamp, INTERVAL %d second) AS interval_data , count(*) AS count_data FROM %s.logs WHERE timestamp >= ? AND timestamp <= ? %s GROUP BY interval_data SETTINGS skip_unavailable_shards = 1", interval, i.database, conditions)
log.WithFields(logrus.Fields{"query": sqlQueryBuckets, "timeStart": timeStart, "timeEnd": timeEnd}).Tracef("sql buckets query")
rowsBuckets, err := i.client.QueryContext(ctx, sqlQueryBuckets, time.Unix(timeStart, 0), time.Unix(timeEnd, 0))
if err != nil {
return 0, nil, err
}
defer rowsBuckets.Close()

for rowsBuckets.Next() {
var intervalData time.Time
var countData int64

if err := rowsBuckets.Scan(&intervalData, &countData); err != nil {
return 0, nil, err
}

buckets = append(buckets, Bucket{
Interval: intervalData,
IntervalFormatted: intervalData.Format("01-02 15:04:05"),
Count: countData,
})
}

sort.Slice(buckets, func(i, j int) bool {
return buckets[i].Interval.Before(buckets[j].Interval)
})

return count, buckets, nil
}

// New returns a new ClickHouse instance for the given configuration.
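To make the bucketing in GetLogsStats concrete: the selected time range is split into 30 equal intervals and ClickHouse's toStartOfInterval rounds each row's timestamp down to the start of its bucket; since GROUP BY only yields intervals that actually contain rows, empty intervals are simply absent and the buckets are sorted by interval before being returned. A minimal sketch of the arithmetic and the generated query, with illustrative values (the database name is a placeholder and the user-supplied conditions are omitted):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Illustrative one-hour window ending now.
	timeEnd := time.Now().Unix()
	timeStart := timeEnd - 3600

	// Same arithmetic as GetLogsStats: 30 buckets over the selected range,
	// so a one-hour window produces buckets that are 120 seconds wide.
	interval := (timeEnd - timeStart) / 30
	fmt.Printf("bucket width: %d seconds\n", interval) // 120

	// The generated bucket query; "logs" stands in for the configured database
	// and the parsed query conditions are left out of this sketch.
	sqlQueryBuckets := fmt.Sprintf("SELECT toStartOfInterval(timestamp, INTERVAL %d second) AS interval_data, count(*) AS count_data FROM %s.logs WHERE timestamp >= ? AND timestamp <= ? GROUP BY interval_data SETTINGS skip_unavailable_shards = 1", interval, "logs")
	fmt.Println(sqlQueryBuckets)
}
```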
8 changes: 8 additions & 0 deletions plugins/clickhouse/pkg/instance/structs.go
@@ -29,3 +29,11 @@ type Row struct {
FieldsNumber FieldNumber
Log string
}

// Bucket is the struct which is used to represent the distribution of the returned rows for a logs query for the given
// time range.
type Bucket struct {
Interval time.Time `json:"-"`
IntervalFormatted string `json:"interval"`
Count int64 `json:"count"`
}
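
Because the raw Interval field is tagged `json:"-"`, only the formatted interval string and the count reach the frontend. A small, self-contained sketch of how a single bucket serializes (the Bucket type is copied from the diff above; the timestamp and count are made-up example values):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Bucket as defined in plugins/clickhouse/pkg/instance/structs.go: the raw
// interval is kept for sorting but excluded from the JSON output.
type Bucket struct {
	Interval          time.Time `json:"-"`
	IntervalFormatted string    `json:"interval"`
	Count             int64     `json:"count"`
}

func main() {
	interval := time.Date(2021, time.September, 3, 12, 30, 0, 0, time.UTC)

	b := Bucket{
		Interval:          interval,
		IntervalFormatted: interval.Format("01-02 15:04:05"),
		Count:             42,
	}

	out, _ := json.Marshal(b)
	fmt.Println(string(out)) // {"interval":"09-03 12:30:00","count":42}
}
```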
22 changes: 14 additions & 8 deletions plugins/clickhouse/src/components/page/Logs.tsx
@@ -18,7 +18,7 @@ import { ILogsData } from '../../utils/interfaces';
import { IPluginTimes } from '@kobsio/plugin-core';
import LogsDocuments from '../panel/LogsDocuments';
import LogsFields from './LogsFields';
import LogsHeader from './LogsHeader';
import LogsStats from '../panel/LogsStats';

interface IPageLogsProps {
name: string;
@@ -111,19 +111,25 @@ const PageLogs: React.FunctionComponent<IPageLogsProps> = ({
</Card>
</GridItem>
<GridItem sm={12} md={12} lg={9} xl={10} xl2={10}>
<LogsStats
name={name}
query={query}
times={times}
took={data.pages[0].took || 0}
isFetchingDocuments={isFetching}
isPanel={false}
/>

<p>&nbsp;</p>

<Card isCompact={true} style={{ maxWidth: '100%', overflowX: 'scroll' }}>
<LogsHeader
name={name}
query={query}
times={times}
took={data.pages[0].took || 0}
isFetchingDocuments={isFetching}
/>
<CardBody>
<LogsDocuments pages={data.pages} fields={fields} showDetails={showDetails} />
</CardBody>
</Card>

<p>&nbsp;</p>

{data.pages[0].documents && data.pages[0].documents.length > 0 ? (
<Card isCompact={true}>
<CardBody>
61 changes: 0 additions & 61 deletions plugins/clickhouse/src/components/page/LogsHeader.tsx

This file was deleted.

14 changes: 14 additions & 0 deletions plugins/clickhouse/src/components/panel/Logs.tsx
@@ -17,12 +17,14 @@ import { ILogsData, IQuery } from '../../utils/interfaces';
import { IPluginTimes, PluginCard } from '@kobsio/plugin-core';
import LogsActions from './LogsActions';
import LogsDocuments from '../panel/LogsDocuments';
import LogsStats from './LogsStats';

interface ILogsProps {
name: string;
title: string;
description?: string;
queries: IQuery[];
showChart: boolean;
times: IPluginTimes;
showDetails?: (details: React.ReactNode) => void;
}
@@ -32,6 +34,7 @@ const Logs: React.FunctionComponent<ILogsProps> = ({
title,
description,
queries,
showChart,
times,
showDetails,
}: ILogsProps) => {
@@ -135,6 +138,17 @@ const Logs: React.FunctionComponent<ILogsProps> = ({
</Alert>
) : data && data.pages.length > 0 ? (
<div>
{showChart && selectedQuery.query ? (
<LogsStats
name={name}
query={selectedQuery.query}
times={times}
took={data.pages[0].took || 0}
isFetchingDocuments={isFetching}
isPanel={true}
/>
) : null}

<LogsDocuments pages={data.pages} fields={selectedQuery.fields} showDetails={showDetails} />
<p>&nbsp;</p>
