
feat: add elasticsearch-logger #7643

Merged: 57 commits, Aug 31, 2022
Changes from 9 commits
Commits
6e1f2d4
add elasticsearch-logging base function
ccxhwmy Aug 10, 2022
446b174
add the support of option of username and password
ccxhwmy Aug 11, 2022
6dfc58c
add the support of elasticsearch authentication
ccxhwmy Aug 13, 2022
54cc283
add docs of elasticsearch plugin
ccxhwmy Aug 13, 2022
9916235
update docker compose file to add elasticsearch config
ccxhwmy Aug 13, 2022
0e96d5c
add test case
ccxhwmy Aug 13, 2022
00fe216
repair ci error
ccxhwmy Aug 13, 2022
d7e95c4
update test case
ccxhwmy Aug 14, 2022
f03e804
set elasticsearch index and type with configure
ccxhwmy Aug 14, 2022
ecbd983
Update apisix/plugins/elasticsearch-logging.lua
ccxhwmy Aug 15, 2022
5765063
Update apisix/plugins/elasticsearch-logging.lua
ccxhwmy Aug 15, 2022
2b8801c
Update docs/zh/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
1f7c530
Update docs/zh/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
219c4fa
Update docs/zh/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
acf8cac
Update docs/zh/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
35a5304
Update docs/zh/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
54ea755
Update docs/zh/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
354e89b
Update docs/zh/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
1b1474b
Update docs/en/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
cf163f7
Update docs/en/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
471dcad
Update docs/en/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
15f3dd7
Update docs/en/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
1b5fda7
Update docs/zh/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
2f2e017
Update docs/zh/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
c392327
Update docs/zh/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
454ff00
Update docs/en/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
78087d7
Update docs/zh/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
55c2ff1
Update docs/zh/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
2dc72a5
Update docs/zh/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
b98d4bd
Update docs/en/latest/plugins/elasticsearch-logging.md
ccxhwmy Aug 15, 2022
edcfa11
modify with suggestions
ccxhwmy Aug 17, 2022
9573140
pass test case
ccxhwmy Aug 17, 2022
a596e28
modify docs and test case
ccxhwmy Aug 18, 2022
c2000cb
format docs
ccxhwmy Aug 18, 2022
b7a306f
Update apisix/plugins/elasticsearch-logger.lua
ccxhwmy Aug 18, 2022
bc4bd75
add test case to check custom log
ccxhwmy Aug 20, 2022
2ce415d
modify docs
ccxhwmy Aug 20, 2022
bd2853b
modify docs
ccxhwmy Aug 20, 2022
9dfd2d5
not check username & password if auth is exist
ccxhwmy Aug 21, 2022
214e9a7
modify test case
ccxhwmy Aug 21, 2022
31e27f0
Update t/plugin/elasticsearch-logger.t
ccxhwmy Aug 22, 2022
b9059ab
update test case
ccxhwmy Aug 22, 2022
b859b90
Merge branch 'feat_elasticsearch_logging' of https://github.com/ccxhw…
ccxhwmy Aug 22, 2022
e7639ca
update docs
ccxhwmy Aug 22, 2022
45ca13a
add log metadata check
ccxhwmy Aug 22, 2022
cb6afd3
Merge branch 'master' into feat_elasticsearch_logging
ccxhwmy Aug 24, 2022
4b37d1b
update endpoint_addr check pattern
ccxhwmy Aug 24, 2022
cd96bc4
Update apisix/plugins/elasticsearch-logger.lua
ccxhwmy Aug 24, 2022
4c4859a
Update t/plugin/elasticsearch-logger.t
ccxhwmy Aug 24, 2022
1a56fd9
update test case
ccxhwmy Aug 24, 2022
902aa05
Update docs/zh/latest/plugins/elasticsearch-logger.md
ccxhwmy Aug 24, 2022
1693a2a
Update docs/zh/latest/plugins/elasticsearch-logger.md
ccxhwmy Aug 24, 2022
9914004
Update docs/zh/latest/plugins/elasticsearch-logger.md
ccxhwmy Aug 24, 2022
05e93e7
Update docs/zh/latest/plugins/elasticsearch-logger.md
ccxhwmy Aug 24, 2022
5d9028a
Update docs/en/latest/plugins/elasticsearch-logger.md
ccxhwmy Aug 24, 2022
5062c8d
remove endpoint_addr end with "/" judgement logic
ccxhwmy Aug 28, 2022
a5972f2
modify reviewer suggestion
ccxhwmy Aug 29, 2022
154 changes: 154 additions & 0 deletions apisix/plugins/elasticsearch-logging.lua
@@ -0,0 +1,154 @@
--
-- Licensed to the Apache Software Foundation (ASF) under one or more
-- contributor license agreements. See the NOTICE file distributed with
-- this work for additional information regarding copyright ownership.
-- The ASF licenses this file to You under the Apache License, Version 2.0
-- (the "License"); you may not use this file except in compliance with
-- the License. You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
--

local ngx = ngx
Member: We should move the localized variable after the `require` statements.

local core = require("apisix.core")
local ngx_now = ngx.now
local http = require("resty.http")
local log_util = require("apisix.utils.log-util")
local bp_manager_mod = require("apisix.utils.batch-processor-manager")

local DEFAULT_ELASTICSEARCH_SOURCE = "apache-apisix-elasticsearch-logging"

local plugin_name = "elasticsearch-logging"
Member: We should call it elasticsearch-logger like the kafka-logger plugin?

Author: OK, do I need to change all elasticsearch-logging to elasticsearch-logger, including the file name?

local batch_processor_manager = bp_manager_mod.new(plugin_name)
local str_format = core.string.format
local str_sub = string.sub


local schema = {
    type = "object",
    properties = {
        endpoint = {
Member: Why do we wrap all the fields in an extra endpoint field?

            type = "object",
            properties = {
                uri = core.schema.uri_def,
                index = { type = "string"},
                type = { type = "string"},
                username = { type = "string"},
Member: We can store username & password in an additional field, so that we can require them easily.

                password = { type = "string"},
                timeout = {
                    type = "integer",
                    minimum = 1,
                    default = 10
                },
                ssl_verify = {
                    type = "boolean",
                    default = true
                }
            },
            required = { "uri", "index" }
        },
    },
    required = { "endpoint" },
}


local _M = {
    version = 0.1,
    priority = 413,
    name = plugin_name,
    schema = batch_processor_manager:wrap_schema(schema),
}


function _M.check_schema(conf)
    return core.schema.check(schema, conf)
end


local function get_logger_entry(conf)
    local entry = log_util.get_full_log(ngx, conf)
Member: Please also support the custom log format.

Author: How about referencing the kafka-logger plugin?

local entry
    if conf.meta_format == "origin" then
        entry = log_util.get_req_original(ctx, conf)
        -- core.log.info("origin entry: ", entry)

    else
        local metadata = plugin.plugin_metadata(plugin_name)
        core.log.info("metadata: ", core.json.delay_encode(metadata))
        if metadata and metadata.value.log_format
          and core.table.nkeys(metadata.value.log_format) > 0
        then
            entry = log_util.get_custom_format_log(ctx, metadata.value.log_format)
            core.log.info("custom log format entry: ", core.json.delay_encode(entry))
        else
            entry = log_util.get_full_log(ngx, conf)
            core.log.info("full log entry: ", core.json.delay_encode(entry))
        end
    end

Member: We do not reference the code of another plugin in one plugin, unless we pull some generic code into a common module.

Author: Is the custom log format like https://github.com/apache/apisix/blob/master/docs/en/latest/plugins/kafka-logger.md#metadata ?

Member: Yes.
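For reference, a custom log format of that kind is configured through plugin metadata. A hypothetical sketch in the shape of the kafka-logger metadata API linked above (the variables in `log_format` are illustrative, not part of this PR):

```shell
# Hypothetical: setting a custom log format via plugin metadata,
# mirroring the kafka-logger metadata endpoint referenced above.
curl http://127.0.0.1:9080/apisix/admin/plugin_metadata/elasticsearch-logger \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "log_format": {
        "host": "$host",
        "@timestamp": "$time_iso8601",
        "client_ip": "$remote_addr"
    }
}'
```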

    return core.json.encode({
            create = {
                _index = conf.endpoint.index,
                _type = conf.endpoint.type
            }
        }) .. "\n" ..
        core.json.encode({
            time = ngx_now(),
            host = entry.server.hostname,
Member: Why should we invent a format structure for a specific plugin?

Author: I referred to splunk-hec-logging's `local function get_logger_entry(conf)`. How about `core.json.encode(entry)` directly?

Member: We just need to use `log_util.get_req_original(ctx, conf)` to get the entry. This is JSON, and ES supports this format.

Member: Remember that when we update the entry, we also need to update the png image.

Author: I found that `log_util.get_req_original(ctx, conf)` returns a string instead of JSON:

function _M.get_req_original(ctx, conf)
    local headers = {
        ctx.var.request, "\r\n"
    }
    for k, v in pairs(ngx.req.get_headers()) do
        core.table.insert_tail(headers, k, ": ", v, "\r\n")
    end
    -- core.log.error("headers: ", core.table.concat(headers, ""))
    core.table.insert(headers, "\r\n")
    if conf.include_req_body then
        core.table.insert(headers, ctx.var.request_body)
    end
    return core.table.concat(headers, "")
end

Member: My mistake, a JSON string is OK. We can follow this.

            source = DEFAULT_ELASTICSEARCH_SOURCE,
            request_url = entry.request.url,
            request_method = entry.request.method,
            request_headers = entry.request.headers,
            request_query = entry.request.querystring,
            request_size = entry.request.size,
            response_headers = entry.response.headers,
            response_status = entry.response.status,
            response_size = entry.response.size,
            latency = entry.latency,
            upstream = entry.upstream,
        }) .. "\n"
end


local function send_to_elasticsearch(conf, entries)
    local httpc, err = http.new()
    if not httpc then
        return false, str_format("create http error: %s", err)
    end

    local uri = conf.endpoint.uri ..
        (str_sub(conf.endpoint.uri, -1) == "/" and "_bulk" or "/_bulk")
Member: Using `string.byte` would be better.
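The URI join above appends `_bulk`, adding a `/` only when the endpoint does not already end with one. A quick illustrative mirror of that logic in shell (not the plugin's own code):

```shell
# Mirror of the plugin's _bulk URI join: add "/" only when the
# endpoint URI does not already end with one.
bulk_uri() {
  case "$1" in
    */) printf '%s_bulk\n' "$1" ;;
    *)  printf '%s/_bulk\n' "$1" ;;
  esac
}

bulk_uri "http://127.0.0.1:9200"    # http://127.0.0.1:9200/_bulk
bulk_uri "http://127.0.0.1:9200/"   # http://127.0.0.1:9200/_bulk
```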

    local body = core.table.concat(entries, "")
    local headers = {["Content-Type"] = "application/json"}
    if conf.endpoint.username and conf.endpoint.password then
        local authorization = "Basic " .. ngx.encode_base64(
            conf.endpoint.username .. ":" .. conf.endpoint.password
        )
        headers["Authorization"] = authorization
    end

    core.log.info("uri: ", uri, ", body: ", body, ", headers: ", core.json.encode(headers))
Member: These headers may contain the username and password in base64 form, which we should not output in the log for security reasons.
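The Authorization header being discussed is standard HTTP Basic auth. A sketch of the same construction with the sample credentials used elsewhere in this PR (elastic/123456):

```shell
# Build the same Basic auth header the plugin constructs from
# endpoint.username and endpoint.password (sample credentials).
auth="Basic $(printf '%s' 'elastic:123456' | base64)"
echo "$auth"   # prints: Basic ZWxhc3RpYzoxMjM0NTY=
```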


    httpc:set_timeout(conf.endpoint.timeout * 1000)
    local resp, err = httpc:request_uri(uri, {
        ssl_verify = conf.endpoint.ssl_verify,
        method = "POST",
        headers = headers,
        body = body
    })
    if not resp then
        return false, str_format("RequestError: %s", err or "")
    end

    if resp.status ~= 200 then
        return false, str_format("response status: %d, response body: %s",
            resp.status, resp.body or "")
    end

    return true
end


function _M.log(conf, ctx)
    local entry = get_logger_entry(conf)

    if batch_processor_manager:add_entry(conf, entry) then
        return
    end

    local process = function(entries)
        return send_to_elasticsearch(conf, entries)
    end

    batch_processor_manager:add_entry_to_new_processor(conf, entry, ctx, process)
end


return _M
27 changes: 27 additions & 0 deletions ci/pod/docker-compose.plugin.yml
@@ -197,6 +197,33 @@ services:
      SPLUNK_HEC_TOKEN: "BD274822-96AA-4DA6-90EC-18940FB2414C"
      SPLUNK_HEC_SSL: "False"

  # Elasticsearch Logging Service
  elasticsearch-noauth:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.1
    restart: unless-stopped
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: -Xms512m -Xmx512m
      discovery.type: single-node
      xpack.security.enabled: 'false'

  elasticsearch-auth:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.1
    restart: unless-stopped
    ports:
      - "9201:9201"
      - "9301:9301"
    environment:
      ES_JAVA_OPTS: -Xms512m -Xmx512m
      discovery.type: single-node
      ELASTIC_USERNAME: elastic
      ELASTIC_PASSWORD: 123456
      http.port: 9201
      transport.tcp.port: 9301
      xpack.security.enabled: 'true'


networks:
  apisix_net:
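Once the compose services are up, each test node could be smoke-checked as follows (a sketch; assumes the ports and credentials declared in the compose file above):

```shell
# no-auth node
curl -s http://127.0.0.1:9200
# auth node, using ELASTIC_USERNAME / ELASTIC_PASSWORD from the compose file
curl -s -u elastic:123456 http://127.0.0.1:9201
```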
1 change: 1 addition & 0 deletions conf/config-default.yaml
@@ -451,6 +451,7 @@ plugins: # plugin list (sorted by priority)
- public-api # priority: 501
- prometheus # priority: 500
- datadog # priority: 495
- elasticsearch-logging # priority: 413
- echo # priority: 412
- loggly # priority: 411
- http-logger # priority: 410
(Binary files: the plugin's Kibana screenshot assets, e.g. elasticsearch-admin-en.png, cannot be displayed in the diff view.)
3 changes: 2 additions & 1 deletion docs/en/latest/config.json
@@ -153,7 +153,8 @@
"plugins/google-cloud-logging",
"plugins/splunk-hec-logging",
"plugins/file-logger",
"plugins/loggly"
"plugins/loggly",
"plugins/elasticsearch-logging"
]
}
]
143 changes: 143 additions & 0 deletions docs/en/latest/plugins/elasticsearch-logging.md
@@ -0,0 +1,143 @@
---
title: elasticsearch-logging
keywords:
- APISIX
- Plugin
- Elasticsearch-logging
description: This document contains information about the Apache APISIX elasticsearch-logging Plugin.
---

<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->

## Description

The `elasticsearch-logging` Plugin is used to forward logs to [Elasticsearch](https://www.elastic.co/guide/en/welcome-to-elastic/current/getting-started-general-purpose.html) for analysis and storage.

When the Plugin is enabled, APISIX will serialize the request context information to [Elasticsearch Bulk format](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html#docs-bulk) and submit it to the batch queue. When the maximum batch size is exceeded, the data in the queue is pushed to Elasticsearch. See [batch processor](../batch-processor.md) for more details.
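As a concrete illustration of the bulk format, here is a minimal two-line NDJSON payload in the shape the plugin serializes (the `_index`/`_type` values come from the configuration examples below; the document fields follow the plugin's entry structure):

```shell
# Construct a minimal two-line NDJSON bulk body: one action line,
# one document line, each newline-terminated.
printf '%s\n%s\n' \
  '{"create":{"_index":"services","_type":"collector"}}' \
  '{"time":1661234567.123,"source":"apache-apisix-elasticsearch-logging","response_status":200}' \
  > bulk.ndjson
# The plugin POSTs such a body to <endpoint.uri>/_bulk with
# Content-Type: application/json (here it is only written to a file).
wc -l < bulk.ndjson
```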

## Attributes

| Name | Required | Default | Description |
| ------------------- | -------- | --------------------------- | ------------------------------------------------------------ |
| endpoint | True | | Elasticsearch endpoint configurations. |
| endpoint.uri | True | | Elasticsearch API endpoint. |
| endpoint.index | True | | Elasticsearch [_index field](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-index-field.html#mapping-index-field) |
| endpoint.type | False | Elasticsearch default value | Elasticsearch [_type field](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/mapping-type-field.html#mapping-type-field) |
| endpoint.username | False | | Elasticsearch [authentication](https://www.elastic.co/guide/en/elasticsearch/reference/current/setting-up-authentication.html) username |
| endpoint.password | False | | Elasticsearch [authentication](https://www.elastic.co/guide/en/elasticsearch/reference/current/setting-up-authentication.html) password |
| endpoint.ssl_verify | False | true | When set to `true` enables SSL verification as per [OpenResty docs](https://github.com/openresty/lua-nginx-module#tcpsocksslhandshake). |
| endpoint.timeout | False | 10 | Elasticsearch send data timeout in seconds. |

This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.

## Enabling the Plugin

### Full configuration

The example below shows a complete configuration of the Plugin on a specific Route:

```shell
$ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "plugins":{
        "elasticsearch-logging":{
            "endpoint":{
                "uri": "http://127.0.0.1:9200",
                "index": "services",
                "type": "collector",
                "timeout": 60,
                "username": "elastic",
                "password": "123456",
                "ssl_verify": false
            },
            "buffer_duration":60,
            "max_retry_count":0,
            "retry_delay":1,
            "inactive_timeout":2,
            "batch_max_size":10
        }
    },
    "upstream":{
        "type":"roundrobin",
        "nodes":{
            "127.0.0.1:1980":1
        }
    },
    "uri":"/elasticsearch.do"
}'
```

### Minimal configuration

The example below shows a bare minimum configuration of the Plugin on a Route:

```shell
$ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "plugins":{
        "elasticsearch-logging":{
            "endpoint":{
                "uri": "http://127.0.0.1:9200",
                "index": "services"
            }
        }
    },
    "upstream":{
        "type":"roundrobin",
        "nodes":{
            "127.0.0.1:1980":1
        }
    },
    "uri":"/elasticsearch.do"
}'
```

## Example usage

Once you have configured the Route to use the Plugin, when you make a request to APISIX, it will be logged in your Elasticsearch server:

```shell
$ curl -i http://127.0.0.1:9080/elasticsearch.do?q=hello
HTTP/1.1 200 OK
...
hello, world
```

You should be able to log in and search these logs from the Kibana Discover view:

![kibana search view](../../../assets/images/plugin/elasticsearch-admin-en.png)

## Disable Plugin

To disable the `elasticsearch-logging` Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.

```shell
$ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "plugins":{},
    "upstream":{
        "type":"roundrobin",
        "nodes":{
            "127.0.0.1:1980":1
        }
    },
    "uri":"/elasticsearch.do"
}'
```
2 changes: 1 addition & 1 deletion docs/zh/latest/README.md
@@ -143,7 +143,7 @@ A/B testing, canary release (gray release), blue-green deployment, rate limiting and throttling,
- High performance: QPS can reach 18k on a single core, with latency of only 0.2 ms.
- [Fault injection](plugins/fault-injection.md)
- [REST Admin API](admin-api.md): use the REST Admin API to control Apache APISIX. By default, only 127.0.0.1 is allowed access; you can modify the `allow_admin` field in `conf/config.yaml` to specify the list of IPs allowed to call the Admin API. Also note that the Admin API uses key auth to verify the caller's identity. **Modify the `admin_key` field in `conf/config.yaml` before deployment to ensure security.**
- External loggers: export access logs to external log management tools. ([HTTP Logger](plugins/http-logger.md), [TCP Logger](plugins/tcp-logger.md), [Kafka Logger](plugins/kafka-logger.md), [UDP Logger](plugins/udp-logger.md), [RocketMQ Logger](plugins/rocketmq-logger.md), [SkyWalking Logger](plugins/skywalking-logger.md), [Alibaba Cloud Logging (SLS)](plugins/sls-logger.md), [Google Cloud Logging](plugins/google-cloud-logging.md), [Splunk HEC Logging](plugins/splunk-hec-logging.md), [File Logger](plugins/file-logger.md))
- External loggers: export access logs to external log management tools. ([HTTP Logger](plugins/http-logger.md), [TCP Logger](plugins/tcp-logger.md), [Kafka Logger](plugins/kafka-logger.md), [UDP Logger](plugins/udp-logger.md), [RocketMQ Logger](plugins/rocketmq-logger.md), [SkyWalking Logger](plugins/skywalking-logger.md), [Alibaba Cloud Logging (SLS)](plugins/sls-logger.md), [Google Cloud Logging](plugins/google-cloud-logging.md), [Splunk HEC Logging](plugins/splunk-hec-logging.md), [File Logger](plugins/file-logger.md), [Elasticsearch Logging](plugins/elasticsearch-logging.md))
- [Helm charts](https://github.com/apache/apisix-helm-chart)

- **高度可扩展**
3 changes: 2 additions & 1 deletion docs/zh/latest/config.json
@@ -152,7 +152,8 @@
"plugins/sls-logger",
"plugins/google-cloud-logging",
"plugins/splunk-hec-logging",
"plugins/file-logger"
"plugins/file-logger",
"plugins/elasticsearch-logging"
]
}
]