failed to flush the buffer #600

Open
a-hat opened this issue Jun 17, 2019 · 8 comments

Problem

I am getting the errors below. Data is loaded into Elasticsearch, but I don't know whether some records are missing. The timeouts appear regularly in the log.

2019-06-17 14:54:20 +0000 [warn]: #0 [elasticsearch] failed to write data into buffer by buffer overflow action=:block
2019-06-17 14:54:21 +0000 [warn]: #0 [elasticsearch] failed to write data into buffer by buffer overflow action=:block
2019-06-17 14:54:25 +0000 [error]: #0 [elasticsearch] [Faraday::TimeoutError] read timeout reached {:host=>"log-store-es", :port=>9200, :scheme=>"https", :user=>"elastic", :password=><REDACTED>, :protocol=>"https"}
2019-06-17 14:54:25 +0000 [warn]: #0 [elasticsearch] failed to flush the buffer. retry_time=0 next_retry_seconds=2019-06-17 14:54:26 +0000 chunk="58b862b6abf05f6608fff9eb381b083c" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"log-store-es\", :port=>9200, :scheme=>\"https\", :user=>\"elastic\", :password=>\"obfuscated\"}): read timeout reached"
  2019-06-17 14:54:25 +0000 [warn]: #0 suppressed same stacktrace
2019-06-17 14:54:25 +0000 [error]: #0 [elasticsearch] [Faraday::TimeoutError] read timeout reached {:host=>"log-store-es", :port=>9200, :scheme=>"https", :user=>"elastic", :password=><REDACTED>, :protocol=>"https"}
2019-06-17 14:54:25 +0000 [warn]: #0 [elasticsearch] failed to flush the buffer. retry_time=1 next_retry_seconds=2019-06-17 14:54:26 +0000 chunk="58b862ba45f77e5866ef313670d1c387" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"log-store-es\", :port=>9200, :scheme=>\"https\", :user=>\"elastic\", :password=>\"obfuscated\"}): read timeout reached"
  2019-06-17 14:54:25 +0000 [warn]: #0 suppressed same stacktrace

Steps to replicate

Here is the config:

<match **>
  @id elasticsearch
  @type elasticsearch

  @log_level info

  with_transporter_log true
  validate_client_version true
  ssl_verify false
  log_es_400_reason true
  type_name _doc

  # https://github.com/uken/fluent-plugin-elasticsearch#stopped-to-send-events-on-k8s-why
  reload_connections false
  reconnect_on_error true
  reload_on_failure true

  include_tag_key true
  # Replace with the host/port to your Elasticsearch cluster.
  host "#{ENV['OUTPUT_HOST']}"
  port "#{ENV['OUTPUT_PORT']}"
  scheme "#{ENV['OUTPUT_SCHEME']}"
  ssl_version "#{ENV['OUTPUT_SSL_VERSION']}"
  logstash_format true
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.system.buffer
    flush_mode interval
    retry_type exponential_backoff
    flush_thread_count 2
    flush_interval 5s
    retry_forever
    retry_max_interval 30
    chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}"
    queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}"
    overflow_action block
  </buffer>
</match>

Using Fluentd and ES plugin versions

  • fluent-plugin-elasticsearch 3.5.2
  • fluentd 1.4.2
  • elasticsearch gem 7.1.0
  • elasticsearch 7.1.0
@cosmo0920
Collaborator

Increasing the request_timeout parameter value may help:
https://github.com/uken/fluent-plugin-elasticsearch#request_timeout
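
A minimal sketch of how that could look in the match section above (the 30s value is only an illustration to tune against your cluster; the plugin's default is 5s):

<match **>
  @type elasticsearch
  # give slow bulk requests more time before the client raises a read timeout
  request_timeout 30s
  # ... rest of the configuration as above ...
</match>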

@liuchintao

> Increasing the request_timeout parameter value may help:
> https://github.com/uken/fluent-plugin-elasticsearch#request_timeout

I ran into the same problem in my project, but what causes it? Is it because Fluentd has too little memory?

@jorgebirck

See #525

@jyotibhanot

@a-hat: Did you find a resolution? I am facing a similar issue.

@JonasGroeger

We're seeing the same issue with the default memory buffer, which according to the docs is:

buffer_type memory
flush_interval 60s
retry_limit 17
retry_wait 1.0
num_threads 1
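
For reference, those are v0.12-style parameter names; in Fluentd v1 buffer-section syntax the rough equivalent would be:

<buffer>
  # was: buffer_type memory
  @type memory
  flush_interval 60s
  # was: retry_limit 17
  retry_max_times 17
  retry_wait 1.0
  # was: num_threads 1
  flush_thread_count 1
</buffer>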

slw07g commented May 13, 2022

halp

@frankbecker

I am seeing the same issue.

@cosmo0920
Collaborator

> overflow_action block

This parameter is intended only for batch-like workloads where Fluentd sends records in bulk. With block, the process always gets stuck when the buffer is full. Please consider using one of the other choices, such as throw_exception or drop_oldest_chunk.

see: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters
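
Applied to the buffer section from the original config, the change would look roughly like this (a sketch; everything except overflow_action stays as it was):

<buffer>
  @type file
  path /var/log/fluentd-buffers/kubernetes.system.buffer
  flush_mode interval
  retry_type exponential_backoff
  flush_thread_count 2
  flush_interval 5s
  retry_forever
  retry_max_interval 30
  chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}"
  queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}"
  # drop the oldest chunk instead of blocking the writing thread when the buffer is full
  overflow_action drop_oldest_chunk
</buffer>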
