Fail to publish to elasticsearch #174

Closed
qiulin opened this issue Jun 30, 2015 · 4 comments

qiulin commented Jun 30, 2015

Packetbeat runs on a server with nginx and publishes events to an Elasticsearch cluster on another server.

This is the log from the /var/log/messages file.

Jun 30 11:58:22 proxy139 /usr/bin/packetbeat[25356]: publish.go:87: Publishing failed: Fail to publish event
Jun 30 11:58:22 proxy139 /usr/bin/packetbeat[25356]: publish.go:171: Fail to publish event type on output [%!s(*elasticsearch.ElasticsearchOutput=&{zlog-http 15000 0xc2080fcd40 1000000000 10000 map[] 0xc209120cc0}) %!s(*fileout.FileOutput=&{{/var/log/packetbeat packetbeat.log 1024000 7 0xc221939970 1024432}})]: invalid argument
Jun 30 11:58:22 proxy139 /usr/bin/packetbeat[25356]: publish.go:87: Publishing failed: Fail to publish event
Jun 30 11:58:22 proxy139 /usr/bin/packetbeat[25356]: output.go:116: Fail to perform many index operations in a single API call: Post http://192.168.121.214:9200/_bulk: dial tcp 192.168.121.214:9200: too many open files
Jun 30 11:58:22 proxy139 /usr/bin/packetbeat[25356]: output.go:116: Fail to perform many index operations in a single API call: Post http://192.168.121.214:9200/_bulk: dial tcp 192.168.121.214:9200: too many open files
Jun 30 11:58:22 proxy139 /usr/bin/packetbeat[25356]: publish.go:171: Fail to publish event type on output [%!s(*elasticsearch.ElasticsearchOutput=&{zlog-http 15000 0xc2080fcd40 1000000000 10000 map[] 0xc209120cc0}) %!s(*fileout.FileOutput=&{{/var/log/packetbeat packetbeat.log 1024000 7 0xc221939970 1024432}})]: invalid argument
Jun 30 11:58:22 proxy139 /usr/bin/packetbeat[25356]: publish.go:87: Publishing failed: Fail to publish event
Jun 30 11:58:22 proxy139 /usr/bin/packetbeat[25356]: publish.go:171: Fail to publish event type on output [%!s(*elasticsearch.ElasticsearchOutput=&{zlog-http 15000 0xc2080fcd40 1000000000 10000 map[] 0xc209120cc0}) %!s(*fileout.FileOutput=&{{/var/log/packetbeat packetbeat.log 1024000 7 0xc221939970 1024432}})]: invalid argument

This is my packetbeat config file.

################### Packetbeat Agent Configuration Example ######################

# This file contains an overview of various configuration settings. Please consult
# the docs at <http://packetbeat.com/docs/configuration.html> for more details.

# The Packetbeat shipper works by sniffing the network traffic between your
# application components. It inserts meta-data about each transaction into
# Elasticsearch.

############################# Agent ############################################
shipper:

 # The name of the shipper that publishes the network data. It can be used to group 
 # all the transactions sent by a single shipper in the web interface.
 # If this option is not defined, the hostname is used.
 name:

 # The tags of the shipper are included in their own field with each
 # transaction published. Tags make it easy to group transactions by different
 # logical properties.
 tags: ["nginx", "url-redirect", "packetbeat"]

 # Uncomment the following if you want to ignore transactions created
 # by the server on which the shipper is installed. This option is useful
 # to remove duplicates if shippers are installed on multiple servers.
 ignore_outgoing: true

############################# Sniffer ############################################

# Select the network interfaces to sniff the data. You can use the "any"
# keyword to sniff on all connected interfaces.
interfaces:
 device: eth0 
 type: "af_packet"
 buffer_size_mb: 50


############################# Protocols ######################################
protocols:
  http:

    # Configure the ports where to listen for HTTP traffic. You can disable
    # the http protocol by commenting the list of ports.
    ports: [80, 8080, 8000, 5000, 8002]

    # Uncomment the following to hide certain parameters in URL or forms attached
    # to HTTP requests. The names of the parameters are case insensitive.
    # The value of the parameters will be replaced with the 'xxxxx' string.
    # This is generally useful for avoiding storing user passwords or other
    # sensitive information.
    # Only query parameters and top level form parameters are replaced.
    # hide_keywords: ['pass', 'password', 'passwd']
    set_all_headers: true

  mysql:

    # Configure the ports where to listen for MySQL traffic. You can disable
    # MySQL protocol by commenting the list of ports.
    # ports: [3306]

  pgsql:

    # Configure the ports where to listen for Pgsql traffic. You can disable
    # Pgsql protocol by commenting the list of ports.
    # ports: [5432]

  redis:

    # Configure the ports where to listen for Redis traffic. You can disable
    # Redis protocol by commenting the list of ports.
    # ports: [6379]

  thrift:

    # Configure the ports where to listen for Thrift traffic. You can disable
    # Thrift protocol by commenting the list of ports.
    # ports: [9090]

############################# Output ############################################

# Configure what outputs to use when sending the data collected by packetbeat.
# You can enable one or multiple outputs by setting enabled option to true.
output:

  # Elasticsearch as output
  # Options:
  # host, port: where Elasticsearch is listening on
  # save_topology: specify if the topology is saved in Elasticsearch
  elasticsearch:
    enabled: true
    host: 192.168.121.214
    port: 9200
    save_topology: false


  # Redis as output
  # Options:
  # host, port: where Redis is listening on
  # save_topology: specify if the topology is saved in Redis
  #redis:
  #  enabled: true
  #  host: localhost
  #  port: 6379
  #  save_topology: true

  # File as output
  # Options:
  # path: where to save the files
  # filename: name of the files
  # rotate_every_kb: maximum size of the files in path
  # number_of_files: maximum number of files in path
  file:
    enabled: true
    path: "/var/log/packetbeat"
    filename: packetbeat.log
    rotate_every_kb: 1000
    number_of_files: 7

############################# Processes ############################################

# Configure the processes to be monitored and how to find them. If a process is
# monitored, then Packetbeat attempts to use its name to fill in the `proc` and
# `client_proc` fields.
# The processes can be found by searching their command line by a given string.
#
# Process matching is optional and can be enabled by uncommenting the following
# lines.
#
procs:
  enabled: false
  monitored:
#    - process: mysqld
#      cmdline_grep: mysqld
#
#    - process: pgsql
#      cmdline_grep: postgres
#
    - process: nginx
      cmdline_grep: nginx
#
#    - process: app
#      cmdline_grep: gunicorn

geoip:
  paths: ["/usr/share/GeoIP/GeoLiteCountry.dat"]

Packetbeat version 1.0.0.Beta1 (amd64) on CentOS 6.6.


tsg commented Jun 30, 2015

Thanks for the good bug report. I tried to reproduce it, but so far I haven't managed to.

If you can reproduce this, then after it happens, can you run the following:

sudo ls -l /proc/9263/fd

where you replace 9263 with the pid of the packetbeat process. Thanks!
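
A quick variant that avoids looking up the PID by hand (this assumes pidof is available and the process is named packetbeat, which may not hold for every install):

# Count the file descriptors held by the packetbeat process
sudo ls /proc/$(pidof packetbeat)/fd | wc -l

# Show the per-process open-file limits for comparison
sudo grep 'open files' /proc/$(pidof packetbeat)/limits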


tsg commented Jul 29, 2015

Hi @qiulin , just wondering if you could reproduce this in the meantime?


qiulin commented Jul 30, 2015

Sorry for my late reply. I can't reproduce this problem. I'll keep trying.

@urso urso added the duplicate label Oct 23, 2015

urso commented Oct 23, 2015

In the logs it says:

Jun 30 11:58:22 proxy139 /usr/bin/packetbeat[25356]: output.go:116: Fail to perform many index operations in a single API call: Post http://192.168.121.214:9200/_bulk: dial tcp 192.168.121.214:9200: too many open files

Too many open files. Duplicate of #226.
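
As a general workaround sketch for a "too many open files" condition on a CentOS 6 box like this one (assuming packetbeat is launched from a sysvinit script; the names and values here are illustrative, not the fix tracked in #226):

# Check the limit the running process actually received
sudo grep 'open files' /proc/$(pidof packetbeat)/limits

# Raise the open-file limit in the shell that launches packetbeat,
# e.g. near the top of the init script, then restart the service
ulimit -n 65536
service packetbeat restart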

@urso urso closed this as completed Oct 23, 2015
urso pushed a commit that referenced this issue Dec 2, 2015
tsg added a commit that referenced this issue Dec 2, 2015
tsg pushed a commit to tsg/beats that referenced this issue Jan 20, 2016