Add support for tuning a receive buffer of the UDP socket #36

Merged: 2 commits into prometheus:master on Sep 30, 2016
Conversation

@hasso (Contributor) commented Sep 14, 2016

With heavier collectd traffic, the receive buffer of the UDP socket needs to be increased to avoid packet drops. Raising the OS defaults is often not what you want, as that changes the default for all sockets in the system.

With go-collectd you can create your own socket, apply all the needed options, and pass it to the Server (collectd/go-collectd#22).

There should probably also be a defer to close the socket somewhere, but I'm not really sure how that should work with the goroutines.
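
A minimal sketch of that approach, assuming go-collectd's network package; the function name, address, and 8 MiB buffer size are illustrative choices, not taken from this PR:

import (
	"context"
	"log"
	"net"

	"collectd.org/api"
	"collectd.org/network"
)

// startTunedServer binds its own UDP socket, enlarges the kernel receive
// buffer, and only then hands the socket to go-collectd's network.Server.
func startTunedServer(ctx context.Context, w api.Writer) {
	laddr, err := net.ResolveUDPAddr("udp", ":25826")
	if err != nil {
		log.Fatalf("Failed to resolve UDP address: %v", err)
	}
	conn, err := net.ListenUDP("udp", laddr)
	if err != nil {
		log.Fatalf("Failed to listen on UDP socket: %v", err)
	}
	// 8 MiB is an arbitrary example; size it to your traffic.
	if err := conn.SetReadBuffer(8 * 1024 * 1024); err != nil {
		log.Fatalf("Failed to set receive buffer: %v", err)
	}
	srv := &network.Server{Conn: conn, Writer: w}
	log.Fatal(srv.ListenAndWrite(ctx))
}

SetReadBuffer asks the kernel for a larger SO_RCVBUF, which the kernel may cap (e.g. net.core.rmem_max on Linux), so the sysctl limit may need raising as well.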

@hasso (Contributor, Author) commented Sep 29, 2016

No comments?

@juliusv (Member) commented Sep 29, 2016

@hasso Sorry, and thanks for the ping. This fell through the cracks.

I'm wondering: the default connection setup in your new vendoring (used when the user doesn't specify their own Conn) does a few things that are omitted from your own setup but seem sensible: https://github.com/hasso/collectd_exporter/blob/b177f50aa861d5db6e919bc667d73240d9674a08/vendor/collectd.org/network/server.go#L43-L70

  • setting the addr if it's empty
  • setting the multicast interface name (though I'm not sure of the significance)

Maybe add those to your custom connection setup too?
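
For context, the vendored default setup linked above does roughly the following when srv.Conn is nil (a paraphrase, not the verbatim vendored code):

	// Inside (*network.Server).ListenAndWrite, when no Conn was supplied:
	addr := srv.Addr
	if addr == "" {
		// Fall back to collectd's default port, exported by the
		// package as network.DefaultService ("25826").
		addr = ":" + network.DefaultService
	}
	laddr, err := net.ResolveUDPAddr("udp", addr)
	// ...
	if laddr.IP != nil && laddr.IP.IsMulticast() {
		// Optionally resolve srv.Interface via net.InterfaceByName,
		// then join the group with net.ListenMulticastUDP.
	} else {
		srv.Conn, err = net.ListenUDP("udp", laddr)
	}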

@@ -248,6 +250,25 @@ func startCollectdServer(ctx context.Context, w api.Writer) {
		log.Fatalf("Unknown security level %q. Must be one of \"None\", \"Sign\" and \"Encrypt\".", *collectdSecurity)
	}

	laddr, err := net.ResolveUDPAddr("udp", *collectdAddress)
	if err != nil {
		log.Fatal(err)
Review comment (Member):
These log.Fatals should have some wrapping error message before the raw returned error, like the log.Fatal above on line 250. In this case, something like:

log.Fatalf("Failed to resolve binary protocol listening UDP address %q: %v", *collectdAddress, err)

		srv.Conn, err = net.ListenUDP("udp", laddr)
	}
	if err != nil {
		log.Fatal(err)
Review comment (Member):

Same here and below.

@juliusv (Member) commented Sep 29, 2016

Never closing the connection is probably OK in this case, as it would only get closed when the program exits anyway.

@hasso (Contributor, Author) commented Sep 30, 2016

I've made the log messages more informative now. The code already handles the case where the address is empty – in the collectd_exporter context this means the binary protocol server isn't started at all, which is of course a completely valid use case.
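
That guard looks roughly like this (a sketch based on the description above, not the exact exporter code; *collectdAddress is the flag shown in the diff):

	// An empty address means "don't start the binary protocol server".
	if *collectdAddress == "" {
		return
	}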

I'm not sure about multicast – I don't use it, and I'm not even sure it should be there. I know collectd supports it, but in practice I consider it a problematic can of worms. Adding support for specifying the interface is easy, but it adds another command-line option that would probably never be used in reality. If you say it's OK, I can of course add the code (see the sketch below for what it might look like).
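
For reference, the interface option being discussed could look roughly like this (hypothetical flag name; this was ultimately not added to the exporter):

	var collectdInterface = flag.String("collectd.interface", "",
		"Network interface to use for multicast (optional).")

	// ...after resolving laddr as in the diff above:
	if laddr.IP != nil && laddr.IP.IsMulticast() {
		var ifi *net.Interface
		if *collectdInterface != "" {
			if ifi, err = net.InterfaceByName(*collectdInterface); err != nil {
				log.Fatalf("Failed to look up interface %q: %v", *collectdInterface, err)
			}
		}
		srv.Conn, err = net.ListenMulticastUDP("udp", ifi, laddr)
	} else {
		srv.Conn, err = net.ListenUDP("udp", laddr)
	}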

@juliusv (Member) commented Sep 30, 2016

Good point about the empty address!

About multicast: no real idea either, but yeah, after reading the code once again I tend to agree.

Thanks! 👍

@juliusv juliusv merged commit 9c1da6a into prometheus:master Sep 30, 2016