Evaluate how we are exposing the api endpoints #3801

Closed
ph opened this issue Aug 26, 2015 · 4 comments

Comments

@ph
Contributor

ph commented Aug 26, 2015

The API will need to expose its endpoints through a webserver, and we also have an http input that provides its own HTTP server. It's probably a good time to evaluate whether we should offer the HTTP server as a first-class service that plugin authors can hook into to add their own endpoints.

PRO:

  • Reduces code duplication
  • Offers a centralized configuration for the certificate/port/IP (probably via a logstash.yml; see the sketch below)
  • Mimics what Elasticsearch is doing with its plugin architecture

CON:

  • The certificate is shared
  • The same port is used
  • Performance issues or blocking on the input side could possibly impact the debugging API

Ref: #2611
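
To make the centralized-configuration idea concrete, here is a minimal sketch of what such settings could look like in logstash.yml. The setting names (webserver.host, webserver.port, webserver.ssl.*) are purely illustrative assumptions, not actual Logstash settings:

    # Hypothetical logstash.yml snippet: one shared webserver used by both the
    # monitoring/debugging API and any plugin (e.g. the http input) that mounts
    # endpoints on it. Setting names are invented for illustration only.
    webserver:
      host: "127.0.0.1"
      port: 9600
      ssl:
        enabled: true
        certificate: "/etc/logstash/certs/logstash.crt"
        key: "/etc/logstash/certs/logstash.key"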

@ph ph added the monitoring label Sep 9, 2015
@ph ph changed the title from "Metrics: Evaluate how we are exposing the api endpoints" to "Evaluate how we are exposing the api endpoints" Sep 11, 2015
@ph ph added the v2.1.0 label and removed the v2.0.0 and 2.0 labels Sep 11, 2015
@suyograo suyograo added v5.0.0 and removed v2.1.0 labels Oct 20, 2015
@ph ph assigned purbon Dec 7, 2015
@purbon
Contributor

purbon commented Dec 9, 2015

I like this idea. I think some of your concerns can be addressed by providing proper service interaction; for example, I don't see any particular issues with:

  • Providing different ports per service invocation.
  • Also providing different certificates.

The only important thing for me to consider for now is the performance implications, but at the end of the day we have similar ones already; the code is just distributed across plugins rather than provided only by logstash-core.

It's worth exploring the idea of having this service, even if for now it is just to expose #2611, but with a more general service design. What do you think?
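
As a thought experiment on what such a "webserver as a service" could look like, here is a rough Ruby sketch. It is not the actual Logstash code: the class name SharedWebServer, the mount method, and the example path are all invented for illustration, and it assumes Puma (mentioned later in this thread) as the embedded server.

    # Illustrative sketch only: a single shared webserver that plugin authors
    # (e.g. the http input or the metrics API) register their endpoints with.
    require "json"
    require "puma"
    require "puma/server"

    class SharedWebServer
      def initialize(host: "127.0.0.1", port: 9600)
        @host = host
        @port = port
        @routes = {} # path => handler proc
      end

      # Plugins register an endpoint by mounting a handler under a path.
      def mount(path, &handler)
        @routes[path] = handler
      end

      # Rack entry point: dispatch to the registered handler, or return 404.
      def call(env)
        handler = @routes[env["PATH_INFO"]]
        return [404, { "Content-Type" => "text/plain" }, ["Not Found"]] unless handler
        body = handler.call(env)
        [200, { "Content-Type" => "application/json" }, [JSON.generate(body)]]
      end

      def start
        server = Puma::Server.new(self) # one Puma thread pool shared by all endpoints
        server.add_tcp_listener(@host, @port)
        server.run.join
      end
    end

    # Usage: the debugging API and an input plugin would share one server (and one port).
    web = SharedWebServer.new(port: 9600)
    web.mount("/_node/stats") do |_env|
      { events: { in: 0, out: 0 } }
    end
    web.start

Sharing a single server is exactly what produces the PRO items above (one config, less code) and the CON items (shared certificate and port, and a handler that blocks ties up workers in the shared thread pool).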

@ph
Contributor Author

ph commented Dec 9, 2015

Yeah, you are correct: if we expose different ports, which would in fact mean a different Puma thread pool per service, the problem would be solved when one of the services hangs.

But I think the added complexity of doing that doesn't give us a lot of value unless we have multiple different inputs requiring a webserver, which isn't the case right now; we can always reevaluate that at a later time.
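
To illustrate the isolation argument, here is a minimal sketch (again an assumption, not Logstash code) of running two Puma servers on separate ports, each with its own thread pool, so a hung handler on one cannot starve the other:

    # Illustrative only: two independent Puma servers with separate thread pools.
    require "puma"
    require "puma/server"

    api_app   = ->(_env) { [200, { "Content-Type" => "application/json" }, ['{"status":"green"}']] }
    input_app = ->(_env) { sleep 60; [200, {}, ["slow"]] } # simulates a blocked input endpoint

    api_server = Puma::Server.new(api_app)
    api_server.add_tcp_listener("127.0.0.1", 9600)

    input_server = Puma::Server.new(input_app)
    input_server.add_tcp_listener("127.0.0.1", 8080)

    # Each #run returns the server's own thread; a hang in input_app only ties up
    # workers in input_server's pool, leaving the API server responsive.
    [api_server.run, input_server.run].each(&:join)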

@purbon
Contributor

purbon commented Dec 9, 2015

Agree a lot with you.

/purbon


@ph
Contributor Author

ph commented Jan 12, 2016

We have decided to implement a separate service, decoupled from the http input's server.
see https://github.com/elastic/logstash/tree/feature/metrics

@ph ph closed this as completed Jan 12, 2016