
[Agent] Standardized local metadata #14461

Closed · elasticmachine opened this issue Oct 1, 2019 · 8 comments · Fixed by #17894
@elasticmachine (Collaborator)

Original comment by @nchaulet:

Description

To be able to display metadata to the user like this, we should agree on the set of metadata that the fleet agent is going to send: os, memory, fleet_version, and maybe ip too?

[Screenshot: proposed agent metadata display, Oct 1, 2019]
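
For discussion, here is a minimal sketch of what such a payload could look like. All field names and types below are assumptions for illustration, not an agreed schema — the actual set is what this issue is meant to decide.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// LocalMetadata collects the host facts mentioned above.
// Every field name here is an assumption, not the final schema.
type LocalMetadata struct {
	OS           string   `json:"os"`            // e.g. "Ubuntu 18.04"
	MemoryTotal  uint64   `json:"memory"`        // total physical memory, in bytes
	FleetVersion string   `json:"fleet_version"` // agent/fleet build version
	Hostname     string   `json:"hostname"`
	IPs          []string `json:"ips,omitempty"` // a host may have several addresses
}

func main() {
	m := LocalMetadata{
		OS:           "Ubuntu 18.04",
		MemoryTotal:  8 << 30, // 8 GiB
		FleetVersion: "7.5.0",
		Hostname:     "web-01",
		IPs:          []string{"10.0.0.5", "192.168.1.12"},
	}
	out, err := json.MarshalIndent(m, "", "  ")
	if err != nil {
		panic(err)
	}
	// Prints the JSON document the agent would report to fleet.
	fmt.Println(string(out))
}
```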

@elasticmachine (Collaborator, Author)

Original comment by @ph:

When you are talking about memory, are you talking about maximum memory?
Not sure about the IP addresses; I mean, it's quite common that a host has multiple addresses.
What is the use case for having the IP?

@elasticmachine (Collaborator, Author)

Original comment by @nchaulet:

@ph the goal of having standardized metadata is to identify a specific agent. I created this issue to think about what metadata makes sense, but I'm happy to remove some or add new ones.
For the IP, I have used some SaaS monitoring tools that display the IP for an agent, but I never used that information, so maybe we can remove it.

@elasticmachine (Collaborator, Author)

Original comment by @ph:

@nchaulet @mattapperson concerning the local metadata: these are system metadata that a user cannot change, right?

I see that in the enrollment response we have two types of metadata: local metadata and user-provided metadata. I think we should also have the same concept at enrollment time. WDYT?
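
To make the distinction concrete, here is a minimal sketch of an enroll payload carrying both kinds side by side. The field names are assumptions for illustration, not the actual fleet API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// EnrollRequest sketches the two kinds of metadata side by side.
// Field names are assumptions, not the real fleet API.
type EnrollRequest struct {
	// Collected automatically from the host; not editable by the user.
	LocalMetadata map[string]interface{} `json:"local_metadata"`
	// Free-form labels the operator supplies at enroll time.
	UserProvidedMetadata map[string]interface{} `json:"user_provided_metadata"`
}

func main() {
	req := EnrollRequest{
		LocalMetadata:        map[string]interface{}{"os": "Ubuntu 18.04", "hostname": "web-01"},
		UserProvidedMetadata: map[string]interface{}{"env": "staging", "team": "sysops"},
	}
	out, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(out))
}
```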

@elasticmachine (Collaborator, Author)

Original comment by @ph:

Never mind, I had a brain freeze; we already have that in the enroll request.

@philippkahr (Contributor) commented Nov 11, 2019

Hi @ph ,

I hope it is OK that I jump in here. I am looking forward to fleet management! I am not sure if this is the right issue, but anyway. I come from a SysOps / DevOps position, and there is something that would be interesting to have in the metadata. Also, I think you are missing the hostname / container name in the metadata?

Let's say I have Ubuntu with apache/httpd installed. When I deploy the Beats agent to that host, I wait a few minutes until Beats has done some sort of autodiscovery, e.g. Filebeat checking whether /var/log/httpd is available, since Filebeat has an Apache module. Beats would report that in the metadata, where I could simply click "enable Apache log collection" and have it automatically. Additionally, I could query which hosts have httpd installed but are not yet shipping logs.
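
A rough sketch of that probe idea (purely illustrative, not Beats code; the service names and paths are assumptions): check well-known paths and report hits, which could then be attached to the agent's local metadata.

```go
package main

import (
	"fmt"
	"os"
)

// knownServices maps a service name to a path whose presence suggests the
// service is installed. The entries here are examples only.
var knownServices = map[string]string{
	"apache": "/var/log/httpd",
	"nginx":  "/var/log/nginx",
}

// discoverServices probes the well-known paths and returns the names of
// the services that appear to be present on this host.
func discoverServices() []string {
	var found []string
	for name, path := range knownServices {
		if _, err := os.Stat(path); err == nil {
			found = append(found, name)
		}
	}
	return found
}

func main() {
	// The result could be reported in local metadata so Fleet can offer
	// "enable apache log collection" for matching hosts.
	fmt.Println("discovered services:", discoverServices())
}
```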

I keep running into situations where the infrastructure gets bigger and bigger, and no one is sure which service is running where, or whether a given service is doing anything at all. I am currently working around this with auditbeat package discovery and a little Ansible playbook that deploys the configuration based on the packages auditbeat reports.

[Diagram: Elastic-Fleet-Autodiscover]

@ph (Contributor) commented Nov 19, 2019

Yes, we really want to have service discovery on the host from the running agent, but this won't be done from day one. It's also linked to another project where we have to actually define what the modules will look like.

cc @ruflin

@ph changed the title from "Standardized local metadata" to "[Agent] Standardized local metadata" on Nov 19, 2019
@michalpristas (Contributor)

related or dup: #15500

@elasticmachine (Collaborator, Author)

Pinging @elastic/ingest-management (Team:ingest-management)
