-
Notifications
You must be signed in to change notification settings - Fork 8.3k
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
[discuss] Removal of kibana.index configuration setting #60053
Comments
Pinging @elastic/kibana-operations (Team:Operations) |
/cc @peterschretlen |
Historically, a separate The two use cases that stand out the most to me are localization and scaling out Kibana. While our localization story still needs to mature, the translations are done at the Kibana instance level. So in order to have Kibana in two languages, you would need two Kibana instances pointing at the same Elasticsearch cluster. Scaling out Kibana, whether that's for reporting or general task management is another reason why you might want multiple Kibana's pointing to a cluster. Generally, I feel like Spaces doesn't quite cover all multi-tenant needs and we'll need a way to support multiple Kibana instances and configurations pointing to the same Elasticsearch cluster. I'll send over some additional data shortly. cc: @skearns64 @VijayDoshi for additional thoughts |
I think it's a good eventual goal. Also +1 to alex. maybe we can come up with a list that would support this eventuality. A configuration for supporting multi kibana's in the same cluster, e.g. appending server.name to the index or something. |
Aside from multi-tenancy, are there reasons someone might want to rename
In the scaling case won't the instances share the same Kibana index (multi-instance but single-tenant) - perhaps there are exception to this case? |
I'm not aware of anyone changing the If users are trying to scale out Kibana, I'd anticipate them leaving these settings alone. Same with localization support, they can run multiple instances with different languages without changing these settings. |
While working through how Kibana will transition to system indices this conversation came up again. Allowing the user to manually specify the index names which are used for system-indices goes against the premise of system-indices, where these are treated as an implementation detail of the product. I'd like to propose the following two paths forward:
SolutionsRequire the use of SpacesThis option would require the least amount of effort and would lead to the simplest implementation of system indices. The use of Spaces still allows the user to provision Kibana in a highly-available manner and have different instances supporting different localization settings. Add a
|
When migrating from legacy multi-tenancy via
For Alerts and Actions, there are complexities introduced by their reliance on "encrypted saved-object attributes" which are encrypted before being stored in the |
Thanks for the additional detail on migrations @kobelb. I think it'd be worth digging in a bit further to see what the scope of work would be to support the different types of saved objects listed above. At a high level, here are my findings after some internal follow ups, research and analyzing usage.
In order to move forward with the removal of
As long as we have a plan and guidelines for migration (a script does not seem possible) and effectively communicate that plan to our community and customer base, I'm +1 on migrating to system indices in Kibana assuming there is no strong opposition from stack leads and other teams at Elastic. |
To touch on ML specifically, @droberts195 doesn't think this will be an issue as the lack of isolation of ML with legacy multi-tenancy is considered a bug. There is an issue here to track integration with Spaces currently targeting 7.11. |
Agreed. I chatted briefly with @XavierM about the Security team's cases and timeline saved-objects. I've attempted to summarize our discussion below. For the rest of the saved-objects which aren't currently importable/exportable, I think it'd be worthwhile for us to answer the following questions:
@spong do you mind answering these questions for the detection engine? TimelinesThe security team built their own import/export UI for timeline saved-objects. This is good because we at least have a way to migrate timelines from a legacy tenant to a Space, but there are complications with integrating timelines into the saved-object management import/export. A timeline itself is modeled using a If we were to make the timeline use saved-object references, it wouldn't solve all of our problems since we don't want the notes and pinned events to be listed on the saved-object management screens and just want them to be automatically included when the timeline is exported. Saved-object references have repercussions on the behavior of authorization once sharing saved-objects in multiple spaces is implemented, which I don't think we want for timelines since end-users won't think of timelines and their associated notes and events as distinct entities. So, we, unfortunately, have more than one reason to figure out a solution to this problem. When confronted with a similar problem when discussing how to reduce the time to visualize, we decided that the Dashboard saved-object itself should embed the Visualizations which are only used in the context of the specific Dashboard. This would at a minimum require that our saved-object migrations allow us to combine all saved-object types into just a CasesThere isn't a way to import or export cases at the moment. Cases are using saved-object references to associate the |
@kobelb apologies that I missed this. I'm going to pull in a couple other people to help out. @peluja1012 @spong @FrankHassanabad @paul-tavares - regarding the questions below:
In the Endpoint Mgmt cases, right now we're only using Saved Objects with the lists plugin for Trusted Apps (unreleased 7.10 feature). Same list plugin as Exceptions. I'm looking for some help to answer the above questions as I don't want to mislead anyone. @paul-tavares which additional SOs does Trusted Apps introduce? I imagine it'd be very similar to Exceptions. |
So on the Detections side we've got a few different types of SO's we're working with spread across two separate plugins: Security Solution PluginDetection Rules (Backed by Alerting/Actions SO) Lists PluginException Lists (Both agnostic and non-agnostic, use same mapping, and can reference non-SO To answer the above questions:
None of these SO's are currently exportable/importable via saved-object management. This was (is?) not exposed via the Alerting/Actions framework, and so couldn't be implemented for
All Alerting/Actions backed SO's do, but
Generally speaking the relationship is as follows: A And as mentioned above, an
We are not currently leveraging the SO reference array as this was not exposed from the Alerting/Actions framework. All references are maintained via custom fields.
A little bit of everything here. It would be useful for users to back up their entire cluster by exporting all of their Rules, Actions, associated Timeline Templates, Exceptions and linked Value Lists, just a single Rule and all referenced objects, or just individual Rules, Actions, Exceptions, etc. Hopefully this helps, and is the right amount of information you're looking for. There's obviously quite a bit more here, so happy to dive deeper in certain areas if it'd be helpful! 🙂 |
@mikecote this discussion likely interests you because of the current inability to export alerts/actions. Also, have you all investigated whether or not it's feasible to have Alert/Actions utilize saved-object references? |
@kobelb Thanks for the ping. The alerts are currently using SO references to reference their connectors (actions) so we're all good on that front. I can provide some extra steps / challenges we're faced when it comes to import / export of alerts and connectors:
|
Thank you everyone who has helped out with this discussion thus far. We've been able to identify quite a few common issues that prevent saved-object export/import from being used to migrate from legacy multi-tenancy to Spaces:
However, there are a few outstanding questions that would help further flesh out these limitations. In an effort to minimize the cognitive burden, there's quite a bit of redundancy below. If your name doesn't appear in the heading of a section, please feel free to ignore it! ML - @droberts195Once ML jobs are migrated to be space-specific, is my understanding correct that all ML jobs will show up in all "legacy multi-tenancy" deployments of Kibana in all Spaces? If that's the case, then we shouldn't have to do any export/import of ML jobs. Endpoint - @kevinlogI don't think that we have an answer to the following questions for the
Ingest Manager - @ruflinFor the Ingest Manager specific saved objects, would you mind answering the following questions? At the time this issue was originally authored, they were the following:
APM - @sqren
Uptime - @andrewvc
|
@jen-huang @nchaulet Could one of you follow up on the questions above for Ingest Manager? |
Yes, this is true immediately after upgrading to the version of Kibana where the "ML in Spaces" project is completed. So if we suppose that release ends up being 7.11 then:
Things get more complicated after that though:
So if a consolidation of "legacy multi-tenancy" deployments into a single deployment is done after 7.11 then deciding what to do with the saved objects that store the space-awareness for each ML job would need some special handling. I guess the migration would have to choose one deployment as the favoured one, keep the ML job saved objects from it, and discard the ML job saved objects from other deployments. Then an administrator could rearrange things manually after that initial migration. To be honest though, ML doesn't work brilliantly with "legacy multi-tenancy" today. For example, if you create a data frame analytics job then we create an index pattern to make it easy to look at what's in the destination index. That index pattern will only exist in the deployment that the job was created in though. So if you try to navigate to the destination index in another deployment then the expected index pattern won't exist. So I imagine that most users who use the "legacy multi-tenancy" architecture either don't use ML or have found other workarounds, for example disabling the ML Kibana app in all but one of the Kibana deployments. |
|
I do not think we should be able to import/export our saved object:
No
Yes we have save object related to each other but they are not using reference (enrolment api key are linked to an agent policy, agent package policy are linked to agent policy) |
@droberts195 thanks for the detailed explanation, it's much appreciated. My primary goal is that when we remove legacy multi-tenancy, that end-users don't lose all of their data in the non-default tenant and have to manually recreate it. If my understanding is correct, in the worst-case scenario the end-user is using a non-default tenant to create ML jobs after ML Jobs become space aware, and when they are forced to use the default tenant going forward, they'll lose the spaces that these ML Jobs have been assigned. Does this seem like a tolerable experience to you, or should we invest the effort to ensuring that we can migrate this information from the non-default tenant to the default tenant? @nchaulet am I being naive in thinking that ideally, agent policies would be able to be imported/exported between different instances of Kibana? Is your concern that they're so tied to integrations, which include ES assets, that it's infeasible to add this ability? |
agent policies are really coupled to integrations, so it would probably do not make sense to export/import without having integrations properly installed. |
Yes, this is correct. The worst that will happen when combining all tenants into one is that the spaces that the jobs are visible in end up being lost or wrong in the combined tenant. The jobs themselves cannot get lost because we consider what Elasticsearch reports as the source of truth for which jobs exist. So in the worst case after combining the tenants an administrator will have to go to the job management list and make sure all the jobs are in the appropriate spaces in the combined tenant. Given that ML was never designed with multi-tenanted Kibana in mind and doesn't work well in it today I don't think it's that bad at all really. |
Yes
No
No
Not that I'm aware of. |
On Observability we need a lot of test data to be streamed continuously. This requires significant resources (CPU, RAM and disk space). Instead of having every dev do this we have a single cluster with test data, that every dev can connect to. So multiple local Kibana instances connect to a single central Elasticsearch instance.
Last time I tried using CCS it wasn't supported between a local cluster and Elastic Cloud. So it might be difficult to replicate what we have today. Will investigate. |
One reason we switched to using our own individual kibana.index settings in Observability was because when we all just connected to a single remote cluster from a local Kibana using the same .kibana index, it would sometimes lead to nasty migration race conditions. In those situations we'd somewhat regularly see "please delete .kibana1 index” etc. Is that a known issue? If we remove this kibana.index setting, would the advice to all devs (inside and outside of Elastic) be to never connect to a single ES from multiple different Kibana instances? |
@rudolf and I discussed the migration issue here. |
It's been decided, we will be removing the |
It seems like the observability shared cluster use case is still a bit up in the air. We'd appreciate help and guidance on how to best move forward with that since cross-cluster search and migration race conditions have prevented us from working on a shared cluster in the past, without the kibana.index setting. |
@jasonrhodes absolutely, I'll include accomodating for this in the new issue outlining the approach. |
A lot has changed since
kibana.index
was introduced in terms of supporting multi-tenancy and there is a lot of confusion around it. Over time, there have been additional configuration properties which were added that also need to be modified for those with multiple Kibana instances using the same Elasticsearch cluster (xpack.reporting.index
,xpack.task_manager.index
). From what I can tell, we have never actually fully documented this functionality outside of the Kibana Settings doc.The built-in
system_user
role is based on using the defaults for these indices. For users modifying these, they will need to create new users/roles to access these indices. This adds to the upgrade burden as they need to modify them when we modify the default privilages.We now have Spaces, which allow for isolation of Kibana saved objects and continue to improve upon this feature like sharing to other spaces. For those who require a separate Kibana instance, I would be interested to understand why. To continue support for this, a user could have a small Elasticsearch cluster per Kibana instance, and Cross-Cluster Search would be used to access the shared data. There is obviously more overhead both in terms of resources and management than before.
This change would greatly reduce the complexity of migrating to System Indices.
@alexfrancoeur @arisonl @ppf2 can any of you provide insights to this being used currently and issues with removing this in 8.0?
The text was updated successfully, but these errors were encountered: