
[migration v6.5] Another Kibana instance appears to be migrating the index #25464

Closed
CamiloSierraH opened this issue Nov 9, 2018 · 29 comments
Labels
bug (Fixes for quality problems that affect the customer experience), Feature:Saved Objects, Team:Core (Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc)

Comments

@CamiloSierraH

Kibana version: 6.5.0

Elasticsearch version: 6.5.0

Server OS version:

Browser version:

Browser OS version:

Original install method (e.g. download page, yum, from source, etc.):
From source

Describe the bug:
I'm using Elasticsearch and Kibana v6.4.3 and I'm testing a migration to v6.5.0.
When I start Kibana for the first time on v6.5.0 and stop the process during the migration, I end up with an empty browser page for Kibana.

Steps to reproduce:
To reproduce, I start Kibana and stop it while the logs are at this stage:

  log   [14:00:01.131] [info][migrations] Creating index .kibana_2.
  log   [14:00:01.221] [info][migrations] Reindexing .kibana to .kibana_1

At that point, the response of the cat aliases request is:

.security .security-6 - - -
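For reference, that output came from something along these lines (same host and credentials as the commands below):

curl -X GET 'http://localhost:9200/_cat/aliases?v' -u elastic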

If I try to restart the Kibana service, I get an empty page in the browser and this message in the logs:

log   [14:00:20.457] [warning][migrations] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana.

[screenshot: 2018-11-09 10:53:45]

I delete the .kibana_2 index as suggested in the logs, using this curl request:

curl -XDELETE 'http://localhost:9200/.kibana_2' --header "Content-Type: application/json" -u elastic

I restart Kibana and get this message:

[warning][migrations] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana.

I delete the .kibana_1 index as suggested in the logs, using this curl request:

curl -XDELETE 'http://localhost:9200/.kibana_1' --header "Content-Type: application/json" -u elastic

Before deleting the .kibana_1 index, shouldn't we verify that the Elasticsearch server still has the .kibana index?
I ask because, if I understand correctly, .kibana_1 is a copy of .kibana, and .kibana is deleted when the migration finishes. So if I delete .kibana_1 as requested and .kibana has already been deleted, I may lose all the dashboards/visualizations I have stored. Am I right?
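For example, something like the following (using the same host and credentials as above) would show which .kibana* indices and aliases currently exist before deleting anything:

curl -X GET 'http://localhost:9200/_cat/indices/.kibana*?v' -u elastic
curl -X GET 'http://localhost:9200/_cat/aliases/.kibana*?v' -u elastic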

I restart Kibana and this time everything works: Kibana is back in the browser, and the logs show:

[migration] finished in 688ms.

@bhavyarm
Expected behavior:

Screenshots (if relevant):

Errors in browser console (if relevant):

Provide logs and/or server output (if relevant):

Any additional context:

@timroes added the bug (Fixes for quality problems that affect the customer experience) and Team:Operations (Team label for Operations Team) labels on Nov 14, 2018
@elasticmachine
Contributor

Pinging @elastic/kibana-operations

@tylersmalley
Contributor

@CamiloSierraH - thanks for the report. This issue is caused by stopping the process which is in charge of handling the migration. This "locking" is to handle having multiple Kibana instances.

With this issue I see two possible problems:

  • We should add a note to "Reindexing .kibana to .kibana_X" stating not to stop the Kibana process.
  • The index name in the messaging for subsequent retries of the re-index might not be correct.

log [14:00:20.457] [warning][migrations] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana.

There is more information available here: https://www.elastic.co/guide/en/kibana/current/upgrade-migrations.html
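For anyone checking manually before deleting anything: if I understand the migration flow correctly, the .kibana alias is only switched over once the migration has completed, so a quick way to see the current state (assuming the default host and port) is:

curl -X GET 'http://localhost:9200/_cat/aliases/.kibana?v'

If the alias still points at .kibana_1, the half-created .kibana_2 is the one that can be removed; if it already points at .kibana_2, the migration finished.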

@Hendler

Hendler commented Nov 15, 2018

Same issue in a Docker/dev environment.

@lnx01

lnx01 commented Nov 15, 2018

Same issue - upgraded from 6.4.0 to 6.5.0 using the DEB package - it appears to be stuck on "Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana."

Deleting .kibana_2 and restarting causes the same thing to happen; it gets stuck on the message above.

The Kibana UI says "Kibana server is not ready yet" -- I cannot access /status either, same message.

@miko-code

miko-code commented Nov 15, 2018

Same issue as @lnx01, upgrading from 6.4.x to 6.5.0.

@gmeriaux

I have the same issue; I was working on my test instance, and right now I have no access to Kibana.
Is there a quick solution to get my UI back? Is it possible to downgrade the ELK stack, or just Kibana?

@CamiloSierraH
Author

@gmeriaux you need to follow these steps to get your Kibana instance back -> https://www.elastic.co/guide/en/kibana/current/release-notes-6.5.0.html#known-issues-6.5.0

@gheffern

@gmeriaux I had success with just downgrading Kibana and removing the .kibana_1 and .kibana_2 indices.

@taku333

taku333 commented Nov 20, 2018

@CamiloSierraH, @gheffern
thanks!!!
I am having trouble upgrading in a Windows environment (6.4.3 → 6.5.0).

I deleted the .kibana_2 index after starting Kibana.

@taku333

taku333 commented Nov 21, 2018

There was no problem with version 6.5.1.

I upgraded successfully in a Windows environment (6.4.3 → 6.5.1).

@jorgelbg

jorgelbg commented Dec 8, 2018

Found a similar issue while upgrading. It turned out to be related to a closed .tasks index: Kibana was failing with an index_closed_exception. This index is not normally used by Kibana (it had been closed automatically by Curator a long time ago).

pevma added a commit to StamusNetworks/SELKS that referenced this issue Dec 14, 2018
@LiaoTung

LiaoTung commented Jan 8, 2019

I noticed that Kibana should be fully stopped before deleting the indices. Although Kibana remained slow for a few minutes right after restarting - perhaps to reconstruct both indices - it came back up with all the data intact.

$ curl -XGET "https://localhost:9200/_cat/indices" | grep kibana
...
green open .kibana_2 kVo3hhokTP2hVUSfmPkGVA 1 0 181 0 184.2kb 184.2kb
green open .kibana_1 mHhRaCqKStq6bL1qRLxMVA 1 0 178 0 170.9kb 170.9kb

@walkerxk

I get the error message: "Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."
After I use curl -XDELETE http://localhost:9200/.kibana_1 to delete the index and restart Kibana, I get the same message.
The versions of the ELK stack are all 6.5.4.

@soar

soar commented Feb 5, 2019

I've faced the same problem while upgrading from 6.4.2 to 6.5.4

@Timmy93

Timmy93 commented Feb 5, 2019

I had the same problem migrating from 6.4.3 to 6.6.0.

I solved it by deleting the three indices (.kibana, .kibana_1 and .kibana_2) and restarting the Kibana service.

I used the following command from a Linux shell:
curl -X DELETE "localhost:9200/.kibana_2" && curl -X DELETE "localhost:9200/.kibana_1" && curl -X DELETE "localhost:9200/.kibana"

@lucabelluccini
Contributor

lucabelluccini commented Feb 26, 2019

Hello, the instructions provided by @Timmy93 are destructive and you will lose all your dashboards and visualizations.

The migration process is explained at this documentation page.
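If you do end up deleting indices, a safer approach is to keep a copy first. A minimal sketch (assuming the default host/port, no security, and a hypothetical backup index name) using the reindex API:

curl -X POST 'http://localhost:9200/_reindex' -H 'Content-Type: application/json' -d'
{
  "source": { "index": ".kibana_1" },
  "dest":   { "index": ".kibana_backup" }
}
'

That way the saved objects can still be recovered from .kibana_backup if something goes wrong.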

@godekdls

godekdls commented Apr 24, 2019

I was upgrading Kibana from 6.4.0 to 6.7.1. I had the same issue, so I deleted the .kibana_N indices. As @lucabelluccini mentioned, I lost all of my dashboards and index patterns.
I'm planning to upgrade once more, to 7.0.0, but I really don't want to lose Kibana objects again. Is there any way to deal with this issue without deleting indices?
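One option here, assuming a snapshot repository is registered (the repository name below is just a placeholder), would be to snapshot the .kibana* indices before starting the upgrade:

curl -X PUT 'http://localhost:9200/_snapshot/my_backup_repo/pre_upgrade_kibana?wait_for_completion=true' -H 'Content-Type: application/json' -d'
{
  "indices": ".kibana*",
  "include_global_state": false
}
'

If the migration then goes wrong, the saved objects can be restored from that snapshot instead of being lost.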

@godekdls

I just found out there was another Kibana instance on the same ES cluster. All set after upgrading the rest! This was my bad. Please, please, please make sure you don't have any other instance on the same cluster.

@notque

notque commented Apr 29, 2019

Same issue on upgrade, but now creating the index pattern is taking forever, to the point that I'm concerned it isn't working at all.

@tylersmalley
Contributor

@notque - for that issue I would recommend inspecting the ES logs as it's unrelated to Kibana migrations.

@swhi3635

I ran into this issue as well when upgrading from 6.6.2 to 6.8.0 on 1 of 3 Kibana instances using the same ES cluster.

After stopping Kibana on all 3 and deleting the .kibana_2 index, I started the updated instance and kept seeing this in the logs:

kibana[8682]: {"type":"log","@timestamp":"2019-06-18T18:34:46Z","tags":["info","migrations"],"pid":8682,"message":"Creating index .kibana_2."}
kibana[8682]: {"type":"log","@timestamp":"2019-06-18T18:34:46Z","tags":["info","migrations"],"pid":8682,"message":"Migrating .kibana_1 saved objects to .kibana_2"}
kibana[8765]: {"type":"log","@timestamp":"2019-06-18T18:34:55Z","tags":["info","migrations"],"pid":8765,"message":"Creating index .kibana_2."}
kibana[8765]: {"type":"log","@timestamp":"2019-06-18T18:34:55Z","tags":["warning","migrations"],"pid":8765,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana."}

Instead of deleting the .kibana_2 index again, I updated the alias for .kibana to point to .kibana_2. This ended up solving the issue for me:

curl -X POST "localhost:9200/_aliases" -H 'Content-Type: application/json' -d'
{
    "actions" : [
        { "remove" : { "index" : ".kibana_1", "alias" : ".kibana" } },
        { "add" : { "index" : ".kibana_2", "alias" : ".kibana" } }
    ]
}
'
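Afterwards, just to double-check, the alias listing should show .kibana pointing at .kibana_2:

curl -X GET "localhost:9200/_cat/aliases/.kibana?v"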

@Mekk

Mekk commented Jul 24, 2019

Just out of curiosity: I restarted Kibana too soon after upgrading from 7.0 to 7.2 and got stuck on "Kibana server is not ready" (and finally found the log entry saying another Kibana instance appears to be migrating). Fortunately, the message suggested which index I should delete.

It would be really nice if Kibana could pick the migration back up by itself.

@PhaedrusTheGreek
Contributor

This stuck state can also happen if Elasticsearch allocation is disabled when Kibana is upgraded.
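In that case, something like the following (a sketch, assuming the default host/port and no security) shows the current setting and re-enables shard allocation before retrying the upgrade:

curl -X GET "localhost:9200/_cluster/settings?flat_settings=true"
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{ "persistent": { "cluster.routing.allocation.enable": "all" } }
'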

@priyanka-git-account

I had the same error (Kibana 6.8.2). Three indices existed in my cluster: .kibana, .kibana_1 and .kibana_2. I followed the steps below:

1. Stop the Kibana service.
2. Delete the .kibana_2 and .kibana indices:
curl -XDELETE localhost:9200/.kibana_2
curl -XDELETE localhost:9200/.kibana
3. Point the .kibana alias at the .kibana_1 index:
curl -X POST "localhost:9200/_aliases" -H 'Content-Type: application/json' -d' { "actions" : [ { "add" : { "index" : ".kibana_1", "alias" : ".kibana" } } ] }'
4. Restart the Kibana service.

Kibana loads successfully again.

@tylersmalley added the Team:Core (Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc) label on Apr 13, 2020
@elasticmachine
Contributor

Pinging @elastic/kibana-platform (Team:Platform)

@tylersmalley removed the Team:Operations (Team label for Operations Team) label on Apr 13, 2020
@shemukr

shemukr commented Apr 15, 2020

[elastic@sjc-v2-elk-l01 ~]$ curl localhost:9200
{
  "name" : "master-1",
  "cluster_name" : "sjc-v2",
  "cluster_uuid" : "g-MOWUQGQMmgOUaCP0cdYA",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

LOGS

log [03:47:03.301] [info][savedobjects-service] Starting saved objects migrations
log [03:47:03.312] [info][savedobjects-service] Creating index .kibana_task_manager_1.
log [03:47:03.316] [info][savedobjects-service] Creating index .kibana_1.
Could not create APM Agent configuration: Request Timeout after 30000ms
log [03:47:33.313] [warning][savedobjects-service] Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms
log [03:47:35.817] [warning][savedobjects-service] Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_1/6jHlllmtTmGSJI3vco_KJw] already exists, with { index_uuid="6jHlllmtTmGSJI3vco_KJw" & index=".kibana_task_manager_1" }
log [03:47:35.818] [warning][savedobjects-service] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_1 and restarting Kibana.
log [03:47:35.828] [warning][savedobjects-service] Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_1/xvwnY15cQaStFRV-qjMbaA] already exists, with { index_uuid="xvwnY15cQaStFRV-qjMbaA" & index=".kibana_1" }
log [03:47:35.829] [warning][savedobjects-service] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana.

I am getting the same error. I deleted the indices below and restarted, but I still get the same error.

[elastic@sjc-v2-elk-l01 ~]$ curl localhost:9200/_cat/indices
red open .kibana_task_manager_1 6jHlllmtTmGSJI3vco_KJw 1 1
red open .apm-agent-configuration uD5uuI-nQa-qucX3wx3HIQ 1 1
red open .kibana_1 xvwnY15cQaStFRV-qjMbaA 1 1

@lucabelluccini
Contributor

Hello @shemukr,
The indices are in a red state, so the problem does not seem to be related to the saved objects migration.
Please reach out on http://discuss.elastic.co/ (with the output of GET _cluster/allocation/explain).
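For example:

curl -X GET "localhost:9200/_cluster/allocation/explain?pretty"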

@joshdover
Contributor

This is currently the expected behavior when a migration fails, but it will be improved by #52202, which will allow automated retries of failed migrations without user intervention.

@chtourou-youssef

Same issue here.
