
Kibana server is not ready yet #25806

Closed
st3rling opened this issue Nov 16, 2018 · 41 comments
Labels
Team:Operations

Comments

@st3rling

After upgrading ELK from 6.4.3 to 6.5, I get this error message in the browser:
Kibana server is not ready yet

elasticsearch -V
Version: 6.5.0, Build: default/rpm/816e6f6/2018-11-09T18:58:36.352602Z, JVM: 1.8.0_161

https://www.elastic.co/guide/en/kibana/current/release-notes-6.5.0.html#known-issues-6.5.0 doesn't apply since I don't have X-Pack

Kibana log:

{"type":"log","@timestamp":"2018-11-16T16:14:02Z","tags":["info","migrations"],"pid":6147,"message":"Creating index .kibana_1."}
{"type":"log","@timestamp":"2018-11-16T16:14:02Z","tags":["warning","migrations"],"pid":6147,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}

Deleting .kibana_1 and restarting Kibana just recreates .kibana_1 and produces the same error.

@tylersmalley
Contributor

How many instances of Kibana do you have running? Are you sure no other Kibana instances are pointing to this index?

@tylersmalley added the Team:Operations label on Nov 16, 2018
@elasticmachine
Contributor

Pinging @elastic/kibana-operations

@st3rling
Author

Pretty sure only one instance is running:
[root@rconfig-elk-test tmp]# sudo systemctl status kibana
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2018-11-16 11:13:52 EST; 1h 41min ago
Main PID: 6147 (node)
Tasks: 10
Memory: 292.0M
CGroup: /system.slice/kibana.service
└─6147 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana....
Nov 16 11:13:52 rconfig-elk-test systemd[1]: Started Kibana.
Nov 16 11:13:52 rconfig-elk-test systemd[1]: Starting Kibana...

[root@rconfig-elk-test tmp]# ps -ef | grep -i kibana
root 6081 3357 0 11:10 pts/1 00:00:00 tail -f /var/log/kibana/error.log
kibana 6147 1 0 11:13 ? 00:00:46 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root 7078 1763 0 12:55 pts/0 00:00:00 grep --color=auto -i kibana

I'm attaching the full error log from Kibana. Somewhere in the middle you will see a fatal error:
"message":"Request Timeout after 30000ms"}

Then it goes through a bunch of plugin messages again and ends with:

"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}

Elasticsearch is running:

[root@rconfig-elk-test tmp]# curl -GET http://10.10.99.144:9200
{
"name" : "node-1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "x-Am75D7TuCpewv7H9_45A",
"version" : {
"number" : "6.5.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "816e6f6",
"build_date" : "2018-11-09T18:58:36.352602Z",
"build_snapshot" : false,
"lucene_version" : "7.5.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
kibana error log.txt

@tylersmalley
Contributor

Can you try deleting both .kibana_1 and .kibana_2 then restarting?
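
For anyone following along, those two steps would look roughly like this (a sketch, assuming Elasticsearch is reachable on localhost:9200 and Kibana runs under systemd):

curl -XDELETE http://localhost:9200/.kibana_1
curl -XDELETE http://localhost:9200/.kibana_2
sudo systemctl restart kibana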

@st3rling
Author

I only have .kibana_1 index:

[root@rconfig-elk-test tmp]# curl -GET http://10.10.99.144:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .monitoring-es-6-2018.11.13 bKC7vTkYQ5KJITjDR5ERdA 1 0 0 0 261b 261b
green open .monitoring-kibana-6-2018.11.13 VaXQT5fHQrSbtTiuYeQK5w 1 0 0 0 261b 261b
red open .kibana_1 oQH-qDGaTLWGqngLWLv0fg 1 0
green open .monitoring-es-6-2018.11.15 L4_xGqabTDqCa1YGP9ZoQw 1 0 428 64 483.8kb 483.8kb
green open ha-network-t3-2018.11 o9FlJzUjTUi59gT-ahAZQQ 5 0 15505 0 4.2mb 4.2mb
red open .monitoring-es-6-2018.11.16 UNszj2VJTHuhk670gjo6Zg 1 0
green open .monitoring-kibana-6-2018.11.15 J797uGhLRVO709cayCE2hQ 1 0 0 0 261b 261b
green open undefined-undefined-t2-2018.11 blNcjE-MTVWfFLAtkQ5eTw 5 0 514 0 462.6kb 462.6kb
[root@rconfig-elk-test tmp]#

I did try deleting .kibana_1 and restarting Kibana - it produces the exact same log.

@tylersmalley
Contributor

Can I get the Elasticsearch logs during this time?

@st3rling
Author

Attached the full log (for some reason the Elasticsearch logs have proper timestamps while Kibana's timestamps are ahead).
elasticsearch.zip

@CRCinAU

CRCinAU commented Nov 20, 2018

I've just come across this same thing. CentOS 7.5, upgraded to ES 6.5.0 today.

Everything else restarts except Kibana.

@cdoggyd

cdoggyd commented Nov 20, 2018

I'm having the same issue on CentOS 7.5.1804 after upgrading ELK to 6.5.0 last night.

@littlebunch

I too had the same issue after a CentOS 7.5 upgrade. After verifying that .kibana had been migrated to .kibana_1, I deleted .kibana and then manually aliased .kibana to .kibana_1. Seems to have resolved the issue.

@CRCinAU

CRCinAU commented Nov 20, 2018

Any chance of posting a step-by-step for myself and probably others that have been / will be hit by this?

@littlebunch

First, try deleting the versioned indices and then restart as suggested above:

curl -XDELETE http://localhost:9200/.kibana_1
systemctl restart kibana

This worked for me on other servers, and I don't know why it didn't on this particular one -- I'm no Kibana expert.

If that doesn't work, verify that a versioned index has actually been created and fully populated (e.g. the byte counts match). After that, delete the original .kibana:

curl -XDELETE http://localhost:9200/.kibana

then alias it:

curl -X POST "localhost:9200/_aliases" -H 'Content-Type: application/json' -d' { "actions" : [ { "add" : { "index" : ".kibana_1", "alias" : ".kibana" } } ] }'

Then restart kibana.
HTH

@tylersmalley
Contributor

@littlebunch if you have a versioned index, .kibana should not exist and it should already be an alias.

For anyone still experiencing this - IF the original .kibana index still exists, and there is a message telling you another instance is still running - try deleting .kibana_1 and .kibana_2 if they exist. Then restart Kibana and provide the logs here, along with any logs from Elasticsearch. I presume something is failing along the way.
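
A quick way to check whether .kibana is still a concrete index or already an alias (a sketch, assuming Elasticsearch on localhost:9200):

# shows .kibana here only if it is an alias, along with the index it points to
curl -XGET "http://localhost:9200/_cat/aliases/.kibana?v"
# a row literally named .kibana appears here only if it is still a plain, unmigrated index
curl -XGET "http://localhost:9200/_cat/indices/.kibana*?v"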

@tylersmalley
Contributor

@st3rling in your logs there are a lot of primary shard timeouts. Not sure if that is related.

[2018-11-16T11:13:59,384][WARN ][o.e.x.m.e.l.LocalExporter] [node-1] unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: UnavailableShardsException[[.monitoring-es-6-2018.11.16][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-es-6-2018.11.16][0]] containing [25] requests]]
	at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:128) ~[?:?]

@littlebunch

@tylersmalley Yes, I agree. As I stated, the suggested "fix" worked for me on other servers, not this particular one. I don't know why -- just reporting what I had to do to get it to work.

@tylersmalley
Contributor

Yes, and thank you for sharing your experience. Just making sure folks don't accidentally delete their data.

@littlebunch

It was a desperation move, yes, and it was on a dev machine. ;) I will poke around the logs a bit more later today and see if I find anything that might shed some light.

@st3rling
Author

@tylersmalley Those errors were due to:
"no allocations are allowed due to cluster setting [cluster.routing.allocation.enable=none]"

Fixed it by issuing:

cluster.routing.allocation.enable : "all"
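
For reference, if that setting is applied via the cluster settings API rather than elasticsearch.yml, it would look roughly like this (a sketch; using a persistent setting here is an assumption):

curl -X PUT "http://10.10.99.144:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}'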

Restarted Elastic and Kibana
After running:
curl -GET http://10.10.99.144:9200/_cluster/allocation/explain
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"unable to find any unassigned shards to explain [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false]"}],"type":"illegal_argument_exception","reason":"unable to find any unassigned shards to explain [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false]"},"status":400}

The error is gone from the Elasticsearch log, but I'm having the same issues and the same messages in the Kibana log. Both logs are attached.
kibana error log.txt
elasticsearch.log

@pjhampton
Contributor

I got this issue after there was a version mismatch between elasticsearch and kibana. Maybe I'm being captain obvious here, but make sure both kibana and elasticsearch are 6.5.0

@st3rling
Author

Yep, they are:
/usr/share/kibana/bin/kibana -V
6.5.0
/usr/share/elasticsearch/bin/elasticsearch -V
Version: 6.5.0, Build: default/rpm/816e6f6/2018-11-09T18:58:36.352602Z, JVM: 1.8.0_161

@tylersmalley
Contributor

@st3rling I believe that allocation setting is what initially caused the migration issue in Kibana. If that is the case, you should be able to delete the .kibana_1 and .kibana_2 indices and try again.

@CRCinAU

CRCinAU commented Nov 21, 2018

Ok, with a few hints from this topic, I found we didn't have a '.kibana_1' index, but we did have '.kibana_2'.

I removed this via:

curl -XDELETE http://localhost:9200/.kibana_2

Restarted Kibana and everything was happy again.

@st3rling
Author

@tylersmalley Yes, that did it and all is happy now. User error - I missed step 8 in the rolling upgrades guide:
https://www.elastic.co/guide/en/elasticsearch/reference/6.4/rolling-upgrades.html
Thank you Tyler and all others for your help and input.

@rubpa

rubpa commented Nov 30, 2018

Faced this issue on one (single-node) cluster. Here's what happened:

Error:

{"type":"log","@timestamp":"2018-11-30T05:51:01Z","tags":["warning","migrations"],"pid":24358,"message":"Another Kibana instance appears to
be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_7 and restarting Kibana."}

List of indices:

curl -X GET "localhost:9200/_cat/indices/.kib*?v&s=index"
health status index     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana-6 MSm8voH-RrOkZsihyEZpRw   1   1        123            3    269.4kb        269.4kb
green  open   .kibana_7 hJ4NByb6SJOKpUGCNqtRoA   1   0          0            0       261b           261b

Deleted .kibana_7:

curl -X DELETE "localhost:9200/.kibana_7"

Restart kibana:

systemctl restart kibana
{"type":"log","@timestamp":"2018-11-30T05:58:45Z","tags":["info","migrations"],"pid":24883,"message":"Creating index .kibana_7."}
{"type":"log","@timestamp":"2018-11-30T05:58:45Z","tags":["info","migrations"],"pid":24883,"message":"Migrating .kibana-6 saved objects to .kibana_7"}
{"type":"log","@timestamp":"2018-11-30T05:58:46Z","tags":["info","migrations"],"pid":24883,"message":"Pointing alias .kibana to .kibana_7."}
{"type":"log","@timestamp":"2018-11-30T05:58:46Z","tags":["info","migrations"],"pid":24883,"message":"Finished in 1307ms."}
{"type":"log","@timestamp":"2018-11-30T05:58:46Z","tags":["listening","info"],"pid":24883,"message":"Server running at http://x.x.x.x:5601"}

@jorgelbg

jorgelbg commented Dec 8, 2018

Found a similar issue, related to a closed index named .tasks #25464 (comment)
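
If a closed system index such as .tasks is the suspect, one way to check for it and reopen it might be the following (a sketch, assuming Elasticsearch on localhost:9200; reopening rather than deleting is an assumption based on the linked comment):

# closed indices show "close" in the status column
curl -XGET "http://localhost:9200/_cat/indices?v&h=index,status"
# reopen the closed index
curl -XPOST "http://localhost:9200/.tasks/_open"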

@oligarx

oligarx commented Dec 12, 2018

Same issue. Anyone have a workaround?

@uhlhosting

uhlhosting commented Jan 10, 2019

Had the same issue, and this is what we did:
What I did to fix it was check all indices that were around 4-6 KB in size and basically remove them. I did not have a .kibana index, only .kibana_1, and the one that caught my attention was .triggered_watches. Once that and .kibana_1 were removed, everything worked.

Note: On my first attempt I removed only .kibana_1 and that did not help - the .kibana_1 index was simply rebuilt upon restart. I had to double-check my indices, and that is when I found the size correlation.

curl -GET http://127.0.0.1:9200/_cat/indices?v
curl -XDELETE http://localhost:9200/.triggered_watches
curl -XDELETE http://localhost:9200/.kibana_1

Followed by:
sudo systemctl restart kibana

This brought my instance back.

Hope this helps!

@josephsmartinez

josephsmartinez commented Jan 10, 2019

First, try deleting the versioned indices and then restart as suggested above:

curl -XDELETE http://localhost:9200/.kibana_1
systemctl restart kibana

This worked for me on other servers, and I don't know why it didn't on this particular one -- I'm no Kibana expert.

If that doesn't work, verify that a versioned index has actually been created and fully populated (e.g. the byte counts match). After that, delete the original .kibana:

curl -XDELETE http://localhost:9200/.kibana

then alias it:

curl -X POST "localhost:9200/_aliases" -H 'Content-Type: application/json' -d' { "actions" : [ { "add" : { "index" : ".kibana_1", "alias" : ".kibana" } } ] }'

Then restart kibana.
HTH

Thank you. This fixed my issue loading Kibana. (Kibana server is not ready yet)

The below command fixed the issue.

curl -XDELETE http://localhost:9200/.kibana

@shoeb240

Can you try deleting both .kibana_1 and .kibana_2 then restarting?

I had a similar issue; this worked for me.

@shoeb240

Also, what I had to do was:

localhost:9200/.kibana_2/_settings

{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
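
As a full command, that settings update might look like this (a sketch, assuming Elasticsearch on localhost:9200):

curl -X PUT "http://localhost:9200/.kibana_2/_settings" -H 'Content-Type: application/json' -d'
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}'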

@suresh466

I got this issue after there was a version mismatch between elasticsearch and kibana. Maybe I'm being captain obvious here, but make sure both kibana and elasticsearch are 6.5.0

Thanks a ton!

@makibroshett

makibroshett commented Feb 21, 2019

While migrating from ELK 6.3.2 to 6.5.0 I ran into this issue, but none of the suggested fixes worked, and I too don't have security, so the known issue is not applicable...

The way I worked around this was to:

  • shutdown kibana
  • delete the .kibana_1 and .kibana_2
  • re-index my .kibana as .kibana_3
  • create an alias .kibana_main pointing to .kibana_3
  • change my kibana config so that KIBANA_INDEX is set to .kibana_main
  • restart kibana (a rough command sketch of these steps is below)
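
A rough sketch of those steps as commands (assumptions: Elasticsearch on localhost:9200, KIBANA_INDEX corresponds to the kibana.index setting in kibana.yml, and the index names match the list above):

# delete the partially migrated indices
curl -XDELETE "http://localhost:9200/.kibana_1"
curl -XDELETE "http://localhost:9200/.kibana_2"
# re-index .kibana into .kibana_3
curl -X POST "http://localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{ "source": { "index": ".kibana" }, "dest": { "index": ".kibana_3" } }'
# point the alias .kibana_main at .kibana_3
curl -X POST "http://localhost:9200/_aliases" -H 'Content-Type: application/json' -d'
{ "actions": [ { "add": { "index": ".kibana_3", "alias": ".kibana_main" } } ] }'
# in kibana.yml: kibana.index: ".kibana_main", then restart Kibana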

It looks like a stunt but it worked (see logs below).
The fix from Elastic assumes we're using security, but I'd suggest it be extended to cover users who are not using security.

Hope this helps

Olivier

Logs:

{"type":"log","@timestamp":"2019-02-21T09:02:10Z","tags":["warning","stats-collection"],"pid":1,"message":"Unable to fetch data from kibana collector"} {"type":"log","@timestamp":"2019-02-21T09:02:10Z","tags":["license","info","xpack"],"pid":1,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"} {"type":"log","@timestamp":"2019-02-21T09:02:12Z","tags":["reporting","warning"],"pid":1,"message":"Enabling the Chromium sandbox provides an additional layer of protection."} {"type":"log","@timestamp":"2019-02-21T09:02:12Z","tags":["info","migrations"],"pid":1,"message":"Creating index .kibana_main_4."} {"type":"log","@timestamp":"2019-02-21T09:02:13Z","tags":["info","migrations"],"pid":1,"message":"Migrating .kibana_3 saved objects to .kibana_main_4"} {"type":"log","@timestamp":"2019-02-21T09:02:13Z","tags":["info","migrations"],"pid":1,"message":"Pointing alias .kibana_main to .kibana_main_4."} {"type":"log","@timestamp":"2019-02-21T09:02:14Z","tags":["info","migrations"],"pid":1,"message":"Finished in 1353ms."} {"type":"log","@timestamp":"2019-02-21T09:02:14Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"} {"type":"log","@timestamp":"2019-02-21T09:02:14Z","tags":["status","plugin:spaces@6.5.0","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}

@godekdls

godekdls commented Apr 24, 2019

@rubpa @makibroshett
Haven't you lost your Kibana objects like dashboards or index patterns that way?

@makibroshett

@godekdls
No, I don't remember losing any of these objects.

@DundiVedantam

After recreating the Kibana index, I have not lost my Kibana objects.
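
For anyone nervous about losing saved objects, one conservative option not mentioned in this thread is to copy the saved-objects index into a backup before deleting anything (a sketch, assuming Elasticsearch on localhost:9200 and that .kibana_1 currently holds your objects):

curl -X POST "http://localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{ "source": { "index": ".kibana_1" }, "dest": { "index": ".kibana_backup" } }'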

@wbartussek

wbartussek commented Oct 16, 2019

The symptom was the same in my case: the infamous message "Kibana server is not ready yet", even though Kibana was running (Ubuntu 18.04, Kibana 7.4 on ES 7.4). In fact I found .kibana_1, .kibana_2 and .kibana_3 among my indices. Deleting them and restarting Kibana had no visible effect. Then I discovered that there were three more Kibana-related indices: .kibana_task_manager_1, .kibana_task_manager_2 and .kibana_task_manager_3. After deleting them and restarting Kibana, it came back with its well-known home page and dashboard.

@dberry99

I just wanted to say thank you for this @ontoport .

I had upgraded from 7.2 -> 7.5, and afterwards was stuck with "Kibana server is not ready yet" and couldn't find any indication in the logs (with verbose: true) as to why it could not be ready. No breaking change looked relevant, and the rpmnew kibana.yml files didn't show a new parameter I had not already defined...

So after seeing your post and with nothing to lose in my home lab, I did the below:

curl -XGET localhost:9200/_cat/shards |grep -i kibana_task

(output cut for brevity)
~
.kibana_task_manager_1 0 p STARTED
.kibana_task_manager_2 0 p STARTED
.kibana_task_manager 0 p STARTED
~

curl -XDELETE localhost:9200/.kibana_task_manager/
curl -XDELETE localhost:9200/.kibana_task_manager_1/
curl -XDELETE localhost:9200/.kibana_task_manager_2/

After this I restarted kibana and all is well now. Thank you.

@Renu353

Renu353 commented Jan 7, 2020

Hi Dberry,

I am using Kibana 7.2 and I want to upgrade the entire ELK stack to 7.5. Will you please tell me what the necessary changes are for the upgrade process, including Logstash?

@priyanka-git-account

priyanka-git-account commented Jan 21, 2020

I had the same issue (Kibana 6.8.2). Three indices existed in my cluster: .kibana, .kibana_1 and .kibana_2. I followed the steps below:

  1. Stop Kibana service.
  2. Deleted .kibana_2 and .kibana index -
    curl -XDELETE localhost:9200/.kibana_2
    curl -XDELETE localhost:9200/.kibana
  3. Then point .kibana_1 index to alias .kibana -
    curl -X POST "localhost:9200/_aliases" -H 'Content-Type: application/json' -d' { "actions" : [ { "add" : { "index" : ".kibana_1", "alias" : ".kibana" } } ] }'
  4. Restart Kibana service.
    Kibana loaded again successfully.

@tylersmalley
Contributor

I am going to close this issue as it has been taken over from the original issue and contains a lot of misinformation.

When upgrading, some users have run into issues where the migrations didn't complete successfully. When that happens and Kibana is restarted, it will output an error message with what is required to resolve it.

@sethuramvenkatesh

Can I get the Elasticsearch logs during this time?

Yes, as long as fluentd is exporting to ES healthily, your logs are sane and saved.
