
Update to v8.6.0 #814

Closed
wants to merge 1 commit into from

Conversation

docker-elk-updater[bot] (Contributor)

Automated changes by create-pull-request GitHub action

@antoineco (Collaborator)

The issue seems to be caused by a race. Locally, I see the following log entries immediately after sending data to Logstash:

docker-elk-elasticsearch-1  | {"@timestamp":"2023-01-17T13:54:07.316Z", "log.level": "INFO", "message":"[.ds-logs-generic-default-2023.01.17-000001] creating index, cause [initialize_data_stream], templates [logs], shards [1]/[1]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[elasticsearch][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.metadata.MetadataCreateIndexService","elasticsearch.cluster.uuid":"K_g_RxUhTE-5juCd-Y9Wxw","elasticsearch.node.id":"JwrZcq42S_CVVZ4aefqqUA","elasticsearch.node.name":"elasticsearch","elasticsearch.cluster.name":"docker-cluster"}
docker-elk-elasticsearch-1  | {"@timestamp":"2023-01-17T13:54:07.317Z", "log.level": "INFO", "message":"adding data stream [logs-generic-default] with write index [.ds-logs-generic-default-2023.01.17-000001], backing indices [], and aliases []", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[elasticsearch][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.metadata.MetadataCreateDataStreamService","elasticsearch.cluster.uuid":"K_g_RxUhTE-5juCd-Y9Wxw","elasticsearch.node.id":"JwrZcq42S_CVVZ4aefqqUA","elasticsearch.node.name":"elasticsearch","elasticsearch.cluster.name":"docker-cluster"}

However, inside GitHub Actions, Elasticsearch is apparently not done initialising by the time we send the "docker-elk" message to Logstash.

Running tests one more time.
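One way to harden the test suite against this kind of race would be to wait for the Elasticsearch cluster to report at least yellow health before sending any test message to Logstash. A minimal sketch in Python, assuming Elasticsearch is reachable on localhost:9200 with the stack's default elastic/changeme credentials (adjust both if they differ):

```python
# Sketch: block until Elasticsearch reports at least "yellow" cluster health,
# so it is safe to send the test message to Logstash afterwards.
# Assumes localhost:9200 and the default elastic/changeme credentials.
import base64
import time
import urllib.request

HEALTH_URL = "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=5s"
AUTH = "Basic " + base64.b64encode(b"elastic:changeme").decode()

def wait_for_elasticsearch(retries: int = 30, delay: float = 2.0) -> None:
    for _ in range(retries):
        try:
            req = urllib.request.Request(HEALTH_URL, headers={"Authorization": AUTH})
            with urllib.request.urlopen(req, timeout=10) as resp:
                if resp.status == 200:
                    return  # cluster is ready, safe to send data to Logstash
        except OSError:
            pass  # Elasticsearch not accepting connections (or not ready) yet
        time.sleep(delay)
    raise RuntimeError("Elasticsearch did not become ready in time")

wait_for_elasticsearch()
```

The `wait_for_status=yellow` parameter makes Elasticsearch itself hold the request until the cluster reaches that state (or the timeout expires), so the loop only retries on connection failures and timeouts.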

@antoineco (Collaborator) commented Jan 18, 2023

The core test suite was indeed racy.

However, there is another issue with the Fleet server, which now seems to listen on 127.0.0.1:8220 by default instead of 0.0.0.0:8220, probably because TLS isn't enabled.

Ref. elastic/elastic-agent#2197
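For illustration, the symptom can be observed from the Docker host by calling Fleet Server's status endpoint on the published port: when the agent binds to 127.0.0.1 inside the container, the request gets no usable answer even though the port mapping exists. A minimal sketch, assuming port 8220 is published on localhost and TLS is disabled:

```python
# Sketch: check whether Fleet Server answers on its published port from the
# Docker host. Assumes the container publishes 8220 on localhost and that
# TLS is disabled (plain HTTP).
import urllib.error
import urllib.request

STATUS_URL = "http://localhost:8220/api/status"

try:
    with urllib.request.urlopen(STATUS_URL, timeout=5) as resp:
        print(f"Fleet Server answered: HTTP {resp.status}")
except urllib.error.HTTPError as err:
    # Any HTTP response, even an error code, means the port is reachable.
    print(f"Fleet Server answered: HTTP {err.code}")
except OSError as err:
    # Connection refused/reset: consistent with a 127.0.0.1 bind inside the container.
    print(f"No usable answer on {STATUS_URL}: {err}")
```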

@antoineco (Collaborator)

We will have to wait at least until v8.6.2 for this to be patched: elastic/elastic-agent#2198

Closing

antoineco closed this Jan 27, 2023
antoineco deleted the update/main branch January 27, 2023 15:58
antoineco mentioned this pull request Jan 29, 2023