The settings.index.mapping settings from the index template are not always being applied on index creation #30486
Pinging @elastic/es-core-infra
Do I get it right that the total field limit setting doesn't get applied, but that other settings DO get applied? I'm also wondering whether you have several templates that match your index names, or only the logstash one?
Correct, the rest of the settings in the index template are applied successfully, and most days the field limit is as well. I output all the index templates and went through them; this is the only one that applies to logstash-*.
Here are all the templates that exist on this setup:
$ curl -XGET 127.0.0.1:9200/_template
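(Not part of the original thread: when comparing which templates could match an index name, the standard filter_path response filter can trim the listing down to just each template's patterns and order.)
# show only index_patterns and order for every template, to spot overlapping matches
$ curl -XGET '127.0.0.1:9200/_template?filter_path=*.index_patterns,*.order&pretty'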
We are having an identical problem, but on ES 6.3.0 with ingest via NiFi instead of Logstash. In our case the mappings are applied, but the template settings that change our number of shards (from 5 to 40) and our refresh_interval (from 1s to 300s) are not. When the templates aren't properly applied to our indices we aren't able to ingest at acceptable rates. Currently, we are creating indices in advance and then validating their settings as a workaround.
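A minimal sketch of that workaround (the index name and date are made up; the shard count and refresh interval are the values from the comment above). Passing the settings explicitly at creation time means they do not depend on the template being picked up:
# hypothetical pre-created index with explicit settings
$ curl -XPUT 127.0.0.1:9200/myindex-2018.07.01 -H'Content-Type: application/json' -d '{"settings":{"index":{"number_of_shards":40,"refresh_interval":"300s"}}}'
# verify the settings actually stuck before pointing ingest at the index
$ curl -XGET '127.0.0.1:9200/myindex-2018.07.01/_settings?filter_path=*.settings.index.number_of_shards,*.settings.index.refresh_interval&pretty'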
I don't know if this is related, but I have a template with number_of_replicas set to 0, yet indexes are created with number_of_replicas of 1. The other setting I overrode (number_of_shards) is applied correctly: https://discuss.elastic.co/t/set-up-index-template-with-0-replicas-always-creates-with-1/143010
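For reference, a small sketch that reproduces that scenario (template and index names are hypothetical): put a template with number_of_replicas set to 0, create a matching index with no body, and check what the index actually got:
# hypothetical template and index names, reproducing the replica-count case
$ curl -XPUT 127.0.0.1:9200/_template/zero_replicas -H'Content-Type: application/json' -d '{"index_patterns":["myindex-*"],"settings":{"index":{"number_of_replicas":0}}}'
$ curl -XPUT 127.0.0.1:9200/myindex-test
$ curl -XGET '127.0.0.1:9200/myindex-test/_settings?filter_path=*.settings.index.number_of_replicas&pretty'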
This has been open for quite a while, and we haven't made much progress on this due to focus in other areas. For now I'm going to close this as something we aren't planning on implementing. We can re-open it later if needed.
I am currently running the ELK stack version 6.2.2 with X-Pack on CentOS 7.x, and I have been randomly seeing this issue on the three clusters I run that need a total_fields.limit value greater than 1000. Most days the index gets created with our configured limit of 3000, but occasionally it does not. We created a monitor watching the Logstash logs for errors about the 1000 field limit. Here is an example from this morning:
This instance was alerting this morning, so I checked the current day's index settings:
$ curl -XGET 127.0.0.1:9200/logstash-2018.05.09/_settings
{
"logstash-2018.05.09": {
"settings": {
"index": {
"routing": {
"allocation": {
"require": {
"tag": "fast"
}
}
},
"refresh_interval": "5s",
"number_of_shards": "6",
"provided_name": "logstash-2018.05.09",
"merge": {
"scheduler": {
"max_thread_count": "1"
}
},
"creation_date": "1516324816144",
"number_of_replicas": "1",
"uuid": "pmMZplQtQYKA5YtiqRMwXA",
"version": {
"created": "5060399",
"upgraded": "6020299"
}
}
}
}
}
No settings for the increased field limit are present, so I manually updated the limit on this index:
$ curl -XPUT 127.0.0.1:9200/logstash-2018.05.09/_settings -H'Content-Type: application/json' -d '{"index.mapping.total_fields.limit": 3000}'
And checked again to verify:
$ curl -XGET 127.0.0.1:9200/logstash-2018.05.09/_settings
{
"logstash-2018.05.09": {
"settings": {
"index": {
"routing": {
"allocation": {
"require": {
"tag": "fast"
}
}
},
"mapping": {
"total_fields": {
"limit": "3000"
}
},
"refresh_interval": "5s",
"number_of_shards": "6",
"provided_name": "logstash-2018.05.09",
"merge": {
"scheduler": {
"max_thread_count": "1"
}
},
"creation_date": "1516324816144",
"number_of_replicas": "1",
"uuid": "pmMZplQtQYKA5YtiqRMwXA",
"version": {
"created": "5060399",
"upgraded": "6020299"
}
}
}
}
}
Field limit settings are now present.
Here is the index template in use:
$ curl -XGET 127.0.0.1:9200/_template/logstash
{"logstash":{"order":0,"index_patterns":["logstash-"],"settings":{"index":{"routing":{"allocation":{"require":{"tag":"fast"}}},"mapping":{"total_fields":{"limit":"3000"}},"refresh_interval":"5s","number_of_shards":"6","number_of_replicas":"1","merge":{"scheduler":{"max_thread_count":"1"}}}},"mappings":{"default":{"dynamic_templates":[{"message_field":{"mapping":{"index":true,"norms":false,"type":"text"},"match_mapping_type":"string","match":"message"}},{"string_fields":{"mapping":{"index":true,"norms":false,"type":"text","fields":{"raw":{"ignore_above":256,"index":true,"type":"keyword","doc_values":true}}},"match_mapping_type":"string","match":""}},{"float_fields":{"mapping":{"type":"float","doc_values":true},"match_mapping_type":"double","match":""}},{"date_fields":{"mapping":{"type":"date","doc_values":true},"match_mapping_type":"date","match":""}},{"geo_point_fields":{"mapping":{"type":"geo_point","doc_values":true},"match_mapping_type":"string","match":"*"}}],"properties":{"@timestamp":{"type":"date","doc_values":true},"geoip":{"dynamic":true,"type":"object","properties":{"latitude":{"type":"float","doc_values":true},"ip":{"type":"ip","doc_values":true},"location":{"type":"geo_point","doc_values":true},"longitude":{"type":"float","doc_values":true}}},"level":{"type":"text"},"alert_occurrences":{"type":"integer"},"@Version":{"index":true,"type":"keyword","doc_values":true},"response_time":{"type":"integer"},"ResponseTime":{"type":"float"},"Win32Status":{"type":"integer"},"event":{"type":"object","properties":{"timestamp":{"type":"date"}}},"facility":{"type":"text"},"HTTPStatus":{"type":"integer"},"response_size":{"type":"integer"},"delay_sec":{"type":"integer"},"xdelay_sec":{"type":"integer"},"SourceGeo":{"dynamic":true,"type":"object","properties":{"latitude":{"type":"float","doc_values":true},"ip":{"type":"ip","doc_values":true},"location":{"type":"geo_point","doc_values":true},"longitude":{"type":"float","doc_values":true}}},"DestinationGeo":{"dynamic":true,"type":"object","properties":{"latitude":{"type":"float","doc_values":true},"ip":{"type":"ip","doc_values":true},"location":{"type":"geo_point","doc_values":true},"longitude":{"type":"float","doc_values":true}}},"lobos_request":{"type":"object","properties":{"queryParams":{"type":"object","properties":{"datetime":{"type":"text"},"endDate":{"type":"text"},"start":{"type":"text"},"startDate":{"type":"text"}}}}}}}},"aliases":{}}}
I left the JSON compressed to keep this issue description from getting too long, but you can see the template includes the field limit setting, and most days it gets applied properly. Roughly every third day, one of the three clusters with the increased field limit does not get it applied to the new index, even though all the other settings in the index template are present.
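One way to probe this without waiting for the daily rollover (the test index name below is made up; it only needs to match logstash-*) is to create a throwaway index, check whether the limit landed, and delete it again. This only checks a single point in time, so it may not reproduce the intermittent failure:
# hypothetical throwaway index matching the logstash-* pattern
$ curl -XPUT 127.0.0.1:9200/logstash-template-test
# check whether the template's field limit was applied
$ curl -XGET '127.0.0.1:9200/logstash-template-test/_settings?filter_path=*.settings.index.mapping.total_fields.limit&pretty'
# clean up the test index
$ curl -XDELETE 127.0.0.1:9200/logstash-template-test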