concurrent inserts cause an index to be deleted #1619
Remove mapping is only being called when you actually do delete a mapping.
I do not delete data in my indexing code; it's just performing inserts. The application that performs the inserts is an HTTP server written in Go. If I do call delete explicitly, I can see it happen in the logs, like so:

```
[2012-01-18 01:40:35,920][INFO ][cluster.metadata ] [Tag] [users] deleting index
```

However, I am not seeing this. I am seeing this:

```
[2012-01-18 00:11:32,057][INFO ][cluster.metadata ] [Tag] [[users]] remove_mapping [user]
```

Each time remove_mapping is called, it seems the old index is deleted and a new index is created. If I change the HTTP server from using 4 CPUs to using 1 CPU, the problem goes away: I don't see remove_mapping getting called and the index keeps getting bigger. In Go I do this to force it to use 1 CPU: `runtime.GOMAXPROCS(1)`
I am not talking about deleting an index, but deleting a mapping. In your case, for an index called `users`, the `remove_mapping [user]` log line means the `user` mapping was removed, not that the index itself was deleted.
Hmm, that's strange. From my understanding, there are no DELETE requests going to elasticsearch, just PUT requests. I will see if I can reproduce this with a simpler example.
Here is a custom build that logs when delete mapping is called: http://dl.dropbox.com/u/2136051/elasticsearch-0.18.8-SNAPSHOT.zip. Can you run with it and see if you see the log? The logging is at INFO level and states: `<------ DELETE MAPPING CALLED!`.
OK. I updated all the nodes in the cluster to the custom build, elasticsearch-0.18.8-SNAPSHOT.zip.
I assume that you don't see the `<------ DELETE MAPPING CALLED!` log message?
Yep. I don't see any log messages related to mappings, and I don't see the remove_mapping message. It's been running happily for quite some time now: the index is growing, and if I search for id:1 I can still see it.
Maybe it's the 0.18.5 version? That's weird, since there wasn't a problem like that. Well, if it works, you can run with 0.18.7, since the 0.18.8 snapshot you have has almost no changes on top of it.
Thanks. Currently at 15 million records and climbing, and I don't see any issues. Closing.
This is the x-pack side of the removal of `accumulateExceptions()` for both `TransportNodesAction` and `TransportTasksAction`. There are occasional, random failures during API calls that are silently ignored from the caller's perspective, which also leads to strange API responses that contain neither results nor errors, implying nothing went wrong when something did.
* es/6.x: (155 commits)
  * Make persistent tasks work.
  * Made persistent tasks executors pluggable.
  * Removed ClientHelper dependency from PersistentTasksService.
  * Added AllocatedPersistentTask#waitForPersistentTaskStatus(...) that delegates to PersistentTasksService#waitForPersistentTaskStatus(...)
  * Add adding ability to associate an ID with tasks.
  * Remove InternalClient and InternalSecurityClient (#3054)
  * Make the persistent task status available to PersistentTasksExecutor.nodeOperation(...) method
  * Refactor/to x content fragments2 (#2329)
  * Make AllocatedPersistentTask members volatile (#2297)
  * Moves more classes over to ToXContentObject/Fragment (#2283)
  * Adapt to upstream changes made to AbstractStreamableXContentTestCase (#2117)
  * Move tribe to a module (#2088)
  * Persistent Tasks: remove unused isCurrentStatus method (#2076)
  * Call initialising constructor of BaseTasksRequest (#1771)
  * Always Accumulate Transport Exceptions (#1619)
  * Pass down the provided timeout.
  * Fix static / version based BWC tests (#1456)
  * Don't call ClusterService.state() in a ClusterStateUpdateTask
  * Separate publishing from applying cluster states
  * Persistent tasks: require allocation id on task completion (#1107)
  * Fixes compile errors in Eclipse due to generics
  * ...
I have an application that performs concurrent inserts into an elasticsearch index. It uses 4 CPUs against a 5-node cluster running 0.18.5; a single node has 16 cores, and the inserts are sent to the node running the elasticsearch instance.

At a certain point in time (and it's not that predictable), I see remove_mapping in the logs and a new index is being referenced when using count.

If I force/bind the application that performs the inserts to one CPU, the problem goes away: the index does not get rebuilt, index counts don't get reset, and I don't see remove_mapping in the logs.