Add geo data cache #340
Conversation
src/main/java/org/opensearch/geospatial/ip2geo/cache/GeoDataCache.java
public class GeoDataCache {
    private Cache<CacheKey, Map<String, Object>> cache;

    public GeoDataCache(final long maxSize) {
Shouldn't this be a singleton?
That makes sense.
We cannot make it a singleton because the maxSize property comes from the caller.
Does that mean there will be multiple caches per cluster?
There will be a single cache per node. Any suggestion on how you want to make it a singleton? I would leave the code as it is and let the user of this class decide, because the same could happen for many other classes that are singletons but have no protection against creating multiple instances.
Here is a suggestion to make the class a singleton per maxSize: we can use a factory pattern, with a factory class that provides GeoDataCache objects for a specific size. If an object has already been created for a given size, we return it; otherwise we create and return a new one.
Also, we can make GeoDataCache object creation accessible only via the factory class.
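For illustration, a minimal sketch of that suggestion, assuming a hypothetical `GeoDataCacheFactory` backed by a `ConcurrentHashMap` (neither exists in this PR):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical factory that hands out at most one GeoDataCache per maxSize value.
public class GeoDataCacheFactory {
    private static final Map<Long, GeoDataCache> INSTANCES = new ConcurrentHashMap<>();

    // Returns the cache already created for this size, or creates one if absent.
    public static GeoDataCache getOrCreate(final long maxSize) {
        return INSTANCES.computeIfAbsent(maxSize, GeoDataCache::new);
    }
}
```

The GeoDataCache constructor would then be made package-private so that the factory is the only way to obtain an instance.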
That is not a true singleton; it keeps a single instance per size. Also, there is no use case where we want a single instance for each cache size.
Another option is making this a private inner class of the class in which this cache is used, which I can do in a follow-up PR.
Codecov Report
@@ Coverage Diff @@
## feature/ip2geo #340 +/- ##
====================================================
- Coverage 87.20% 86.95% -0.25%
+ Complexity 738 725 -13
====================================================
Files 91 91
Lines 2696 2660 -36
Branches 210 214 +4
====================================================
- Hits 2351 2313 -38
- Misses 261 264 +3
+ Partials 84 83 -1
@heemin32 please attach an issue to this PR. Also add some information about this PR and what it contains.
There is no issue specific to this PR. I don't want to link the overall IP2Geo issue because there are so many PRs that it would spam the issue page. This PR contains a class implementing a cache...
 *
 * @param maxSize
 */
public void updateMaxSize(final long maxSize) {
A couple of questions on this function:
- Can you provide some details on why we need this function? I cannot think of a case where we would need it.
- If the cache you are copying from gets updated or entries are deleted while the data is being copied to the new Cache object, the new cache will be stale.
- A user can change the cache size dynamically through a configuration change.
- There are no public update/delete methods on cached values. Deletion only happens through automatic eviction by policy (cache size in this case), and it is okay to keep the supposedly deleted data a little longer.
Cache<CacheKey, Map<String, Object>> temp = CacheBuilder.<CacheKey, Map<String, Object>>builder().setMaximumWeight(maxSize).build();
int count = 0;
Iterator<GeoDataCache.CacheKey> it = cache.keys().iterator();
while (it.hasNext() && count < maxSize) {
This is an interesting condition. We are providing a way to update the max size, and if the new size is less than the old size we copy less data. That adds more doubt in my mind about why we are providing this functionality at all.
This is so that we don't throw away all the cached values when the size is changed.
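For context, a hedged sketch of how the whole method could look under that reasoning; the entry copy and the final reference swap are assumptions based on the excerpt above, not necessarily the exact merged code:

```java
// Sketch: build a cache with the new weight limit and carry over up to maxSize
// existing entries, instead of discarding everything whenever the size changes.
public void updateMaxSize(final long maxSize) {
    Cache<CacheKey, Map<String, Object>> temp = CacheBuilder.<CacheKey, Map<String, Object>>builder()
        .setMaximumWeight(maxSize)
        .build();
    int count = 0;
    Iterator<CacheKey> it = cache.keys().iterator();
    while (it.hasNext() && count < maxSize) {
        CacheKey key = it.next();
        Map<String, Object> value = cache.get(key);
        if (value != null) { // skip entries evicted while copying
            temp.put(key, value);
        }
        count++;
    }
    cache = temp; // assumption: the rebuilt cache simply replaces the old reference
}
```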
@AllArgsConstructor
@EqualsAndHashCode
private static class CacheKey {
You have provided this very nice CacheKey object; we should expose this class to other parts of the code so that a cache key can be built and used directly with the get and put functions.
It would make the get/put interface easier to use. Exposing it does have a downside: if we want to change the cache key going forward, we could keep an internal CacheKey class for that, so other parts of the code don't start populating new fields in the new key.
Given that thought process, I think we should expose this CacheKey class to external parts of the code.
I don't think so. There is no need for other classes to know about it, and there is no use case where the caller needs to construct the key itself. Actually, the get interface is simpler when the key is hidden.
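As a concrete illustration of that point, a method-level sketch of a lookup that builds the key internally; the `CacheKey(indexName, ip)` constructor and the use of `computeIfAbsent` are assumptions based on the signature shown below, not the verified merged code:

```java
// Sketch: the caller passes plain inputs; CacheKey stays an internal detail.
public Map<String, Object> getGeoData(
    final String indexName,
    final String ip,
    final Function<String, Map<String, Object>> retrieveFunction
) throws ExecutionException {
    return cache.computeIfAbsent(new CacheKey(indexName, ip), key -> retrieveFunction.apply(ip));
}
```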
    final String indexName,
    final String ip,
    final Function<String, Map<String, Object>> retrieveFunction
) throws ExecutionException {
We should catch this ExecutionException, wrap it in a custom exception, and throw that instead. We shouldn't expose the internals of the Cache to other parts of the code.
In my understanding GeoDataCache is just an abstraction over the OpenSearch common Cache, so it makes sense to avoid exposing its internals to the rest of the code.
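A minimal sketch of the suggested wrapping, using a hypothetical `GeoDataCacheException` that is not part of this PR:

```java
// Hypothetical unchecked wrapper so callers don't have to handle
// java.util.concurrent.ExecutionException from the underlying Cache.
public class GeoDataCacheException extends RuntimeException {
    public GeoDataCacheException(final String message, final Throwable cause) {
        super(message, cause);
    }
}
```

Inside GeoDataCache, the lookup would then catch `ExecutionException` and rethrow it as `GeoDataCacheException`, keeping the cache internals out of the public method signature.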
ExecutionException is from java.util.concurrent; I don't see a benefit in wrapping it in another exception.
import org.opensearch.common.cache.Cache;
import org.opensearch.common.cache.CacheBuilder;
Why are we going with an internal OpenSearch cache implementation when there are other well-tested and more prominent cache libraries such as Google Guava? Example: https://github.com/opensearch-project/k-NN/blob/8e2ad4595bd4f7c64d0b312cac30de8a42e4bf0b/src/main/java/org/opensearch/knn/index/memory/NativeMemoryCacheManager.java#L15-L14
Can we do a deep dive here to figure out which one to choose and why?
The current GeoIP processor uses the OpenSearch cache implementation, which is well tested. Why would we explore a new option, and add a dependency on a third-party library, when there is already a working one?
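For reference, using the OpenSearch classes from the imports above looks roughly like this; the weight of 1 per entry is an assumption here, so `maximumWeight` would effectively cap the entry count:

```java
import java.util.Map;
import org.opensearch.common.cache.Cache;
import org.opensearch.common.cache.CacheBuilder;

public class CacheExample {
    public static void main(String[] args) {
        // Build an in-memory cache capped by total weight (entry count under the assumed default weigher).
        Cache<String, Map<String, Object>> cache = CacheBuilder.<String, Map<String, Object>>builder()
            .setMaximumWeight(1000)
            .build();
        cache.put("10.0.0.1", Map.of("country", "US"));
        System.out.println(cache.get("10.0.0.1")); // {country=US}
    }
}
```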
Description
Add geo data cache
Issues Resolved
N/A
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.