Merge branch 'main' into main-gauge-type
Signed-off-by: Gagan Juneja <gagandeepjuneja@gmail.com>
Gaganjuneja authored Mar 14, 2024
2 parents 4c9768b + ce43f30 commit 895fc00
Showing 85 changed files with 2,455 additions and 1,172 deletions.
5 changes: 4 additions & 1 deletion CHANGELOG.md
@@ -19,6 +19,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
- Remote reindex: Add support for configurable retry mechanism ([#12561](https://github.com/opensearch-project/OpenSearch/pull/12561))
- [Admission Control] Integrate IO Usage Tracker to the Resource Usage Collector Service and Emit IO Usage Stats ([#11880](https://github.com/opensearch-project/OpenSearch/pull/11880))
- Tracing for deep search path ([#12103](https://github.com/opensearch-project/OpenSearch/pull/12103))
- [Admission Control] Integrated IO Based AdmissionController to AdmissionControl Framework ([#12583](https://github.com/opensearch-project/OpenSearch/pull/12583))

### Dependencies
- Bump `log4j-core` from 2.18.0 to 2.19.0
@@ -113,6 +114,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
- Add kuromoji_completion analyzer and filter ([#4835](https://github.com/opensearch-project/OpenSearch/issues/4835))
- The org.opensearch.bootstrap.Security should support codebase for JAR files with classifiers ([#12586](https://github.com/opensearch-project/OpenSearch/issues/12586))
- [Metrics Framework] Adds support for asynchronous gauge metric type. ([#12642](https://github.com/opensearch-project/OpenSearch/issues/12642))
- Make search query counters dynamic to support all query types ([#12601](https://github.com/opensearch-project/OpenSearch/pull/12601))

### Dependencies
- Bump `peter-evans/find-comment` from 2 to 3 ([#12288](https://github.com/opensearch-project/OpenSearch/pull/12288))
@@ -132,7 +134,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
- Bump `codecov/codecov-action` from 3 to 4 ([#12585](https://github.com/opensearch-project/OpenSearch/pull/12585))
- Bump `org.apache.zookeeper:zookeeper` from 3.9.1 to 3.9.2 ([#12580](https://github.com/opensearch-project/OpenSearch/pull/12580))
- Bump `org.codehaus.woodstox:stax2-api` from 4.2.1 to 4.2.2 ([#12579](https://github.com/opensearch-project/OpenSearch/pull/12579))
- Bump Jackson version from 2.16.1 to 2.16.2 ([#12611](https://github.com/opensearch-project/OpenSearch/pull/12611))
- Bump Jackson version from 2.16.1 to 2.17.0 ([#12611](https://github.com/opensearch-project/OpenSearch/pull/12611), [#12662](https://github.com/opensearch-project/OpenSearch/pull/12662))
- Bump `aws-sdk-java` from 2.20.55 to 2.20.86 ([#12251](https://github.com/opensearch-project/OpenSearch/pull/12251))
- Bump `reactor-netty` from 1.1.15 to 1.1.17 ([#12633](https://github.com/opensearch-project/OpenSearch/pull/12633))
- Bump `reactor` from 3.5.14 to 3.5.15 ([#12633](https://github.com/opensearch-project/OpenSearch/pull/12633))
@@ -152,6 +154,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
- Add a system property to configure YamlParser codepoint limits ([#12298](https://github.com/opensearch-project/OpenSearch/pull/12298))
- Prevent read beyond slice boundary in ByteArrayIndexInput ([#10481](https://github.com/opensearch-project/OpenSearch/issues/10481))
- Fix the "highlight.max_analyzer_offset" request parameter with "plain" highlighter ([#10919](https://github.com/opensearch-project/OpenSearch/pull/10919))
- Prevent unnecessary fetch sub phase processor initialization during fetch phase execution ([#12503](https://github.com/opensearch-project/OpenSearch/pull/12503))
- Warn about deprecated and ignored index.mapper.dynamic index setting ([#11193](https://github.com/opensearch-project/OpenSearch/pull/11193))
- Fix `terms` query on `float` field when `doc_values` are turned off by reverting back to `FloatPoint` from `FloatField` ([#12499](https://github.com/opensearch-project/OpenSearch/pull/12499))
- Fix get task API does not refresh resource stats ([#11531](https://github.com/opensearch-project/OpenSearch/pull/11531))
4 changes: 2 additions & 2 deletions buildSrc/version.properties
@@ -7,8 +7,8 @@ bundled_jdk = 21.0.2+13
# optional dependencies
spatial4j = 0.7
jts = 1.15.0
-jackson = 2.16.2
-jackson_databind = 2.16.2
+jackson = 2.17.0
+jackson_databind = 2.17.0
snakeyaml = 2.1
icu4j = 70.1
supercsv = 2.4.0
1 change: 0 additions & 1 deletion client/sniffer/licenses/jackson-core-2.16.2.jar.sha1

This file was deleted.

1 change: 1 addition & 0 deletions client/sniffer/licenses/jackson-core-2.17.0.jar.sha1
@@ -0,0 +1 @@
a6e5058ef9720623c517252d17162f845306ff3a

[Two more Jackson 2.16.2 license SHA1 files were deleted and replaced with 2.17.0 versions; the file paths were not captured in this view. New SHA1s: 880a742337010da4c851f843d8cac150e22dff9f and 7173e9e1d4bc6d7ca03bc4eeedcd548b8b580b34.]
1 change: 0 additions & 1 deletion libs/core/licenses/jackson-core-2.16.2.jar.sha1

This file was deleted.

1 change: 1 addition & 0 deletions libs/core/licenses/jackson-core-2.17.0.jar.sha1
@@ -0,0 +1 @@
a6e5058ef9720623c517252d17162f845306ff3a
1 change: 0 additions & 1 deletion libs/x-content/licenses/jackson-core-2.16.2.jar.sha1

This file was deleted.

1 change: 1 addition & 0 deletions libs/x-content/licenses/jackson-core-2.17.0.jar.sha1
@@ -0,0 +1 @@
a6e5058ef9720623c517252d17162f845306ff3a

[The remaining Jackson 2.16.2 license SHA1 files across other modules were likewise deleted and replaced with 2.17.0 versions; the file paths were not captured in this view. New SHA1s include 6833c8573452d583e4af650a7424d547606b2501, f10183857607fde789490d33ea46372a2d2b0c72, 57a963c6258c49febc11390082d8503f71bb15a9, fbe3c274a39cef5538ca8688ac7e2ad0053a6ffa, 3fab507bba9d477e52ed2302dc3ddbd23cbae339, e07032ce170277213ac4835169ca79fa0340c7b5, 880a742337010da4c851f843d8cac150e22dff9f, and 7173e9e1d4bc6d7ca03bc4eeedcd548b8b580b34.]
[Next changed file: an org.opensearch.gateway integration test; path not captured in this view]
@@ -32,10 +32,12 @@

package org.opensearch.gateway;

import org.opensearch.Version;
import org.opensearch.action.admin.cluster.configuration.AddVotingConfigExclusionsAction;
import org.opensearch.action.admin.cluster.configuration.AddVotingConfigExclusionsRequest;
import org.opensearch.action.admin.cluster.configuration.ClearVotingConfigExclusionsAction;
import org.opensearch.action.admin.cluster.configuration.ClearVotingConfigExclusionsRequest;
import org.opensearch.action.admin.cluster.reroute.ClusterRerouteResponse;
import org.opensearch.action.admin.cluster.shards.ClusterSearchShardsGroup;
import org.opensearch.action.admin.cluster.shards.ClusterSearchShardsResponse;
import org.opensearch.action.admin.indices.recovery.RecoveryResponse;
@@ -46,6 +48,7 @@
import org.opensearch.cluster.coordination.ElectionSchedulerFactory;
import org.opensearch.cluster.metadata.IndexMetadata;
import org.opensearch.cluster.node.DiscoveryNode;
import org.opensearch.cluster.routing.ShardRouting;
import org.opensearch.cluster.routing.UnassignedInfo;
import org.opensearch.cluster.service.ClusterService;
import org.opensearch.common.settings.Settings;
@@ -63,6 +66,8 @@
import org.opensearch.indices.recovery.RecoveryState;
import org.opensearch.indices.replication.common.ReplicationLuceneIndex;
import org.opensearch.indices.store.ShardAttributes;
import org.opensearch.indices.store.TransportNodesListShardStoreMetadataBatch;
import org.opensearch.indices.store.TransportNodesListShardStoreMetadataHelper;
import org.opensearch.plugins.Plugin;
import org.opensearch.test.InternalSettingsPlugin;
import org.opensearch.test.InternalTestCluster.RestartCallback;
@@ -82,8 +87,11 @@
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ExecutionException;
import java.util.stream.IntStream;

import static java.util.Collections.emptyMap;
import static java.util.Collections.emptySet;
import static org.opensearch.cluster.coordination.ClusterBootstrapService.INITIAL_CLUSTER_MANAGER_NODES_SETTING;
import static org.opensearch.cluster.metadata.IndexMetadata.SETTING_NUMBER_OF_REPLICAS;
import static org.opensearch.cluster.metadata.IndexMetadata.SETTING_NUMBER_OF_SHARDS;
@@ -817,6 +825,131 @@ public void testShardFetchCorruptedShardsUsingBatchAction() throws Exception {
assertTrue(nodeGatewayStartedShards.primary());
}

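// Verifies that a batch store-files-metadata request for a single shard of a single
// index returns a successful per-node entry, keyed by node id and then by shard id.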
public void testSingleShardStoreFetchUsingBatchAction() throws ExecutionException, InterruptedException {
String indexName = "test";
DiscoveryNode[] nodes = getDiscoveryNodes();
TransportNodesListShardStoreMetadataBatch.NodesStoreFilesMetadataBatch response = prepareAndSendRequest(
new String[] { indexName },
nodes
);
Index index = resolveIndex(indexName);
ShardId shardId = new ShardId(index, 0);
TransportNodesListShardStoreMetadataBatch.NodeStoreFilesMetadata nodeStoreFilesMetadata = response.getNodesMap()
.get(nodes[0].getId())
.getNodeStoreFilesMetadataBatch()
.get(shardId);
assertNodeStoreFilesMetadataSuccessCase(nodeStoreFilesMetadata, shardId);
}

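// Runs the batch fetch across two indices on a two-node cluster and checks that every
// shard copy reported by the search-shards API has matching store files metadata.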
public void testShardStoreFetchMultiNodeMultiIndexesUsingBatchAction() throws Exception {
internalCluster().startNodes(2);
String indexName1 = "test1";
String indexName2 = "test2";
DiscoveryNode[] nodes = getDiscoveryNodes();
TransportNodesListShardStoreMetadataBatch.NodesStoreFilesMetadataBatch response = prepareAndSendRequest(
new String[] { indexName1, indexName2 },
nodes
);
ClusterSearchShardsResponse searchShardsResponse = client().admin().cluster().prepareSearchShards(indexName1, indexName2).get();
for (ClusterSearchShardsGroup clusterSearchShardsGroup : searchShardsResponse.getGroups()) {
ShardId shardId = clusterSearchShardsGroup.getShardId();
ShardRouting[] shardRoutings = clusterSearchShardsGroup.getShards();
assertEquals(2, shardRoutings.length);
for (ShardRouting shardRouting : shardRoutings) {
TransportNodesListShardStoreMetadataBatch.NodeStoreFilesMetadata nodeStoreFilesMetadata = response.getNodesMap()
.get(shardRouting.currentNodeId())
.getNodeStoreFilesMetadataBatch()
.get(shardId);
assertNodeStoreFilesMetadataSuccessCase(nodeStoreFilesMetadata, shardId);
}
}
}

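// A batch request addressed to a node that was never part of the cluster should be
// reported as a per-node failure in the response rather than thrown.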
public void testShardStoreFetchNodeNotConnectedUsingBatchAction() {
DiscoveryNode nonExistingNode = new DiscoveryNode("foo", buildNewFakeTransportAddress(), emptyMap(), emptySet(), Version.CURRENT);
String indexName = "test";
TransportNodesListShardStoreMetadataBatch.NodesStoreFilesMetadataBatch response = prepareAndSendRequest(
new String[] { indexName },
new DiscoveryNode[] { nonExistingNode }
);
assertTrue(response.hasFailures());
assertEquals(1, response.failures().size());
assertEquals(nonExistingNode.getId(), response.failures().get(0).nodeId());
}

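// Corrupts both copies of one index's shard; the batch response should carry empty
// store files metadata for that shard (no exception recorded), while the second,
// healthy index still returns a populated entry.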
public void testShardStoreFetchCorruptedIndexUsingBatchAction() throws Exception {
internalCluster().startNodes(2);
String index1Name = "test1";
String index2Name = "test2";
prepareIndices(new String[] { index1Name, index2Name }, 1, 1);
Map<ShardId, ShardAttributes> shardAttributesMap = prepareRequestMap(new String[] { index1Name, index2Name }, 1);
Index index1 = resolveIndex(index1Name);
ShardId shardId1 = new ShardId(index1, 0);
ClusterSearchShardsResponse searchShardsResponse = client().admin().cluster().prepareSearchShards(index1Name).get();
assertEquals(2, searchShardsResponse.getNodes().length);

// corrupt test1 index shards
corruptShard(searchShardsResponse.getNodes()[0].getName(), shardId1);
corruptShard(searchShardsResponse.getNodes()[1].getName(), shardId1);
ClusterRerouteResponse clusterRerouteResponse = client().admin().cluster().prepareReroute().setRetryFailed(false).get();
DiscoveryNode[] discoveryNodes = getDiscoveryNodes();
TransportNodesListShardStoreMetadataBatch.NodesStoreFilesMetadataBatch response = ActionTestUtils.executeBlocking(
internalCluster().getInstance(TransportNodesListShardStoreMetadataBatch.class),
new TransportNodesListShardStoreMetadataBatch.Request(shardAttributesMap, discoveryNodes)
);
Map<ShardId, TransportNodesListShardStoreMetadataBatch.NodeStoreFilesMetadata> nodeStoreFilesMetadata = response.getNodesMap()
.get(discoveryNodes[0].getId())
.getNodeStoreFilesMetadataBatch();
// We don't store an exception for a corrupted index; we just return an empty response
assertNull(nodeStoreFilesMetadata.get(shardId1).getStoreFileFetchException());
assertEquals(shardId1, nodeStoreFilesMetadata.get(shardId1).storeFilesMetadata().shardId());
assertTrue(nodeStoreFilesMetadata.get(shardId1).storeFilesMetadata().isEmpty());

Index index2 = resolveIndex(index2Name);
ShardId shardId2 = new ShardId(index2, 0);
assertNodeStoreFilesMetadataSuccessCase(nodeStoreFilesMetadata.get(shardId2), shardId2);
}

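// Creates each index with the given primary/replica counts, indexes one document,
// and flushes so that store files are present on disk.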
private void prepareIndices(String[] indices, int numberOfPrimaryShards, int numberOfReplicaShards) {
for (String index : indices) {
createIndex(
index,
Settings.builder()
.put(SETTING_NUMBER_OF_SHARDS, numberOfPrimaryShards)
.put(SETTING_NUMBER_OF_REPLICAS, numberOfReplicaShards)
.build()
);
index(index, "type", "1", Collections.emptyMap());
flush(index);
}
}

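// Creates the indices, builds the per-shard attributes map, and executes the batch
// store-files-metadata transport request against the given nodes, blocking on the result.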
private TransportNodesListShardStoreMetadataBatch.NodesStoreFilesMetadataBatch prepareAndSendRequest(
String[] indices,
DiscoveryNode[] nodes
) {
prepareIndices(indices, 1, 1);
Map<ShardId, ShardAttributes> shardAttributesMap = prepareRequestMap(indices, 1);
return ActionTestUtils.executeBlocking(
internalCluster().getInstance(TransportNodesListShardStoreMetadataBatch.class),
new TransportNodesListShardStoreMetadataBatch.Request(shardAttributesMap, nodes)
);
}

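// Happy-path assertions for a single node entry: no fetch exception, a non-empty
// store files listing with the expected shard id, and peer-recovery retention
// leases present.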
private void assertNodeStoreFilesMetadataSuccessCase(
TransportNodesListShardStoreMetadataBatch.NodeStoreFilesMetadata nodeStoreFilesMetadata,
ShardId shardId
) {
assertNull(nodeStoreFilesMetadata.getStoreFileFetchException());
TransportNodesListShardStoreMetadataHelper.StoreFilesMetadata storeFileMetadata = nodeStoreFilesMetadata.storeFilesMetadata();
assertFalse(storeFileMetadata.isEmpty());
assertEquals(shardId, storeFileMetadata.shardId());
assertNotNull(storeFileMetadata.peerRecoveryRetentionLeases());
}

private void assertNodeGatewayStartedShardsHappyCase(
TransportNodesListGatewayStartedShardsBatch.NodeGatewayStartedShard nodeGatewayStartedShards
) {
[Next changed file: an org.opensearch.indices.store integration test; path not captured in this view]
@@ -32,6 +32,8 @@

package org.opensearch.indices.store;

import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;

import org.apache.logging.log4j.Logger;
import org.opensearch.action.admin.cluster.health.ClusterHealthResponse;
import org.opensearch.action.admin.cluster.state.ClusterStateResponse;
@@ -60,9 +62,9 @@
import org.opensearch.indices.recovery.PeerRecoveryTargetService;
import org.opensearch.plugins.Plugin;
import org.opensearch.test.InternalTestCluster;
import org.opensearch.test.OpenSearchIntegTestCase;
import org.opensearch.test.OpenSearchIntegTestCase.ClusterScope;
import org.opensearch.test.OpenSearchIntegTestCase.Scope;
import org.opensearch.test.ParameterizedStaticSettingsOpenSearchIntegTestCase;
import org.opensearch.test.disruption.BlockClusterStateProcessing;
import org.opensearch.test.transport.MockTransportService;
import org.opensearch.transport.ConnectTransportException;
@@ -85,7 +87,16 @@
import static org.hamcrest.Matchers.equalTo;

@ClusterScope(scope = Scope.TEST, numDataNodes = 0)
public class IndicesStoreIntegrationIT extends OpenSearchIntegTestCase {
public class IndicesStoreIntegrationIT extends ParameterizedStaticSettingsOpenSearchIntegTestCase {
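// The suite is now parameterized over static node settings: each test runs once per
// settings combination supplied by remoteStoreSettings (remote-store-related variants).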
public IndicesStoreIntegrationIT(Settings nodeSettings) {
super(nodeSettings);
}

@ParametersFactory
public static Collection<Object[]> parameters() {
return remoteStoreSettings;
}

@Override
protected Settings nodeSettings(int nodeOrdinal) { // simplify this and only use a single data path
return Settings.builder()
[Remaining changed files omitted from this view.]
