
[BUG] _cluster/stats API returning incorrect cluster_manager count #6103

Closed
sandeshkr419 opened this issue Jan 31, 2023 · 31 comments · Fixed by #6331
Labels: bug, distributed framework

@sandeshkr419 (Contributor) commented Jan 31, 2023

Describe the bug
The _cluster/stats API returns an incorrect count of nodes with the cluster_manager role.

To Reproduce
Steps to reproduce the behavior:

  1. Create a multi-node cluster on OS 2.3 (I tried it on 2.3), let's say with 3 nodes with the cluster_manager role.
  2. Check the response of _cat/nodes, which should show the correct roles for each node.
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role node.roles                                   cluster_manager name
10.0.3.37            40          77   0    0.00    0.00     0.00 dir       data,ingest,remote_cluster_client            -               data-node
10.0.5.179           45          77   0    0.00    0.00     0.00 dir       data,ingest,remote_cluster_client            -               data-node
10.0.4.180           12          76   0    0.00    0.02     0.01 -         ml                                           -               ml-node
10.0.4.224           41          76   0    0.00    0.00     0.00 mmr       cluster_manager,master,remote_cluster_client -               manager-node
10.0.4.16            37          78   0    0.00    0.00     0.00 dir       data,ingest,remote_cluster_client            -               data-node
10.0.3.181           13          76   0    0.00    0.00     0.00 mmr       cluster_manager,master,remote_cluster_client -               manager-node
10.0.5.122           17          76   0    0.01    0.01     0.00 mmr       cluster_manager,master,remote_cluster_client *               seed
  3. Check the response of _cluster/stats:
.
.
.
"nodes" : {
    "count" : {
      "total" : 7,
      "cluster_manager" : 6,
      "coordinating_only" : 0,
      "data" : 3,
      "ingest" : 3,
      "master" : 6,
      "ml" : 1,
      "remote_cluster_client" : 6
    },
    "versions" : [
      "2.3.0"
    ],
.
.
.

Expected behavior
The counts of cluster_manager and master should each be 3 in the above case.

Plugins
None

Screenshots
None

Host/Environment (please complete the following information):

  • Version: 2.3

Additional context
The above response was correct up to OS 1.3.x.

@shwetathareja (Member) commented Jan 31, 2023

The bug is due to the fact that the code iterates over all the roles (master and cluster_manager are two distinct roles associated with a node), so it ends up counting twice:

                for (DiscoveryNodeRole role : nodeInfo.getNode().getRoles()) {
                    // TODO: Remove the 'if' condition and only keep the statement in 'else' after removing MASTER_ROLE.
                    // As of 2.0, CLUSTER_MANAGER_ROLE is added, and it should be taken as MASTER_ROLE
                    if (role.isClusterManager()) {
                        roles.merge(DiscoveryNodeRole.MASTER_ROLE.roleName(), 1, Integer::sum);
                        roles.merge(DiscoveryNodeRole.CLUSTER_MANAGER_ROLE.roleName(), 1, Integer::sum);
                    } else {
                        roles.merge(role.roleName(), 1, Integer::sum);
                    }
                }
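A minimal sketch of one way to avoid the double increment, reusing the names from the snippet above (illustrative only, not necessarily the eventual fix):

boolean clusterManagerCounted = false;
for (DiscoveryNodeRole role : nodeInfo.getNode().getRoles()) {
    if (role.isClusterManager()) {
        // isClusterManager() is true for both MASTER_ROLE and CLUSTER_MANAGER_ROLE,
        // so increment the pair of counters at most once per node.
        if (clusterManagerCounted == false) {
            roles.merge(DiscoveryNodeRole.MASTER_ROLE.roleName(), 1, Integer::sum);
            roles.merge(DiscoveryNodeRole.CLUSTER_MANAGER_ROLE.roleName(), 1, Integer::sum);
            clusterManagerCounted = true;
        }
    } else {
        roles.merge(role.roleName(), 1, Integer::sum);
    }
}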

@sandeshkr419 do you want to pick up the fix?

@andrross (Member)

I assigned the issue to you @sandeshkr419. Thanks in advance for the help! If for some reason you can't get to this then please provide an update here.

@dblock (Member) commented Feb 7, 2023

Ouch. If someone wants to start with a failing unit test that'd be awesome.
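For example, a self-contained sketch along these lines (plain Java, not an actual OpenSearch test class; the role strings and the counting loop mirror the snippet quoted above) fails today for a node that carries both role variants:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ClusterManagerCountRepro {
    public static void main(String[] args) {
        // A node from a dedicated cluster-manager setup currently reports both variants.
        List<String> nodeRoles = List.of("cluster_manager", "master", "remote_cluster_client");
        Map<String, Integer> counts = new HashMap<>();
        for (String role : nodeRoles) {
            // Mirrors role.isClusterManager(): true for both variants.
            if (role.equals("master") || role.equals("cluster_manager")) {
                counts.merge("master", 1, Integer::sum);
                counts.merge("cluster_manager", 1, Integer::sum);
            } else {
                counts.merge(role, 1, Integer::sum);
            }
        }
        // Expected: each counter incremented once per node. Actual: both are 2.
        if (counts.get("cluster_manager") != 1 || counts.get("master") != 1) {
            throw new AssertionError("double-counted: " + counts);
        }
    }
}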

@sandeshkr419 (Contributor, Author)

I started working on this and noticed this difference in the 'master' role between a single-node and a dedicated cluster-manager setup:

Single node / localhost:

curl "localhost:9200/_cat/nodes?v"
ip        heap.percent ram.percent cpu load_1m load_5m load_15m node.role node.roles                                        cluster_manager name
127.0.0.1           33         100  12    8.40                  dimr      cluster_manager,data,ingest,remote_cluster_client *               runTask-0

Dedicated cluster-manager cluster:

curl "<cluster-endpoint>/_cat/nodes?v"
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role node.roles                                   cluster_manager name
10.0.3.110           62          70   0    0.00    0.00     0.00 mmr       cluster_manager,master,remote_cluster_client -               manager-node
10.0.3.72            60          69   0    0.00    0.00     0.00 dir       data,ingest,remote_cluster_client            -               data-node
10.0.4.132           20          70   0    0.00    0.00     0.00 dir       data,ingest,remote_cluster_client            -               data-node
10.0.5.42            26          69   0    0.04    0.02     0.00 dir       data,ingest,remote_cluster_client            -               data-node
10.0.4.156           39          69   0    0.00    0.00     0.00 mmr       cluster_manager,master,remote_cluster_client -               manager-node
10.0.5.39            47          70   0    0.00    0.04     0.01 mmr       cluster_manager,master,remote_cluster_client *               seed

Quick question: is there any reason why the deprecated master role is associated in a dedicated cluster-manager setup but not in a single-node setup? Should this behavior be consistent in both cases?

@sandeshkr419 (Contributor, Author) commented Feb 9, 2023

Making this change results in correct counts based on the assigned roles. For the above _cat/nodes output, the _cluster/stats output looks like this (after the change):

Single-node setup (master count is 0 since the role is not attached):

.
.
"nodes" : {
    "count" : {
      "total" : 1,
      "cluster_manager" : 1,
      "coordinating_only" : 0,
      "data" : 1,
      "ingest" : 1,
      "master" : 0,
      "remote_cluster_client" : 1,
      "search" : 0
    }
.
.

Multi-node setup (master count is 3 because the master role is attached):

.
.
"nodes" : {
    "count" : {
      "total" : 5,
      "cluster_manager" : 3,
      "coordinating_only" : 0,
      "data" : 2,
      "ingest" : 2,
      "master" : 3,
      "remote_cluster_client" : 5,
      "search" : 0
    }
.
.

So the change to the _cluster/stats API now depends on what role behavior we want in dedicated vs. non-dedicated setups.

@dblock (Member) commented Feb 9, 2023

It sounds reasonable. Consider whether there's a breaking change here or just a bug fix. Write a bunch of specs that describe the various scenarios, and let's see if any existing ones break.

@andrross (Member)

I started working on this and noticed this difference in the 'master' role between a single-node and a dedicated cluster-manager setup:

@tlfeng Can you provide any insight into why the code behaves this way?

@tlfeng (Collaborator) commented Feb 10, 2023

I will carefully look into the problem and provide insight. I'm surprised to see the REST API response differ between single-node and multi-node clusters. The current tests may only cover the single-node case.

@sandeshkr419 (Contributor, Author) commented Feb 14, 2023

Looking into @tlfeng's changes in #2424, it seems the original idea was to replace the master role with the cluster_manager role, not to have both roles assigned to a node.
It looks like a gap in testing: we never validated the same for a dedicated cluster-manager setup.

Current output of _nodes/_all for cluster-manager nodes:

....
"<node-id>" : {
      "name" : "manager-node",
      "transport_address" : "<>",
      "host" : "<>",
      "ip" : "<>",
      "version" : "2.5.1",
      "build_type" : "tar",
      "build_hash" : "415b06c6ea7d69735a392e7d69963f4a22547af7",
      "total_indexing_buffer" : 107374182,
      "roles" : [
        "cluster_manager",
        "master",
        "remote_cluster_client"
      ],
...

Looks like the bug is with the roles attached to cluster-manager nodes in a dedicated setup. I'll work out the code fix for this - how to have only the cluster_manager role and not both. The count should correct itself automatically after that. Does this sound okay?

@sandeshkr419 (Contributor, Author) commented Feb 15, 2023

This is where we add the deprecated master role: https://github.com/opensearch-project/OpenSearch/blob/main/server/src/main/java/org/opensearch/node/Node.java#L454

I have verified the changes by removing this line and checking the output of _cluster/stats and _cat/nodes for both setups - dedicated and non-dedicated.

Let me verify any other impact of this change before raising the PR.

@shwetathareja (Member)

@sandeshkr419 Please raise the [DRAFT] PR so it is easy to comment on and track.

@sandeshkr419 (Contributor, Author) commented Feb 16, 2023

Raised a draft PR to get early feedback on changes.

Getting correct responses for _cat/nodes and _cluster/stats after the above draft changes - responses are in the PR description. _nodes/_cluster_manager and _nodes/_master return the same responses.

The test now fails, as node roles are not updated when the cluster is initialized with the deprecated master settings, with the error:

Expected: <{search=0, data=0, remote_cluster_client=0, coordinating_only=0, master=1, cluster_manager=1, ingest=0}>
     but: was <{search=0, data=0, remote_cluster_client=0, coordinating_only=0, master=1, cluster_manager=0, ingest=0}>

I propose we remove this test, as we have already deprecated the cluster.initial_master_nodes setting. Also, the setDeprecatedMasterRole() function should be removed, as there is no need to add a deprecated master role to cluster_manager nodes.

The node role count in cluster stats still includes a count for the master role, which you can see in the updated response; this is desired.

@shwetathareja @andrross @dblock @tlfeng Seeking comments on this!

@sandeshkr419 (Contributor, Author)

Separate from this issue, I'd open a new issue to deprecate the master count in cluster stats as well. That might be a breaking change, so it may not be a good idea to include it in 2.x, but it is food for thought for 3.x. For 2.x we want to keep the master count for backward compatibility, but that reasoning doesn't hold for future major versions.

@shwetathareja (Member)

I propose we remove this test, as we have already deprecated (#2463) the cluster.initial_master_nodes setting. Also, the setDeprecatedMasterRole() function should be removed, as there is no need to add a deprecated master role to cluster_manager nodes.

The problem with removing setDeprecatedMasterRole() is that the master role would no longer show up as a node role; only the cluster_manager role would appear. That would be a breaking change.
Below is the output from your PR:

curl   "<endpoint>/_cat/nodes?v"
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role node.roles                            cluster_manager name
10.0.5.192           38          63   0    0.00    0.00     0.00 mr        cluster_manager,remote_cluster_client -               manager-node
10.0.3.75            30          63   0    0.00    0.00     0.00 mr        cluster_manager,remote_cluster_client -               manager-node
10.0.5.168           16          63   0    0.65    0.16     0.05 dir       data,ingest,remote_cluster_client     -               data-node
10.0.4.83            46          63   0    0.00    0.00     0.00 dir       data,ingest,remote_cluster_client     -               data-node
10.0.5.169           55          63   0    0.00    0.00     0.00 mr        cluster_manager,remote_cluster_client *               seed
10.0.3.251           60          63   0    0.00    0.00     0.00 dir       data,ingest,remote_cluster_client     -               data-node

Also, plugins accessing the in-memory node.roles would not get master as a role. We need to keep both around but not double-count.

@sandeshkr419 (Contributor, Author) commented Feb 16, 2023

@shwetathareja The master role doesn't show up in a non-dedicated setup even before my changes; it appears only in a dedicated setup, so ideally this should not be a breaking change. To avoid any breakage, we could just fix the count in _cluster/stats, but that would leave the dedicated and non-dedicated setups showing different node roles to the user - master showing in the dedicated setup and not in the non-dedicated one.

The concern with that test is that it starts up a cluster with the deprecated master settings:

Settings settings = Settings.builder()
            .putList(NodeRoleSettings.NODE_ROLES_SETTING.getKey(), Collections.singletonList(DiscoveryNodeRole.MASTER_ROLE.roleName()))
            .build();

If we remove setDeprecatedMasterRole(), the above settings fail to initialize nodes with the cluster_manager role, as you can see from the error:

Expected: <{search=0, data=0, remote_cluster_client=0, coordinating_only=0, master=1, cluster_manager=1, ingest=0}>
     but: was <{search=0, data=0, remote_cluster_client=0, coordinating_only=0, master=1, cluster_manager=0, ingest=0}>

Shouldn't we instead start the cluster with just the cluster_manager role, as in d3df6ee:

Settings settings = Settings.builder()
            .putList(NodeRoleSettings.NODE_ROLES_SETTING.getKey(), Collections.singletonList(DiscoveryNodeRole.CLUSTER_MANAGER_ROLE.roleName()))
            .build();

Node role counts still report the master role correctly in this case.

@shwetathareja (Member)

Thanks @sandeshkr419, understood your concern. Looks like we first need to get on the same page with @tlfeng about the expected backward compatibility we are aiming for here. IMHO we shouldn't break any API behavior in 2.x.

@tlfeng (Collaborator) commented Feb 17, 2023

@shwetathareja Thanks for your opinion! Backwards compatibility is a critical aspect when making API changes.
My original idea was to keep the existing REST API behavior to give users a smooth migration. The expected REST API changes were described in my PR #2424.

According to the description in the PR, by default the cluster_manager role is added to a node, and the response of the Node Info API is shown below:

curl http://localhost:9200/_nodes/_all
...
 "nodes": {
    "USpTGYaBSIKbgSUJR2Z9lg": {
        "name": "node-0",
        "roles": ["cluster_manager","data","ingest"]
        ...
    }
 }

Although the master role is not assigned to a node by default, backwards compatibility is preserved if the user assigns the master role to the node.

About the Cluster Stats API:

When a node has either the master or the cluster_manager role, the counts for both the master and cluster_manager roles are increased by 1. This behavior aims to keep backwards compatibility, so that any existing script counting the number of nodes with the master role still gets the correct number.

    "nodes": {
        "count": {
            "total": 1,
            "cluster_manager": 1,
            "coordinating_only": 0,
            "data": 1,
            "ingest": 1,
            "master": 1,
            "remote_cluster_client": 1
        },
...

Looks like the API response for a "dedicated cluster_manager" cluster is incorrect: master is added to the node roles unexpectedly, which also causes the incorrect node role count.

In addition, setDeprecatedMasterRole() is used to load the master role into the node and make master a known role; it was added as a workaround to keep the master and cluster_manager roles sharing the same abbreviation m. Previously, every node role assigned to a node had to be pre-defined (either by core or by plugins) for the node to start, so loading the master role into the node was necessary. After commit e9c5ce3, which made OpenSearch support "unknown" roles, there could be another way to make a node recognize the master role.

@sandeshkr419 (Contributor, Author) commented Feb 17, 2023

@shwetathareja @tlfeng - Are we on the same page that, for backward compatibility, the non-dedicated setup should also have the master role alongside the cluster_manager role, as the dedicated setup does right now?

Presently, a non-dedicated setup has only the cluster_manager role and not the master role - should we have both roles here?

@dblock @andrross comments?

@tlfeng (Collaborator) commented Feb 17, 2023

@sandeshkr419 After reading the description in my PR #2424, which introduced the cluster_manager role, I think the API behavior in a single-node cluster is correct. The REST API response should show exactly the roles that the user assigned in the node.roles cluster setting. E.g., assigning the cluster_manager role to a node results in only the cluster_manager role showing in API responses.
Meanwhile, to keep backwards compatibility, the count of nodes having the master role in the "Cluster Stats" API response was designed to be the same as the count for the cluster_manager role, aiming not to break clients or tools that read the node counts from the API response. This behavior could be changed, and I'm open to opinions.

@sandeshkr419 (Contributor, Author) commented Feb 17, 2023

I'm concerned about consistency in behavior between a dedicated and a non-dedicated setup. Either we should have the master node role in both setups or in neither - having different role-assignment behavior in dedicated and non-dedicated setups doesn't make sense.

The role count logic can be corrected accordingly once we reach a consensus on which node roles should be assigned in dedicated and non-dedicated setups.

@andrross (Member) commented Feb 17, 2023

having different role-assignment behavior in dedicated and non-dedicated setups doesn't make sense.

I agree with this.

Here is the behavior I would expect:

  • Case 1: Node configuration: node.roles=["master"]
    • Node info result: "roles": ["master"]
    • Cluster stats result: "count": { "master": 1, "cluster_manager": 1}
  • Case 2: Node configuration: node.roles=["master", "cluster_manager"]
    • Node info result: "roles": ["master", "cluster_manager"]
    • Cluster stats result: "count": { "master": 1, "cluster_manager": 1}
  • Case 3: Node configuration: node.roles=["cluster_manager"]
    • Node info result: "roles": ["cluster_manager"]
    • Cluster stats result: "count": { "master": 1, "cluster_manager": 1}

This behavior shouldn't change if these nodes have other roles assigned to them. @tlfeng What do you think? It looks like there is a bug with double-counting in the stats API, right?

@tlfeng (Collaborator) commented Feb 17, 2023

@andrross Thank you for your opinion! The responses of the "Node Info" and "Cluster Stats" APIs that you expect are the same as my intention in PR #2424 (which introduced the cluster_manager role). But "Case 2", assigning both the master and cluster_manager roles to a node, is designed to be prohibited.
I agree that, as @sandeshkr419 mentioned, for a cluster with "dedicated cluster managers" there is a bug where master roles are assigned to the nodes by mistake, which results in double-counting in the "Cluster Stats" API.

@andrross (Member)

@tlfeng Indeed!

java.lang.IllegalArgumentException: The two roles [master, cluster_manager] can not be assigned together to a node. Assigning [master] role in setting [node.roles] is deprecated. To promote inclusive language, please use [cluster_manager] role instead.

So the expected behavior is:

  • Case 1: Node configuration: node.roles=["master"]
    • Node info result: "roles": ["master"]
    • Cluster stats result: "count": { "master": 1, "cluster_manager": 1}
  • Case 2: Node configuration: node.roles=["cluster_manager"]
    • Node info result: "roles": ["cluster_manager"]
    • Cluster stats result: "count": { "master": 1, "cluster_manager": 1}

Do we just need to fix the double-counting happening today in "Case 2" then?

@tlfeng (Collaborator) commented Feb 17, 2023

@andrross Yes, the double-counting needs to be fixed. The cause of the problem seems to be that the cluster_manager and master roles are accidentally both assigned to a node (in a multi-node cluster), as @sandeshkr419 reported in #6103 (comment).

@sandeshkr419 (Contributor, Author) commented Feb 17, 2023

@andrross Both the cluster_manager and master roles getting assigned in a dedicated setup is the only blocker - do we want both roles assigned, or do we not?

Because in @tlfeng's PR #2424 it was already decided that the cluster_manager role should be used as a replacement for the deprecated master role.
If so, the existing association of the deprecated master role should be removed as well, as in 749f9f1, to make the roles consistent with the non-dedicated setup.

But if we remove the deprecated master role, then, as @shwetathareja pointed out, we may not be backward compatible.

@tlfeng (Collaborator) commented Feb 18, 2023

@sandeshkr419 I think we don't want both the cluster_manager and master roles assigned to any node; the master and cluster_manager roles cannot co-exist.
However, the purpose of DiscoveryNode.setDeprecatedMasterRole() is not to assign the master role to a node, but to make master a known role. Removing that line of code doesn't seem like the desired solution for the issue of the master and cluster_manager roles both appearing on a node.
Although removing the line of code makes the master role disappear in the "dedicated cluster manager" setup, I don't think removing the line alone is a proper solution.
To solve the problem, I still need to reproduce the issue and find the defect in my previous PR.

@andrross (Member)

No, we definitely do not want both cluster_manager and master assigned to a node in any case. We don't allow the user to set that in the node configuration, so it is a bug when that happens.

The way compatibility is handled is based on the node configuration. If the user specifies master in the node configuration, then _cat/nodes should return master. If the user specifies cluster_manager, then cluster_manager should be returned. The _cluster/stats API is something of an exception, where counts for both are returned regardless of which one is specified in the configuration.
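For illustration, one way to verify both behaviors against a local node (a sketch, assuming a single node started with node.roles: [cluster_manager]; filter_path just trims the response):

curl "localhost:9200/_cat/nodes?h=node.roles"
# cluster_manager

curl "localhost:9200/_cluster/stats?filter_path=nodes.count.master,nodes.count.cluster_manager"
# {"nodes":{"count":{"cluster_manager":1,"master":1}}}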

@shwetathareja (Member) commented Feb 21, 2023

+1 to @andrross's suggestion where the API returns master/cluster_manager depending on the user configuration.

@sandeshkr419 (Contributor, Author) commented Mar 2, 2023

Narrowed down the issue.

This issue does not occur when node.roles is used to initialize the node. It occurs when the legacy settings are used to initialize the node, which is how I was creating the cluster using https://github.com/opensearch-project/opensearch-cluster-cdk. That tool uses the legacy 'node.master': true setting: https://github.com/opensearch-project/opensearch-cluster-cdk/blob/main/lib/opensearch-config/node-config.ts#L12 (note: legacy, not deprecated).

I have modified the fix so that the master role is removed when the legacy settings are used. Please note that there is no setting such as 'node.cluster_manager'; the new way to initialize nodes is by providing node roles, as in https://github.com/opensearch-project/opensearch-cluster-cdk/blob/main/lib/opensearch-config/node-config.ts#L47 (see the contrast sketched below).
This is the reason the 'master' role is removed in my changes whenever roles are derived from the legacy settings.
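For reference, a sketch of the two initialization styles in the Settings builder form quoted earlier in this thread (illustrative only):

// Legacy style (what opensearch-cluster-cdk generates): a boolean flag per role.
Settings legacySettings = Settings.builder()
    .put("node.master", true)
    .build();

// New style: an explicit role list; note there is no 'node.cluster_manager' boolean.
Settings newSettings = Settings.builder()
    .putList(NodeRoleSettings.NODE_ROLES_SETTING.getKey(),
        Collections.singletonList(DiscoveryNodeRole.CLUSTER_MANAGER_ROLE.roleName()))
    .build();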

I have added test cases for a better understanding of the scenarios, asserting both the node.roles attached to nodes and the _cluster/stats response.

While I'm adding more test cases, I'm seeking early comments on the draft code changes. @shwetathareja @andrross @tlfeng
I will also improve the other test cases to assert both things, instead of relying on just _cluster/stats, within this scope.

Also, in response to @andrross's comments:

If the user specifies master in the node configuration, then _cat/nodes should return master. If the user specifies cluster_manager, then cluster_manager should be returned.

Whatever node.roles the user specifies, whether 'master' or 'cluster_manager', the node obeys them - so I think we can close on this.

The _cluster/stats API is something of an exception where counts for both are returned regardless of which one is specified in the configuration.

Since the changes are in deriving roles from legacySettings(), no changes are required in the _cluster/stats API.

@sandeshkr419 (Contributor, Author) commented Mar 2, 2023

Modified the changes as per @andrross's suggestion - the legacy node.master setting now assigns the deprecated master role to a node, and not the cluster_manager role, which can be set only via the new node.roles setting.

Any concerns or differences of opinion, @shwetathareja @tlfeng?

@sandeshkr419 (Contributor, Author)

@andrross @shwetathareja @tlfeng Gentle reminder to review the PR and let me know of any additional steps required for merging.
