
Misc minor fixes in Helix tool, ReplicationEngine and Composite Manager #1157

Merged
merged 4 commits into linkedin:master
Apr 19, 2019

Conversation

jsjtzyy
Contributor

@jsjtzyy jsjtzyy commented Apr 16, 2019

  1. Add null check in getDatacenterName() in CompositeClusterManager.
  2. Make instance config change ignore count more accurate.
  3. Ensure Helix bootstrap/upgrade tool can exit in the end.
  4. Break the loop if remoteReplicaInfo is found in getRemoteReplicaInfo().

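A minimal sketch of fix 4 (the early `break` in `getRemoteReplicaInfo()`), using hypothetical simplified types; the real Ambry `ReplicationEngine` types and signatures differ:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified sketch of fix 4; the real ReplicationEngine
// types and signatures in Ambry differ.
class ReplicaInfo {
  final String partition;
  final String hostname;

  ReplicaInfo(String partition, String hostname) {
    this.partition = partition;
    this.hostname = hostname;
  }
}

public class GetRemoteReplicaInfoSketch {
  static final List<ReplicaInfo> remoteReplicaInfos = new ArrayList<>();

  // Scan the list and stop as soon as the matching replica is found,
  // rather than walking the remaining entries.
  static ReplicaInfo getRemoteReplicaInfo(String partition, String hostname) {
    ReplicaInfo found = null;
    for (ReplicaInfo info : remoteReplicaInfos) {
      if (info.partition.equals(partition) && info.hostname.equals(hostname)) {
        found = info;
        break; // the fix: exit the loop once a match is found
      }
    }
    return found;
  }

  public static void main(String[] args) {
    remoteReplicaInfos.add(new ReplicaInfo("p1", "hostA"));
    remoteReplicaInfos.add(new ReplicaInfo("p2", "hostB"));
    System.out.println(getRemoteReplicaInfo("p2", "hostB") != null); // prints true
  }
}
```

The `break` only changes how much of the list is scanned, not the result, as long as at most one matching replica exists per host.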
@jsjtzyy jsjtzyy requested review from zzmao and pnarayanan April 16, 2019 22:09
@jsjtzyy jsjtzyy self-assigned this Apr 16, 2019
@jsjtzyy
Contributor Author

jsjtzyy commented Apr 17, 2019

./gradlew build && ./gradlew test succeeded.

@jsjtzyy jsjtzyy closed this Apr 18, 2019
@jsjtzyy jsjtzyy reopened this Apr 18, 2019
@codecov-io

codecov-io commented Apr 18, 2019

Codecov Report

❗ No coverage uploaded for pull request base (master@7ab285a). Click here to learn what that means.
The diff coverage is 12.5%.


@@            Coverage Diff            @@
##             master    #1157   +/-   ##
=========================================
  Coverage          ?   69.58%           
  Complexity        ?     5240           
=========================================
  Files             ?      414           
  Lines             ?    32311           
  Branches          ?     4116           
=========================================
  Hits              ?    22483           
  Misses            ?     8698           
  Partials          ?     1130
Impacted Files Coverage Δ Complexity Δ
...om.github.ambry.replication/ReplicationEngine.java 52.4% <0%> (ø) 20 <0> (?)
...ub.ambry/clustermap/HelixBootstrapUpgradeTool.java 0% <0%> (ø) 0 <0> (?)
...ub.ambry.clustermap/HelixBootstrapUpgradeUtil.java 65.77% <0%> (ø) 53 <0> (?)
...thub.ambry.clustermap/CompositeClusterManager.java 67.24% <0%> (ø) 19 <0> (?)
...m.github.ambry.clustermap/HelixClusterManager.java 87.53% <100%> (ø) 36 <0> (?)

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 7ab285a...5bcd693. Read the comment docs.

@@ -1840,6 +1840,7 @@ void addMessage(PartitionId id, MessageInfo messageInfo, ByteBuffer buffer) {
new RemoteReplicaInfo(peerReplicaId, replicaId, store, new MockFindToken(0, 0), Long.MAX_VALUE,
SystemTime.getInstance(), new Port(peerReplicaId.getDataNodeId().getPort(), PortType.PLAINTEXT));
remoteReplicaInfos.add(remoteReplicaInfo);
break;
Contributor

@zzmao zzmao Apr 18, 2019


It's possible to have two replicas (belonging to the same partition) on the same host. We can't assume there is only one, right?

Contributor Author


I made the change based on the assumption that no two replicas belonging to the same partition would reside on the same node. Since this is only a test and we may allow two replicas on a single node in the future, I decided to remove the break here.
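The break-free variant the discussion settles on can be sketched as follows, with hypothetical types (not the actual Ambry test code): keep scanning so that two replicas of the same partition on one host are both collected.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the scan without the break, so that two replicas
// of the same partition on the same host are both returned; not the
// actual Ambry test code.
class Replica {
  final String partition;
  final String host;

  Replica(String partition, String host) {
    this.partition = partition;
    this.host = host;
  }
}

public class ScanSketch {
  // Collect every match instead of stopping at the first one.
  static List<Replica> findAll(List<Replica> replicas, String partition, String host) {
    List<Replica> matches = new ArrayList<>();
    for (Replica r : replicas) {
      if (r.partition.equals(partition) && r.host.equals(host)) {
        matches.add(r); // no break: keep scanning for co-located replicas
      }
    }
    return matches;
  }

  public static void main(String[] args) {
    List<Replica> rs = new ArrayList<>();
    rs.add(new Replica("p1", "h1"));
    rs.add(new Replica("p1", "h1")); // second replica of p1 on the same host
    rs.add(new Replica("p2", "h1"));
    System.out.println(findAll(rs, "p1", "h1").size()); // prints 2
  }
}
```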

Contributor

@zzmao zzmao left a comment


LGTM

@zzmao zzmao merged commit ec256c7 into linkedin:master Apr 19, 2019