
HDDS-6961. [Snapshot] Bootstrapping slow followers/new followers. #3980

Merged: 130 commits merged on Mar 30, 2023.
Changes shown from 128 of the 130 commits.

Commits:
797bd77
initial changes with noCopyList
Oct 5, 2022
0cc7a2b
added files to tarball
Oct 5, 2022
b386cd4
added test
Oct 6, 2022
9e19ff9
basics working
Oct 6, 2022
2dda59c
cleanup
Oct 6, 2022
bfd3054
basically working
Oct 6, 2022
245c740
fix original tests
Oct 7, 2022
c7c5b62
added create snapshot
Oct 7, 2022
0c242f4
added checkpoint test
Oct 7, 2022
fd5ce9b
added link file test
Oct 7, 2022
b143565
cleanup
Oct 7, 2022
a402d14
fixed scm checkpoint tests
Oct 7, 2022
af55d3f
reformatted for backwards compatibility
Oct 8, 2022
77e0b02
cleanup
Oct 10, 2022
8d8fa65
cleanup starting slash
Oct 10, 2022
879695f
cleanup
Oct 10, 2022
354b280
cleanup
Oct 10, 2022
045b701
cleanup
Oct 11, 2022
407d9b1
added duration
Oct 11, 2022
31a048c
getSnapshotPath()
Oct 11, 2022
16968e6
checks dummy file count
Oct 11, 2022
713dcad
checkstyle
Oct 11, 2022
25cdbd1
fixed snapshotDirName
Oct 11, 2022
e329e8d
cleanup
Oct 11, 2022
6975c1a
follower installs snapshot tarball
Nov 3, 2022
a1ee873
cleanup
Nov 4, 2022
e662710
refactored hardlink handling
Nov 4, 2022
1ff9d66
hard link test working
Nov 4, 2022
357c1e2
cleanup
Nov 8, 2022
4948396
sleep for double buffer
Nov 8, 2022
0caf85b
cleanup
Nov 9, 2022
91f0a90
cleanup
Nov 9, 2022
8d8d521
cleanup
Nov 9, 2022
ba6b649
checkstyle
Nov 9, 2022
c3eb22b
findbugs
Nov 9, 2022
c3e6e41
includeSnapshotData initially working
Nov 10, 2022
71782b0
inc timeout
Nov 11, 2022
8acbc07
added test for includeSnapshotData is false
Nov 11, 2022
aee7bbf
renamed OzoneManagerSnapshotProvider
Nov 12, 2022
0c37119
checkstyle
Nov 12, 2022
2573caf
added package info
Nov 12, 2022
93fb5cf
fixed import
Nov 12, 2022
dc2b7b2
findbugs
Nov 12, 2022
f07cfef
removed make dbDir
Nov 13, 2022
43dadbb
restructured to pull in all of snapshot dir
Nov 15, 2022
bf0511a
added checkpoint state dir
Nov 15, 2022
ee3457f
fixed up directories
Nov 16, 2022
0954bba
restructured tarball to include all of snapshot dir
Nov 16, 2022
1cd5275
cleanup
Nov 17, 2022
d229f28
cleanup
Nov 17, 2022
7f175ee
cleanup hardlink test
Nov 17, 2022
793cea2
rebase cleanup
Nov 17, 2022
75beaf6
cleanup
Nov 17, 2022
07eac3b
cleanup
Nov 17, 2022
c4f8157
removed pom.xml/differ changes
Nov 18, 2022
fa3e691
fixed intellij warnings
Nov 18, 2022
58f5103
checkstyle
Nov 18, 2022
cbf14af
comments
Nov 18, 2022
474aaf7
trigger new CI check
Nov 18, 2022
9cb92c4
Update hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/…
GeorgeJahad Nov 28, 2022
7825c96
Update hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/…
GeorgeJahad Nov 28, 2022
1953d80
Update hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/…
GeorgeJahad Nov 28, 2022
f5a4a95
Update hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozo…
GeorgeJahad Nov 28, 2022
9aace54
Update hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozo…
GeorgeJahad Nov 28, 2022
b91843b
cleanup comments
Nov 29, 2022
09ab4ea
cleanup
Nov 29, 2022
16c3cf4
cleanup
Dec 1, 2022
e3a2921
Skips unexpected snapshot directories
Dec 1, 2022
d711d9a
checkstyle
Dec 1, 2022
5004374
findbugs
Dec 1, 2022
f9a26d1
Merge remote-tracking branch 'origin/HDDS-6517-Snapshot' into bootstr…
Jan 25, 2023
ab05184
fixed for updated directory structure
Jan 26, 2023
20cb141
Merge remote-tracking branch 'origin/master' into bootstrapMasterMerge2
Feb 3, 2023
0b0abbd
merge cleanup
Feb 6, 2023
a900c65
bootstrapStateLock
Feb 2, 2023
9e9700d
fixed snapshot dir in tests
Feb 7, 2023
b4f1afd
cleanup bootstrapLock
Feb 7, 2023
65e4873
Merge remote-tracking branch 'origin/master' into bootstrapMasterMerge3
Feb 7, 2023
61da276
changed to BootstrapStateHandler
Feb 7, 2023
ba4df8d
only allow single compaction log at a time
Feb 8, 2023
da44dc2
updated OMDBCheckpointServlet with bootstrap locks
Feb 8, 2023
993e51c
moved instance count to differ
Feb 8, 2023
80ca988
now tests stops metadataManager
Feb 9, 2023
1419e98
checkStyle
Feb 10, 2023
eeb35b9
checkstyle
Feb 10, 2023
3a8589a
initial lock test
Feb 10, 2023
2e76dd8
more lock tests
Feb 10, 2023
cc29251
clean up
Feb 10, 2023
eb4002c
checkstyle
Feb 10, 2023
bf82a4f
Merge branch 'master' into HDDS-6961
GeorgeJahad Feb 10, 2023
62c4f02
split the bootstrap state changes into separate PR.
Feb 10, 2023
76796ff
checkstyle
Feb 10, 2023
74b800a
Merge remote-tracking branch 'origin/master' into bootstrapMerge0216
Feb 16, 2023
d7a66bc
checkstyle
Feb 16, 2023
b8384c4
fix directory
Feb 16, 2023
ed069f0
checkstyle
Feb 16, 2023
3eda75a
Merge remote-tracking branch 'origin/master' into newBootstrap
Mar 16, 2023
ab15ef0
merge cleanup
Mar 16, 2023
fc6549e
Apply suggestions from code review
GeorgeJahad Mar 17, 2023
8c32c5b
checkstyle
Mar 17, 2023
f0e790d
findbugs
Mar 17, 2023
a24709e
findbugs
Mar 17, 2023
b684796
fixed streams
Mar 20, 2023
5241b11
Merge remote-tracking branch 'origin/master' into bootstrapMerge0320
Mar 20, 2023
9f81428
fix merge
Mar 20, 2023
7c60c53
now uses RDBCheckpointManager.waitForCheckpointDirectoryExist
Mar 20, 2023
c06b404
more hard link tests
Mar 20, 2023
9f93f50
parameterize snapshot e2e test
Mar 21, 2023
4f4c77a
fix test for restructured directory
Mar 21, 2023
68bbbd1
added parameterized tests
Mar 21, 2023
068a1bb
removed parameterized test
Mar 21, 2023
6e4c317
now compares hardlinks with leader
Mar 21, 2023
bc83526
fixed test to only check leader
Mar 21, 2023
f77b4d6
checkstyle
Mar 21, 2023
051b97b
checkstyle
Mar 21, 2023
dbcaae9
Apply suggestions from code review
GeorgeJahad Mar 27, 2023
bb6e09f
more review changes
Mar 27, 2023
ab25e93
Merge remote-tracking branch 'origin/master' into bootstrapTest3
Mar 28, 2023
b7ede37
refactoring based on review comments
Mar 28, 2023
2bb1fa5
fix alignment
Mar 28, 2023
8d33bb4
moved waitForCheckpointDirectoryExist into utils class
Mar 28, 2023
0212d2f
checkstyle
Mar 28, 2023
992f4d4
checkstyle
Mar 28, 2023
79b8936
fixed comment
Mar 28, 2023
27c5a9a
trigger new CI check
Mar 28, 2023
d94e406
checkstyle
Mar 29, 2023
59b5e52
fix checkstyle and TestOMSnapshotDAG
Mar 29, 2023
a519cdc
Merge remote-tracking branch 'origin/master' into bootstrapTest4
Mar 29, 2023
855d16d
Merge remote-tracking branch 'origin/master' into bootstrapTest5
Mar 30, 2023
49dcc6e
trigger new CI check
Mar 30, 2023
Original file line number Diff line number Diff line change
@@ -141,6 +141,8 @@ public final class OzoneConsts {
public static final String STORAGE_DIR_CHUNKS = "chunks";
public static final String OZONE_DB_CHECKPOINT_REQUEST_FLUSH =
"flushBeforeCheckpoint";
public static final String OZONE_DB_CHECKPOINT_INCLUDE_SNAPSHOT_DATA =
"includeSnapshotData";

public static final String RANGER_OZONE_SERVICE_VERSION_KEY =
"#RANGEROZONESERVICEVERSION";
@@ -562,7 +564,13 @@ private OzoneConsts() {
public static final int OZONE_MAXIMUM_ACCESS_ID_LENGTH = 100;

public static final String OM_SNAPSHOT_NAME = "snapshotName";
public static final String OM_CHECKPOINT_DIR = "db.checkpoints";
public static final String OM_SNAPSHOT_DIR = "db.snapshots";
public static final String OM_SNAPSHOT_CHECKPOINT_DIR = OM_SNAPSHOT_DIR
+ OM_KEY_PREFIX + "checkpointState";
public static final String OM_SNAPSHOT_DIFF_DIR = OM_SNAPSHOT_DIR
+ OM_KEY_PREFIX + "diffState";

public static final String OM_SNAPSHOT_INDICATOR = ".snapshot";
public static final String OM_SNAPSHOT_DIFF_DB_NAME = "db.snapdiff";

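For reference, the new constants above compose the snapshot directory layout from the key separator. A standalone sketch (assuming `OM_KEY_PREFIX` is `"/"`, its value in `OzoneConsts`; the class name here is illustrative):

```java
// Sketch of how the new OzoneConsts directory constants compose.
// OM_KEY_PREFIX is assumed to be "/" (its value in OzoneConsts).
public class SnapshotDirLayout {
    public static final String OM_KEY_PREFIX = "/";
    public static final String OM_CHECKPOINT_DIR = "db.checkpoints";
    public static final String OM_SNAPSHOT_DIR = "db.snapshots";
    public static final String OM_SNAPSHOT_CHECKPOINT_DIR =
        OM_SNAPSHOT_DIR + OM_KEY_PREFIX + "checkpointState";
    public static final String OM_SNAPSHOT_DIFF_DIR =
        OM_SNAPSHOT_DIR + OM_KEY_PREFIX + "diffState";

    public static void main(String[] args) {
        // Snapshot checkpoints and snapshot-diff state now live under a
        // common db.snapshots parent, beside db.checkpoints.
        System.out.println(OM_SNAPSHOT_CHECKPOINT_DIR); // db.snapshots/checkpointState
        System.out.println(OM_SNAPSHOT_DIFF_DIR);       // db.snapshots/diffState
    }
}
```

This nesting is also why later hunks in this PR switch `mkdir()` to `mkdirs()`: the `db.snapshots` parent may not exist yet.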
@@ -22,6 +22,7 @@
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Path;
import java.time.Duration;
import java.time.Instant;
@@ -162,7 +163,7 @@ public void doGet(HttpServletRequest request, HttpServletResponse response) {
file + ".tar\"");

Instant start = Instant.now();
writeDBCheckpointToStream(checkpoint,
writeDbDataToStream(checkpoint, request,
response.getOutputStream());
Instant end = Instant.now();

@@ -188,4 +189,19 @@ public void doGet(HttpServletRequest request, HttpServletResponse response) {
}
}

/**
* Write checkpoint to the stream.
*
* @param checkpoint The checkpoint to be written.
* @param ignoredRequest The httpRequest which generated this checkpoint.
* (Parameter is ignored in this class but used in child classes).
* @param destination The stream to write to.
*/
public void writeDbDataToStream(DBCheckpoint checkpoint,
HttpServletRequest ignoredRequest,
OutputStream destination)
throws IOException, InterruptedException {
writeDBCheckpointToStream(checkpoint, destination);
}

}
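The base `writeDbDataToStream` above deliberately ignores the request parameter so that the OM-side servlet can override it and honor `includeSnapshotData`. A minimal sketch of that override pattern, with simplified stand-in types (the real signatures take `DBCheckpoint` and `HttpServletRequest`; all names and payloads below are illustrative only):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Base behavior: stream only the checkpoint; the snapshot flag is ignored.
class BaseCheckpointServletSketch {
    public void writeDbDataToStream(String checkpointName, boolean includeSnapshotData,
                                    OutputStream destination) throws IOException {
        destination.write(checkpointName.getBytes(StandardCharsets.UTF_8));
    }
}

// OM-side override: additionally stream snapshot data when requested.
class OmCheckpointServletSketch extends BaseCheckpointServletSketch {
    @Override
    public void writeDbDataToStream(String checkpointName, boolean includeSnapshotData,
                                    OutputStream destination) throws IOException {
        super.writeDbDataToStream(checkpointName, includeSnapshotData, destination);
        if (includeSnapshotData) {
            destination.write("+db.snapshots".getBytes(StandardCharsets.UTF_8));
        }
    }
}

public class WriteDbDataSketch {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        new OmCheckpointServletSketch().writeDbDataToStream("checkpoint", true, out);
        System.out.println(out.toString(StandardCharsets.UTF_8)); // checkpoint+db.snapshots
    }
}
```

The same polymorphic hook is what lets the SCM servlet keep the old tarball format while the OM servlet adds snapshot contents.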
@@ -548,7 +548,7 @@ public static void writeDBCheckpointToStream(DBCheckpoint checkpoint,
}
}

private static void includeFile(File file, String entryName,
public static void includeFile(File file, String entryName,
ArchiveOutputStream archiveOutputStream)
throws IOException {
ArchiveEntry archiveEntry =
@@ -20,20 +20,16 @@
package org.apache.hadoop.hdds.utils.db;

import java.io.Closeable;
import java.io.File;
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.Duration;
import java.time.Instant;
import org.apache.commons.lang3.StringUtils;
import org.apache.hadoop.hdds.utils.db.RocksDatabase.RocksCheckpoint;
import org.awaitility.core.ConditionTimeoutException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static org.awaitility.Awaitility.with;

/**
* RocksDB Checkpoint Manager, used to create and cleanup checkpoints.
*/
@@ -44,9 +40,6 @@ public class RDBCheckpointManager implements Closeable {
private static final Logger LOG =
LoggerFactory.getLogger(RDBCheckpointManager.class);
private final String checkpointNamePrefix;
private static final Duration POLL_DELAY_DURATION = Duration.ZERO;
private static final Duration POLL_INTERVAL_DURATION = Duration.ofMillis(100);
private static final Duration POLL_MAX_DURATION = Duration.ofSeconds(5);

/**
* Create a checkpoint manager with a prefix to be added to the
@@ -96,7 +89,8 @@ public RocksDBCheckpoint createCheckpoint(String parentDir, String name) {
LOG.info("Created checkpoint in rocksDB at {} in {} milliseconds",
checkpointPath, duration);

waitForCheckpointDirectoryExist(checkpointPath.toFile());
RDBCheckpointUtils.waitForCheckpointDirectoryExist(
checkpointPath.toFile());

return new RocksDBCheckpoint(
checkpointPath,
@@ -109,29 +103,6 @@ public RocksDBCheckpoint createCheckpoint(String parentDir, String name) {
return null;
}

/**
* Wait for checkpoint directory to be created for 5 secs with 100 millis
* poll interval.
*/
public static void waitForCheckpointDirectoryExist(File file)
throws IOException {
Instant start = Instant.now();
try {
with().atMost(POLL_MAX_DURATION)
.pollDelay(POLL_DELAY_DURATION)
.pollInterval(POLL_INTERVAL_DURATION)
.await()
.until(file::exists);
LOG.info("Waited for {} milliseconds for checkpoint directory {}" +
" availability.",
Duration.between(start, Instant.now()).toMillis(),
file.getAbsoluteFile());
} catch (ConditionTimeoutException exception) {
LOG.info("Checkpoint directory: {} didn't get created in 5 secs.",
file.getAbsolutePath());
}
}

/**
* Create RocksDB snapshot by saving a checkpoint to a directory.
*
@@ -0,0 +1,70 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.hadoop.hdds.utils.db;

import org.awaitility.core.ConditionTimeoutException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.File;
import java.io.IOException;
import java.time.Duration;
import java.time.Instant;

import static org.awaitility.Awaitility.with;

/**
* RocksDB Checkpoint Utilities.
*/
public final class RDBCheckpointUtils {
static final Logger LOG =
LoggerFactory.getLogger(RDBCheckpointUtils.class);
private static final Duration POLL_DELAY_DURATION = Duration.ZERO;
private static final Duration POLL_INTERVAL_DURATION = Duration.ofMillis(100);
private static final Duration POLL_MAX_DURATION = Duration.ofSeconds(5);

private RDBCheckpointUtils() { }

/**
* Wait for checkpoint directory to be created for 5 secs with 100 millis
* poll interval.
* @param file Checkpoint directory.
* @return true if found.
*/
public static boolean waitForCheckpointDirectoryExist(File file)
throws IOException {
Instant start = Instant.now();
try {
with().atMost(POLL_MAX_DURATION)
.pollDelay(POLL_DELAY_DURATION)
.pollInterval(POLL_INTERVAL_DURATION)
.await()
.until(file::exists);
LOG.info("Waited for {} milliseconds for checkpoint directory {}" +
" availability.",
Duration.between(start, Instant.now()).toMillis(),
file.getAbsoluteFile());
return true;
} catch (ConditionTimeoutException exception) {
LOG.info("Checkpoint directory: {} didn't get created in 5 secs.",
file.getAbsolutePath());
return false;
}
}
}
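The relocated helper waits up to 5 seconds, polling every 100 ms via Awaitility. The same behavior can be sketched without the Awaitility dependency (class and method names below are illustrative, not part of the PR):

```java
import java.io.File;
import java.time.Duration;
import java.time.Instant;

// Dependency-free sketch of waitForCheckpointDirectoryExist's semantics:
// poll until the directory appears, at a fixed interval, up to a deadline.
public final class PollForDirSketch {
    private PollForDirSketch() { }

    public static boolean waitForDirectory(File file, Duration max, Duration interval)
            throws InterruptedException {
        Instant deadline = Instant.now().plus(max);
        while (Instant.now().isBefore(deadline)) {
            if (file.exists()) {
                return true;
            }
            Thread.sleep(interval.toMillis());
        }
        // One final check so a directory created right at the deadline counts.
        return file.exists();
    }

    public static void main(String[] args) throws InterruptedException {
        // An existing directory is found on the first poll.
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        System.out.println(waitForDirectory(tmp, Duration.ofSeconds(5), Duration.ofMillis(100)));
    }
}
```

Note the refactored utility now returns `boolean` instead of `void`, so callers can react to a timeout rather than only seeing a log line.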
@@ -49,9 +49,12 @@
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static org.apache.hadoop.ozone.OzoneConsts.OM_CHECKPOINT_DIR;
import static org.apache.hadoop.ozone.OzoneConsts.OM_KEY_PREFIX;
import static org.apache.hadoop.ozone.OzoneConsts.OM_SNAPSHOT_CHECKPOINT_DIR;
import static org.apache.hadoop.ozone.OzoneConsts.OM_SNAPSHOT_DIFF_DIR;
import static org.apache.hadoop.ozone.OzoneConsts.DB_COMPACTION_LOG_DIR;
import static org.apache.hadoop.ozone.OzoneConsts.DB_COMPACTION_SST_BACKUP_DIR;
import static org.apache.hadoop.ozone.OzoneConsts.OM_SNAPSHOT_DIR;
import static org.apache.hadoop.ozone.OzoneConsts.SNAPSHOT_INFO_TABLE;

/**
@@ -106,8 +109,9 @@ public RDBStore(File dbFile, ManagedDBOptions dbOptions,
try {
if (enableCompactionLog) {
rocksDBCheckpointDiffer = new RocksDBCheckpointDiffer(
dbLocation.getParent(), DB_COMPACTION_SST_BACKUP_DIR,
DB_COMPACTION_LOG_DIR, dbLocation.toString(),
dbLocation.getParent() + OM_KEY_PREFIX + OM_SNAPSHOT_DIFF_DIR,
DB_COMPACTION_SST_BACKUP_DIR, DB_COMPACTION_LOG_DIR,
dbLocation.toString(),
maxTimeAllowedForSnapshotInDag, compactionDagDaemonInterval);
rocksDBCheckpointDiffer.setRocksDBForCompactionTracking(dbOptions);
} else {
@@ -135,7 +139,7 @@ public RDBStore(File dbFile, ManagedDBOptions dbOptions,

//create checkpoints directory if not exists.
checkpointsParentDir =
Paths.get(dbLocation.getParent(), "db.checkpoints").toString();
dbLocation.getParent() + OM_KEY_PREFIX + OM_CHECKPOINT_DIR;
File checkpointsDir = new File(checkpointsParentDir);
if (!checkpointsDir.exists()) {
boolean success = checkpointsDir.mkdir();
@@ -146,15 +150,15 @@ public RDBStore(File dbFile, ManagedDBOptions dbOptions,
}
}

//create snapshot directory if does not exist.
//create snapshot checkpoint directory if does not exist.
snapshotsParentDir = Paths.get(dbLocation.getParent(),
OM_SNAPSHOT_DIR).toString();
OM_SNAPSHOT_CHECKPOINT_DIR).toString();
File snapshotsDir = new File(snapshotsParentDir);
if (!snapshotsDir.exists()) {
boolean success = snapshotsDir.mkdir();
boolean success = snapshotsDir.mkdirs();
if (!success) {
throw new IOException(
"Unable to create RocksDB snapshot directory: " +
"Unable to create RocksDB snapshot checkpoint directory: " +
snapshotsParentDir);
}
}
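The switch from `mkdir()` to `mkdirs()` in this hunk matters because the snapshot checkpoint directory is now nested (`db.snapshots/checkpointState`): `mkdir()` creates only the last path element and fails when the parent is missing. A quick self-contained demonstration:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class MkdirsDemo {
    public static void main(String[] args) throws IOException {
        File base = Files.createTempDirectory("rdbstore-demo").toFile();
        File nested = new File(base, "db.snapshots" + File.separator + "checkpointState");
        // mkdir() fails while the "db.snapshots" parent does not exist yet;
        // mkdirs() creates the missing parent chain as well.
        System.out.println(nested.mkdir());   // false
        System.out.println(nested.mkdirs());  // true
    }
}
```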
@@ -229,7 +229,7 @@ private String createCompactionLogDir(String metadataDir,

final File parentDir = new File(metadataDir);
if (!parentDir.exists()) {
if (!parentDir.mkdir()) {
if (!parentDir.mkdirs()) {
LOG.error("Error creating compaction log parent dir.");
return null;
}
@@ -30,6 +30,7 @@
import java.io.IOException;
import java.net.InetSocketAddress;

import static org.apache.hadoop.ozone.OzoneConsts.OZONE_DB_CHECKPOINT_INCLUDE_SNAPSHOT_DATA;
import static org.apache.hadoop.ozone.OzoneConsts.OZONE_DB_CHECKPOINT_REQUEST_FLUSH;
import static org.apache.hadoop.ozone.OzoneConsts.OZONE_DB_CHECKPOINT_HTTP_ENDPOINT;
import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
@@ -164,13 +165,15 @@ public String getOMDBCheckpointEnpointUrl(boolean isHttpPolicy) {
if (StringUtils.isNotEmpty(getHttpAddress())) {
return "http://" + getHttpAddress() +
OZONE_DB_CHECKPOINT_HTTP_ENDPOINT +
"?" + OZONE_DB_CHECKPOINT_REQUEST_FLUSH + "=true";
"?" + OZONE_DB_CHECKPOINT_REQUEST_FLUSH + "=true&" +
OZONE_DB_CHECKPOINT_INCLUDE_SNAPSHOT_DATA + "=true";
}
} else {
if (StringUtils.isNotEmpty(getHttpsAddress())) {
return "https://" + getHttpsAddress() +
OZONE_DB_CHECKPOINT_HTTP_ENDPOINT +
"?" + OZONE_DB_CHECKPOINT_REQUEST_FLUSH + "=true";
"?" + OZONE_DB_CHECKPOINT_REQUEST_FLUSH + "=true&" +
OZONE_DB_CHECKPOINT_INCLUDE_SNAPSHOT_DATA + "=true";
}
}
return null;
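With the new query parameter appended, the bootstrap checkpoint URL built by this method can be sketched as below. Only the two parameter names come from the diff; the `/dbCheckpoint` endpoint path and the `om1:9874` host:port are illustrative assumptions:

```java
// Sketch of the checkpoint-download URL construction. Endpoint path and
// host:port are assumed values for illustration only.
public class CheckpointUrlSketch {
    static final String OZONE_DB_CHECKPOINT_HTTP_ENDPOINT = "/dbCheckpoint";
    static final String OZONE_DB_CHECKPOINT_REQUEST_FLUSH = "flushBeforeCheckpoint";
    static final String OZONE_DB_CHECKPOINT_INCLUDE_SNAPSHOT_DATA = "includeSnapshotData";

    static String buildUrl(String scheme, String hostPort) {
        return scheme + "://" + hostPort + OZONE_DB_CHECKPOINT_HTTP_ENDPOINT
            + "?" + OZONE_DB_CHECKPOINT_REQUEST_FLUSH + "=true&"
            + OZONE_DB_CHECKPOINT_INCLUDE_SNAPSHOT_DATA + "=true";
    }

    public static void main(String[] args) {
        // A bootstrapping follower requests a flushed checkpoint that also
        // bundles the leader's snapshot data.
        System.out.println(buildUrl("http", "om1:9874"));
    }
}
```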
@@ -29,7 +29,6 @@
import org.apache.commons.lang3.RandomStringUtils;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.MiniOzoneCluster;
import org.apache.hadoop.ozone.om.OMStorage;
import org.apache.hadoop.ozone.om.OzoneManager;
import org.apache.hadoop.ozone.om.helpers.SnapshotInfo;
import org.apache.hadoop.util.ToolRunner;
@@ -45,11 +44,10 @@
import org.junit.jupiter.params.provider.ValueSource;

import static org.apache.hadoop.fs.FileSystem.FS_DEFAULT_NAME_KEY;
import static org.apache.hadoop.ozone.OzoneConsts.OM_DB_NAME;
import static org.apache.hadoop.ozone.OzoneConsts.OM_KEY_PREFIX;
import static org.apache.hadoop.ozone.OzoneConsts.OM_SNAPSHOT_DIR;
import static org.apache.hadoop.ozone.OzoneConsts.OZONE_OFS_URI_SCHEME;
import static org.apache.hadoop.ozone.OzoneConsts.OM_SNAPSHOT_INDICATOR;
import static org.apache.hadoop.ozone.om.OmSnapshotManager.getSnapshotPath;

/**
* Test client-side CRUD snapshot operations with Ozone Manager.
@@ -332,16 +330,14 @@ private String createSnapshot() throws Exception {
// Asserts that create request succeeded
Assertions.assertEquals(0, res);

File metaDir = OMStorage
.getOmDbDir(ozoneManager.getConfiguration());
OzoneConfiguration conf = ozoneManager.getConfiguration();

// wait till the snapshot directory exists
SnapshotInfo snapshotInfo = ozoneManager.getMetadataManager()
.getSnapshotInfoTable()
.get(SnapshotInfo.getTableKey(VOLUME, BUCKET, snapshotName));
String snapshotDirName = metaDir + OM_KEY_PREFIX +
OM_SNAPSHOT_DIR + OM_KEY_PREFIX + OM_DB_NAME +
snapshotInfo.getCheckpointDirName() + OM_KEY_PREFIX + "CURRENT";
String snapshotDirName = getSnapshotPath(conf, snapshotInfo) +
OM_KEY_PREFIX + "CURRENT";
GenericTestUtils.waitFor(() -> new File(snapshotDirName).exists(),
1000, 100000);

@@ -29,6 +29,7 @@
import java.util.Collections;
import java.util.UUID;

import org.apache.commons.compress.compressors.CompressorException;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMMetrics;
import org.apache.hadoop.hdds.scm.server.SCMDBCheckpointServlet;
@@ -46,6 +47,8 @@
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Timeout;
import org.mockito.Matchers;

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.doCallRealMethod;
import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.mock;
@@ -99,7 +102,9 @@ public void shutdown() {
}

@Test
public void testDoGet() throws ServletException, IOException {
public void testDoGet()
throws ServletException, IOException, CompressorException,
InterruptedException {

File tempFile = null;
try {
@@ -114,6 +119,8 @@ public void testDoGet() throws ServletException, IOException {
Collections.emptyList(),
Collections.emptyList(),
false);
doCallRealMethod().when(scmDbCheckpointServletMock)
.writeDbDataToStream(any(), any(), any());

HttpServletRequest requestMock = mock(HttpServletRequest.class);
HttpServletResponse responseMock = mock(HttpServletResponse.class);