cloud_storage: Force exhaustive trim when fast trim fails
When the cache dir consists solely of index+tx files, the current fast
trim code path does not remove anything. An exhaustive trim should then
follow to free up slots in the cache, but an adjustment to the
objects-to-delete counter prevents that from happening.

The change here ensures that if we had a target count of objects to
delete, fast trim could not delete that many, and some files were
filtered out, we proceed with an exhaustive trim.
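To make the scenario concrete, below is a minimal standalone C++ sketch (not the Redpanda code) of the counter adjustment described above. It reuses the names from the commit (candidates_for_deletion, filtered_out_files, fast_result, objects_to_delete), but the struct and the concrete values are illustrative assumptions only.

// Minimal sketch of the counter adjustment described in the commit message.
// The names mirror the commit; the numbers and the struct are made up.
#include <algorithm>
#include <cstddef>
#include <iostream>

struct fast_trim_result {
    std::size_t deleted_count = 0;
};

int main() {
    // Cache holds only index+tx files: fast trim filters them all out, so
    // the lone remaining deletion candidate is the access-time tracker file.
    std::size_t candidates_for_deletion = 1;
    std::size_t filtered_out_files = 40;
    std::size_t objects_to_delete = 10;
    fast_trim_result fast_result{}; // fast trim deleted nothing

    // Old behaviour: clamping objects_to_delete to the tiny candidate count
    // makes it look like almost nothing is left to delete, so the follow-up
    // exhaustive trim is skipped.
    std::size_t old_objects_to_delete = std::min(
      candidates_for_deletion - fast_result.deleted_count, objects_to_delete);
    std::cout << "old objects_to_delete: " << old_objects_to_delete << '\n';

    // New behaviour: if fast trim fell short and some files were filtered
    // out, skip the clamp and force the exhaustive trim.
    bool force_exhaustive_trim = fast_result.deleted_count < objects_to_delete
                                 && filtered_out_files > 0;
    if (!force_exhaustive_trim) {
        objects_to_delete = std::min(
          candidates_for_deletion - fast_result.deleted_count,
          objects_to_delete);
    }
    std::cout << std::boolalpha
              << "force_exhaustive_trim: " << force_exhaustive_trim << '\n';
    return 0;
}

With these values the old clamp drops objects_to_delete to 1, while force_exhaustive_trim comes out true and keeps the exhaustive trim in play.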
abhijat committed Aug 25, 2023
1 parent 4b8351a commit c1bf75b
Showing 1 changed file with 22 additions and 5 deletions.
src/v/cloud_storage/cache_service.cc
@@ -351,7 +351,8 @@ ss::future<> cache::trim(
     vlog(
       cst_log.debug,
       "trim: set target_size {}/{}, size {}/{}, walked size {} (max {}/{}), "
-      " reserved {}/{}, pending {}/{})",
+      " reserved {}/{}, pending {}/{}), candidates for deletion: {}, filtered "
+      "out: {}",
       target_size,
       target_objects,
       _current_cache_size,
@@ -362,7 +363,9 @@ ss::future<> cache::trim(
       _reserved_cache_size,
       _reserved_cache_objects,
       _reservations_pending,
-      _reservations_pending_objects);
+      _reservations_pending_objects,
+      candidates_for_deletion.size(),
+      filtered_out_files);
 
     if (
       _current_cache_size + _reserved_cache_size < target_size
@@ -453,9 +456,23 @@ ss::future<> cache::trim(
     // cache.
     size_to_delete = std::min(
       walked_cache_size - fast_result.deleted_size, size_to_delete);
-    objects_to_delete = std::min(
-      candidates_for_deletion.size() - fast_result.deleted_count,
-      objects_to_delete);
+
+    // If we were not able to delete enough files and there are some filtered
+    // out files, force an exhaustive trim. This ensures that if the cache is
+    // dominated by filtered out files, we do not skip trimming them by reducing
+    // the objects_to_delete counter next.
+    bool force_exhaustive_trim = fast_result.deleted_count < objects_to_delete
+                                 && filtered_out_files > 0;
+
+    // In the situation where all files in cache are filtered out,
+    // candidates_for_deletion is one (accesstime tracker) and the following
+    // adjustment ends up setting objects_to_delete to 1, skipping the
+    // exhaustive trim. The check force_exhaustive_trim prevents this.
+    if (!force_exhaustive_trim) {
+        objects_to_delete = std::min(
+          candidates_for_deletion.size() - fast_result.deleted_count,
+          objects_to_delete);
+    }
 
     if (
       size_to_delete > undeletable_bytes
