[#10428] docdb - Improve logging for SST file expiration
Summary:
We are currently producing a lot of log spam from the compaction_file_filter without
actually gathering useful information. This diff reduces the log level of the
compaction_file_filter Filter() log from INFO to VLOG(2), and instead adds an
INFO-level log when the file is actually filtered out during compaction. It also adds
information about the user frontiers to the flush and compaction finish log messages.
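
As a rough sketch of the logging behavior this relies on (not the actual YugabyteDB code; it assumes the standard glog semantics that the LOG/VLOG macros in the tree follow), the per-call expired-file message is now only visible when verbose logging is enabled, for example with --v=2 or --vmodule=compaction_file_filter=2:

#include <string>
#include <glog/logging.h>

// Minimal sketch only. Before this diff, the expired-file message was LOG(INFO)
// and fired on every Filter() call; after it, the per-call message is VLOG(2)
// and a single INFO-level message is logged when the file is actually marked
// for deletion during compaction (see the version_set.cc change below).
void LogExpiredFile(const std::string& filter_desc, const std::string& file_desc) {
  VLOG(2) << "Filtering file, TTL expired: "
          << " filter: " << filter_desc
          << " file: " << file_desc;
}

int main(int argc, char** argv) {
  google::InitGoogleLogging(argv[0]);
  FLAGS_v = 2;  // surface VLOG(2) output for this sketch
  LogExpiredFile("{ table_ttl_: 1.000s }", "{ number: 25 }");
  return 0;
}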

----LOGS BEFORE----
When a file is eligible for expiration (pre-compaction/repeated):
I1105 04:15:42.434656 25926 compaction_file_filter.cc:101] Filtering file, TTL expired:  filter: { table_ttl_: 1.000s history_cutoff_: { physical: 1636085742433257 } max_ht_to_expire_: { physical: 1636085742320265 } filter_ht_: { physical: 1636085742433222 } } file: { number: 25 total_size: 71800 base_size: 66417 refs: 1 being_compacted: 1 smallest: { seqno: 1125899906843129 user_frontier: 0x000000000a550c00 -> { op_id: 1.170 hybrid_time: { physical: 1636085739649349 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <invalid> } } largest: { seqno: 1125899906843164 user_frontier: 0x00000000090bcf40 -> { op_id: 1.181 hybrid_time: { physical: 1636085739925567 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <initial> } } }

On file creation during flush:
I1105 04:15:42.435385 25916 flush_job.cc:303] T 5b97f50bc909438c84e54d8651ab2a28 P 18426f35cbad44979831e1806f9b7fdc [R]: [default] [JOB 13] Level-0 flush table #22: 70479 bytes OK

On file creation during compaction:
I1105 04:15:42.447424 25928 compaction_job.cc:859] T ad83e8ad8f034f5eac549ca7df16b4e3 P 18426f35cbad44979831e1806f9b7fdc [R]: [default] [JOB 19] Generated table #28: 33 keys, 71195 bytes

-----LOGS AFTER-----
When a file is filtered:
I1104 23:24:20.177368 200241152 version_set.cc:3579] T 9198a11b2be244429555807d1ce94c80 P bcdd2129f7f748e18e427516a59cdbaf [R]: [default] File marked for deletion, will be removed after compaction. file: { number: 21 total_size: 71799 base_size: 66417 refs: 2 being_compacted: 1 smallest: { seqno: 1125899906842985 user_frontier: 0x0000000120930000 -> { op_id: 1.122 hybrid_time: { physical: 1636082658112011 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <invalid> } } largest: { seqno: 1125899906843020 user_frontier: 0x0000000120930400 -> { op_id: 1.133 hybrid_time: { physical: 1636082658126080 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <initial> } } }

On file creation during flush:
I1110 16:09:13.087689 199917568 flush_job.cc:304] T 19663be0d4e84a0292e7b936f6e20d10 P 42f87f3cb4dc4993876f16c2be1bb0c7 [R]: [default] [JOB 2] Level-0 flush table #10: 71791 bytes OK frontiers: { smallest: { op_id: 1.2 hybrid_time: { physical: 1636578553073885 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <invalid> } largest: { op_id: 1.13 hybrid_time: { physical: 1636578553083839 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <initial> } }

On file creation during compaction:
I1110 16:09:13.113641 207966208 compaction_job.cc:860] T 19663be0d4e84a0292e7b936f6e20d10 P 42f87f3cb4dc4993876f16c2be1bb0c7 [R]: [default] [JOB 5] Generated table #14: 108 keys, 82095 bytes frontiers: { smallest: { op_id: 1.2 hybrid_time: { physical: 1636578553073885 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <invalid> } largest: { op_id: 1.37 hybrid_time: { physical: 1636578553103044 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <initial> } }

Test Plan: Manual log inspection

Reviewers: sergei, rsami, timur

Reviewed By: timur

Subscribers: ybase, bogdan

Differential Revision: https://phabricator.dev.yugabyte.com/D13671
jmeehan16 committed Nov 11, 2021
1 parent d6a4ff6 commit 3769cd0
Showing 6 changed files with 22 additions and 7 deletions.
3 changes: 1 addition & 2 deletions src/yb/docdb/compaction_file_filter.cc
@@ -12,7 +12,6 @@
 //
 
 #include "yb/docdb/compaction_file_filter.h"
-#include <algorithm>
 
 #include "yb/common/hybrid_time.h"
 #include "yb/docdb/consensus_frontier.h"
@@ -98,7 +97,7 @@ FilterDecision DocDBCompactionFileFilter::Filter(const FileMetaData* file) {
               << " file: " << file->ToString();
     return FilterDecision::kKeep;
   }
-  LOG(INFO) << "Filtering file, TTL expired: "
+  VLOG(2) << "Filtering file, TTL expired: "
            << " filter: " << ToString()
            << " file: " << file->ToString();
   return FilterDecision::kDiscard;
5 changes: 3 additions & 2 deletions src/yb/rocksdb/db/compaction_job.cc
@@ -853,10 +853,11 @@ Status CompactionJob::FinishCompactionOutputFile(
     info.job_id = job_id_;
     RLOG(InfoLogLevel::INFO_LEVEL, db_options_.info_log,
         "[%s] [JOB %d] Generated table #%" PRIu64 ": %" PRIu64
-        " keys, %" PRIu64 " bytes%s",
+        " keys, %" PRIu64 " bytes%s %s",
         cfd->GetName().c_str(), job_id_, output_number, current_entries,
         current_total_bytes,
-        meta->marked_for_compaction ? " (need compaction)" : "");
+        meta->marked_for_compaction ? " (need compaction)" : "",
+        meta->FrontiersToString().c_str());
     EventHelpers::LogAndNotifyTableFileCreation(
         event_logger_, cfd->ioptions()->listeners, meta->fd, info);
   }
7 changes: 4 additions & 3 deletions src/yb/rocksdb/db/flush_job.cc
@@ -47,6 +47,7 @@
 #include "yb/rocksdb/db/version_set.h"
 #include "yb/rocksdb/port/likely.h"
 #include "yb/rocksdb/port/port.h"
+#include "yb/rocksdb/compaction_filter.h"
 #include "yb/rocksdb/db.h"
 #include "yb/rocksdb/env.h"
 #include "yb/rocksdb/statistics.h"
@@ -296,11 +297,11 @@ Result<FileNumbersHolder> FlushJob::WriteLevel0Table(
   }
   RLOG(InfoLogLevel::INFO_LEVEL, db_options_.info_log,
       "[%s] [JOB %d] Level-0 flush table #%" PRIu64 ": %" PRIu64
-      " bytes %s"
-      "%s",
+      " bytes %s%s %s",
       cfd_->GetName().c_str(), job_context_->job_id, meta->fd.GetNumber(),
       meta->fd.GetTotalFileSize(), s.ToString().c_str(),
-      meta->marked_for_compaction ? " (needs compaction)" : "");
+      meta->marked_for_compaction ? " (needs compaction)" : "",
+      meta->FrontiersToString().c_str());
 
   // output to event logger
   if (s.ok()) {
6 changes: 6 additions & 0 deletions src/yb/rocksdb/db/version_edit.cc
@@ -110,6 +110,12 @@ Slice FileMetaData::UserFilter() const {
   return largest.user_frontier ? largest.user_frontier->Filter() : Slice();
 }
 
+std::string FileMetaData::FrontiersToString() const {
+  return yb::Format("frontiers: { smallest: $0 largest: $1 }",
+                    smallest.user_frontier ? smallest.user_frontier->ToString() : "none",
+                    largest.user_frontier ? largest.user_frontier->ToString() : "none");
+}
+
 std::string FileMetaData::ToString() const {
   return yb::Format("{ number: $0 total_size: $1 base_size: $2 refs: $3 "
                     "being_compacted: $4 smallest: $5 largest: $6 }",
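
As a rough standalone illustration of what the new helper produces (the FrontierStub type and FrontiersToStringSketch function below are illustrative stand-ins only; the real code is FileMetaData::FrontiersToString() above, built on yb::Format), a file with both boundary frontiers yields the "frontiers: { smallest: ... largest: ... }" fragment visible in the LOGS AFTER section, while a file with no user frontiers falls back to "none":

#include <iostream>
#include <memory>
#include <string>

// Stand-in for the real user_frontier objects, which know how to print themselves.
struct FrontierStub {
  std::string description;
  std::string ToString() const { return description; }
};

// Mirrors the shape of FileMetaData::FrontiersToString(): print each boundary
// frontier if present, otherwise "none".
std::string FrontiersToStringSketch(const std::shared_ptr<FrontierStub>& smallest,
                                    const std::shared_ptr<FrontierStub>& largest) {
  return "frontiers: { smallest: " + (smallest ? smallest->ToString() : "none") +
         " largest: " + (largest ? largest->ToString() : "none") + " }";
}

int main() {
  auto lo = std::make_shared<FrontierStub>(FrontierStub{"{ op_id: 1.2 hybrid_time: ... }"});
  auto hi = std::make_shared<FrontierStub>(FrontierStub{"{ op_id: 1.13 hybrid_time: ... }"});
  std::cout << FrontiersToStringSketch(lo, hi) << "\n";            // both frontiers present
  std::cout << FrontiersToStringSketch(nullptr, nullptr) << "\n";  // "none" fallback
  return 0;
}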
3 changes: 3 additions & 0 deletions src/yb/rocksdb/db/version_edit.h
@@ -136,6 +136,9 @@ struct FileMetaData {
 
   Slice UserFilter() const; // Extracts user filter from largest boundary value if present.
 
+  // Outputs smallest and largest user frontiers to string, if they exist.
+  std::string FrontiersToString() const;
+
   std::string ToString() const;
 };
 
5 changes: 5 additions & 0 deletions src/yb/rocksdb/db/version_set.cc
@@ -3572,6 +3572,11 @@ InternalIterator* VersionSet::MakeInputIterator(Compaction* c) {
       for (size_t i = 0; i < flevel->num_files; i++) {
         FileMetaData* fmd = c->input(which, i);
         if (c->input(which, i)->delete_after_compaction) {
+          RLOG(
+              InfoLogLevel::INFO_LEVEL, db_options_->info_log,
+              yb::Format(
+                  "[$0] File marked for deletion, will be removed after compaction. file: $1",
+                  c->column_family_data()->GetName(), fmd->ToString()).c_str());
           RecordTick(cfd->ioptions()->statistics, COMPACTION_FILES_FILTERED);
           continue;
         }
