SPARK-3357 [CORE] Internal log messages should be set at DEBUG level instead of INFO #4838
```diff
@@ -1074,7 +1074,7 @@ private[spark] class BlockManager(
    * Remove all blocks belonging to the given broadcast.
    */
   def removeBroadcast(broadcastId: Long, tellMaster: Boolean): Int = {
-    logInfo(s"Removing broadcast $broadcastId")
+    logDebug(s"Removing broadcast $broadcastId")
     val blocksToRemove = blockInfo.keys.collect {
       case bid @ BroadcastBlockId(`broadcastId`, _) => bid
     }
```

Review comments on this change:

- It would be good to make sure that when the user explicitly …
- Sounds good; … Is that what you have in mind?
- Yeah that one is sufficient then. I looked earlier, but only at the …

```diff
@@ -1086,7 +1086,7 @@ private[spark] class BlockManager(
    * Remove a block from both memory and disk.
    */
   def removeBlock(blockId: BlockId, tellMaster: Boolean = true): Unit = {
-    logInfo(s"Removing block $blockId")
+    logDebug(s"Removing block $blockId")
     val info = blockInfo.get(blockId).orNull
     if (info != null) {
       info.synchronized {
```
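The reviewers' concern above is that demoting these messages to DEBUG should not silence feedback when a user explicitly removes something. One common pattern is to keep per-block chatter at DEBUG while summarizing the user-visible action once at INFO. A minimal illustrative sketch of that pattern (not Spark's actual code; the function and logger names here are hypothetical), using Python's standard `logging` module:

```python
import logging

logger = logging.getLogger("blockmanager")

def remove_broadcast(broadcast_id, block_ids):
    """Remove all blocks belonging to a broadcast.

    Per-block removals are internal detail, so they log at DEBUG;
    the overall user-initiated action is summarized once at INFO.
    """
    removed = 0
    for bid in block_ids:
        logger.debug("Removing block %s", bid)  # internal, per-block detail
        removed += 1
    # One INFO line keeps the explicit removal visible at default log levels.
    logger.info("Removed broadcast %s (%d blocks)", broadcast_id, removed)
    return removed
```

This keeps default-level logs quiet during routine cleaning while still recording one line per explicit user action.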
```diff
@@ -184,7 +184,7 @@ private[spark] class MemoryStore(blockManager: BlockManager, maxMemory: Long)
     val entry = entries.remove(blockId)
     if (entry != null) {
       currentMemory -= entry.size
-      logInfo(s"Block $blockId of size ${entry.size} dropped from memory (free $freeMemory)")
+      logDebug(s"Block $blockId of size ${entry.size} dropped from memory (free $freeMemory)")
       true
     } else {
       false
```

Review comments on this change:

- On this one - do you know if this already gets logged somewhere else if a block is dropped from memory due to contention? It would be good to make sure there is some INFO level logging when a block is dropped due to memory being exceeded.
- I believe we do have INFO level logging for this up the call chain when blocks are dropped due to cache contention: … Might be nice to augment that logging to have information on the size and limit (like this does).
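The suggestion above is that the low-level removal can log at DEBUG as long as the eviction path, where a drop is forced by the memory limit, logs at INFO and includes the size and limit. A toy sketch of that split (purely illustrative; the class and its eviction policy are hypothetical, not Spark's `MemoryStore`):

```python
import logging

logger = logging.getLogger("memorystore")

class MemoryStore:
    """Toy in-memory block store demonstrating the proposed log-level split:
    routine removal logs at DEBUG; contention-driven eviction logs at INFO
    with the victim's size and the memory limit."""

    def __init__(self, max_memory):
        self.max_memory = max_memory
        self.current = 0
        self.entries = {}  # block_id -> size, insertion-ordered

    def put(self, block_id, size):
        # Evict the oldest entries until the new block fits under the limit.
        while self.current + size > self.max_memory and self.entries:
            victim, vsize = next(iter(self.entries.items()))
            # Forced drop: surface it at INFO, with size and limit.
            logger.info("Dropping %s (%d bytes) to stay under limit %d",
                        victim, vsize, self.max_memory)
            self.remove(victim)
        self.entries[block_id] = size
        self.current += size

    def remove(self, block_id):
        size = self.entries.pop(block_id, None)
        if size is None:
            return False
        self.current -= size
        # Routine removal: internal detail, DEBUG only.
        logger.debug("Block %s of size %d dropped from memory (free %d)",
                     block_id, size, self.max_memory - self.current)
        return True
```

With this split, default INFO-level logs stay quiet during normal cleanup but still record every drop that happened because memory was exceeded.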
I'm +1 on demoting these cleaning ones. It's really not useful or actionable.