[SPARK-23345][SQL] Remove open stream record even if closing it fails
## What changes were proposed in this pull request?

When `DebugFilesystem` closes an opened stream and an exception occurs during the close, we still need to remove the open-stream record from `DebugFilesystem`. Otherwise, it will later report a leaked filesystem connection for a stream the caller did try to close.
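For context, here is a minimal, self-contained sketch of the bookkeeping pattern involved (the map and helper shown are simplified assumptions, not the exact `DebugFilesystem` internals): if `close()` on the wrapped stream throws before the record is removed, the stream stays registered and every later leak check fails.

```scala
import java.io.InputStream
import scala.collection.concurrent.TrieMap

// Simplified stand-in for DebugFilesystem's open-stream bookkeeping (illustrative only).
object StreamTracker {
  private val openStreams = TrieMap.empty[InputStream, Throwable]

  def addOpenStream(s: InputStream): Unit =
    openStreams.put(s, new Throwable("stream opened here"))

  def removeOpenStream(s: InputStream): Unit =
    openStreams.remove(s)

  def assertNoOpenStreams(): Unit =
    if (openStreams.nonEmpty) {
      throw new IllegalStateException(s"${openStreams.size} possibly leaked stream(s)")
    }

  // Without try/finally, a failing close() would skip removeOpenStream and the
  // stream would be reported as leaked forever; with it, the record is always cleared.
  def closeTracked(wrapped: InputStream): Unit = {
    try {
      wrapped.close() // may throw
    } finally {
      removeOpenStream(wrapped)
    }
  }
}
```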

## How was this patch tested?

Existing tests.

Author: Liang-Chi Hsieh <viirya@gmail.com>

Closes apache#20524 from viirya/SPARK-23345.
viirya authored and Robert Kruszewski committed Feb 12, 2018
1 parent d824d5a commit fdb671f
Showing 2 changed files with 6 additions and 3 deletions.
7 changes: 5 additions & 2 deletions core/src/test/scala/org/apache/spark/DebugFilesystem.scala
@@ -103,8 +103,11 @@ class DebugFilesystem extends LocalFileSystem {
       override def markSupported(): Boolean = wrapped.markSupported()
 
       override def close(): Unit = {
-        wrapped.close()
-        removeOpenStream(wrapped)
+        try {
+          wrapped.close()
+        } finally {
+          removeOpenStream(wrapped)
+        }
       }
 
       override def read(): Int = wrapped.read()
@@ -111,7 +111,7 @@ trait SharedSparkSession
     spark.sharedState.cacheManager.clearCache()
     // files can be closed from other threads, so wait a bit
     // normally this doesn't take more than 1s
-    eventually(timeout(10.seconds)) {
+    eventually(timeout(10.seconds), interval(2.seconds)) {
       DebugFilesystem.assertNoOpenStreams()
     }
   }
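For reference, `timeout` and `interval` in the hunk above are ScalaTest `Eventually` patience settings: the block is retried until it stops throwing, polling once per `interval`, for at most `timeout`. A minimal standalone sketch of the same call shape (the demo object and condition are made up for illustration; only ScalaTest is assumed on the classpath):

```scala
import org.scalatest.concurrent.Eventually.{eventually, interval, timeout}
import org.scalatest.time.SpanSugar._

object EventuallyDemo {
  def main(args: Array[String]): Unit = {
    val readyAt = System.nanoTime() + 3L * 1000 * 1000 * 1000 // "ready" after ~3s

    // Retry the block for up to 10 seconds, re-evaluating every 2 seconds
    // rather than at ScalaTest's much shorter default polling interval.
    eventually(timeout(10.seconds), interval(2.seconds)) {
      assert(System.nanoTime() >= readyAt, "not ready yet; eventually will retry")
    }
    println("condition eventually held")
  }
}
```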
