Create a new pull request by comparing changes across two branches #1517

Merged
merged 13 commits into from
Jul 5, 2023

Conversation

GulajavaMinistudio
Owner

What changes were proposed in this pull request?

Why are the changes needed?

Does this PR introduce any user-facing change?

How was this patch tested?

LuciferYang and others added 13 commits July 4, 2023 20:06
### What changes were proposed in this pull request?
This PR aims to upgrade `scala-parser-combinators` from 2.2.0 to 2.3.0.

### Why are the changes needed?
The new version [dropped support for Scala 2.11](scala/scala-parser-combinators#504) and brings a bug fix:
- scala/scala-parser-combinators#507

The full release notes are as follows:
- https://github.com/scala/scala-parser-combinators/releases/tag/v2.3.0

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Pass GitHub Actions

Closes #41848 from LuciferYang/scala-parser-combinators-23.

Authored-by: yangjie01 <yangjie01@baidu.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
… with Scala 2.13

### What changes were proposed in this pull request?
The main changes of this PR are as follows:

1. Rename `TestHelloV2.jar` and `TestHelloV3.jar` added in #41789 to `TestHelloV2_2.12.jar` and `TestHelloV3_2.12.jar`
2. Add the corresponding `TestHelloV2_2.13.jar` and `TestHelloV3_2.13.jar`, compiled with Scala 2.13
3. Make `ClassLoaderIsolationSuite` use the correct jar in testing
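The version-specific jar selection can be sketched as follows. This is a minimal illustration with a hypothetical helper (`testJarName` is not the suite's actual code), showing how a suite can pick the jar matching the Scala binary version it runs under:

```scala
// Hypothetical helper: build the Scala-binary-version-specific test jar name,
// e.g. TestHelloV2_2.12.jar vs TestHelloV2_2.13.jar.
def testJarName(base: String, scalaBinaryVersion: String): String =
  s"${base}_$scalaBinaryVersion.jar"

// Derive the binary version (e.g. "2.13") from the running Scala library.
val currentBinaryVersion: String =
  scala.util.Properties.versionNumberString.split('.').take(2).mkString(".")

println(testJarName("TestHelloV2", currentBinaryVersion))
```

A jar compiled for one binary version is not guaranteed to load under the other (as the `scala/Serializable` failure below shows), which is why the suite must resolve the jar name at runtime rather than hard-coding one.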

### Why are the changes needed?
Make `ClassLoaderIsolationSuite` test pass with Scala 2.13.

The Scala 2.13 daily test failed after #41789
- https://github.com/apache/spark/actions/runs/5447771717/jobs/9910185372

```
[info] - Executor classloader isolation with JobArtifactSet *** FAILED *** (83 milliseconds)
[info]   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (localhost executor driver): java.lang.NoClassDefFoundError: scala/Serializable
[info] 	at java.lang.ClassLoader.defineClass1(Native Method)
[info] 	at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
[info] 	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
[info] 	at java.net.URLClassLoader.defineClass(URLClassLoader.java:473)
[info] 	at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
[info] 	at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
[info] 	at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
[info] 	at java.security.AccessController.doPrivileged(Native Method)
[info] 	at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
[info] 	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
[info] 	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
[info] 	at java.lang.Class.forName0(Native Method)
[info] 	at java.lang.Class.forName(Class.java:348)
[info] 	at org.apache.spark.util.SparkClassUtils.classForName(SparkClassUtils.scala:35)
[info] 	at org.apache.spark.util.SparkClassUtils.classForName$(SparkClassUtils.scala:30)
[info] 	at org.apache.spark.util.Utils$.classForName(Utils.scala:94)
[info] 	at org.apache.spark.executor.ClassLoaderIsolationSuite.$anonfun$new$3(ClassLoaderIsolationSuite.scala:53)
[info] 	at scala.runtime.java8.JFunction1$mcVI$sp.apply(JFunction1$mcVI$sp.scala:18)
[info] 	at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:576)
[info] 	at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:574)
[info] 	at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
[info] 	at org.apache.spark.rdd.RDD.$anonfun$foreach$2(RDD.scala:1028)
[info] 	at org.apache.spark.rdd.RDD.$anonfun$foreach$2$adapted(RDD.scala:1028)
[info] 	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2406)
[info] 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
[info] 	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
[info] 	at org.apache.spark.scheduler.Task.run(Task.scala:141)
[info] 	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:593)
[info] 	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1478)
[info] 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:596)
[info] 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info] 	at java.lang.Thread.run(Thread.java:750)
[info] Caused by: java.lang.ClassNotFoundException: scala.Serializable
[info] 	at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
[info] 	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
[info] 	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
[info] 	... 33 more
```

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
- Pass GitHub Actions
- Manual check:

**Scala 2.12**
```
build/sbt "core/testOnly *ClassLoaderIsolationSuite"
```

```
[info] ClassLoaderIsolationSuite:
[info] - Executor classloader isolation with JobArtifactSet (1 second, 394 milliseconds)
[info] Run completed in 2 seconds, 437 milliseconds.
[info] Total number of tests run: 1
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 1, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
```

**Scala 2.13**

```
dev/change-scala-version.sh 2.13
build/sbt "core/testOnly *ClassLoaderIsolationSuite" -Pscala-2.13
```

```
[info] ClassLoaderIsolationSuite:
[info] - Executor classloader isolation with JobArtifactSet (1 second, 355 milliseconds)
[info] Run completed in 2 seconds, 264 milliseconds.
[info] Total number of tests run: 1
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 1, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
```

Closes #41852 from LuciferYang/SPARK-44297.

Authored-by: yangjie01 <yangjie01@baidu.com>
Signed-off-by: yangjie01 <yangjie01@baidu.com>
### What changes were proposed in this pull request?
This PR aims to upgrade Dropwizard Metrics to 4.2.19.

### Why are the changes needed?
The new version brings a bug fix related to the metrics-jetty module:
- dropwizard/metrics#3379

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Pass GitHub Actions

Closes #41849 from LuciferYang/SPARK-44296.

Authored-by: yangjie01 <yangjie01@baidu.com>
Signed-off-by: yangjie01 <yangjie01@baidu.com>
…_[2315-2319]

### What changes were proposed in this pull request?
This PR aims to assign names to the error classes `_LEGACY_ERROR_TEMP_[2315-2319]`.

### Why are the changes needed?
Improve the error framework.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Updated existing test cases and added new test cases.

Closes #41850 from beliefer/SPARK-44292.

Authored-by: Jiaan Geng <beliefer@163.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
### What changes were proposed in this pull request?

This PR fixes the bug where invalid JAR URIs were generated because the URI was stored as `artifactURI + "/" + target.toString` (where `target` is the absolute path of the file) instead of `artifactURI + "/" + remoteRelativePath.toString` (where `remoteRelativePath` is of the form `jars/...`).
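The root cause can be illustrated with a small sketch using hypothetical names (the actual logic lives in Spark Connect's artifact handling): joining the absolute `target` path onto `artifactURI` produces the doubled path visible in the stack trace further below, while relativizing against the artifact root yields the intended `jars/...` suffix.

```scala
import java.nio.file.{Path, Paths}

// Sketch of the fix (hypothetical names): the served stream name must be
// built from the path *relative* to the artifact root, not the absolute path.
def artifactStreamName(artifactURI: String, artifactRoot: Path, target: Path): String = {
  val remoteRelativePath = artifactRoot.relativize(target) // e.g. jars/TestHelloV2.jar
  s"$artifactURI/$remoteRelativePath"
}

val root = Paths.get("/opt/spark/artifacts/session-1")
val jar  = root.resolve("jars/TestHelloV2.jar")
println(artifactStreamName("spark://host:43743/artifacts/session-1", root, jar))
```

With the buggy `target.toString` concatenation, the absolute filesystem path would be appended after the URI, producing a stream name the server cannot resolve.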

### Why are the changes needed?

Without this change, Spark Connect users attempting to use a custom JAR (such as in UDFs) will hit task failures, as an exception is thrown during the JAR file fetch operation.
Example stacktrace:
```
23/07/03 17:00:15 INFO Executor: Fetching spark://ip-10-110-22-170.us-west-2.compute.internal:43743/artifacts/d9548b02-ff3b-4278-ab52-aef5d1fc724e//home/venkata.gudesa/spark/artifacts/spark-d6141194-c487-40fd-ba40-444d922808ea/d9548b02-ff3b-4278-ab52-aef5d1fc724e/jars/TestHelloV2.jar with timestamp 0
23/07/03 17:00:15 ERROR Executor: Exception in task 6.0 in stage 4.0 (TID 55)
java.lang.RuntimeException: Stream '/artifacts/d9548b02-ff3b-4278-ab52-aef5d1fc724e//home/venkata.gudesa/spark/artifacts/spark-d6141194-c487-40fd-ba40-444d922808ea/d9548b02-ff3b-4278-ab52-aef5d1fc724e/jars/TestHelloV2.jar' was not found.
	at org.apache.spark.network.client.TransportResponseHandler.handle(TransportResponseHandler.java:260)
	at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:142)
	at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:53)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:102)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
```

### Does this PR introduce _any_ user-facing change?

No (the bug-fix is consistent with what users expect)

### How was this patch tested?

New E2E test in `ReplE2ESuite`.

Closes #41844 from vicennial/SPARK-44293.

Authored-by: vicennial <venkata.gudesa@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
…a RuntimeException

### What changes were proposed in this pull request?
The executor expects `numChunks` to be > 0. If it is zero, then we see that the executor fails with
```
23/06/20 19:07:37 ERROR task 2031.0 in stage 47.0 (TID 25018) Executor: Exception in task 2031.0 in stage 47.0 (TID 25018)
java.lang.ArithmeticException: / by zero
	at org.apache.spark.storage.PushBasedFetchHelper.createChunkBlockInfosFromMetaResponse(PushBasedFetchHelper.scala:128)
	at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:1047)
	at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:90)
	at org.apache.spark.util.CompletionIterator.next(CompletionIterator.scala:29)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
```
Because this is an `ArithmeticException`, the executor doesn't fall back. It's not a `FetchFailure` either, so the stage is not retried and the application fails.
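The shape of the fix can be sketched as follows (a simplified, hypothetical signature, not Spark's actual method): validate the chunk count from the merged-block meta response up front, so that a zero value surfaces as a `RuntimeException` the fetch path can catch and use to fall back to the original blocks, instead of a later division by zero.

```scala
// Simplified sketch: fail fast with a RuntimeException on an invalid chunk
// count, rather than letting it surface as an ArithmeticException later.
def validateNumChunks(shuffleId: Int, reduceId: Int, numChunks: Int): Unit =
  if (numChunks <= 0) {
    throw new RuntimeException(
      s"Invalid chunk count $numChunks in meta response for shuffle $shuffleId, " +
        s"reduce $reduceId; falling back to original blocks")
  }
```

The key design point is the exception type: the fallback path handles `RuntimeException`s from the push-merged metadata, whereas an `ArithmeticException` escaping from arithmetic deep inside the iterator bypasses that handling entirely.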

### Why are the changes needed?
The executor should fall back to fetching the original blocks rather than fail, because this error indicates a problem with the push-merged block.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Modified the existing UTs to validate that a `RuntimeException` is thrown when `numChunks` is 0.

Closes #41762 from otterc/SPARK-44215.

Authored-by: Chandni Singh <singh.chandni@gmail.com>
Signed-off-by: Mridul Muralidharan <mridul<at>gmail.com>
### What changes were proposed in this pull request?
Disable running example doctests in pyspark.sql.dataframe

### Why are the changes needed?
The doctests serve an illustrative purpose and can break easily due to external factors.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Checked that the doctest runner ignores those lines.

Closes #41787 from cdkrot/doctest_bugfix.

Authored-by: Alice Sayutina <alice.sayutina@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
…ring_replace for pandas 2.0.0"

This reverts commit 442fdb8.
…1 before the new arrow version release

### What changes were proposed in this pull request?
This PR aims to disable the PySpark tests in the Java 21 daily test until the new Arrow version is released.

### Why are the changes needed?
Make the daily test runs based on Java 21 succeed.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Pass GA.

Closes #41826 from panbingkun/skip_python_test_java21.

Authored-by: panbingkun <pbk1982@gmail.com>
Signed-off-by: yangjie01 <yangjie01@baidu.com>
…n scope to session specific artifacts

### What changes were proposed in this pull request?

Modify the directory deletion in `SparkConnectArtifactManager#cleanUpResources` to target the session-specific artifact directory instead of the root artifact directory.

### Why are the changes needed?

Currently, when `SparkConnectArtifactManager#cleanUpResources` is called, it deletes **all** artifacts instead of only the session-specific ones. This breaks resource isolation among sessions when the bug is triggered.
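The session-scoped cleanup can be sketched as follows (hypothetical names; the actual implementation lives in `SparkConnectArtifactManager`): recursively delete only this session's subdirectory under the shared artifact root, leaving the root and other sessions' artifacts intact.

```scala
import java.nio.file.{Files, Path}
import java.util.Comparator

// Sketch: delete the session-specific artifact directory, not the root.
def cleanUpSessionArtifacts(artifactRoot: Path, sessionId: String): Unit = {
  val sessionDir = artifactRoot.resolve(sessionId)
  if (Files.exists(sessionDir)) {
    Files.walk(sessionDir)
      .sorted(Comparator.reverseOrder[Path]()) // children before parents
      .forEach(p => Files.delete(p))
  }
}
```

Resolving `sessionId` against the root before deleting is exactly what restores isolation: passing the root itself to the recursive delete is the bug being fixed.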

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

New unit test in `ArtifactManagerSuite` that verifies that the correct directory is deleted and that the root directory still exists.

Closes #41854 from vicennial/SPARK-44300.

Authored-by: vicennial <venkata.gudesa@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?

Upgrade Avro dependency to version 1.11.2

### Why are the changes needed?

To keep up with upstream

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Normal Spark build tests.

Closes #41830 from iemejia/SPARK-44277.

Authored-by: Ismaël Mejía <iemejia@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
…ite` for Java 21

### What changes were proposed in this pull request?
This PR adds a check condition to the `beforeAll` function of `ReplE2ESuite` so that it no longer initializes the Ammonite test environment with Java 17+.
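A version gate of this kind needs the runtime Java major version; a minimal sketch (hypothetical helper, not the suite's actual check) that handles both the legacy `1.x` scheme and modern version strings:

```scala
// Hypothetical helper: parse the Java major version from java.version,
// handling legacy "1.8.0_292" as well as "17.0.1" or "21".
def javaMajorVersion(versionString: String): Int = {
  val parts = versionString.split('.')
  if (parts.head == "1") parts(1).toInt else parts.head.toInt
}

println(javaMajorVersion(System.getProperty("java.version")))
```

`beforeAll` can then compare this value against the supported maximum and skip the Ammonite setup when the JVM is too new.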

### Why are the changes needed?
Make `connect-client-jvm` module test pass with Java 21 on GA.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
- Pass GitHub Actions
- Checked with GA

**Before**

- https://github.com/apache/spark/actions/runs/5434602425/jobs/9883143909

```
at java.base/java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:200)
	at java.base/sun.nio.ch.FileChannelImpl.endBlocking(FileChannelImpl.java:172)
	at java.base/sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:430)
	at jdk.zipfs/jdk.nio.zipfs.ZipFileSystem.findEND(ZipFileSystem.java:1255)
	at jdk.zipfs/jdk.nio.zipfs.ZipFileSystem.initCEN(ZipFileSystem.java:1541)
	at jdk.zipfs/jdk.nio.zipfs.ZipFileSystem.<init>(ZipFileSystem.java:179)
	at jdk.zipfs/jdk.nio.zipfs.ZipFileSystemProvider.getZipFileSystem(ZipFileSystemProvider.java:125)
	at jdk.zipfs/jdk.nio.zipfs.ZipFileSystemProvider.newFileSystem(ZipFileSystemProvider.java:106)
	at java.base/java.nio.file.FileSystems.newFileSystem(FileSystems.java:339)
	at java.base/java.nio.file.FileSystems.newFileSystem(FileSystems.java:288)
	at io.github.retronym.java9rtexport.Export.rt(Export.java:60)
	at io.github.retronym.java9rtexport.Export.rtTo(Export.java:88)
	at io.github.retronym.java9rtexport.Export.rtAt(Export.java:100)
	at io.github.retronym.java9rtexport.Export.rtAt(Export.java:105)
	at ammonite.util.Classpath$.classpath(Classpath.scala:76)
	at ammonite.compiler.CompilerLifecycleManager.init(CompilerLifecycleManager.scala:92)
	at ammonite.compiler.CompilerLifecycleManager.preprocess(CompilerLifecycleManager.scala:64)
	at ammonite.interp.Interpreter.compileRunBlock$1(Interpreter.scala:526)
	at ammonite.interp.Interpreter.$anonfun$processAllScriptBlocks$15(Interpreter.scala:587)
	at ammonite.util.Res$Success.flatMap(Res.scala:62)
	at ammonite.interp.Interpreter.$anonfun$processAllScriptBlocks$14(Interpreter.scala:584)
	at ammonite.util.Res$Success.flatMap(Res.scala:62)
	at ammonite.interp.Interpreter.$anonfun$processAllScriptBlocks$12(Interpreter.scala:581)
	at scala.Option.getOrElse(Option.scala:189)
	at ammonite.interp.Interpreter.loop$1(Interpreter.scala:581)
	at ammonite.interp.Interpreter.processAllScriptBlocks(Interpreter.scala:619)
	at ammonite.interp.Interpreter.$anonfun$processModule$6(Interpreter.scala:414)
	at ammonite.util.Catching.flatMap(Res.scala:115)
	at ammonite.interp.Interpreter.$anonfun$processModule$5(Interpreter.scala:405)
	at ammonite.util.Res$Success.flatMap(Res.scala:62)
	at ammonite.interp.Interpreter.processModule(Interpreter.scala:395)
	at ammonite.interp.Interpreter.$anonfun$initializePredef$3(Interpreter.scala:148)
	at ammonite.interp.Interpreter.$anonfun$initializePredef$3$adapted(Interpreter.scala:148)
	at ammonite.interp.PredefInitialization$.$anonfun$apply$2(PredefInitialization.scala:79)
	at ammonite.util.Res$.$anonfun$fold$1(Res.scala:32)
	at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
	at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
	at scala.collection.immutable.List.foldLeft(List.scala:91)
	at ammonite.util.Res$.fold(Res.scala:30)
	at ammonite.interp.PredefInitialization$.apply(PredefInitialization.scala:67)
	at ammonite.interp.Interpreter.initializePredef(Interpreter.scala:150)
	at ammonite.repl.Repl.initializePredef(Repl.scala:144)
	at ammonite.Main.run(Main.scala:224)
	at org.apache.spark.sql.application.ConnectRepl$.doMain(ConnectRepl.scala:104)
	at org.apache.spark.sql.application.ReplE2ESuite$$anon$1.run(ReplE2ESuite.scala:60)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.base/java.lang.Thread.run(Thread.java:1583)
	...
[error] Error during tests:
[error] 	Running java with options -classpath /home/runner/work/spark/spark/connector/connect/client/jvm/target/scala-2.12/test-classes:/home/runner/work/spark/spark/connector/connect/client/jvm/target/scala-2.12/spark-connect-client-jvm_2.12-3.5.0-SNAPSHOT.jar:/home/runner/work/spark/spark/connector/connect/common/target/scala-2.12/spark-connect-common_2.12-3.5.0-
...
[error] (connect-client-jvm / Test / test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 50 s, completed Jul 2, 2023, 4:55:33 AM
[error] running /home/runner/work/spark/spark/build/sbt -Phadoop-3 -Pyarn -Pmesos -Pconnect -Phadoop-cloud -Pkubernetes -Pspark-ganglia-lgpl -Pvolcano sql-kafka-0-10/test connect/test connect-client-jvm/test protobuf/test streaming/test streaming-kafka-0-10/test token-provider-kafka-0-10/test mllib-local/test mllib/test yarn/test network-yarn/test mesos/test kubernetes/test hadoop-cloud/test ; received return code 1
Error: Process completed with exit code 18.
```

As a result, the test run was marked as failed on GA.

**After**

- https://github.com/LuciferYang/spark/actions/runs/5439928518/jobs/9892364759

```
[info] Run completed in 10 seconds, 973 milliseconds.
[info] Total number of tests run: 858
[info] Suites: completed 22, aborted 0
[info] Tests: succeeded 858, failed 0, canceled 167, ignored 1, pending 0
[info] All tests passed.
```
<img width="1274" alt="image" src="https://github.com/apache/spark/assets/1475305/8f21a8dc-18b1-4663-9698-27513adbc38d">

Closes #41814 from LuciferYang/SPARK-44259-FOLLOWUP.

Lead-authored-by: yangjie01 <yangjie01@baidu.com>
Co-authored-by: YangJie <yangjie01@baidu.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
…ct-all-box

### What changes were proposed in this pull request?

This PR fixes a bug where the last element of `execOptionalColumns` pointed to `HeapHistogram` instead of the executor `removeReason`.

### Why are the changes needed?

Bug fix; otherwise, after users check the select-all box, an unexpected HeapHistogram column shows up with a dead link.

### Does this PR introduce _any_ user-facing change?

No; this is a bug fix for a change that has not been released.

### How was this patch tested?

Built and tested locally to verify that the HeapHistogram column is no longer shown.

<img width="1768" alt="image" src="https://github.com/apache/spark/assets/8326978/1fa108a6-5598-4c90-855c-35104ed2c740">

Closes #41847 from yaooqinn/SPARK-44294.

Authored-by: Kent Yao <yao@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>