SparkSession-based UDF profiler (#831)
build_main.yml (on: push)

Jobs:
- Run / Check changes (33s)
- Run / Protobuf breaking change detection and Python CodeGen check (1m 6s)
- Run / Run TPC-DS queries with SF=1 (49m 32s)
- Run / Run Docker integration tests (26m 44s)
- Run / Run Spark on Kubernetes Integration test (53m 23s)
- Run / Run Spark UI tests (18s)
- Matrix: Run / build
- Matrix: Run / java-other-versions
- Run / Build modules: sparkr (27m 8s)
- Run / Linters, licenses, dependencies and documentation generation (17m 55s)
- Matrix: Run / pyspark
Annotations (27 errors and 1 warning):
- Run / Linters, licenses, dependencies and documentation generation: Process completed with exit code 1.
- Run / Build modules: sql - slow tests: Process completed with exit code 18.
- Run / Run Spark on Kubernetes Integration test: HashSet() did not contain "decomtest-761adc8c846663e9-exec-1".
- Run / Run Spark on Kubernetes Integration test: HashSet() did not contain "decomtest-ab0a138c84674386-exec-1".
- Run / Run Spark on Kubernetes Integration test: sleep interrupted
- Run / Run Spark on Kubernetes Integration test: Task io.fabric8.kubernetes.client.utils.internal.SerialExecutor$$Lambda$685/0x00007f3adc5c0d70@b25ce58 rejected from java.util.concurrent.ThreadPoolExecutor@6eaa7622[Shutting down, pool size = 2, active threads = 2, queued tasks = 0, completed tasks = 298]
- Run / Run Spark on Kubernetes Integration test: sleep interrupted
- Run / Run Spark on Kubernetes Integration test: Task io.fabric8.kubernetes.client.utils.internal.SerialExecutor$$Lambda$685/0x00007f3adc5c0d70@63357b74 rejected from java.util.concurrent.ThreadPoolExecutor@6eaa7622[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 299]
- Run / Run Spark on Kubernetes Integration test: HashSet() did not contain "decomtest-a05ec98c8477faae-exec-1".
- Run / Run Spark on Kubernetes Integration test: HashSet() did not contain "decomtest-2857318c8478def9-exec-1".
- Run / Run Spark on Kubernetes Integration test: HashSet() did not contain "decomtest-5ff14d8c847c7b65-exec-1".
- Run / Run Spark on Kubernetes Integration test: Status(apiVersion=v1, code=404, details=StatusDetails(causes=[], group=null, kind=pods, name=spark-test-app-4191b774358f46c5871c64cd6edfcf2d-driver, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=pods "spark-test-app-4191b774358f46c5871c64cd6edfcf2d-driver" not found, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=NotFound, status=Failure, additionalProperties={})
- Run / Build modules: pyspark-connect: Process completed with exit code 19.
The following tests in python/pyspark/sql/tests/connect/test_parity_pandas_grouped_map.py all failed with "[RETRIES_EXCEEDED] The maximum number of retries has been exceeded.":

- test_column_order
- test_complex_groupby
- test_datatype_string
- test_decorator
- test_empty_groupby
- test_grouped_over_window
- test_grouped_over_window_with_key
- test_mixed_scalar_udfs_followed_by_groupby_apply
- test_positional_assignment_conf
- test_self_join_with_pandas
- test_timestamp_dst
- test_udf_with_key
- tearDownClass (pyspark.sql.tests.connect.test_parity_pandas_grouped_map.GroupedApplyInPandasTests)
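The parity tests above exercise grouped-map pandas UDFs, i.e. `DataFrame.groupBy(...).applyInPandas(...)`, over Spark Connect. As a minimal sketch of the pattern those tests cover, here is the same grouped-map shape on plain pandas, with no Spark session involved; the `subtract_mean` function and the `id`/`v` column names are illustrative, not taken from the test file.

```python
import pandas as pd

def subtract_mean(pdf: pd.DataFrame) -> pd.DataFrame:
    # Each group arrives as a pandas DataFrame; return one with the same schema.
    # Under Spark this callable would be passed to applyInPandas with an
    # explicit output schema string such as "id long, v double".
    return pdf.assign(v=pdf["v"] - pdf["v"].mean())

df = pd.DataFrame({"id": [1, 1, 2, 2], "v": [1.0, 2.0, 3.0, 5.0]})

# Emulate the grouped-map execution: apply the function per group, concat results.
out = pd.concat(subtract_mean(group) for _, group in df.groupby("id"))
print(sorted(out["v"].tolist()))  # [-1.0, -0.5, 0.5, 1.0]
```

The [RETRIES_EXCEEDED] messages come from the Spark Connect client's retry policy giving up on the gRPC channel, which typically points at the test server dying (consistent with the pyspark-connect job exiting with code 19) rather than at the grouped-map logic itself.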
RocksDBStateStoreStreamingAggregationSuite.SPARK-35896: metrics in StateOperatorProgress are output correctly (RocksDBStateStore):
RocksDBStateStoreStreamingAggregationSuite#L1
org.scalatest.exceptions.TestFailedException:
Timed out waiting for stream: The code passed to failAfter did not complete within 120 seconds.
java.base/java.lang.Thread.getStackTrace(Thread.java:1619)
org.scalatest.concurrent.TimeLimits$.failAfterImpl(TimeLimits.scala:277)
org.scalatest.concurrent.TimeLimits.failAfter(TimeLimits.scala:231)
org.scalatest.concurrent.TimeLimits.failAfter$(TimeLimits.scala:230)
org.apache.spark.SparkFunSuite.failAfter(SparkFunSuite.scala:69)
org.apache.spark.sql.streaming.StreamTest.$anonfun$testStream$7(StreamTest.scala:481)
org.apache.spark.sql.streaming.StreamTest.$anonfun$testStream$7$adapted(StreamTest.scala:480)
scala.collection.mutable.HashMap$Node.foreach(HashMap.scala:642)
scala.collection.mutable.HashMap.foreach(HashMap.scala:504)
org.apache.spark.sql.streaming.StreamTest.fetchStreamAnswer$1(StreamTest.scala:480)
Caused by: null
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1764)
org.apache.spark.sql.execution.streaming.StreamExecution.awaitOffset(StreamExecution.scala:481)
org.apache.spark.sql.streaming.StreamTest.$anonfun$testStream$8(StreamTest.scala:482)
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
org.scalatest.enablers.Timed$$anon$1.timeoutAfter(Timed.scala:127)
org.scalatest.concurrent.TimeLimits$.failAfterImpl(TimeLimits.scala:282)
org.scalatest.concurrent.TimeLimits.failAfter(TimeLimits.scala:231)
org.scalatest.concurrent.TimeLimits.failAfter$(TimeLimits.scala:230)
org.apache.spark.SparkFunSuite.failAfter(SparkFunSuite.scala:69)
org.apache.spark.sql.streaming.StreamTest.$anonfun$testStream$7(StreamTest.scala:481)
== Progress ==
StartStream(ProcessingTimeTrigger(0),org.apache.spark.util.SystemClock@5c603221,Map(spark.sql.shuffle.partitions -> 3),null)
AddData to MemoryStream[value#19321]: 3,2,1,3
=> CheckLastBatch: [3,2],[2,1],[1,1]
AssertOnQuery(<condition>, Check total state rows = List(3), updated state rows = List(3), rows dropped by watermark = List(0), removed state rows = Some(List(0)))
AddData to MemoryStream[value#19321]: 1,4
CheckLastBatch: [1,2],[4,1]
AssertOnQuery(<condition>, Check operator progress metrics: operatorName = stateStoreSave, numShufflePartitions = 3, numStateStoreInstances = 3)
== Stream ==
Output Mode: Update
Stream state: {}
Thread state: alive
Thread stack trace: java.base@17.0.9/jdk.internal.misc.Unsafe.park(Native Method)
java.base@17.0.9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:211)
java.base@17.0.9/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:715)
java.base@17.0.9/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1047)
app//scala.concurrent.impl.Promise$DefaultPromise.tryAwait0(Promise.scala:243)
app//scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:255)
app//scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:104)
app//org.apache.spark.util.ThreadUtils$.awaitReady(ThreadUtils.scala:342)
app//org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:983)
app//org.apache.spark.SparkContext.runJob(SparkContext.scala:2428)
app//org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:386)
app//org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2$(WriteToDataSourceV2Exec.scala:360)
app//org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.writeWithV2(WriteToDataSourceV2Exec.scala:308)
app//org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.run(WriteToDataSourceV2Exec.scala:319)
app//org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
app//org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
app//org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
app//org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:4442)
app//org.apache.spark.sql.Dataset.$anonfun$collect$1(Dataset.scala:3674)
app//org.apache.spark.sql.Dataset$$Lambda$2326/0x00007fbcf12f0000.apply(Unknown Source)
app//org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:4432)
app//org.apache.spark.sql.Dataset$$Lambda$2338/0x00007fbcf12f54a8.apply(Unknown Source)
app//org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:557)
app//org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:4430)
app//org.apache.spark.sql.Dataset$$Lambda$2327/0x00007fbcf12f03d0.apply(Unknown Source)
app//org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$6(SQLExecution.scala:150)
app//org.apache.spark.sql.execution.SQLExecution$$$Lambda$2282/0x00007fbcf12d3be0.apply(Unknown Source)
app//org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:241)
app//org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:116)
app//org.apache.spark.sql.execution.SQLExecution$$$Lambda$2268/0x00007fbcf12d0b10.apply(Unknown Source)
app//org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:919)
app//org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:72)
app//org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:196)
app//org.apache.spark.sql.Dataset.withAction(Dataset.scala:4430)
app//org.apache.spark.sql.Dataset.collect(Dataset.scala:3674)
app//org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$17(MicroBatchExecution.scala:783)
app//org.apache.spark.sql.execution.streaming.MicroBatchExecution$$Lambda$2267/0x00007fbcf12d0850.apply(Unknown Source)
app//org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$6(SQLExecution.scala:150)
app//org.apache.spark.sql.execution.SQLExecution$$$Lambda$2282/0x00007fbcf12d3be0.apply(Unknown Source)
app//org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:241)
app//org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:116)
app//org.apache.spark.sql.execution.SQLExecution$$$Lambda$2268/0x00007fbcf12d0b10.apply(Unknown Source)
app//org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:919)
app//org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:72)
app//org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:196)
app//org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$16(MicroBatchExecution.scala:771)
app//org.apache.spark.sql.execution.streaming.MicroBatchExecution$$Lambda$2265/0x00007fbcf12d0000.apply(Unknown Source)
app//org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:427)
app//org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:425)
app//org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:67)
app//org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:771)
app//org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:326)
app//org.apache.spark.sql.execution.streaming.MicroBatchExecution$$Lambda$1967/0x00007fbcf11f21b8.apply$mcV$sp(Unknown Source)
app//scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
app//org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:427)
app//org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:425)
app//org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:67)
app//org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:289)
app//org.apache.spark.sql.execution.streaming.MicroBatchExecution$$Lambda$1964/0x00007fbcf11f1130.apply$mcZ$sp(Unknown Source)
app//org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:67)
app//org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:279)
app//org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:311)
app//org.apache.spark.sql.execution.streaming.StreamExecution$$Lambda$1955/0x00007fbcf11e8e58.apply$mcV$sp(Unknown Source)
app//scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
app//org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:919)
app//org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:289)
app//org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.$anonfun$run$1(StreamExecution.scala:211)
app//org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1$$Lambda$1951/0x00007fbcf11e76d0.apply$mcV$sp(Unknown Source)
app//scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
app//org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:94)
app//org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:211)
== Sink ==
== Plan ==
== Parsed Logical Plan ==
WriteToMicroBatchDataSource MemorySink, c7dcd408-619e-4333-a8af-0050d070b913, Update, 0
+- Aggregate [value#19321], [value#19321, count(1) AS count(1)#19326L]
+- StreamingDataSourceV2Relation [value#19321], org.apache.spark.sql.execution.streaming.MemoryStreamScanBuilder@5030e7ed, MemoryStream[value#19321], -1, 0
== Analyzed Logical Plan ==
WriteToMicroBatchDataSource MemorySink, c7dcd408-619e-4333-a8af-0050d070b913, Update, 0
+- Aggregate [value#19321], [value#19321, count(1) AS count(1)#19326L]
+- StreamingDataSourceV2Relation [value#19321], org.apache.spark.sql.execution.streaming.MemoryStreamScanBuilder@5030e7ed, MemoryStream[value#19321], -1, 0
== Optimized Logical Plan ==
WriteToDataSourceV2 MicroBatchWrite[epoch: 0, writer: org.apache.spark.sql.execution.streaming.sources.MemoryStreamingWrite@c95101e]
+- Aggregate [value#19321], [value#19321, count(1) AS count(1)#19326L]
+- StreamingDataSourceV2Relation [value#19321], org.apache.spark.sql.execution.streaming.MemoryStreamScanBuilder@5030e7ed, MemoryStream[value#19321], -1, 0
== Physical Plan ==
WriteToDataSourceV2 MicroBatchWrite[epoch: 0, writer: org.apache.spark.sql.execution.streaming.sources.MemoryStreamingWrite@c95101e], org.apache.spark.sql.execution.datasources.v2.DataSourceV2Strategy$$Lambda$2211/0x00007fbcf129dfa0@7f4e59d6
+- *(4) HashAggregate(keys=[value#19321], functions=[count(1)], output=[value#19321, count(1)#19326L])
+- StateStoreSave [value#19321], state info [ checkpoint = file:/home/runner/work/apache-spark/apache-spark/target/tmp/streaming.metadata-34c7ea0d-1112-40b1-ba50-e69024d50d1a/state, runId = c22f2402-e633-4634-a564-bc7b51bb20ac, opId = 0, ver = 0, numPartitions = 3], Update, 0, 0, 2
+- *(3) HashAggregate(keys=[value#19321], functions=[merge_count(1)], output=[value#19321, count#19355L])
+- StateStoreRestore [value#19321], state info [ checkpoint = file:/home/runner/work/apache-spark/apache-spark/target/tmp/streaming.metadata-34c7ea0d-1112-40b1-ba50-e69024d50d1a/state, runId = c22f2402-e633-4634-a564-bc7b51bb20ac, opId = 0, ver = 0, numPartitions = 3], 2
+- *(2) HashAggregate(keys=[value#19321], functions=[merge_count(1)], output=[value#19321, count#19355L])
+- Exchange hashpartitioning(value#19321, 3), ENSURE_REQUIREMENTS, [plan_id=88167]
+- *(1) HashAggregate(keys=[value#19321], functions=[partial_count(1)], output=[value#19321, count#19355L])
+- *(1) Project [value#19321]
+- MicroBatchScan[value#19321] MemoryStreamDataSource
- Run / Build modules: pyspark-errors: No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Artifacts

Produced during runtime (all expired):

Name | Size
---|---
test-results-api, catalyst, hive-thriftserver--17-hadoop3-hive2.3 | 2.83 MB
test-results-core, unsafe, kvstore, avro, utils, network-common, network-shuffle, repl, launcher, examples, sketch--17-hadoop3-hive2.3 | 2.54 MB
test-results-docker-integration--17-hadoop3-hive2.3 | 119 KB
test-results-hive-- other tests-17-hadoop3-hive2.3 | 928 KB
test-results-hive-- slow tests-17-hadoop3-hive2.3 | 863 KB
test-results-mllib-local, mllib, graphx--17-hadoop3-hive2.3 | 1.46 MB
test-results-pyspark-connect--17-hadoop3-hive2.3 | 300 KB
test-results-pyspark-core, pyspark-streaming--17-hadoop3-hive2.3 | 80.6 KB
test-results-pyspark-mllib, pyspark-ml, pyspark-ml-connect--17-hadoop3-hive2.3 | 1.09 MB
test-results-pyspark-pandas--17-hadoop3-hive2.3 | 1.33 MB
test-results-pyspark-pandas-connect-part0--17-hadoop3-hive2.3 | 1.3 MB
test-results-pyspark-pandas-connect-part1--17-hadoop3-hive2.3 | 1.36 MB
test-results-pyspark-pandas-connect-part2--17-hadoop3-hive2.3 | 695 KB
test-results-pyspark-pandas-connect-part3--17-hadoop3-hive2.3 | 702 KB
test-results-pyspark-pandas-slow--17-hadoop3-hive2.3 | 2.93 MB
test-results-pyspark-sql, pyspark-resource, pyspark-testing--17-hadoop3-hive2.3 | 426 KB
test-results-sparkr--17-hadoop3-hive2.3 | 280 KB
test-results-sql-- extended tests-17-hadoop3-hive2.3 | 3.02 MB
test-results-sql-- other tests-17-hadoop3-hive2.3 | 4.34 MB
test-results-sql-- slow tests-17-hadoop3-hive2.3 | 2.85 MB
test-results-streaming, sql-kafka-0-10, streaming-kafka-0-10, streaming-kinesis-asl, yarn, kubernetes, hadoop-cloud, spark-ganglia-lgpl, connect, protobuf--17-hadoop3-hive2.3 | 1.51 MB
test-results-tpcds--17-hadoop3-hive2.3 | 21.8 KB
unit-tests-log-pyspark-connect--17-hadoop3-hive2.3 | 802 MB
unit-tests-log-sql-- slow tests-17-hadoop3-hive2.3 | 379 MB