
Handle named arguments only when necessary. #699

Triggered via push on September 13, 2023 21:46
Status: Failure
Total duration: 2h 38m 48s
Artifacts: 11

build_main.yml (on: push)
Run / Check changes (38s)
Run / Base image build (50s)
Run / Breaking change detection with Buf (branch-3.5) (1m 18s)
Run / Scala 2.13 build with SBT (26m 15s)
Run / Run TPC-DS queries with SF=1 (0s)
Run / Run Docker integration tests (0s)
Run / Run Spark on Kubernetes Integration test (1h 28m)
Matrix: Run / build
Matrix: Run / java-other-versions
Run / Build modules: sparkr (0s)
Run / Linters, licenses, dependencies and documentation generation (1h 39m)
Matrix: Run / pyspark

Annotations

12 errors and 9 warnings
Run / Build modules: pyspark-connect
Process completed with exit code 19.
Run / Run Spark on Kubernetes Integration test
Set() did not contain "decomtest-f253af8a90a8d711-exec-1".
Run / Run Spark on Kubernetes Integration test
Set() did not contain "decomtest-3f77658a90aa290a-exec-1".
Run / Run Spark on Kubernetes Integration test
sleep interrupted
Run / Run Spark on Kubernetes Integration test
sleep interrupted
Run / Run Spark on Kubernetes Integration test
Task io.fabric8.kubernetes.client.utils.internal.SerialExecutor$$Lambda$580/2121413029@528e7413 rejected from java.util.concurrent.ThreadPoolExecutor@1644434b[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 320]
Run / Run Spark on Kubernetes Integration test
Task io.fabric8.kubernetes.client.utils.internal.SerialExecutor$$Lambda$580/2121413029@14e8a3bb rejected from java.util.concurrent.ThreadPoolExecutor@1644434b[Shutting down, pool size = 2, active threads = 2, queued tasks = 0, completed tasks = 319]
Run / Run Spark on Kubernetes Integration test
Set() did not contain "decomtest-4754458a90c4923e-exec-1".
Run / Run Spark on Kubernetes Integration test
Set() did not contain "decomtest-613f6f8a90c5dd7e-exec-1".
Run / Run Spark on Kubernetes Integration test
Set() did not contain "decomtest-625a488a90ca59e2-exec-1".
Run / Run Spark on Kubernetes Integration test
Status(apiVersion=v1, code=404, details=StatusDetails(causes=[], group=null, kind=pods, name=spark-test-app-df87b6bdd8e244dda89b2b86b88b1f12-driver, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=pods "spark-test-app-df87b6bdd8e244dda89b2b86b88b1f12-driver" not found, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=NotFound, status=Failure, additionalProperties={})..
python/pyspark/sql/tests/connect/test_parity_arrow_python_udf.py: test_named_arguments_negative
"test_udf\(\) got an unexpected keyword argument 'c'" does not match " An exception was thrown from the Python worker. Please see the stack trace below. Traceback (most recent call last): File "/__w/apache-spark/apache-spark/python/lib/pyspark.zip/pyspark/worker.py", line 1321, in main process() File "/__w/apache-spark/apache-spark/python/lib/pyspark.zip/pyspark/worker.py", line 1313, in process serializer.dump_stream(out_iter, outfile) File "/__w/apache-spark/apache-spark/python/lib/pyspark.zip/pyspark/sql/pandas/serializers.py", line 470, in dump_stream return ArrowStreamSerializer.dump_stream(self, init_stream_yield_batches(), stream) File "/__w/apache-spark/apache-spark/python/lib/pyspark.zip/pyspark/sql/pandas/serializers.py", line 100, in dump_stream for batch in iterator: File "/__w/apache-spark/apache-spark/python/lib/pyspark.zip/pyspark/sql/pandas/serializers.py", line 463, in init_stream_yield_batches for series in iterator: File "/__w/apache-spark/apache-spark/python/lib/pyspark.zip/pyspark/worker.py", line 1221, in mapper result = tuple(f(*[a[o] for o in arg_offsets]) for arg_offsets, f in udfs) File "/__w/apache-spark/apache-spark/python/lib/pyspark.zip/pyspark/worker.py", line 1221, in <genexpr> result = tuple(f(*[a[o] for o in arg_offsets]) for arg_offsets, f in udfs) File "/__w/apache-spark/apache-spark/python/lib/pyspark.zip/pyspark/worker.py", line 166, in <lambda> lambda *a: (verify_result_length(evaluate(*a), len(a[0])), arrow_return_type), File "/__w/apache-spark/apache-spark/python/lib/pyspark.zip/pyspark/util.py", line 87, in wrapper return f(*args, **kwargs) File "/__w/apache-spark/apache-spark/python/lib/pyspark.zip/pyspark/worker.py", line 151, in evaluate return pd.Series([result_func(func(*row)) for row in zip(*args)]) File "/__w/apache-spark/apache-spark/python/lib/pyspark.zip/pyspark/worker.py", line 151, in <listcomp> return pd.Series([result_func(func(*row)) for row in zip(*args)]) File "/__w/apache-spark/apache-spark/python/lib/pyspark.zip/pyspark/worker.py", line 548, in func return f(**dict(zip(keys, args))) TypeError: test_udf() got an unexpected keyword..."
Run / Build modules: hive - other tests
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: streaming, sql-kafka-0-10, streaming-kafka-0-10, mllib-local, mllib, yarn, mesos, kubernetes, hadoop-cloud, spark-ganglia-lgpl, connect, protobuf
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: sql - other tests
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: hive - slow tests
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: catalyst, hive-thriftserver
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: sql - extended tests
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: core, unsafe, kvstore, avro, network-common, network-shuffle, repl, launcher, examples, sketch, graphx
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: sql - slow tests
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: pyspark-errors
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
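These warnings are emitted by the artifact upload step when its glob matches nothing; most likely the corresponding test jobs were cancelled or failed before writing any JUnit XML reports under target/test-reports, so there was nothing to upload.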

Artifacts

Produced during runtime

site: 57.7 MB (Expired)
test-results-pyspark-connect--8-hadoop3-hive2.3: 161 KB (Expired)
test-results-pyspark-core, pyspark-streaming--8-hadoop3-hive2.3: 78.4 KB (Expired)
test-results-pyspark-mllib, pyspark-ml, pyspark-ml-connect--8-hadoop3-hive2.3: 553 KB (Expired)
test-results-pyspark-pandas--8-hadoop3-hive2.3: 1.12 MB (Expired)
test-results-pyspark-pandas-connect-part0--8-hadoop3-hive2.3: 1.06 MB (Expired)
test-results-pyspark-pandas-connect-part1--8-hadoop3-hive2.3: 903 KB (Expired)
test-results-pyspark-pandas-connect-part2--8-hadoop3-hive2.3: 647 KB (Expired)
test-results-pyspark-pandas-slow--8-hadoop3-hive2.3: 1.61 MB (Expired)
test-results-pyspark-sql, pyspark-resource, pyspark-testing--8-hadoop3-hive2.3: 377 KB (Expired)
unit-tests-log-pyspark-connect--8-hadoop3-hive2.3: 761 MB (Expired)