
update docs #341

Triggered via push November 13, 2023 23:01
Status Failure
Total duration 5h 3m 29s
Artifacts 14

build_main.yml

on: push
Run / Check changes - 52s
Run / Base image build - 44s
Run / Breaking change detection with Buf (branch-3.5) - 54s
Run / Run TPC-DS queries with SF=1 - 0s
Run / Run Docker integration tests - 0s
Run / Run Spark on Kubernetes Integration test - 56m 47s
Matrix: Run / build
Matrix: Run / java-other-versions
Run / Build modules: sparkr - 0s
Run / Linters, licenses, dependencies and documentation generation - 2h 18m
Matrix: Run / pyspark

Annotations

18 errors and 10 warnings
Run / Build modules: pyspark-connect
Process completed with exit code 19.
Run / Build modules: pyspark-sql, pyspark-resource, pyspark-testing
Process completed with exit code 19.
Run / Run Spark on Kubernetes Integration test
HashSet() did not contain "decomtest-5889308bcb01be75-exec-1".
Run / Run Spark on Kubernetes Integration test
HashSet() did not contain "decomtest-92945b8bcb029bba-exec-1".
Run / Run Spark on Kubernetes Integration test
sleep interrupted
Run / Run Spark on Kubernetes Integration test
Task io.fabric8.kubernetes.client.utils.internal.SerialExecutor$$Lambda$679/0x00007fb02c5c3818@78b47997 rejected from java.util.concurrent.ThreadPoolExecutor@13b65e40[Shutting down, pool size = 3, active threads = 2, queued tasks = 0, completed tasks = 317]
Run / Run Spark on Kubernetes Integration test
sleep interrupted
Run / Run Spark on Kubernetes Integration test
Task io.fabric8.kubernetes.client.utils.internal.SerialExecutor$$Lambda$679/0x00007fb02c5c3818@a93f784 rejected from java.util.concurrent.ThreadPoolExecutor@13b65e40[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 318]
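The two "rejected from java.util.concurrent.ThreadPoolExecutor[Shutting down, ...]" messages above are the standard symptom of submitting a task to a thread pool that has already begun shutdown. A minimal Python analogue (an illustrative sketch, not Spark or fabric8 code) reproduces the same failure mode with `concurrent.futures`:

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2)

# While the pool is live, submissions are accepted normally.
live_result = pool.submit(lambda: 42).result()

# After shutdown, new submissions are rejected -- Python raises
# RuntimeError, the analogue of Java's RejectedExecutionException.
pool.shutdown(wait=True)
try:
    pool.submit(lambda: 0)
except RuntimeError as e:
    print(f"rejected: {e}")
```

In the Kubernetes test above, the rejection is benign-looking teardown noise: the client's serial executor is draining while a late callback still tries to schedule work.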
Run / Run Spark on Kubernetes Integration test
HashSet() did not contain "decomtest-5647608bcb1345cb-exec-1".
Run / Run Spark on Kubernetes Integration test
HashSet() did not contain "decomtest-e3700a8bcb14247f-exec-1".
Run / Run Spark on Kubernetes Integration test
HashSet() did not contain "decomtest-fe59a58bcb17baa5-exec-1".
Run / Run Spark on Kubernetes Integration test
Status(apiVersion=v1, code=404, details=StatusDetails(causes=[], group=null, kind=pods, name=spark-test-app-095922fd3105498bbc6da217804fec32-driver, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=pods "spark-test-app-095922fd3105498bbc6da217804fec32-driver" not found, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=NotFound, status=Failure, additionalProperties={})..
Run / Build modules: pyspark-mllib, pyspark-ml, pyspark-ml-connect
The job running on runner GitHub Actions 19 has exceeded the maximum execution time of 300 minutes.
Run / Build modules: pyspark-mllib, pyspark-ml, pyspark-ml-connect
The operation was canceled.
python/pyspark/sql/tests/connect/test_parity_udtf.py.test_udtf_with_skip_rest_of_input_table_exception: python/pyspark/sql/tests/connect/test_parity_udtf.py#L1
[UNRESOLVED_COLUMN.WITH_SUGGESTION] A column, variable, or function parameter with name `id` cannot be resolved. Did you mean one of the following? [`total`]. SQLSTATE: 42703; line 5 pos 23; 'Sort ['ALL ASC NULLS FIRST], true +- 'Project [('id / 10) AS id_divided_by_ten#1707, total#1710] +- Project [total#1710] +- LateralJoin lateral-subquery#1713 [c#1712], Inner : +- SubqueryAlias __auto_generated_subquery_name_1 : +- Generate test_udtf(outer(c#1712))#1709, false, [total#1710] : +- OneRowRelation +- SubqueryAlias __auto_generated_subquery_name_0 +- Project [named_struct(id, id#1708L, partition_by_0, partition_by_0#1711) AS c#1712] +- Sort [partition_by_0#1711 ASC NULLS FIRST], false +- RepartitionByExpression [partition_by_0#1711] +- Project [id#1708L, (cast(id#1708L as double) / cast(10 as double)) AS partition_by_0#1711] +- SubqueryAlias t +- SubqueryAlias t +- Project [id#1708L] +- Range (1, 21, step=1, splits=None) JVM stacktrace: org.apache.spark.sql.catalyst.ExtendedAnalysisException at org.apache.spark.sql.errors.QueryCompilationErrors$.unresolvedAttributeError(QueryCompilationErrors.scala:326) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.org$apache$spark$sql$catalyst$analysis$CheckAnalysis$$failUnresolvedAttribute(CheckAnalysis.scala:149) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$6(CheckAnalysis.scala:306) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$6$adapted(CheckAnalysis.scala:304) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:227) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:226) at scala.collection.immutable.Vector.foreach(Vector.scala:2124) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:226) at 
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:226) at scala.collection.immutable.Vector.foreach(Vector.scala:2124) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:226) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$5(CheckAnalysis.scala:304) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$5$adapted(CheckAnalysis.scala:304) at scala.collection.immutable.List.foreach(List.scala:333) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2(CheckAnalysis.scala:304) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2$adapted(CheckAnalysis.scala:222) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:227) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:226) at scala.collection.immutable.Vector.foreach(Vector.scala:2124) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:226) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0(CheckAnalysis.scala:222) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0$(CheckAnalysis.scala:204) at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis0(Analyzer.scala:191) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:196) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:167) at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:191) at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:213) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330) at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:211) at 
org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:88) at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:138) at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:230) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:557) at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:230) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:229) at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:88) at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:85) at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:69) at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:98) at org.apache.spark.sql.SparkSession.$anonfun$sql$4(SparkSession.scala:697) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:688) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleSqlCommand(SparkConnectPlanner.scala:2538) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2496) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:199) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:158) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:132) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:263) at 
org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:263) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:94) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withContextClassLoader$1(SessionHolder.scala:250) at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:182) at org.apache.spark.sql.connect.service.SessionHolder.withContextClassLoader(SessionHolder.scala:249) at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:262) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:132) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:84) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:225)
python/pyspark/sql/tests/connect/test_parity_udtf.py.test_udtf_with_skip_rest_of_input_table_exception: python/pyspark/sql/tests/connect/test_parity_udtf.py#L1
[UNRESOLVED_COLUMN.WITH_SUGGESTION] A column, variable, or function parameter with name `id` cannot be resolved. Did you mean one of the following? [`total`]. SQLSTATE: 42703; line 5 pos 23; 'Sort ['ALL ASC NULLS FIRST], true +- 'Project [('id / 10) AS id_divided_by_ten#4429, total#4432] +- Project [total#4432] +- LateralJoin lateral-subquery#4435 [c#4434], Inner : +- SubqueryAlias __auto_generated_subquery_name_1 : +- Generate test_udtf(outer(c#4434))#4431, false, [total#4432] : +- OneRowRelation +- SubqueryAlias __auto_generated_subquery_name_0 +- Project [named_struct(id, id#4430L, partition_by_0, partition_by_0#4433) AS c#4434] +- Sort [partition_by_0#4433 ASC NULLS FIRST], false +- RepartitionByExpression [partition_by_0#4433] +- Project [id#4430L, (cast(id#4430L as double) / cast(10 as double)) AS partition_by_0#4433] +- SubqueryAlias t +- SubqueryAlias t +- Project [id#4430L] +- Range (1, 21, step=1, splits=None) JVM stacktrace: org.apache.spark.sql.catalyst.ExtendedAnalysisException at org.apache.spark.sql.errors.QueryCompilationErrors$.unresolvedAttributeError(QueryCompilationErrors.scala:326) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.org$apache$spark$sql$catalyst$analysis$CheckAnalysis$$failUnresolvedAttribute(CheckAnalysis.scala:149) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$6(CheckAnalysis.scala:306) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$6$adapted(CheckAnalysis.scala:304) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:227) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:226) at scala.collection.immutable.Vector.foreach(Vector.scala:2124) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:226) at 
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:226) at scala.collection.immutable.Vector.foreach(Vector.scala:2124) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:226) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$5(CheckAnalysis.scala:304) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$5$adapted(CheckAnalysis.scala:304) at scala.collection.immutable.List.foreach(List.scala:333) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2(CheckAnalysis.scala:304) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2$adapted(CheckAnalysis.scala:222) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:227) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:226) at scala.collection.immutable.Vector.foreach(Vector.scala:2124) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:226) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0(CheckAnalysis.scala:222) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0$(CheckAnalysis.scala:204) at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis0(Analyzer.scala:191) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:196) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:167) at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:191) at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:213) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330) at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:211) at 
org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:88) at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:138) at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:230) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:557) at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:230) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:229) at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:88) at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:85) at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:69) at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:98) at org.apache.spark.sql.SparkSession.$anonfun$sql$4(SparkSession.scala:697) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:688) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleSqlCommand(SparkConnectPlanner.scala:2538) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2496) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:199) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:158) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:132) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:263) at 
org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:263) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:94) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withContextClassLoader$1(SessionHolder.scala:250) at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:182) at org.apache.spark.sql.connect.service.SessionHolder.withContextClassLoader(SessionHolder.scala:249) at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:262) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:132) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:84) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:225)
python/pyspark/sql/tests/test_udtf.py.test_udtf_with_skip_rest_of_input_table_exception: python/pyspark/sql/tests/test_udtf.py#L1
[UNRESOLVED_COLUMN.WITH_SUGGESTION] A column, variable, or function parameter with name `id` cannot be resolved. Did you mean one of the following? [`total`]. SQLSTATE: 42703; line 5 pos 23; 'Sort ['ALL ASC NULLS FIRST], true +- 'Project [('id / 10) AS id_divided_by_ten#975, total#978] +- Project [total#978] +- LateralJoin lateral-subquery#981 [c#980], Inner : +- SubqueryAlias __auto_generated_subquery_name_1 : +- Generate test_udtf(outer(c#980))#977, false, [total#978] : +- OneRowRelation +- SubqueryAlias __auto_generated_subquery_name_0 +- Project [named_struct(id, id#976L, partition_by_0, partition_by_0#979) AS c#980] +- Sort [partition_by_0#979 ASC NULLS FIRST], false +- RepartitionByExpression [partition_by_0#979] +- Project [id#976L, (cast(id#976L as double) / cast(10 as double)) AS partition_by_0#979] +- SubqueryAlias t +- SubqueryAlias t +- Project [id#976L] +- Range (1, 21, step=1, splits=None) JVM stacktrace: org.apache.spark.sql.AnalysisException: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column, variable, or function parameter with name `id` cannot be resolved. Did you mean one of the following? [`total`]. 
SQLSTATE: 42703; line 5 pos 23; 'Sort ['ALL ASC NULLS FIRST], true +- 'Project [('id / 10) AS id_divided_by_ten#975, total#978] +- Project [total#978] +- LateralJoin lateral-subquery#981 [c#980], Inner : +- SubqueryAlias __auto_generated_subquery_name_1 : +- Generate test_udtf(outer(c#980))#977, false, [total#978] : +- OneRowRelation +- SubqueryAlias __auto_generated_subquery_name_0 +- Project [named_struct(id, id#976L, partition_by_0, partition_by_0#979) AS c#980] +- Sort [partition_by_0#979 ASC NULLS FIRST], false +- RepartitionByExpression [partition_by_0#979] +- Project [id#976L, (cast(id#976L as double) / cast(10 as double)) AS partition_by_0#979] +- SubqueryAlias t +- SubqueryAlias t +- Project [id#976L] +- Range (1, 21, step=1, splits=None) at org.apache.spark.sql.errors.QueryCompilationErrors$.unresolvedAttributeError(QueryCompilationErrors.scala:326) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.org$apache$spark$sql$catalyst$analysis$CheckAnalysis$$failUnresolvedAttribute(CheckAnalysis.scala:149) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$6(CheckAnalysis.scala:306) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$6$adapted(CheckAnalysis.scala:304) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:227) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:226) at scala.collection.immutable.Vector.foreach(Vector.scala:2124) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:226) at scala.collection.immutable.Vector.foreach(Vector.scala:2124) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:226) at 
org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$5(CheckAnalysis.scala:304) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$5$adapted(CheckAnalysis.scala:304) at scala.collection.immutable.List.foreach(List.scala:333) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2(CheckAnalysis.scala:304) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2$adapted(CheckAnalysis.scala:222) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:227) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:226) at scala.collection.immutable.Vector.foreach(Vector.scala:2124) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:226) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0(CheckAnalysis.scala:222) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0$(CheckAnalysis.scala:204) at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis0(Analyzer.scala:191) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:196) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:167) at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:191) at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:213) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330) at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:211) at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:88) at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:138) at 
org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:230) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:557) at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:230) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:229) at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:88) at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:85) at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:69) at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:98) at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:644) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:635) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:665) at jdk.internal.reflect.GeneratedMethodAccessor92.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) at py4j.ClientServerConnection.run(ClientServerConnection.java:106) at java.base/java.lang.Thread.run(Thread.java:840)
python/pyspark/sql/tests/test_udtf.py.test_udtf_with_skip_rest_of_input_table_exception: python/pyspark/sql/tests/test_udtf.py#L1
[UNRESOLVED_COLUMN.WITH_SUGGESTION] A column, variable, or function parameter with name `id` cannot be resolved. Did you mean one of the following? [`total`]. SQLSTATE: 42703; line 5 pos 23; 'Sort ['ALL ASC NULLS FIRST], true +- 'Project [('id / 10) AS id_divided_by_ten#2430, total#2433] +- Project [total#2433] +- LateralJoin lateral-subquery#2436 [c#2435], Inner : +- SubqueryAlias __auto_generated_subquery_name_1 : +- Generate test_udtf(outer(c#2435))#2432, false, [total#2433] : +- OneRowRelation +- SubqueryAlias __auto_generated_subquery_name_0 +- Project [named_struct(id, id#2431L, partition_by_0, partition_by_0#2434) AS c#2435] +- Sort [partition_by_0#2434 ASC NULLS FIRST], false +- RepartitionByExpression [partition_by_0#2434] +- Project [id#2431L, (cast(id#2431L as double) / cast(10 as double)) AS partition_by_0#2434] +- SubqueryAlias t +- SubqueryAlias t +- Project [id#2431L] +- Range (1, 21, step=1, splits=None) JVM stacktrace: org.apache.spark.sql.AnalysisException: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column, variable, or function parameter with name `id` cannot be resolved. Did you mean one of the following? [`total`]. 
SQLSTATE: 42703; line 5 pos 23; 'Sort ['ALL ASC NULLS FIRST], true +- 'Project [('id / 10) AS id_divided_by_ten#2430, total#2433] +- Project [total#2433] +- LateralJoin lateral-subquery#2436 [c#2435], Inner : +- SubqueryAlias __auto_generated_subquery_name_1 : +- Generate test_udtf(outer(c#2435))#2432, false, [total#2433] : +- OneRowRelation +- SubqueryAlias __auto_generated_subquery_name_0 +- Project [named_struct(id, id#2431L, partition_by_0, partition_by_0#2434) AS c#2435] +- Sort [partition_by_0#2434 ASC NULLS FIRST], false +- RepartitionByExpression [partition_by_0#2434] +- Project [id#2431L, (cast(id#2431L as double) / cast(10 as double)) AS partition_by_0#2434] +- SubqueryAlias t +- SubqueryAlias t +- Project [id#2431L] +- Range (1, 21, step=1, splits=None) at org.apache.spark.sql.errors.QueryCompilationErrors$.unresolvedAttributeError(QueryCompilationErrors.scala:326) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.org$apache$spark$sql$catalyst$analysis$CheckAnalysis$$failUnresolvedAttribute(CheckAnalysis.scala:149) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$6(CheckAnalysis.scala:306) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$6$adapted(CheckAnalysis.scala:304) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:227) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:226) at scala.collection.immutable.Vector.foreach(Vector.scala:2124) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:226) at scala.collection.immutable.Vector.foreach(Vector.scala:2124) at 
org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:226) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$5(CheckAnalysis.scala:304) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$5$adapted(CheckAnalysis.scala:304) at scala.collection.immutable.List.foreach(List.scala:333) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2(CheckAnalysis.scala:304) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2$adapted(CheckAnalysis.scala:222) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:227) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:226) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:226) at scala.collection.immutable.Vector.foreach(Vector.scala:2124) at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:226) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0(CheckAnalysis.scala:222) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0$(CheckAnalysis.scala:204) at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis0(Analyzer.scala:191) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:196) at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:167) at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:191) at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:213) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330) at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:211) at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:88) at 
org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:138) at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:230) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:557) at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:230) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:229) at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:88) at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:85) at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:69) at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:98) at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:644) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:635) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:665) at jdk.internal.reflect.GeneratedMethodAccessor92.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) at py4j.ClientServerConnection.run(ClientServerConnection.java:106) at 
java.base/java.lang.Thread.run(Thread.java:840)
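All four UDTF test failures share one analyzer error: the outer query references `id`, but the UDTF's output schema only exposes `total`, so the analyzer raises UNRESOLVED_COLUMN.WITH_SUGGESTION and proposes the closest available name. The suggestion step can be sketched in plain Python with `difflib` (a toy illustration, not Spark's actual resolver; `resolve_column` is a hypothetical helper):

```python
import difflib

def resolve_column(name, available):
    """Toy column resolver: return the name if it exists, otherwise
    fail with a suggestion list, mimicking the shape of Spark's
    UNRESOLVED_COLUMN.WITH_SUGGESTION error."""
    if name in available:
        return name
    # Prefer close spellings; fall back to listing every candidate,
    # which is effectively what the error above does ([`total`] is
    # the only column in scope).
    suggestions = difflib.get_close_matches(name, available) or sorted(available)
    raise ValueError(
        f"[UNRESOLVED_COLUMN.WITH_SUGGESTION] `{name}` cannot be resolved. "
        f"Did you mean one of the following? {suggestions}"
    )

try:
    resolve_column("id", ["total"])
except ValueError as e:
    print(e)
```

The fix on the test side is either to emit `id` from the UDTF or to stop referencing it after the lateral join, since only the UDTF's output columns survive the projection.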
Run / Build modules: sql - slow tests
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: core, unsafe, kvstore, avro, utils, network-common, network-shuffle, repl, launcher, examples, sketch, graphx
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: hive - slow tests
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: sql - extended tests
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: mllib-local,mllib
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: sql - other tests
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: streaming, sql-kafka-0-10, streaming-kafka-0-10, yarn, kubernetes, hadoop-cloud, spark-ganglia-lgpl, connect, protobuf
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: hive - other tests
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: api, catalyst, hive-thriftserver
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: pyspark-errors
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
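The ten "No files were found" warnings all come from the same report-upload step: the glob `**/target/test-reports/*.xml` matched nothing because these jobs were cancelled or failed before any JUnit XML was written. The pattern itself can be sanity-checked locally with `pathlib` (a standalone sketch; the directory layout is illustrative):

```python
import pathlib
import tempfile

# Recreate the layout the upload step expects and confirm that
# **/target/test-reports/*.xml only matches once reports exist.
root = pathlib.Path(tempfile.mkdtemp())
before = list(root.glob("**/target/test-reports/*.xml"))  # empty -> the warning above

reports = root / "sql" / "target" / "test-reports"
reports.mkdir(parents=True)
(reports / "TEST-suite.xml").write_text("<testsuite/>")
after = list(root.glob("**/target/test-reports/*.xml"))
print(len(before), len(after))
```

So the warnings here are a downstream symptom, not an independent failure: once the build steps succeed, the reports exist and the glob matches.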

Artifacts

Produced during runtime
site (Expired): 59.5 MB
test-results-pyspark-connect--17-hadoop3-hive2.3 (Expired): 421 KB
test-results-pyspark-core, pyspark-streaming--17-hadoop3-hive2.3 (Expired): 80.2 KB
test-results-pyspark-mllib, pyspark-ml, pyspark-ml-connect--17-hadoop3-hive2.3 (Expired): 482 KB
test-results-pyspark-pandas--17-hadoop3-hive2.3 (Expired): 1.46 MB
test-results-pyspark-pandas-connect-part0--17-hadoop3-hive2.3 (Expired): 1.32 MB
test-results-pyspark-pandas-connect-part1--17-hadoop3-hive2.3 (Expired): 1.42 MB
test-results-pyspark-pandas-connect-part2--17-hadoop3-hive2.3 (Expired): 953 KB
test-results-pyspark-pandas-connect-part3--17-hadoop3-hive2.3 (Expired): 530 KB
test-results-pyspark-pandas-slow--17-hadoop3-hive2.3 (Expired): 2.86 MB
test-results-pyspark-sql, pyspark-resource, pyspark-testing--17-hadoop3-hive2.3 (Expired): 418 KB
unit-tests-log-pyspark-connect--17-hadoop3-hive2.3 (Expired): 1.82 GB
unit-tests-log-pyspark-mllib, pyspark-ml, pyspark-ml-connect--17-hadoop3-hive2.3 (Expired): 326 MB
unit-tests-log-pyspark-sql, pyspark-resource, pyspark-testing--17-hadoop3-hive2.3 (Expired): 1.19 GB