[SPARK-49426][CONNECT][SQL] Create a shared interface for DataFrameWriterV2

### What changes were proposed in this pull request?
This PR creates a shared interface for DataFrameWriterV2.

### Why are the changes needed?
We are creating a shared Scala Spark SQL interface for Classic and Connect.
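The refactoring pattern can be sketched as follows. This is a minimal illustration of the shared-interface idea, not the actual Spark code: `WriterV2Api`, `ClassicWriterV2`, and `ConnectWriterV2` are hypothetical names standing in for the real `DataFrameWriterV2` abstraction and its `DataFrameWriterV2Impl` implementations.

```scala
// Hypothetical sketch: the user-facing builder methods live in one shared
// abstract API, while Classic and Connect each provide a concrete subclass.
abstract class WriterV2Api[T] {
  /** Append the contents of the Dataset to the target table. */
  def append(): Unit
}

// Classic (JVM-local) implementation: plans and runs the write directly.
final class ClassicWriterV2[T](table: String) extends WriterV2Api[T] {
  override def append(): Unit = { /* plan and execute an append command */ }
}

// Connect implementation: encodes the write as a plan sent to the server.
final class ConnectWriterV2[T](table: String) extends WriterV2Api[T] {
  override def append(): Unit = { /* build a write proto and execute it */ }
}
```

With this shape, `Dataset.writeTo` can declare the shared type as its return type and inherit one set of scaladocs, while each module instantiates its own implementation.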

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Existing tests.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes apache#47962 from hvanhovell/SPARK-49426.

Authored-by: Herman van Hovell <herman@databricks.com>
Signed-off-by: Herman van Hovell <herman@databricks.com>
hvanhovell committed Sep 4, 2024
1 parent 4a79e73 commit c5293ec
Showing 10 changed files with 440 additions and 555 deletions.

One of the changed files was deleted; its contents are not shown. Excerpt from the `Dataset` diff:

```diff
@@ -37,7 +37,7 @@ import org.apache.spark.sql.connect.common.{DataTypeProtoConverter, StorageLevel
 import org.apache.spark.sql.errors.DataTypeErrors.toSQLId
 import org.apache.spark.sql.expressions.SparkUserDefinedFunction
 import org.apache.spark.sql.functions.{struct, to_json}
-import org.apache.spark.sql.internal.{ColumnNodeToProtoConverter, DataFrameWriterImpl, ToScalaUDF, UDFAdaptors, UnresolvedAttribute, UnresolvedRegex}
+import org.apache.spark.sql.internal.{ColumnNodeToProtoConverter, DataFrameWriterImpl, DataFrameWriterV2Impl, ToScalaUDF, UDFAdaptors, UnresolvedAttribute, UnresolvedRegex}
 import org.apache.spark.sql.streaming.DataStreamWriter
 import org.apache.spark.sql.types.{Metadata, StructType}
 import org.apache.spark.storage.StorageLevel
@@ -1018,27 +1018,9 @@ class Dataset[T] private[sql] (
     new DataFrameWriterImpl[T](this)
   }

-  /**
-   * Create a write configuration builder for v2 sources.
-   *
-   * This builder is used to configure and execute write operations. For example, to append to an
-   * existing table, run:
-   *
-   * {{{
-   *   df.writeTo("catalog.db.table").append()
-   * }}}
-   *
-   * This can also be used to create or replace existing tables:
-   *
-   * {{{
-   *   df.writeTo("catalog.db.table").partitionedBy($"col").createOrReplace()
-   * }}}
-   *
-   * @group basic
-   * @since 3.4.0
-   */
+  /** @inheritdoc */
   def writeTo(table: String): DataFrameWriterV2[T] = {
-    new DataFrameWriterV2[T](table, this)
+    new DataFrameWriterV2Impl[T](table, this)
   }

   /**
```
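As the deleted scaladoc illustrates, and consistent with the PR stating there is no user-facing change, callers keep using the same builder API; `catalog.db.table` and `$"col"` below are placeholders taken from those examples:

```scala
// Append to an existing v2 table:
df.writeTo("catalog.db.table").append()

// Create or replace a table, partitioned by a column:
df.writeTo("catalog.db.table").partitionedBy($"col").createOrReplace()
```

Only the runtime class behind the returned `DataFrameWriterV2[T]` changes (`DataFrameWriterV2Impl` in Connect); the method signatures users call are unchanged.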
