
Updated dependencies for Spark 3.0.0 #30

Draft · wants to merge 4 commits into master
Changes from 2 commits
3 changes: 3 additions & 0 deletions .gitignore
@@ -348,3 +348,6 @@ MigrationBackup/

# Ionide (cross platform F# VS Code tools) working folder
.ionide/

+target
+.idea
10 changes: 5 additions & 5 deletions build.sbt
@@ -4,9 +4,9 @@ organization := "com.microsoft.sqlserver.jdbc.spark"

version := "1.0.0"

-scalaVersion := "2.11.12"
-
-val sparkVersion = "2.4.6"
+scalaVersion := "2.12.11"
+ThisBuild / useCoursier := false
+val sparkVersion = "3.0.0"
Collaborator: Would need to support sparkVersion 2.4/Scala 2.11 combo as well.

Author: Ok, I will look into supporting both scenarios.

Contributor: I think you can use something like crossScalaVersions := Seq("2.12.10", "2.11.12") to do this.

Contributor: @rajmera3 it won't work, because there is no Spark 3.0 with Scala 2.11. Here we need a combo of (Spark 2.4 + Scala 2.11) and (Spark 3.0 + Scala 2.12).

Collaborator: One option here is to make the main line Spark 3.0/Scala 2.12 and move the stable/old version to a separate branch, e.g. a Spark2.4 branch.


javacOptions ++= Seq("-source", "1.8", "-target", "1.8", "-Xlint")

@@ -19,11 +19,11 @@ libraryDependencies ++= Seq(
"tests",
"org.apache.spark" %% "spark-catalyst" % sparkVersion % "test" classifier
"tests",
"org.scalatest" %% "scalatest" % "3.0.5" % "test",
"org.scalatest" %% "scalatest" % "3.0.8" % "test",
"com.novocode" % "junit-interface" % "0.11" % "test",

//SQLServer JDBC jars
"com.microsoft.sqlserver" % "mssql-jdbc" % "7.2.1.jre8"
"com.microsoft.sqlserver" % "mssql-jdbc" % "8.2.1.jre8"
Collaborator: why v8.2 for JDBC driver? Did 7.2 give some issue or was this just alignment to latest?

Comment: Spark 3 is JDK 11; you need to use

// https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc
libraryDependencies += "com.microsoft.sqlserver" % "mssql-jdbc" % "8.4.1.jre11"
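A note on those suffixes (editorial, not from the thread): mssql-jdbc publishes a separate artifact per target JRE, so the .jre8/.jre11 tail of the version string selects the Java 8 or Java 11 build of the driver. Spark 3.0 itself runs on both Java 8 and Java 11, so the right suffix depends on the JDK the cluster actually uses.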

)

scalacOptions := Seq("-unchecked", "-deprecation", "evicted")
@@ -3,9 +3,9 @@ import java.sql.Connection

import org.scalatest.Matchers
import org.apache.spark.SparkFunSuite
-import org.apache.spark.sql.test.SharedSQLContext
+import org.apache.spark.sql.test.SharedSparkSession

-class DataSourceTest extends SparkFunSuite with Matchers with SharedSQLContext {
+class DataSourceTest extends SparkFunSuite with Matchers with SharedSparkSession {

test("Schema validation between Spark DataFrame and SQL Server ResultSet") {}

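For context, SharedSparkSession is the session-providing test mixin in Spark 3's spark-sql test-jar, superseding the 2.x-era SharedSQLContext that this PR removes. A minimal sketch of how a suite uses it (the suite and test names here are illustrative, not from this repo):

import org.apache.spark.SparkFunSuite
import org.apache.spark.sql.test.SharedSparkSession

class RoundTripSuite extends SparkFunSuite with SharedSparkSession {
  test("the mixin provides a ready SparkSession") {
    // `spark` is created and torn down by SharedSparkSession; no manual setup needed
    assert(spark.range(10).count() === 10)
  }
}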