Merge remote-tracking branch 'upstream/master'
Niharikadutta committed Aug 10, 2020
2 parents e4b81af + 61cfeb5 commit 4c5d502
Showing 133 changed files with 5,520 additions and 1,254 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -367,3 +367,6 @@ hs_err_pid*

# The target folder contains the output of building
**/target/**

# F# vs code
.ionide/
2 changes: 2 additions & 0 deletions NuGet.config
@@ -6,5 +6,7 @@
<add key="dotnet-core" value="https://dotnetfeed.blob.core.windows.net/dotnet-core/index.json" />
<add key="dotnet-tools" value="https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-tools/nuget/v3/index.json" />
<add key="dotnet-eng" value="https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-eng/nuget/v3/index.json" />
<add key="dotnet5" value="https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet5/nuget/v3/index.json" />
<add key="dotnet-try" value="https://dotnet.myget.org/F/dotnet-try/api/v3/index.json" />
</packageSources>
</configuration>
2 changes: 1 addition & 1 deletion README.md
@@ -39,7 +39,7 @@
<tbody align="center">
<tr>
<td >2.3.*</td>
-<td rowspan=6><a href="https://github.com/dotnet/spark/releases/tag/v0.11.0">v0.11.0</a></td>
+<td rowspan=6><a href="https://github.com/dotnet/spark/releases/tag/v0.12.1">v0.12.1</a></td>
</tr>
<tr>
<td>2.4.0</td>
895 changes: 485 additions & 410 deletions azure-pipelines.yml

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion benchmark/scala/pom.xml
@@ -3,7 +3,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>com.microsoft.spark</groupId>
<artifactId>microsoft-spark-benchmark</artifactId>
-<version>0.11.0</version>
+<version>0.12.1</version>
<inceptionYear>2019</inceptionYear>
<properties>
<encoding>UTF-8</encoding>
92 changes: 92 additions & 0 deletions docs/broadcast-guide.md
@@ -0,0 +1,92 @@
# Guide to using Broadcast Variables

This guide shows how to use broadcast variables in .NET for Apache Spark.

## What are Broadcast Variables

[Broadcast variables in Apache Spark](https://spark.apache.org/docs/2.2.0/rdd-programming-guide.html#broadcast-variables) are a mechanism for sharing read-only variables across executors. They allow the programmer to keep a read-only variable cached on each machine rather than shipping a copy of it with tasks. They can be used, for example, to give every node a copy of a large input dataset in an efficient manner.

### How to use broadcast variables in .NET for Apache Spark

Broadcast variables are created from a variable `v` by calling `SparkContext.Broadcast(v)`. The broadcast variable is a wrapper around `v`, and its value can be accessed by calling the `Value()` method.

Example:

```csharp
string v = "Variable to be broadcasted";
Broadcast<string> bv = SparkContext.Broadcast(v);

// Using the broadcast variable in a UDF:
Func<Column, Column> udf = Udf<string, string>(
    str => $"{str}: {bv.Value()}");
```

The type parameter for `Broadcast` should be the type of the variable being broadcasted.
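
For example, here is a minimal sketch of broadcasting a dictionary rather than a string; the lookup data and the `bcNames`/`expand` names are hypothetical, and the snippet assumes the same `SparkContext` and `Udf` context as the example above (plus `System.Collections.Generic` for the dictionary):

```csharp
// Hypothetical lookup table; the type parameter matches its type.
var countryNames = new Dictionary<string, string>
{
    { "US", "United States" },
    { "IN", "India" }
};
Broadcast<Dictionary<string, string>> bcNames = SparkContext.Broadcast(countryNames);

// Each executor reads the cached dictionary through Value():
Func<Column, Column> expand = Udf<string, string>(
    code => bcNames.Value().TryGetValue(code, out string name) ? name : code);
```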

### Deleting broadcast variables

The broadcast variable can be deleted from all executors by calling the `Destroy()` method on it.

```csharp
// Destroying the broadcast variable bv:
bv.Destroy();
```

> Note: `Destroy()` deletes all data and metadata related to the broadcast variable. Use this with caution - once a broadcast variable has been destroyed, it cannot be used again.
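
If the goal is only to free the cached copies on the executors without permanently invalidating the variable, `Broadcast<T>` also exposes an `Unpersist()` method mirroring the Scala and Python APIs; a minimal sketch, reusing `bv` from above:

```csharp
// Unlike Destroy, Unpersist only removes the cached copies on the
// executors; the variable stays usable and Spark re-sends the data
// with the next task that references it.
bv.Unpersist();
```
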
#### Caveat of using Destroy

One important thing to keep in mind when using broadcast variables in UDFs is to limit the scope of the variable to only the UDF that references it. The [guide to using UDFs](udf-guide.md) describes this phenomenon in detail. It is especially crucial when calling `Destroy` on the broadcast variable: if the destroyed broadcast variable is visible to or accessible from other UDFs, it gets picked up for serialization by all of those UDFs, even if they never reference it. This throws an error, because .NET for Apache Spark cannot serialize the destroyed broadcast variable.

Example to demonstrate:

```csharp
string v = "Variable to be broadcasted";
Broadcast<string> bv = SparkContext.Broadcast(v);

// Using the broadcast variable in a UDF:
Func<Column, Column> udf1 = Udf<string, string>(
    str => $"{str}: {bv.Value()}");

// Destroying bv
bv.Destroy();

// Calling udf1 after destroying bv throws the following expected exception:
// org.apache.spark.SparkException: Attempted to use Broadcast(0) after it was destroyed
df.Select(udf1(df["_1"])).Show();

// Different UDF udf2 that is not referencing bv
Func<Column, Column> udf2 = Udf<string, string>(
    str => $"{str}: not referencing broadcast variable");

// Calling udf2 throws the following (unexpected) exception:
// [Error] [JvmBridge] org.apache.spark.SparkException: Task not serializable
df.Select(udf2(df["_1"])).Show();
```

The recommended way to implement the desired behavior:

```csharp
string v = "Variable to be broadcasted";
// Restricting the visibility of bv to only the UDF referencing it
{
    Broadcast<string> bv = SparkContext.Broadcast(v);

    // Using the broadcast variable in a UDF:
    Func<Column, Column> udf1 = Udf<string, string>(
        str => $"{str}: {bv.Value()}");

    // Destroying bv
    bv.Destroy();
}

// Different UDF udf2 that is not referencing bv
Func<Column, Column> udf2 = Udf<string, string>(
    str => $"{str}: not referencing broadcast variable");

// Calling udf2 works fine as expected
df.Select(udf2(df["_1"])).Show();
```
This ensures that destroying `bv` cannot affect `udf2` through unexpected serialization behavior.

Broadcast variables are useful for transmitting read-only data to all executors: the data is sent to each executor only once, which can improve performance compared with local variables that are shipped with every task. Please refer to the [official documentation](https://spark.apache.org/docs/2.2.0/rdd-programming-guide.html#broadcast-variables) for a deeper understanding of broadcast variables and why they are used.
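
To make the comparison concrete, here is a minimal sketch contrasting the two approaches; `LoadLargeLookup` is a hypothetical helper standing in for any expensive, read-only data load:

```csharp
// Hypothetical helper that loads a large read-only string.
string lookup = LoadLargeLookup();

// Captured local: serialized into the UDF closure and shipped with every task.
Func<Column, Column> perTask = Udf<string, string>(
    s => $"{s}: {lookup}");

// Broadcast variable: sent to each executor once and cached there.
Broadcast<string> shared = SparkContext.Broadcast(lookup);
Func<Column, Column> perExecutor = Udf<string, string>(
    s => $"{s}: {shared.Value()}");
```
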
26 changes: 13 additions & 13 deletions docs/building/ubuntu-instructions.md
@@ -35,14 +35,14 @@ If you already have all the pre-requisites, skip to the [build](ubuntu-instructi
```bash
sudo update-alternatives --config java
```
-3. Install **[Apache Maven 3.6.0+](https://maven.apache.org/download.cgi)**
+3. Install **[Apache Maven 3.6.3+](https://maven.apache.org/download.cgi)**
- Run the following command:
```bash
mkdir -p ~/bin/maven
cd ~/bin/maven
-wget https://www-us.apache.org/dist/maven/maven-3/3.6.0/binaries/apache-maven-3.6.0-bin.tar.gz
-tar -xvzf apache-maven-3.6.0-bin.tar.gz
-ln -s apache-maven-3.6.0 current
+wget https://www-us.apache.org/dist/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz
+tar -xvzf apache-maven-3.6.3-bin.tar.gz
+ln -s apache-maven-3.6.3 current
export M2_HOME=~/bin/maven/current
export PATH=${M2_HOME}/bin:${PATH}
source ~/.bashrc
@@ -54,11 +54,11 @@ If you already have all the pre-requisites, skip to the [build](ubuntu-instructi
<summary>&#x1F4D9; Click to see sample mvn -version output</summary>

```
-Apache Maven 3.6.0 (97c98ec64a1fdfee7767ce5ffb20918da4f719f3; 2018-10-24T18:41:47Z)
-Maven home: ~/bin/apache-maven-3.6.0
-Java version: 1.8.0_191, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-8-openjdk-amd64/jre
-Default locale: en, platform encoding: UTF-8
-OS name: "linux", version: "4.4.0-17763-microsoft", arch: "amd64", family: "unix"
+Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
+Maven home: ~/bin/apache-maven-3.6.3
+Java version: 1.8.0_242, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-8-openjdk-amd64/jre
+Default locale: en_US, platform encoding: ANSI_X3.4-1968
+OS name: "linux", version: "4.4.0-142-generic", arch: "amd64", family: "unix"
```
4. Install **[Apache Spark 2.3+](https://spark.apache.org/downloads.html)**
- Download [Apache Spark 2.3+](https://spark.apache.org/downloads.html) and extract it into a local folder (e.g., `~/bin/spark-2.3.2-bin-hadoop2.7`)
@@ -185,15 +185,15 @@ Once you build the samples, you can use `spark-submit` to submit your .NET Core
--class org.apache.spark.deploy.dotnet.DotnetRunner \
--master local \
~/dotnet.spark/src/scala/microsoft-spark-<version>/target/microsoft-spark-<version>.jar \
-Microsoft.Spark.CSharp.Examples Sql.Batch.Basic $SPARK_HOME/examples/src/main/resources/people.json
+./Microsoft.Spark.CSharp.Examples Sql.Batch.Basic $SPARK_HOME/examples/src/main/resources/people.json
```
- **[Microsoft.Spark.Examples.Sql.Streaming.StructuredNetworkWordCount](../../examples/Microsoft.Spark.CSharp.Examples/Sql/Streaming/StructuredNetworkWordCount.cs)**
```bash
spark-submit \
--class org.apache.spark.deploy.dotnet.DotnetRunner \
--master local \
~/dotnet.spark/src/scala/microsoft-spark-<version>/target/microsoft-spark-<version>.jar \
-Microsoft.Spark.CSharp.Examples Sql.Streaming.StructuredNetworkWordCount localhost 9999
+./Microsoft.Spark.CSharp.Examples Sql.Streaming.StructuredNetworkWordCount localhost 9999
```
- **[Microsoft.Spark.Examples.Sql.Streaming.StructuredKafkaWordCount (maven accessible)](../../examples/Microsoft.Spark.CSharp.Examples/Sql/Streaming/StructuredKafkaWordCount.cs)**
```bash
@@ -202,7 +202,7 @@ Once you build the samples, you can use `spark-submit` to submit your .NET Core
--class org.apache.spark.deploy.dotnet.DotnetRunner \
--master local \
~/dotnet.spark/src/scala/microsoft-spark-<version>/target/microsoft-spark-<version>.jar \
-Microsoft.Spark.CSharp.Examples Sql.Streaming.StructuredKafkaWordCount localhost:9092 subscribe test
+./Microsoft.Spark.CSharp.Examples Sql.Streaming.StructuredKafkaWordCount localhost:9092 subscribe test
```
- **[Microsoft.Spark.Examples.Sql.Streaming.StructuredKafkaWordCount (jars provided)](../../examples/Microsoft.Spark.CSharp.Examples/Sql/Streaming/StructuredKafkaWordCount.cs)**
```bash
@@ -211,7 +211,7 @@ Once you build the samples, you can use `spark-submit` to submit your .NET Core
--class org.apache.spark.deploy.dotnet.DotnetRunner \
--master local \
~/dotnet.spark/src/scala/microsoft-spark-<version>/target/microsoft-spark-<version>.jar \
-Microsoft.Spark.CSharp.Examples Sql.Streaming.StructuredKafkaWordCount localhost:9092 subscribe test
+./Microsoft.Spark.CSharp.Examples Sql.Streaming.StructuredKafkaWordCount localhost:9092 subscribe test
```
Does this experience feel complicated? Help us by taking up [Simplify User Experience for Running an App](https://github.com/dotnet/spark/issues/6).
8 changes: 4 additions & 4 deletions docs/building/windows-instructions.md
@@ -30,10 +30,10 @@ If you already have all the pre-requisites, skip to the [build](windows-instruct
3. Install **[Java 1.8](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html)**
- Select the appropriate version for your operating system e.g., jdk-8u201-windows-x64.exe for Win x64 machine.
- Install using the installer and verify you are able to run `java` from your command-line
-4. Install **[Apache Maven 3.6.0+](https://maven.apache.org/download.cgi)**
-   - Download [Apache Maven 3.6.0](http://mirror.metrocast.net/apache/maven/maven-3/3.6.0/binaries/apache-maven-3.6.0-bin.zip)
-   - Extract to a local directory e.g., `c:\bin\apache-maven-3.6.0\`
-   - Add Apache Maven to your [PATH environment variable](https://www.java.com/en/download/help/path.xml) e.g., `c:\bin\apache-maven-3.6.0\bin`
+4. Install **[Apache Maven 3.6.3+](https://maven.apache.org/download.cgi)**
+   - Download [Apache Maven 3.6.3](http://mirror.metrocast.net/apache/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.zip)
+   - Extract to a local directory e.g., `c:\bin\apache-maven-3.6.3\`
+   - Add Apache Maven to your [PATH environment variable](https://www.java.com/en/download/help/path.xml) e.g., `c:\bin\apache-maven-3.6.3\bin`
- Verify you are able to run `mvn` from your command-line
5. Install **[Apache Spark 2.3+](https://spark.apache.org/downloads.html)**
- Download [Apache Spark 2.3+](https://spark.apache.org/downloads.html) and extract it into a local folder (e.g., `c:\bin\spark-2.3.2-bin-hadoop2.7\`) using [7-zip](https://www.7-zip.org/).
110 changes: 110 additions & 0 deletions docs/release-notes/0.12.1/release-0.12.1.md
@@ -0,0 +1,110 @@
# .NET for Apache Spark 0.12.1 Release Notes

### New Features/Improvements

* Expose `JvmException` to capture JVM error messages separately ([#566](https://github.com/dotnet/spark/pull/566))
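
  As an illustration only, here is a sketch of how the exposed exception type might be caught; the wrapping as an inner exception and the surrounding code are assumptions, not part of this release note:

  ```csharp
  // Hypothetical usage: inspect the JVM-side error message separately.
  try
  {
      df.Collect();
  }
  catch (Exception e) when (e.InnerException is JvmException jvmException)
  {
      Console.WriteLine(jvmException.Message); // message produced by the JVM
  }
  ```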

### Bug Fixes

* AssemblyLoader should use absolute assembly path when loading assemblies ([#570](https://github.com/dotnet/spark/pull/570))

### Infrastructure / Documentation / Etc.

* None

### Breaking Changes

* None

### Known Issues

* Broadcast variables do not work with [dotnet-interactive](https://github.com/dotnet/interactive) ([#561](https://github.com/dotnet/spark/pull/561))

### Compatibility

#### Backward compatibility

The following table describes the oldest version of the worker that the current version is compatible with, along with features in the current version that are incompatible with older workers.

<table>
<thead>
<tr>
<th>Oldest compatible Microsoft.Spark.Worker version</th>
<th>Incompatible features</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td rowspan=4>v0.9.0</td>
<td>DataFrame with Grouped Map UDF <a href="https://github.com/dotnet/spark/pull/277">(#277)</a></td>
</tr>
<tr>
<td>DataFrame with Vector UDF <a href="https://github.com/dotnet/spark/pull/277">(#277)</a></td>
</tr>
<tr>
<td>Support for Broadcast Variables <a href="https://github.com/dotnet/spark/pull/414">(#414)</a></td>
</tr>
<tr>
<td>Support for TimestampType <a href="https://github.com/dotnet/spark/pull/428">(#428)</a></td>
</tr>
</tbody>
</table>

#### Forward compatibility

The following table describes the oldest .NET for Apache Spark release that the current worker is compatible with.

<table>
<thead>
<tr>
<th>Oldest compatible .NET for Apache Spark release version</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td>v0.9.0</td>
</tr>
</tbody>
</table>

### Supported Spark Versions

The following table outlines the supported Spark versions along with the microsoft-spark JAR to use with each:

<table>
<thead>
<tr>
<th>Spark Version</th>
<th>microsoft-spark JAR</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td>2.3.*</td>
<td>microsoft-spark-2.3.x-0.12.1.jar</td>
</tr>
<tr>
<td>2.4.0</td>
<td rowspan=6>microsoft-spark-2.4.x-0.12.1.jar</td>
</tr>
<tr>
<td>2.4.1</td>
</tr>
<tr>
<td>2.4.3</td>
</tr>
<tr>
<td>2.4.4</td>
</tr>
<tr>
<td>2.4.5</td>
</tr>
<tr>
<td>2.4.6</td>
</tr>
<tr>
<td>2.4.2</td>
<td><a href="https://github.com/dotnet/spark/issues/60">Not supported</a></td>
</tr>
</tbody>
</table>
