From b61cca94a607e86ba0eec8c0dfa5ead35f9318ea Mon Sep 17 00:00:00 2001 From: James Malone Date: Fri, 27 May 2016 09:54:23 -0700 Subject: [PATCH 1/3] Resolve merge --- .../20/where-is-my-pcollection-dot-map.html | 212 ++++++++++++++++++ content/blog/index.html | 16 ++ content/capability-matrix/index.html | 4 + content/feed.xml | 96 +++++++- content/index.html | 2 + 5 files changed, 328 insertions(+), 2 deletions(-) create mode 100644 content/blog/2016/05/20/where-is-my-pcollection-dot-map.html diff --git a/content/blog/2016/05/20/where-is-my-pcollection-dot-map.html b/content/blog/2016/05/20/where-is-my-pcollection-dot-map.html new file mode 100644 index 0000000000000..a45cb9c6af6da --- /dev/null +++ b/content/blog/2016/05/20/where-is-my-pcollection-dot-map.html @@ -0,0 +1,212 @@ + + + + + + + + + Where's my PCollection.map()? + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + +
+ +
+

Where's my PCollection.map()?

+ +
+ +
+

Have you ever wondered why Beam has PTransforms for everything instead of having methods on PCollection? Take a look at the history that led to this (and other) design decisions. + +Though Beam is relatively new, its design draws heavily on many years of experience with real-world pipelines. One of the primary inspirations is FlumeJava, which is Google’s internal successor to MapReduce, first introduced in 2009.

+ +

The original FlumeJava API has methods like count and parallelDo on the PCollections. Though slightly more succinct, this approach has many disadvantages with respect to extensibility. Every new user of FlumeJava wanted to add transforms, and adding them as methods to PCollection simply doesn’t scale well. In contrast, a PCollection in Beam has a single apply method which takes any PTransform as an argument.

+ +
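The single-apply design can be sketched in a few lines of plain Java. This is a toy illustration, not the real Beam SDK: the toy collection evaluates eagerly, whereas Beam builds a deferred execution graph, and every name here other than apply is made up.

```java
// A minimal sketch of the single-apply design, NOT the real Beam SDK:
// this toy collection evaluates eagerly, while Beam builds a deferred graph.
import java.util.List;
import java.util.stream.Collectors;

interface PTransform<InputT, OutputT> {
    OutputT expand(InputT input);
}

class MiniPCollection<T> {
    final List<T> elements;
    MiniPCollection(List<T> elements) { this.elements = elements; }

    // The one entry point: any transform, SDK-provided or user-written.
    <OutputT> OutputT apply(PTransform<MiniPCollection<T>, OutputT> transform) {
        return transform.expand(this);
    }
}

// An "external library" transform: defined outside the collection class,
// yet it chains through apply just like a built-in would.
class ToUpperCase implements PTransform<MiniPCollection<String>, MiniPCollection<String>> {
    @Override
    public MiniPCollection<String> expand(MiniPCollection<String> input) {
        return new MiniPCollection<>(input.elements.stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList()));
    }
}
```

Because apply is the only method the collection needs, third-party transforms like ToUpperCase chain just as fluently as anything shipped with the SDK.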

Have you ever wondered why Beam has PTransforms for everything instead of having methods on PCollection? Take a look at the history that led to this (and other) design decisions.

+ + + + + + + + + + +
FlumeJavaBeam
+PCollection<T> input = …
+PCollection<O> output = input.count()
+                             .parallelDo(...);
+    
+PCollection<T> input = …
+PCollection<O> output = input.apply(Count.perElement())
+                             .apply(ParDo.of(...));
+    
+ +

This is a more scalable approach for several reasons.

+ +

Where to draw the line?

+

Adding methods to PCollection forces a line to be drawn between operations that are “useful” enough to merit this special treatment and those that are not. It is easy to make the case for flat map, group by key, and combine per key. But what about filter? Count? Approximate count? Approximate quantiles? Most frequent? WriteToMyFavoriteSource? Going too far down this path leads to a single enormous class that contains nearly everything one could want to do. (FlumeJava’s PCollection class is over 5000 lines long with around 70 distinct operations, and it could have been much larger had we accepted every proposal.) Furthermore, since Java doesn’t allow adding methods to a class, there is a sharp syntactic divide between those operations that are added to PCollection and those that aren’t. A traditional way to share code is with a library of functions, but functions (in traditional languages like Java at least) are written prefix-style, which doesn’t mix well with the fluent builder style (e.g. input.operation1().operation2().operation3() vs. operation3(operation1(input).operation2())).

+ +

Instead, in Beam we’ve chosen a style that places all transforms–whether they be primitive operations, composite operations bundled in the SDK, or part of an external library–on equal footing. This also facilitates alternative implementations (which may even take different options) that are easily interchangeable.

+ + + + + + + + + + +
FlumeJavaBeam
+PCollection<O> output =
+    ExternalLibrary.doStuff(
+        MyLibrary.transform(input, myArgs)
+            .parallelDo(...),
+        externalLibArgs);
+    
+PCollection<O> output = input
+    .apply(MyLibrary.transform(myArgs))
+    .apply(ParDo.of(...))
+    .apply(ExternalLibrary.doStuff(externalLibArgs));
+     
+    
+ +

Configurability

+

It makes for a fluent style to let values (PCollections) be the objects passed around and manipulated (i.e. the handles to the deferred execution graph), but it is the operations themselves that need to be composable, configurable, and extendable. Using PCollection methods for the operations doesn’t scale well here, especially in a language without default or keyword arguments. For example, a ParDo operation can have any number of side inputs and side outputs, or a write operation may have configurations dealing with encoding and compression. One option is to separate these out into multiple overloads or even methods, but that exacerbates the problems above. (FlumeJava evolved over a dozen overloads of the parallelDo method!) Another option is to pass each method a configuration object that can be built up using more fluent idioms like the builder pattern, but at that point one might as well make the configuration object the operation itself, which is what Beam does.

+ +
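The idea of making the configuration object the operation itself can be sketched as follows; the class and option names are illustrative stand-ins, not Beam's actual write API.

```java
// Sketch of a transform that carries its own configuration via builder-style
// methods. Class and option names are illustrative, not Beam's real API.
class TextWriteConfig {
    private String compression = "none";
    private String charset = "UTF-8";

    // Each option is an ordinary method on the operation object, so adding a
    // new option never touches the collection class and never adds an overload.
    TextWriteConfig withCompression(String compression) {
        this.compression = compression;
        return this;
    }

    TextWriteConfig withCharset(String charset) {
        this.charset = charset;
        return this;
    }

    String describe() {
        return "write(compression=" + compression + ", charset=" + charset + ")";
    }
}
```

A pipeline would then pass the fully configured object to apply, e.g. `new TextWriteConfig().withCompression("gzip")`, keeping the fluent style while the options live where they belong: on the operation.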

Type Safety

+

Many operations can only be applied to collections whose elements are of a specific type. For example, the GroupByKey operation should only be applied to PCollection<KV<K, V>>s. In Java at least, it’s not possible to restrict methods based on the element type parameter alone. In FlumeJava, this led us to add a PTable<K, V> subclassing PCollection<KV<K, V>> to contain all the operations specific to PCollections of key-value pairs. This leads to the same question of which element types are special enough to merit being captured by PCollection subclasses. It is not very extensible for third parties and often requires manual downcasts/conversions (which can’t be safely chained in Java) and special operations that produce these PCollection specializations.

+ +

This is particularly inconvenient for transforms that produce outputs whose element types are the same as (or related to) their input’s element types, requiring extra support to generate the right subclasses (e.g. a filter on a PTable should produce another PTable rather than just a raw PCollection of key-value pairs).

+ +

Using PTransforms allows us to sidestep this entire issue. We can place arbitrary constraints on the context in which a transform may be used based on the type of its inputs; for instance GroupByKey is statically typed to only apply to a PCollection<KV<K, V>>. The way this happens is generalizable to arbitrary shapes, without needing to introduce specialized types like PTable.

+ +
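As a rough sketch of how a declared input type can carry this constraint, consider these simplified stand-ins for Beam's KV and GroupByKey. Plain lists replace PCollections here, so this shows the typing idea only, not the real signatures.

```java
// Sketch: the transform's input type enforces the key-value constraint,
// with no need for a PTable-style subclass. Simplified stand-ins only.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class KV<K, V> {
    final K key;
    final V value;
    KV(K key, V value) { this.key = key; this.value = value; }
}

class MiniGroupByKey<K, V> {
    // Only a List<KV<K, V>> is accepted; passing a List<String> is a
    // compile-time error, which is exactly the static safety we want.
    Map<K, List<V>> expand(List<KV<K, V>> input) {
        return input.stream().collect(Collectors.groupingBy(
                kv -> kv.key,
                Collectors.mapping(kv -> kv.value, Collectors.toList())));
    }
}
```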

Reusability and Structure

+

Though PTransforms are generally constructed at the site at which they’re used, by pulling them out as separate objects one is able to store them and pass them around.

+ +

As pipelines grow and evolve, it is useful to structure your pipeline into modular, often reusable components, and PTransforms allow one to do this nicely in a data-processing pipeline. In addition, modular PTransforms expose the logical structure of your code to the system (e.g. for monitoring). Of the three different representations of the WordCount pipeline below, only the structured view captures the high-level intent of the pipeline. Letting even the simple operations be PTransforms means there’s less of an abrupt edge to packaging things up into composite operations.

+ +
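A composite operation of this kind can be sketched with plain Java collections standing in for PCollections. The step names in the comments mirror the WordCount example, but the code is a simplified illustration rather than Beam's actual composite (which would override expand on PTransform).

```java
// Sketch of a reusable composite operation: small steps bundled under one
// name, so a structured view could show "CountWords" as a single node.
// Plain Java collections stand in for PCollections here.
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class CountWords {
    static Map<String, Long> expand(List<String> lines) {
        return lines.stream()
                // ExtractWords step: split each line on whitespace
                .flatMap(line -> Arrays.stream(line.split("\\s+")))
                .filter(word -> !word.isEmpty())
                // Count.perElement step: tally occurrences of each word
                .collect(Collectors.groupingBy(word -> word, Collectors.counting()));
    }
}
```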

Three different visualizations of a simple WordCount pipeline

+ +
+Three different visualizations of a simple WordCount pipeline which computes the number of occurrences of every word in a set of text files. The flat view gives the full DAG of all operations performed. The execution view groups operations according to how they're executed, e.g. after performing runner-specific optimizations like function composition. The structured view nests operations according to their grouping in PTransforms. +
+ +

Summary

+

Although it’s tempting to add methods to PCollections, such an approach is not scalable, extensible, or sufficiently expressive. Putting a single apply method on PCollection and all the logic into the operation itself lets us have the best of both worlds, and avoids hard cliffs of complexity by having a single consistent style across simple and complex pipelines, and between predefined and user-defined operations.

+ +
+ +
+ +
+ + +
+
+
+ +
+
+ +
+ + + + + diff --git a/content/blog/index.html b/content/blog/index.html index d72dfbb0f8ef4..a352f454477d8 100644 --- a/content/blog/index.html +++ b/content/blog/index.html @@ -104,6 +104,22 @@

Apache Beam Blog

This is the blog for the Apache Beam project. This blog contains news and updates for the project.

+ +

May 20, 2016 • Robert Bradshaw +

+ +

Have you ever wondered why Beam has PTransforms for everything instead of having methods on PCollection? Take a look at the history that led to this (and other) design decisions.

+ + + +

+ +Read more  + +

+ +
+

May 18, 2016 • Dan Halperin

diff --git a/content/capability-matrix/index.html b/content/capability-matrix/index.html index 9981074ea6c7d..752e6b8e35475 100644 --- a/content/capability-matrix/index.html +++ b/content/capability-matrix/index.html @@ -95,7 +95,11 @@

Apache Beam Capability Matrix

+<<<<<<< 657701378fc1935a67e5b5a5fe1427997b6c9029

Last updated: 2016-05-27 18:38 CEST

+======= +

Last updated: 2016-05-20 14:42 PDT

+>>>>>>> Regenerate to post PCollection blog post

Apache Beam (incubating) provides a portable API layer for building sophisticated data-parallel processing pipelines that may be executed across a diversity of execution engines, or runners. The core concepts of this layer are based upon the Beam Model (formerly referred to as the Dataflow Model), and implemented to varying degrees in each Beam runner. To help clarify the capabilities of individual runners, we’ve created the capability matrix below.

diff --git a/content/feed.xml b/content/feed.xml index 9a41d569c4a4e..02081173ec749 100644 --- a/content/feed.xml +++ b/content/feed.xml @@ -6,10 +6,102 @@ http://beam.incubator.apache.org/ - Wed, 18 May 2016 20:55:36 -0700 - Wed, 18 May 2016 20:55:36 -0700 + Fri, 20 May 2016 14:42:30 -0700 + Fri, 20 May 2016 14:42:30 -0700 Jekyll v3.1.3 + + Where's my PCollection.map()? + <p>Have you ever wondered why Beam has PTransforms for everything instead of having methods on PCollection? Take a look at the history that led to this (and other) design decisions. +<!--more--> +Though Beam is relatively new, its design draws heavily on many years of experience with real-world pipelines. One of the primary inspirations is <a href="http://research.google.com/pubs/pub35650.html">FlumeJava</a>, which is Google’s internal successor to MapReduce first introduced in 2009.</p> + +<p>The original FlumeJava API has methods like <code class="highlighter-rouge">count</code> and <code class="highlighter-rouge">parallelDo</code> on the PCollections. Though slightly more succinct, this approach has many disadvantages to extensibility. Every new user to FlumeJava wanted to add transforms, and adding them as methods to PCollection simply doesn’t scale well. In contrast, a PCollection in Beam has a single <code class="highlighter-rouge">apply</code> method which takes any PTransform as an argument.</p> + +<p>Have you ever wondered why Beam has PTransforms for everything instead of having methods on PCollection? 
Take a look at the history that led to this (and other) design decisions.</p> + +<table class="table"> + <tr> + <th>FlumeJava</th> + <th>Beam</th> + </tr> + <tr> + <td><pre> +PCollection&lt;T&gt; input = … +PCollection&lt;O&gt; output = input.count() + .parallelDo(...); + </pre></td> + <td><pre> +PCollection&lt;T&gt; input = … +PCollection&lt;O&gt; output = input.apply(Count.perElement()) + .apply(ParDo.of(...)); + </pre></td> + </tr> +</table> + +<p>This is a more scalable approach for several reasons.</p> + +<h2 id="where-to-draw-the-line">Where to draw the line?</h2> +<p>Adding methods to PCollection forces a line to be drawn between operations that are “useful” enough to merit this special treatment and those that are not. It is easy to make the case for flat map, group by key, and combine per key. But what about filter? Count? Approximate count? Approximate quantiles? Most frequent? WriteToMyFavoriteSource? Going too far down this path leads to a single enormous class that contains nearly everything one could want to do. (FlumeJava’s PCollection class is over 5000 lines long with around 70 distinct operations, and it could have been <em>much</em> larger had we accepted every proposal.) Furthermore, since Java doesn’t allow adding methods to a class, there is a sharp syntactic divide between those operations that are added to PCollection and those that aren’t. A traditional way to share code is with a library of functions, but functions (in traditional languages like Java at least) are written prefix-style, which doesn’t mix well with the fluent builder style (e.g. <code class="highlighter-rouge">input.operation1().operation2().operation3()</code> vs. <code class="highlighter-rouge">operation3(operation1(input).operation2())</code>).</p> + +<p>Instead in Beam we’ve chosen a style that places all transforms–whether they be primitive operations, composite operations bundled in the SDK, or part of an external library–on equal footing. 
This also facilitates alternative implementations (which may even take different options) that are easily interchangeable.</p> + +<table class="table"> + <tr> + <th>FlumeJava</th> + <th>Beam</th> + </tr> + <tr> + <td><pre> +PCollection&lt;O&gt; output = + ExternalLibrary.doStuff( + MyLibrary.transform(input, myArgs) + .parallelDo(...), + externalLibArgs); + </pre></td> + <td><pre> +PCollection&lt;O&gt; output = input + .apply(MyLibrary.transform(myArgs)) + .apply(ParDo.of(...)) + .apply(ExternalLibrary.doStuff(externalLibArgs)); + &nbsp; + </pre></td> + </tr> +</table> + +<h2 id="configurability">Configurability</h2> +<p>It makes for a fluent style to let values (PCollections) be the objects passed around and manipulated (i.e. the handles to the deferred execution graph), but it is the operations themselves that need to be composable, configurable, and extendable. Using PCollection methods for the operations doesn’t scale well here, especially in a language without default or keyword arguments. For example, a ParDo operation can have any number of side inputs and side outputs, or a write operation may have configurations dealing with encoding and compression. One option is to separate these out into multiple overloads or even methods, but that exacerbates the problems above. (FlumeJava evolved over a dozen overloads of the <code class="highlighter-rouge">parallelDo</code> method!) Another option is to pass each method a configuration object that can be built up using more fluent idioms like the builder pattern, but at that point one might as well make the configuration object the operation itself, which is what Beam does.</p> + +<h2 id="type-safety">Type Safety</h2> +<p>Many operations can only be applied to collections whose elements are of a specific type. For example, the GroupByKey operation should only be applied to <code class="highlighter-rouge">PCollection&lt;KV&lt;K, V&gt;&gt;</code>s. 
In Java at least, it’s not possible to restrict methods based on the element type parameter alone. In FlumeJava, this led us to add a <code class="highlighter-rouge">PTable&lt;K, V&gt;</code> subclassing <code class="highlighter-rouge">PCollection&lt;KV&lt;K, V&gt;&gt;</code> to contain all the operations specific to PCollections of key-value pairs. This leads to the same question of which element types are special enough to merit being captured by PCollection subclasses. It is not very extensible for third parties and often requires manual downcasts/conversions (which can’t be safely chained in Java) and special operations that produce these PCollection specializations.</p> + +<p>This is particularly inconvenient for transforms that produce outputs whose element types are the same as (or related to) their input’s element types, requiring extra support to generate the right subclasses (e.g. a filter on a PTable should produce another PTable rather than just a raw PCollection of key-value pairs).</p> + +<p>Using PTransforms allows us to sidestep this entire issue. We can place arbitrary constraints on the context in which a transform may be used based on the type of its inputs; for instance GroupByKey is statically typed to only apply to a <code class="highlighter-rouge">PCollection&lt;KV&lt;K, V&gt;&gt;</code>. The way this happens is generalizable to arbitrary shapes, without needing to introduce specialized types like PTable.</p> + +<h2 id="reusability-and-structure">Reusability and Structure</h2> +<p>Though PTransforms are generally constructed at the site at which they’re used, by pulling them out as separate objects one is able to store them and pass them around.</p> + +<p>As pipelines grow and evolve, it is useful to structure your pipeline into modular, often reusable components, and PTransforms allow one to do this nicely in a data-processing pipeline. In addition, modular PTransforms also expose the logical structure of your code to the system (e.g. 
for monitoring). Of the three different representations of the WordCount pipeline below, only the structured view captures the high-level intent of the pipeline. Letting even the simple operations be PTransforms means there’s less of an abrupt edge to packaging things up into composite operations.</p> + +<p><img class="center-block" src="/images/blog/simple-wordcount-pipeline.png" alt="Three different visualizations of a simple WordCount pipeline" width="500" /></p> + +<div class="text-center"> +<i>Three different visualizations of a simple WordCount pipeline which computes the number of occurrences of every word in a set of text files. The flag view gives the full DAG of all operations performed. The execution view groups operations according to how they're executed, e.g. after performing runner-specific optimizations like function composition. The structured view nests operations according to their grouping in PTransforms.</i> +</div> + +<h2 id="summary">Summary</h2> +<p>Although it’s tempting to add methods to PCollections, such an approach is not scalable, extensible, or sufficiently expressive. Putting a single apply method on PCollection and all the logic into the operation itself lets us have the best of both worlds, and avoids hard cliffs of complexity by having a single consistent style across simple and complex pipelines, and between predefined and user-defined operations.</p> + + Fri, 20 May 2016 11:00:00 -0700 + http://beam.incubator.apache.org/blog/2016/05/20/where-is-my-pcollection-dot-map.html + http://beam.incubator.apache.org/blog/2016/05/20/where-is-my-pcollection-dot-map.html + + + blog + + + Dynamic work rebalancing for Beam <p>This morning, Eugene and Malo from the Google Cloud Dataflow team posted <a href="https://cloud.google.com/blog/big-data/2016/05/no-shard-left-behind-dynamic-work-rebalancing-in-google-cloud-dataflow"><em>No shard left behind: dynamic work rebalancing in Google Cloud Dataflow</em></a>. 
This article discusses Cloud Dataflow’s solution to the well-known straggler problem.</p> diff --git a/content/index.html b/content/index.html index f875a64eae050..c7440436d71ed 100644 --- a/content/index.html +++ b/content/index.html @@ -177,6 +177,8 @@

Getting Started with Apache Beam

Blog

+ May 20, 2016 - Where's my PCollection.map()? + May 18, 2016 - Dynamic work rebalancing for Beam Apr 3, 2016 - Apache Beam Presentation Materials From 49ae0d9493fe2f58a2db35c97e781073cc1e15f5 Mon Sep 17 00:00:00 2001 From: James Malone Date: Fri, 27 May 2016 09:58:43 -0700 Subject: [PATCH 2/3] Resolve merge,2 --- content/capability-matrix/index.html | 6 +----- content/feed.xml | 4 ++-- 2 files changed, 3 insertions(+), 7 deletions(-) diff --git a/content/capability-matrix/index.html b/content/capability-matrix/index.html index 752e6b8e35475..2d5f7d37b5206 100644 --- a/content/capability-matrix/index.html +++ b/content/capability-matrix/index.html @@ -95,11 +95,7 @@

Apache Beam Capability Matrix

-<<<<<<< 657701378fc1935a67e5b5a5fe1427997b6c9029 -

Last updated: 2016-05-27 18:38 CEST

-======= -

Last updated: 2016-05-20 14:42 PDT

->>>>>>> Regenerate to post PCollection blog post +

Last updated: 2016-05-27 09:58 PDT

Apache Beam (incubating) provides a portable API layer for building sophisticated data-parallel processing pipelines that may be executed across a diversity of execution engines, or runners. The core concepts of this layer are based upon the Beam Model (formerly referred to as the Dataflow Model), and implemented to varying degrees in each Beam runner. To help clarify the capabilities of individual runners, we’ve created the capability matrix below.

diff --git a/content/feed.xml b/content/feed.xml index 02081173ec749..b85a3e2b1d532 100644 --- a/content/feed.xml +++ b/content/feed.xml @@ -6,8 +6,8 @@ http://beam.incubator.apache.org/ - Fri, 20 May 2016 14:42:30 -0700 - Fri, 20 May 2016 14:42:30 -0700 + Fri, 27 May 2016 09:58:31 -0700 + Fri, 27 May 2016 09:58:31 -0700 Jekyll v3.1.3 From 0a1d6c8c7f97e45dd9866341e99f11f30c30ad21 Mon Sep 17 00:00:00 2001 From: James Malone Date: Fri, 27 May 2016 09:51:26 -0700 Subject: [PATCH 3/3] Update date for blog post --- ...016-05-20-where-is-my-pcollection-dot-map.md | 4 +++- .../where-is-my-pcollection-dot-map.html | 10 ++++++---- content/blog/index.html | 6 +++--- content/capability-matrix/index.html | 4 ++++ content/feed.xml | 17 ++++++++++++----- content/index.html | 2 +- 6 files changed, 29 insertions(+), 14 deletions(-) rename content/blog/2016/05/{20 => 27}/where-is-my-pcollection-dot-map.html (96%) diff --git a/_posts/2016-05-20-where-is-my-pcollection-dot-map.md b/_posts/2016-05-20-where-is-my-pcollection-dot-map.md index 311416449b147..f7ea46811ffef 100644 --- a/_posts/2016-05-20-where-is-my-pcollection-dot-map.md +++ b/_posts/2016-05-20-where-is-my-pcollection-dot-map.md @@ -1,14 +1,16 @@ --- layout: post title: "Where's my PCollection.map()?" -date: 2016-05-20 11:00:00 -0700 +date: 2016-05-27 09:00:00 -0700 excerpt_separator: categories: blog authors: - robertwb --- Have you ever wondered why Beam has PTransforms for everything instead of having methods on PCollection? Take a look at the history that led to this (and other) design decisions. + + Though Beam is relatively new, its design draws heavily on many years of experience with real-world pipelines. One of the primary inspirations is [FlumeJava](http://research.google.com/pubs/pub35650.html), which is Google's internal successor to MapReduce first introduced in 2009. The original FlumeJava API has methods like `count` and `parallelDo` on the PCollections. 
Though slightly more succinct, this approach has many disadvantages to extensibility. Every new user to FlumeJava wanted to add transforms, and adding them as methods to PCollection simply doesn't scale well. In contrast, a PCollection in Beam has a single `apply` method which takes any PTransform as an argument. diff --git a/content/blog/2016/05/20/where-is-my-pcollection-dot-map.html b/content/blog/2016/05/27/where-is-my-pcollection-dot-map.html similarity index 96% rename from content/blog/2016/05/20/where-is-my-pcollection-dot-map.html rename to content/blog/2016/05/27/where-is-my-pcollection-dot-map.html index a45cb9c6af6da..9d69ac3ec9663 100644 --- a/content/blog/2016/05/20/where-is-my-pcollection-dot-map.html +++ b/content/blog/2016/05/27/where-is-my-pcollection-dot-map.html @@ -13,7 +13,7 @@ - +