[SPARK-7084] improve saveAsTable documentation
Author: madhukar <phatak.dev@gmail.com>

Closes apache#5654 from phatak-dev/master and squashes the following commits:

386f407 [madhukar] apache#5654 updated for all the methods
2c997c5 [madhukar] Merge branch 'master' of https://github.com/apache/spark
00bc819 [madhukar] Merge branch 'master' of https://github.com/apache/spark
2a802c6 [madhukar] apache#5654 updated the doc according to comments
866e8df [madhukar] [SPARK-7084] improve saveAsTable documentation
phatak-dev authored and rxin committed May 12, 2015
1 parent 4f4dbb0 commit 57255dc
Showing 1 changed file with 18 additions and 0 deletions.
sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala
@@ -1192,6 +1192,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1208,6 +1211,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1232,6 +1238,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1248,6 +1257,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1264,6 +1276,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1285,6 +1300,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
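
For context, the Scaladoc above documents the saveAsTable overloads on DataFrame. Below is a minimal usage sketch, not part of the patch, assuming Spark 1.x with the spark-hive module on the classpath; the object name, column names, and table names are illustrative.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.hive.HiveContext

object SaveAsTableExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SaveAsTableExample"))
    // saveAsTable needs a Hive-backed context; a standard SQL context has no persisted catalog.
    val sqlContext = new HiveContext(sc)
    import sqlContext.implicits._

    val df = sc.parallelize(Seq(("Alice", 30), ("Bob", 25))).toDF("name", "age")

    // Persists the table metadata into Hive's metastore. As the updated Scaladoc
    // warns, the resulting table will NOT be accessible from Hive itself.
    df.saveAsTable("people")

    // Overload that also specifies the data source and the save mode.
    df.saveAsTable("people_parquet", "parquet", SaveMode.Overwrite)

    sc.stop()
  }
}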
