apache#5654 updated for all the methods
phatak-dev committed Apr 24, 2015
1 parent 2c997c5 commit 386f407
Showing 1 changed file with 15 additions and 0 deletions.
sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala
@@ -1104,6 +1104,9 @@ class DataFrame private[sql](
* there is no notion of a persisted catalog in a standard SQL context. Instead you can write
* an RDD out to a parquet file, and then register that file as a table. This "table" can then
* be the target of an `insertInto`.
+ *
+ * Also note that while this function can persist the table metadata into Hive's metastore,
+ * the table will NOT be accessible from Hive.
* @group output
*/
@Experimental
@@ -1128,6 +1131,9 @@ class DataFrame private[sql](
* there is no notion of a persisted catalog in a standard SQL context. Instead you can write
* an RDD out to a parquet file, and then register that file as a table. This "table" can then
* be the target of an `insertInto`.
+ *
+ * Also note that while this function can persist the table metadata into Hive's metastore,
+ * the table will NOT be accessible from Hive.
* @group output
*/
@Experimental
@@ -1144,6 +1150,9 @@ class DataFrame private[sql](
* there is no notion of a persisted catalog in a standard SQL context. Instead you can write
* an RDD out to a parquet file, and then register that file as a table. This "table" can then
* be the target of an `insertInto`.
+ *
+ * Also note that while this function can persist the table metadata into Hive's metastore,
+ * the table will NOT be accessible from Hive.
* @group output
*/
@Experimental
@@ -1160,6 +1169,9 @@ class DataFrame private[sql](
* there is no notion of a persisted catalog in a standard SQL context. Instead you can write
* an RDD out to a parquet file, and then register that file as a table. This "table" can then
* be the target of an `insertInto`.
+ *
+ * Also note that while this function can persist the table metadata into Hive's metastore,
+ * the table will NOT be accessible from Hive.
* @group output
*/
@Experimental
@@ -1181,6 +1193,9 @@ class DataFrame private[sql](
* there is no notion of a persisted catalog in a standard SQL context. Instead you can write
* an RDD out to a parquet file, and then register that file as a table. This "table" can then
* be the target of an `insertInto`.
+ *
+ * Also note that while this function can persist the table metadata into Hive's metastore,
+ * the table will NOT be accessible from Hive.
* @group output
*/
@Experimental
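The workflow these doc comments describe (write a DataFrame out as a Parquet file, register that file as a table, then make it the target of an `insertInto`) can be sketched as follows. This is a minimal, illustrative sketch against the Spark 1.3-era API of this file; the SparkContext setup, the Person case class, the file path, and the table names are assumptions for the example, not part of the commit.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object InsertIntoSketch {
  // Simple schema so the DataFrames can be built from local collections.
  case class Person(name: String, age: Int)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("insertInto-sketch").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val people = Seq(Person("Alice", 29), Person("Bob", 35)).toDF()
    val morePeople = Seq(Person("Carol", 41)).toDF()

    // Write one DataFrame out as a Parquet file...
    people.saveAsParquetFile("/tmp/people.parquet")

    // ...register that file as a table (there is no persisted catalog in a standard SQLContext)...
    sqlContext.parquetFile("/tmp/people.parquet").registerTempTable("people")

    // ...and use the registered "table" as the target of an insertInto.
    morePeople.insertInto("people")

    sc.stop()
  }
}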
