[SPARK-48825][DOCS] Unify the 'See Also' section formatting across PySpark docstrings

### What changes were proposed in this pull request?

This PR unifies the 'See Also' section formatting across PySpark docstrings and fixes some invalid references.
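As a hedged illustration (the method names mirror the diff below, but the bodies and one-line summaries here are a sketch, not the PR's exact code), the unified numpydoc layout uses bare `DataFrame.method : one-line summary` entries for same-class references, and fully qualified `:meth:` roles for references to `pyspark.sql.functions`:

```python
# Sketch of the unified "See Also" layout; summaries are illustrative.

class DataFrame:
    def distinct(self) -> "DataFrame":
        """Return a new DataFrame with duplicate rows removed.

        See Also
        --------
        DataFrame.dropDuplicates : Remove duplicate rows from this DataFrame.
        """
        ...

def element_at(col, extraction):
    """Return the array element or map value at the given index or key.

    See Also
    --------
    :meth:`pyspark.sql.functions.get`
    :meth:`pyspark.sql.functions.try_element_at`
    """
    ...
```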

### Why are the changes needed?

To improve PySpark documentation.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

doctest
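For reference, a minimal sketch of one way to exercise these doctests locally, assuming a Spark dev checkout with `pyspark` importable. PySpark modules conventionally ship a private `_test()` helper that builds a SparkSession, injects it into the doctest globals, and runs `doctest.testmod`; CI runs the same doctests via `python/run-tests`:

```python
# Assumption: executed from a Spark checkout with pyspark on PYTHONPATH.
# _test() is a private, long-standing PySpark convention; it exits
# non-zero if any doctest in the module fails.
import pyspark.sql.dataframe

pyspark.sql.dataframe._test()
```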

### Was this patch authored or co-authored using generative AI tooling?

No

Closes apache#47240 from allisonwang-db/spark-48825-also-see-docs.

Authored-by: allisonwang-db <allison.wang@databricks.com>
Signed-off-by: Ruifeng Zheng <ruifengz@apache.org>
allisonwang-db authored and zhengruifeng committed Jul 8, 2024
1 parent 30055f7 commit cd1d687
Showing 2 changed files with 21 additions and 20 deletions.
17 changes: 9 additions & 8 deletions python/pyspark/sql/dataframe.py
@@ -1887,7 +1887,7 @@ def distinct(self) -> "DataFrame":
See Also
--------
-DataFrame.dropDuplicates
+DataFrame.dropDuplicates : Remove duplicate rows from this DataFrame.
Examples
--------
@@ -2951,7 +2951,7 @@ def describe(self, *cols: Union[str, List[str]]) -> "DataFrame":
See Also
--------
-DataFrame.summary
+DataFrame.summary : Computes summary statistics for numeric and string columns.
"""
...

@@ -3022,7 +3022,7 @@ def summary(self, *statistics: str) -> "DataFrame":
See Also
--------
-DataFrame.display
+DataFrame.describe : Computes basic statistics for numeric and string columns.
"""
...

@@ -3790,7 +3790,7 @@ def groupingSets(
self, groupingSets: Sequence[Sequence["ColumnOrName"]], *cols: "ColumnOrName"
) -> "GroupedData":
"""
-Create multi-dimensional aggregation for the current `class`:DataFrame using the specified
+Create multi-dimensional aggregation for the current :class:`DataFrame` using the specified
grouping sets, so we can run aggregation on them.
.. versionadded:: 4.0.0
@@ -3873,7 +3873,7 @@ def groupingSets(
See Also
--------
-GroupedData
+DataFrame.rollup : Compute hierarchical summaries at multiple levels.
"""
...

@@ -5420,7 +5420,7 @@ def withColumnRenamed(self, existing: str, new: str) -> "DataFrame":
See Also
--------
-:meth:`withColumnsRenamed`
+DataFrame.withColumnsRenamed
Examples
--------
@@ -5480,7 +5480,7 @@ def withColumnsRenamed(self, colsMap: Dict[str, str]) -> "DataFrame":
See Also
--------
-:meth:`withColumnRenamed`
+DataFrame.withColumnRenamed
Examples
--------
@@ -6183,6 +6183,7 @@ def mapInPandas(
See Also
--------
pyspark.sql.functions.pandas_udf
+DataFrame.mapInArrow
"""
...

@@ -6259,7 +6260,7 @@ def mapInArrow(
See Also
--------
pyspark.sql.functions.pandas_udf
-pyspark.sql.DataFrame.mapInPandas
+DataFrame.mapInPandas
"""
...

24 changes: 12 additions & 12 deletions python/pyspark/sql/functions/builtin.py
@@ -14040,8 +14040,8 @@ def element_at(col: "ColumnOrName", extraction: Any) -> Column:

See Also
--------
-:meth:`get`
-:meth:`try_element_at`
+:meth:`pyspark.sql.functions.get`
+:meth:`pyspark.sql.functions.try_element_at`

Examples
--------
@@ -14131,8 +14131,8 @@ def try_element_at(col: "ColumnOrName", extraction: "ColumnOrName") -> Column:

See Also
--------
-:meth:`get`
-:meth:`element_at`
+:meth:`pyspark.sql.functions.get`
+:meth:`pyspark.sql.functions.element_at`

Examples
--------
@@ -14233,7 +14233,7 @@ def get(col: "ColumnOrName", index: Union["ColumnOrName", int]) -> Column:

See Also
--------
-:meth:`element_at`
+:meth:`pyspark.sql.functions.element_at`

Examples
--------
@@ -15153,9 +15153,9 @@ def explode(col: "ColumnOrName") -> Column:

See Also
--------
-:meth:`pyspark.functions.posexplode`
-:meth:`pyspark.functions.explode_outer`
-:meth:`pyspark.functions.posexplode_outer`
+:meth:`pyspark.sql.functions.posexplode`
+:meth:`pyspark.sql.functions.explode_outer`
+:meth:`pyspark.sql.functions.posexplode_outer`

Notes
-----
@@ -15342,8 +15342,8 @@ def inline(col: "ColumnOrName") -> Column:

See Also
--------
-:meth:`pyspark.functions.explode`
-:meth:`pyspark.functions.inline_outer`
+:meth:`pyspark.sql.functions.explode`
+:meth:`pyspark.sql.functions.inline_outer`

Examples
--------
@@ -15570,8 +15570,8 @@ def inline_outer(col: "ColumnOrName") -> Column:

See Also
--------
-:meth:`explode_outer`
-:meth:`inline`
+:meth:`pyspark.sql.functions.explode_outer`
+:meth:`pyspark.sql.functions.inline`

Notes
-----
