[MINOR][DOCS] fix: some minor typos
### What changes were proposed in this pull request?
Change `the the` to `the`

### Why are the changes needed?
To fix the typos

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?

Closes apache#42188 from ejblanco/docs/spark-typos.

Authored-by: Eric Blanco <ericjoel.blancohermida@telefonica.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
ejblanco authored and srowen committed Jul 27, 2023
1 parent 01191c8 commit 921fb28
Showing 6 changed files with 6 additions and 6 deletions.
@@ -84,7 +84,7 @@ private[connect] class SparkConnectStreamingQueryCache(

/**
* Returns [[StreamingQuery]] if it is cached and session matches the cached query. It ensures
- * the the session associated with it matches the session passed into the call. If the query is
+ * the session associated with it matches the session passed into the call. If the query is
* inactive (i.e. it has a cache expiry time set), this access extends its expiry time. So if a
* client keeps accessing a query, it stays in the cache.
*/

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion dev/connect-jvm-client-mima-check
@@ -52,7 +52,7 @@ echo "finish connect-client-jvm module mima check ..."

RESULT_SIZE=$(wc -l .connect-mima-check-result | awk '{print $1}')

-# The the file has no content if check passed.
+# The file has no content if check passed.
if [[ $RESULT_SIZE -eq "0" ]]; then
ERRORS=""
else
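The check above relies on a simple convention: the result file is empty exactly when the MiMa check passed, so a zero line count means success. A minimal, self-contained sketch of that pattern follows; the temp file and the `STATUS` variable are illustrative stand-ins for `.connect-mima-check-result` and the script's `ERRORS` handling, not the actual script.

```shell
# Sketch of the empty-file-means-success pattern (illustrative names).
RESULT_FILE=$(mktemp)      # stands in for .connect-mima-check-result
: > "$RESULT_FILE"         # a passing check writes nothing to the file

# Count lines; redirecting avoids the filename in wc's output,
# so no awk post-processing is needed.
RESULT_SIZE=$(wc -l < "$RESULT_FILE")

if [[ $RESULT_SIZE -eq 0 ]]; then
  STATUS="check passed"
else
  STATUS="check failed"
fi
echo "$STATUS"
rm -f "$RESULT_FILE"
```

Note the design choice in the real script: `wc -l FILE | awk '{print $1}'` strips the filename that `wc` appends when given a path argument; reading from stdin, as above, sidesteps that entirely.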
@@ -2623,7 +2623,7 @@ class Analyzer(override val catalogManager: CatalogManager) extends RuleExecutor
withOrigin(t.origin)(t.copy(hasTried = true))
} else {
// This is a nested column, we still have a chance to match grouping expressions with
-          // the the top-levle column. Here we wrap the underlying `Attribute` with
+          // the top-level column. Here we wrap the underlying `Attribute` with
// `TempResolvedColumn` and try again.
val childWithTempCol = t.child.transformUp {
case a: Attribute => TempResolvedColumn(a, Seq(a.name))
@@ -24,7 +24,7 @@ import org.apache.spark.sql.types._
@ExpressionDescription(
usage = """
_FUNC_(window_column) - Extract the time value from time/session window column which can be used for event time value of window.
-        The extracted time is (window.end - 1) which reflects the fact that the the aggregating
+        The extracted time is (window.end - 1) which reflects the fact that the aggregating
windows have exclusive upper bound - [start, end)
See <a href="https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#window-operations-on-event-time">'Window Operations on Event Time'</a> in Structured Streaming guide doc for detailed explanation and examples.
""",
@@ -4166,7 +4166,7 @@ object SQLConf {
val LEGACY_EMPTY_CURRENT_DB_IN_CLI =
buildConf("spark.sql.legacy.emptyCurrentDBInCli")
.internal()
-      .doc("When false, spark-sql CLI prints the the current database in prompt")
+      .doc("When false, spark-sql CLI prints the current database in prompt.")
.version("3.4.0")
.booleanConf
.createWithDefault(false)
