[chore] Added tests to verify linter not being stuck in the infinite loop (#3225)

The bug was fixed in v0.46.0:

- #3000
- #3027

See:
- #2976

`default-format-changed-in-dbr8` and `sql-parse-error` are ignored for
LSP plugin output.
nfx authored Nov 8, 2024
1 parent 95c8eae commit ddf35bd
Showing 1 changed file with 14 additions and 0 deletions.
tests/unit/source_code/samples/functional/es-1285042.py: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
import pyspark.sql.functions as F

# ucx[default-format-changed-in-dbr8:+1:17:+1:41] The default format changed in Databricks Runtime 8.0, from Parquet to Delta
churn_features = spark.table("something")
churn_features = (churn_features.withColumn("random", F.rand(seed=42)).withColumn("split",F.when(F.col("random") < train_ratio, "train").when(F.col("random") < train_ratio + val_ratio, "validate").otherwise("test")).drop("random"))

# ucx[default-format-changed-in-dbr8:+1:1:+1:109] The default format changed in Databricks Runtime 8.0, from Parquet to Delta
(churn_features.write.mode("overwrite").option("overwriteSchema", "true").saveAsTable("mlops_churn_training"))

# ucx[default-format-changed-in-dbr8:+1:21:+1:74] The default format changed in Databricks Runtime 8.0, from Parquet to Delta
sdf_system_columns = spark.read.table("system.information_schema.columns")

# ucx[sql-parse-error:+1:14:+1:140] SQL expression is not supported yet: SELECT 1 AS col1, 2 AS col2, 3 AS col3 FROM {sdf_system_columns} LIMIT 5
sdf_example = spark.sql("SELECT 1 AS col1, 2 AS col2, 3 AS col3 FROM {sdf_system_columns} LIMIT 5", sdf_system_columns = sdf_system_columns)
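Each `# ucx[...]` comment in the sample encodes the advisory the linter is expected to emit for the line that follows: an advice code, start and end positions given as line offsets (relative to the comment) and column numbers, and the expected message. As a minimal sketch, assuming the layout `# ucx[code:+start_line:start_col:+end_line:end_col] message`, such annotations could be decoded as follows; the `ExpectedAdvice` dataclass and regex below are illustrative helpers, not part of ucx:

```python
import re
from dataclasses import dataclass


@dataclass
class ExpectedAdvice:
    """One expected advisory, decoded from a `# ucx[...]` sample comment."""

    code: str
    start_line: int  # line offset relative to the comment, e.g. +1 means the next line
    start_col: int
    end_line: int
    end_col: int
    message: str


# Assumed layout: "# ucx[code:+L:C:+L:C] message" (illustrative, not the ucx implementation).
_ANNOTATION = re.compile(
    r"#\s*ucx\[(?P<code>[\w-]+)"
    r":(?P<start_line>[+-]?\d+):(?P<start_col>\d+)"
    r":(?P<end_line>[+-]?\d+):(?P<end_col>\d+)\]"
    r"\s*(?P<message>.*)"
)


def parse_expected_advice(comment: str) -> ExpectedAdvice | None:
    """Return the expected advisory encoded in a sample comment, or None if absent."""
    match = _ANNOTATION.search(comment)
    if match is None:
        return None
    return ExpectedAdvice(
        code=match["code"],
        start_line=int(match["start_line"]),
        start_col=int(match["start_col"]),
        end_line=int(match["end_line"]),
        end_col=int(match["end_col"]),
        message=match["message"],
    )


# The first annotation in es-1285042.py decodes to code "default-format-changed-in-dbr8",
# spanning columns 17..41 of the next line (+1).
print(parse_expected_advice(
    "# ucx[default-format-changed-in-dbr8:+1:17:+1:41] "
    "The default format changed in Databricks Runtime 8.0, from Parquet to Delta"
))
```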
