SHOW CREATE TABLE is unsupported #1032

Open
yifeng-chen opened this issue Mar 25, 2022 · 3 comments · May be fixed by #1255
Labels: enhancement (New feature or request)
yifeng-chen commented Mar 25, 2022

Hi Delta team, I'm trying to run SHOW CREATE TABLE in my local development environment and it fails with an error, while the same statement works fine in a Databricks environment.

I'm wondering: is the SHOW CREATE TABLE feature proprietary to Databricks?

If so, is there any possible workaround for the OSS version to support the SHOW CREATE TABLE statement?

Thanks.
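
One possible stopgap, pending native support, is to reassemble an approximate CREATE TABLE statement from catalog metadata instead of calling SHOW CREATE TABLE. The sketch below is illustrative only: `default.events` is a placeholder table name, and it uses only public APIs (`spark.catalog.listColumns` and Delta's `DESCRIBE DETAIL`); the output may not match what SHOW CREATE TABLE would produce, and behavior can vary by Spark/Delta version.

```scala
// Sketch of a workaround: approximate SHOW CREATE TABLE for a Delta table
// by reassembling DDL from catalog metadata.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("show-create-table-workaround")
  .getOrCreate()

val table = "default.events" // placeholder table name

// Column names and types from the catalog.
val cols = spark.catalog
  .listColumns(table)
  .collect()
  .map(c => s"  `${c.name}` ${c.dataType}")
  .mkString(",\n")

// DESCRIBE DETAIL returns Delta-specific metadata (location, partitioning, ...).
val detail = spark.sql(s"DESCRIBE DETAIL $table").head()
val location = detail.getAs[String]("location")
val partCols = detail.getAs[Seq[String]]("partitionColumns")

val partClause =
  if (partCols.isEmpty) ""
  else partCols.map(c => s"`$c`").mkString("PARTITIONED BY (", ", ", ")\n")

println(
  s"""CREATE TABLE $table (
     |$cols
     |) USING delta
     |$partClause LOCATION '$location'""".stripMargin)
```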

@zsxwing zsxwing added the enhancement New feature or request label Mar 25, 2022

zsxwing commented Mar 29, 2022

Thanks for raising this issue. This is an oversight in Delta, and we will fix it. In the meantime, if you have free time to work on this, feel free to open a PR.

findinpath commented

Stack trace of the issue, which I'm running into as well:

22/05/11 14:00:08 ERROR SparkExecuteStatementOperation: Error executing query with 98f04258-59d4-43cd-b4a2-373a6684f3d9, currentState RUNNING, 
spark               | org.apache.spark.sql.AnalysisException: SHOW CREATE TABLE is not supported for v2 tables.
spark               |   at org.apache.spark.sql.execution.datasources.v2.DataSourceV2Strategy.apply(DataSourceV2Strategy.scala:350)
spark               |   at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$1(QueryPlanner.scala:63)
spark               |   at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
spark               |   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
spark               |   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:489)
spark               |   at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
spark               |   at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:67)
spark               |   at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78)
spark               |   at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:162)
spark               |   at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:162)
spark               |   at scala.collection.Iterator.foreach(Iterator.scala:941)
spark               |   at scala.collection.Iterator.foreach$(Iterator.scala:941)
spark               |   at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
spark               |   at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:162)
spark               |   at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:160)
spark               |   at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1429)
spark               |   at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75)
spark               |   at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
spark               |   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
spark               |   at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
spark               |   at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:67)
spark               |   at org.apache.spark.sql.execution.QueryExecution$.createSparkPlan(QueryExecution.scala:391)
spark               |   at org.apache.spark.sql.execution.QueryExecution.$anonfun$sparkPlan$1(QueryExecution.scala:104)
spark               |   at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
spark               |   at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143)
spark               |   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
spark               |   at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:143)
spark               |   at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:104)
spark               |   at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:97)
spark               |   at org.apache.spark.sql.execution.QueryExecution.$anonfun$executedPlan$1(QueryExecution.scala:117)
spark               |   at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
spark               |   at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143)
spark               |   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
spark               |   at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:143)
spark               |   at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:117)
spark               |   at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:110)
spark               |   at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:101)
spark               |   at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
spark               |   at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
spark               |   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
spark               |   at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
spark               |   at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685)
spark               |   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:228)
spark               |   at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
spark               |   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
spark               |   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
spark               |   at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:618)
spark               |   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
spark               |   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613)
spark               |   at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:325)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:263)
spark               |   at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:78)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:62)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:43)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:263)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:258)
spark               |   at java.base/java.security.AccessController.doPrivileged(Native Method)
spark               |   at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
spark               |   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:272)
spark               |   at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
spark               |   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
spark               |   at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
spark               |   at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
spark               |   at java.base/java.lang.Thread.run(Thread.java:829)
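
The stack trace shows the failure originates in Spark's DataSourceV2Strategy, which rejects SHOW CREATE TABLE for any v2 table; since every Delta table is a v2 table, the statement fails regardless of how the table was created. A minimal local reproduction, assuming a standard OSS Spark + Delta setup with delta-core on the classpath (`repro_t` is a placeholder table name):

```scala
// Minimal reproduction: any Delta table is a v2 table, so SHOW CREATE TABLE
// hits the "not supported for v2 tables" branch in DataSourceV2Strategy.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("repro-1032")
  .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
  .config("spark.sql.catalog.spark_catalog",
    "org.apache.spark.sql.delta.catalog.DeltaCatalog")
  .getOrCreate()

spark.sql("CREATE TABLE repro_t (id INT) USING delta")
// Throws org.apache.spark.sql.AnalysisException:
// SHOW CREATE TABLE is not supported for v2 tables.
spark.sql("SHOW CREATE TABLE repro_t").show()
```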


zpappa commented Jul 5, 2022

I'll take this

zpappa added a commit to zpappa/delta that referenced this issue Jul 6, 2022
zpappa added a commit to zpappa/delta that referenced this issue Jul 6, 2022
…SHOW CREATE TABLE, implementation of command and tests. Added missing property constants to DeltaTableV2, consistent with CatalogTable.
zpappa added a commit to zpappa/delta that referenced this issue Jul 6, 2022
zpappa linked a pull request Jul 6, 2022 that will close this issue
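
For context, here is a rough, hypothetical sketch of the shape such a command could take: assembling DDL text from a v2 table's schema and properties map. This is illustrative only, not the code from the linked PR; `showCreateTable` is an invented helper built on Spark's public connector API.

```scala
// Hypothetical sketch only — not the implementation from the linked PR.
import scala.collection.JavaConverters._
import org.apache.spark.sql.connector.catalog.Table

def showCreateTable(name: String, table: Table): String = {
  // Column list from the v2 table's schema.
  val cols = table.schema().fields
    .map(f => s"  `${f.name}` ${f.dataType.sql}")
    .mkString(",\n")

  // v2 tables expose their metadata as a Java properties map; the property
  // constants the commit message mentions (added to DeltaTableV2) would key
  // into a map like this one.
  val props = table.properties().asScala.toSeq.sortBy(_._1)
    .map { case (k, v) => s"  '$k' = '$v'" }
    .mkString(",\n")

  s"""CREATE TABLE $name (
     |$cols
     |) USING delta
     |TBLPROPERTIES (
     |$props
     |)""".stripMargin
}
```

A real implementation would also need to fold in partitioning and Delta-specific properties consistently with CatalogTable, which is what the commit message above describes.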