We deploy Trino on GKE using the Helm chart and would like to use Workload Identity for GCS authentication rather than mounting a JSON credential file or passing an access token with each query. We have tried deploying Trino with Workload Identity and no explicit GCS configuration, and it works perfectly for read operations.
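For context, the Delta Lake catalog we deploy is essentially just the connector and metastore settings, with the GCS credential properties deliberately left unset (values below are illustrative, and the property names are from my reading of the docs):

connector.name=delta-lake
hive.metastore.uri=thrift://hive-metastore:9083
# intentionally no hive.gcs.json-key-file-path or hive.gcs.use-access-token;
# we rely on GKE Workload Identity for bucket access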
However, when we attempt a write or delete operation, the Parquet files are successfully created by the workers, but the query then fails while writing to the transaction log. We get the same error when trying to add a comment:
io.trino.spi.TrinoException: Unable to add '<column>' column comment for: <table>
at io.trino.plugin.deltalake.DeltaLakeMetadata.setColumnComment(DeltaLakeMetadata.java:1240)
at io.trino.plugin.base.classloader.ClassLoaderSafeConnectorMetadata.setColumnComment(ClassLoaderSafeConnectorMetadata.java:479)
at io.trino.tracing.TracingConnectorMetadata.setColumnComment(TracingConnectorMetadata.java:433)
at io.trino.metadata.MetadataManager.setColumnComment(MetadataManager.java:745)
at io.trino.tracing.TracingMetadata.setColumnComment(TracingMetadata.java:430)
at io.trino.execution.CommentTask.commentOnColumn(CommentTask.java:171)
at io.trino.execution.CommentTask.execute(CommentTask.java:79)
at io.trino.execution.CommentTask.execute(CommentTask.java:44)
at io.trino.execution.DataDefinitionExecution.start(DataDefinitionExecution.java:146)
at io.trino.execution.SqlQueryManager.createQuery(SqlQueryManager.java:257)
at io.trino.dispatcher.LocalDispatchQuery.startExecution(LocalDispatchQuery.java:145)
at io.trino.dispatcher.LocalDispatchQuery.lambda$waitForMinimumWorkers$2(LocalDispatchQuery.java:129)
at io.airlift.concurrent.MoreFutures.lambda$addSuccessCallback$12(MoreFutures.java:569)
at io.airlift.concurrent.MoreFutures$3.onSuccess(MoreFutures.java:544)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1133)
at io.trino.$gen.Trino_420____20230708_065838_2.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: io.trino.spi.TrinoException: GCS credentials not configured
at io.trino.plugin.deltalake.GcsStorageFactory.create(GcsStorageFactory.java:103)
at io.trino.plugin.deltalake.transactionlog.writer.GcsTransactionLogSynchronizer.write(GcsTransactionLogSynchronizer.java:50)
at io.trino.plugin.deltalake.transactionlog.writer.TransactionLogWriter.flush(TransactionLogWriter.java:110)
at io.trino.plugin.deltalake.DeltaLakeMetadata.setColumnComment(DeltaLakeMetadata.java:1237)
... 18 more
Caused by: java.lang.IllegalStateException: GCS credentials not configured
at io.trino.plugin.deltalake.GcsStorageFactory.lambda$create$0(GcsStorageFactory.java:96)
at java.base/java.util.Optional.orElseThrow(Optional.java:403)
at io.trino.plugin.deltalake.GcsStorageFactory.create(GcsStorageFactory.java:96)
... 21 more
The storage connector seems to throw this error explicitly whenever neither of the credential options is configured, even though the pods do have write access to the bucket via Workload Identity.
Would it be possible to either attempt the operation anyway when neither an access token nor a credential JSON is provided, or, if that default behavior is not desirable, to add a configuration option for using Workload Identity?
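For what it's worth, here is a minimal sketch of the kind of fallback we had in mind, using the google-auth-library and google-cloud-storage APIs. This is our own illustration, not the plugin's actual GcsStorageFactory code; class and method names other than GoogleCredentials.getApplicationDefault() and StorageOptions are hypothetical:

import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

import java.io.IOException;
import java.util.Optional;

public final class GcsClientWithAdcFallback
{
    private GcsClientWithAdcFallback() {}

    public static Storage create(Optional<String> accessToken, Optional<String> jsonKey)
            throws IOException
    {
        if (accessToken.isPresent() || jsonKey.isPresent()) {
            // the existing token / JSON key handling would stay as-is here
            throw new UnsupportedOperationException("explicit credentials not shown in this sketch");
        }
        // Fall back to application default credentials, which is what
        // GKE Workload Identity exposes to the pod.
        GoogleCredentials credentials = GoogleCredentials.getApplicationDefault();
        return StorageOptions.newBuilder()
                .setCredentials(credentials)
                .build()
                .getService();
    }
}

Whether this should be the default when no credentials are configured, or opt-in behind a new catalog property, is of course up to the maintainers.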