Waiter ObjectExists failed: Max attempts exceeded #299
Hi @anand086! Some questions: is this Lambda running inside a VPC, or is there no VPC involved? Thanks!
Hi @igorborgest. "Is this Lambda running inside a VPC? Or is there no VPC involved?" No VPC is involved. I am trying to execute an "ALTER TABLE ... SET LOCATION" command using athena.read_sql_query, which doesn't seem to be the right way of modifying the location. I couldn't find an API in the Glue catalog for setting the location, something similar to awswrangler.catalog.add_parquet_partitions.
The problem here is that wr.athena.read_sql_query() was designed to run your query and then READ the result as a pandas DataFrame. In your case you are running a query that has no output. For this situation I recommend this approach:

import awswrangler as wr

query_id = wr.athena.start_query_execution("ALTER TABLE ...", database="...")
# Optional, only if you want to wait until the query is done.
wr.athena.wait_query(query_id)

Ref: start_query_execution | wait_query

I will tag this issue as an enhancement. @anand086, please let me know how it goes.
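Putting the suggestion above together, a minimal sketch of the fix; the table name, S3 path, and the repoint_table helper are placeholders, not part of the awswrangler API:

```python
def build_alter_location_sql(table: str, location: str) -> str:
    """Return the Athena DDL that repoints a table at a new S3 prefix."""
    return f"ALTER TABLE {table} SET LOCATION '{location}'"


def repoint_table(table: str, location: str, database: str = "default") -> str:
    """Submit the DDL via Athena and wait for completion; return the query id."""
    # Imported here so the pure SQL builder above stays dependency-free.
    import awswrangler as wr

    # start_query_execution submits the statement without trying to fetch a
    # result set, unlike read_sql_query, which expects output to load into
    # a DataFrame.
    query_id = wr.athena.start_query_execution(
        build_alter_location_sql(table, location), database=database
    )
    wr.athena.wait_query(query_id)
    return query_id
```

For example, repoint_table("my_table", "s3://my-bucket/new/prefix/") would run ALTER TABLE my_table SET LOCATION 's3://my-bucket/new/prefix/' and block until Athena reports completion.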
Thank you, it works well. Appreciate your help. Also, I wanted to flag a small documentation error on https://tinyurl.com/y83cv9bm: the input parameter is table, but the example uses name, i.e. wr.catalog.get_table_location(database='default', name='my_table') instead of wr.catalog.get_table_location(database='default', table='my_table').
Enhancement done. Release expected for version 1.7.0.
@anand086, feel free to test our dev branch before the official release.
Released in 1.7.0!
Hi,
Thank you for this package.
I am using awswrangler in Lambda to run a SQL query, get the result as a pandas DataFrame, write it to S3 as Parquet, and alter the table location. The Lambda function shows the execution result as failed, even though the location is updated in the Glue catalog. I have attached the Lambda script.
lambda_sql.py.zip
Error --
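The attached script isn't reproduced here, but the flow described (query → DataFrame → Parquet on S3 → relocate the table) can be sketched roughly as follows; the bucket, table, database, query text, and the parquet_location helper are all placeholders, not the actual attachment:

```python
def parquet_location(bucket: str, prefix: str) -> str:
    """Build the S3 prefix used both for the Parquet output and the new table location."""
    return f"s3://{bucket}/{prefix.strip('/')}/"


def lambda_handler(event, context):
    """Run a query, write the result as Parquet, then repoint the table at it."""
    # Imported inside the handler so the module loads without AWS dependencies.
    import awswrangler as wr

    # 1. Run the query and load the result into a pandas DataFrame.
    df = wr.athena.read_sql_query("SELECT ...", database="default")  # placeholder query

    # 2. Write the DataFrame to S3 as a Parquet dataset.
    path = parquet_location("my-bucket", "curated/my_table")
    wr.s3.to_parquet(df=df, path=path, dataset=True)

    # 3. Repoint the table with a DDL statement; use start_query_execution,
    #    not read_sql_query, because ALTER TABLE produces no result set.
    query_id = wr.athena.start_query_execution(
        f"ALTER TABLE my_table SET LOCATION '{path}'", database="default"
    )
    wr.athena.wait_query(query_id)
    return {"statusCode": 200}
```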