**Describe the bug**
I created a DataFrame with an `ingestion_date` column and wrote it out with:

```python
df['ingestion_date'] = pd.to_datetime(datetime.datetime.now(datetime.timezone.utc))
wr.s3.to_parquet(
    partition_cols=partition_col,
    df=df,
    path=s3_write_path,
    dataset=True,
    sanitize_columns=False
)
```

The Parquet dataset is created successfully. Then I try to read it back with:

```python
patient_df = wr.s3.read_parquet(full_path, dataset=True)
```

This throws an error mentioning `'+00:00'`.
Printing `data_frame['ingestion_date']` before writing gives the value below:

```
2020-09-08 04:52:32.916075+00:00
```
**To Reproduce**
Steps to reproduce the behavior. Also add details about Python and Wrangler's version and how the library was installed.
*P.S. Don't attach files. Please prefer adding code snippets directly in the message body.*
* supporting wildcard matching
* tweaking type hinting
* removing extra line breaks
* improving return for legibility
* validation shell script changes
* mypy fixes
* further tweaks to type hinting
* Fix bug for read_parquet with offset timezones. #385
Co-authored-by: Nick Miles <nicholasmi@zillowgroup.com>
Co-authored-by: Igor Tavares <igorborgest@gmail.com>