
Not able to access Azure Delta Lake #600

Closed
ganesh-gawande opened this issue May 9, 2022 Discussed in #599 · 58 comments · Fixed by #643

Comments

@ganesh-gawande

Discussed in #599

Originally posted by ganesh-gawande May 9, 2022
Hi,

I am following the documentation at https://github.com/delta-io/delta-rs/blob/main/docs/ADLSGen2-HOWTO.md
I have tried many variations of the path, but I am not able to access the Delta Lake.

The following error is received: Not a Delta table: No snapshot or version 0 found OR Invalid object URI

Here are the paths I have tried in my code, but none of them work.

delta = DeltaTable("adls2://{ContainerName}@{StorageAccountName}.dfs.core.windows.net")
delta = DeltaTable("adls2://{StorageAccountName}/{ContainerName}/{Folder1}/{Folder2}/{FileName}.parquet")
delta = DeltaTable("adls2://{StorageAccountName}/{ContainerName}/{DeltaTableNameFromDatabricks}")
delta = DeltaTable("adls2://{StorageAccountName}/{ContainerName}/")
delta = DeltaTable("adls2://{ContainerName}@{StorageAccountName}.dfs.core.windows.net/{ContainerName}/{DeltaTableNameFromDatabricks}")

delta = DeltaTable("abfss://{ContainerName}@{StorageAccountName}.dfs.core.windows.net/{ContainerName}/{DeltaTableNameFromDatabricks}")
delta = DeltaTable("abfss://{ContainerName}@{StorageAccountName}.dfs.core.windows.net/{ContainerName}/")
delta = DeltaTable("abfss://{ContainerName}@{StorageAccountName}.dfs.core.windows.net/")
@roeap
Collaborator

roeap commented May 9, 2022

Hi @ganesh-gawande, thanks for this report. Could you share the basic layout of your storage account? Specifically, where is the _delta_logs folder located?

@ganesh-gawande
Author

Hi @roeap -
Storage account -> Container -> Delta log folder

@roeap
Collaborator

roeap commented May 10, 2022

Not sure I follow. Does this mean the delta table is located at the root of the container within the storage account? In your example you mention DeltaTableNameFromDatabricks. Does that then only refer to the name given in the metadata?

Also, would it be possible to share the (possibly redacted) contents of the 00000000000000000000.json file from the delta log?

@ganesh-gawande
Author

ganesh-gawande commented May 10, 2022

Yes. The delta table is located at the root of the container in the storage account. This means that when I open the container in Storage Explorer, I can see the _delta_logs folder.

More info: I have partitioned the delta table based on id and date. At the same level as the _delta_logs folder there are folders per id; within each of those there is a date folder, and within that the .parquet files are present.
e.g. the folder hierarchy looks like this:
Container

  • _delta_logs
  • id=1
    -- 2022-01-02
    --- *.parquet
  • id=2
    -- 2022-01-02
    --- *.parquet

I don't see the JSON file with the name you mentioned. I can see multiple JSON files with different numbers, e.g. 00000000000000086096.json, 00000000000000086097.json, 00000000000000086098.json, etc.

@roeap
Collaborator

roeap commented May 10, 2022

Thanks! In that case "adls2://{ContainerName}@{StorageAccountName}.dfs.core.windows.net" should be the way to go. The fact that we are getting the error you mentioned should already mean that we can access the storage, but it seems we are not finding the entry point to the logs. AFAIK the log files should never be deleted, and the all-zero file I mentioned denotes the initial commit. I have to look a bit into our codebase to remind myself how delta-rs starts parsing the log.

Given the number of commits, there should also be a file called _last_checkpoint, which should point to a parquet file containing all commit info up to that checkpoint and allows us to avoid parsing the log from the very beginning. If that file does not exist and we cannot find the aforementioned all-zero file, the error message you mentioned is shown.

So my question is: does that _last_checkpoint file exist, and if so, does it point to an existing parquet file in the log folder? In any case, the 00000000000000000000.json file not existing might already be considered a corrupt delta log, even though it should work as long as checkpoint files exist with all relevant information.

@houqp - is that correct?

@ganesh-gawande
Author

Thanks for the insights.

This is still not working: adls2://{ContainerName}@{StorageAccountName}.dfs.core.windows.net

I have checked the log folder of my other delta lake storage as well, but did not find a 00000000000000000000.json file.

Having said that, in all the delta lake log folders I can locate a checkpoint file like 00000000000000085996.checkpoint.parquet, which contains multiple records with commit info.

Let me know once you have taken a look at the code to check how delta-rs starts parsing the logs.

@roeap
Collaborator

roeap commented May 10, 2022

Just making sure - does the _last_checkpoint file exist?

The logic is roughly like this: check if we find the _last_checkpoint file; if yes, start from that checkpoint; if not, start from the all-zeros file. If that is not found either, you see the message you posted in the beginning.
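
To make that flow concrete, here is a minimal sketch of the discovery order described above. This is illustrative pseudocode only, not the actual delta-rs implementation; the two callables are stand-ins for whatever storage listing/reading is available:

import json

def resolve_start_point(list_log_files, read_text):
    # list_log_files(): names of files inside _delta_log/
    # read_text(name): text content of a file in _delta_log/
    files = set(list_log_files())
    # 1. Prefer the _last_checkpoint pointer if it exists.
    if "_last_checkpoint" in files:
        last = json.loads(read_text("_last_checkpoint"))
        return ("checkpoint", "%020d.checkpoint.parquet" % last["version"])
    # 2. Otherwise fall back to the very first commit file.
    if "00000000000000000000.json" in files:
        return ("commit", "00000000000000000000.json")
    # 3. Neither found -> "Not a Delta table: No snapshot or version 0 found"
    raise ValueError("Not a Delta table: No snapshot or version 0 found")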

@ganesh-gawande
Author

No file with name as _last_checkpoint.

The name of file is like 00000000000000085996.checkpoint.parquet

This is only file with name as checkpoint

More information:
I am writing this data to this Delta lake from the Databricks notebook in delta format.

@roeap
Collaborator

roeap commented May 10, 2022

Hmm, strange... this seems like a corruption in the delta log to me. When Databricks creates a checkpoint it should also create a _last_checkpoint file. The Rust implementation relies on either identifying the latest checkpoint via that file or starting from the beginning.

One way to load the table could be to use the load_version function, i.e. table.load_version(85996). Looking at the delta specification really quickly, it seems to me this scenario (i.e. parts of the log missing) is not something a reader needs to support, but would be resilient to if the last checkpoint file exists.

If you use the load_version command mentioned above, we search for the closest checkpoint with a lower or equal version, and that "should" work. So it should work with any version higher than that checkpoint's version. The reasoning for all that logic is tables exactly like yours, where listing a directory with tens of thousands of files becomes prohibitively expensive...

I'd be interested to know if Databricks is able to load that table without specifying a specific version.
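
For reference, the call suggested above would look roughly like this (a sketch only, using the placeholder URI from this thread; whether it gets past the constructor error is exactly what is being debugged here):

from deltalake import DeltaTable

table = DeltaTable("adls2://{ContainerName}@{StorageAccountName}.dfs.core.windows.net")
table.load_version(85996)  # resolves via the closest checkpoint at or below this version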

@ganesh-gawande
Author

I am able to query the delta table correctly from the Databricks notebooks. I assume that also works based on the delta logs and latest-checkpoint concept, so if those delta logs were corrupted, that should also have caused problems when the same delta table is queried from the Databricks notebook. Is this understanding correct?

I have the following settings in the Databricks cluster Spark configuration. Could these have any effect on the names of the delta log files or checkpoint files?

spark.databricks.delta.symlinkFormatManifest.fileSystemCheck.enabled false
spark.databricks.delta.preview.enabled true
spark.databricks.delta.schema.autoMerge.enabled true

@ganesh-gawande
Author

In addition, to test for corruption of the delta logs, I have created a new Azure Storage account and created a delta table in it.

Now I can see the JSON file 00000000000000000000.json in the _delta_log folder.

I have updated the configuration in my Python code where I am using the delta-rs package:
delta = DeltaTable("adls2://sample@sampledeltalakestorage.dfs.core.windows.net")

In the above code, sample is the container name and sampledeltalakestorage is my storage account name.

I am still getting the same error: Failed to read delta log object: Invalid object URI

@wjones127
Collaborator

In any case, the 00000000000000000000.json file not existing might already be considered a corrupt delta log, even though it should work as long as checkpoint files exist with all relevant information.

@roeap Actually, the first delta entry is not guaranteed to exist. See my update in delta-io/delta#913

Not sure if we are testing that in this repo though.

@roeap
Collaborator

roeap commented May 10, 2022

@ganesh-gawande - sorry for sending you on a wild goose chase ... I recreated the behaviour locally. Seems like we have a bug where we cannot read a table from the root of a container. I opened #602 to track this and should soon be able to get to this.

Actually, the first delta entry is not guaranteed to exist

@wjones127 would we then expect that the _last_checkpoint file exists to discover a checkpoint, or do we have to rely on lexicographical sort to find the file? We should definitely have a test for this, not sure that we do. If we can do without the last checkpoint file we should have a closer look at the code path - not sure that we could handle that.

@wjones127
Collaborator

would we then expect that the _last_checkpoint file exists to discover a checkpoint

It's vague in the protocol, but I don't think it necessarily exists. (And it's also not guaranteed to point to the most recent checkpoint.) We probably shouldn't rely on it 😢

@roeap
Collaborator

roeap commented May 10, 2022

@ganesh-gawande - so the path you should be using is adls2://{StorageAccountName}/{ContainerName}/. After #603 is merged, adls2://{StorageAccountName}/{ContainerName} should also work.

However, I also tried loading a delta log with the initial commit files removed, which only works if there is a _last_checkpoint file present. When that file is missing we see the exact error message you encountered.

@wjones127 @houqp - I do remember the protocol explicitly mentioning lexicographical sort to work with the log. Should we implement that logic, or first make sure that delta readers need to support finding checkpoints without that file? Or are we already sure? :)

I guess the core logic from loading a specific version can already largely be reused. Likely we would also want to mirror the logic in our writers to create a checkpoint every ten commits.

@ganesh-gawande
Author

@roeap - As per your suggestion, I tried the path adls2://{StorageAccountName}/{ContainerName}/ and it is not working.

Do I need to update the version using pip, or is there something else I am missing? Or do I need to wait for both #602 and #603 to be merged?

@roeap
Collaborator

roeap commented May 11, 2022

I kind of lost track... just making sure, so to clarify:

  • Loading the original table does not work
    • That table does NOT have a _last_checkpoint file or 00000000000000000000.json.

Did you also try the test table you created which DOES have 00000000000000000000.json?

@ganesh-gawande
Author

Yes. If you look at the message with the screenshot: there I created a new storage account and a new delta table, which has the 00000000000000000000.json file as well.

I am still not able to access it. I am getting the error: Not a Delta table: No snapshot or version 0 found, perhaps adls2://sampledeltalakestorage/sample is an empty dir?

@roeap
Collaborator

roeap commented May 11, 2022

Also if you add a trailing "/"? This is actually the bug that gets fixed in #603, i.e. if the trailing slash is missing it also fails for me...

@ganesh-gawande
Author

Yes. Whether I add a trailing / or not, I still get the same error.

@roeap
Collaborator

roeap commented May 11, 2022

At which point do you see the error? I just tried with the released Python package: I can load the table and get metadata, history etc., but I am also seeing an error when trying to materialize the dataset.

Could you try something like

table = DeltaTable("adls2://{StorageAccountName}/{ContainerName}/")
table.pyarrow_schema()

That would help narrow down the source of the error.

@ganesh-gawande
Author

Can you please confirm which version you are using? Or point me to the URL to download the latest version so that I can check again?

@roeap
Collaborator

roeap commented May 11, 2022

I used the released version 0.5.7.

@ganesh-gawande
Author

ganesh-gawande commented May 12, 2022

When I try to install or upgrade to the new version, I get the following errors. See the first line and the last line: it starts downloading 0.5.7, but at the end it installs 0.5.6. Any thoughts on this?

I have also upgraded pip, to pip 21.3.1.

Collecting deltalake
Using cached deltalake-0.5.7.tar.gz (4.3 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\g.gawande\AppData\Local\Programs\Python\Python36\python.exe' 'C:\Users\g.gawande\AppData\Local\Programs\Python\Python36\lib\site-packages\pip_vendor\pep517\in_process_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\G40711.GAW\AppData\Local\Temp\tmprw4bi353'
cwd: C:\Users\G4071~1.GAW\AppData\Local\Temp\pip-install-0jujfpje\deltalake_4eba21a2f36d4e65952f26952f2d7b48
Complete output (6 lines):

Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/

Checking for Rust toolchain....

WARNING: Discarding https://files.pythonhosted.org/packages/4f/6c/fe7dafb8e4fed25e97652c1ab1bbd73ae4fb1f32881abc73e9dcaabe1167/deltalake-0.5.7.tar.gz#sha256=b14f7417f72fa363519e7080ed9c99f4fc31f93a8af8428fae2370f090297bc6 (from https://pypi.org/simple/deltalake/) (requires-python:>=3.6). Command errored out with exit status 1: 'C:\Users\g.gawande\AppData\Local\Programs\Python\Python36\python.exe' 'C:\Users\g.gawande\AppData\Local\Programs\Python\Python36\lib\site-packages\pip_vendor\pep517\in_process_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\G4071~1.GAW\AppData\Local\Temp\tmprw4bi353' Check the logs for full command output.
Using cached deltalake-0.5.6-cp36-abi3-win_amd64.whl (6.4 MB)
Requirement already satisfied: pyarrow>=4 in c:\users\g.gawande\appdata\local\programs\python\python36\lib\site-packages (from deltalake) (6.0.1)
Requirement already satisfied: dataclasses in c:\users\g.gawande\appdata\local\programs\python\python36\lib\site-packages (from deltalake) (0.8)
Requirement already satisfied: numpy<1.20.0 in c:\users\g.gawande\appdata\local\programs\python\python36\lib\site-packages
(from deltalake) (1.19.5)
Installing collected packages: deltalake
Successfully installed deltalake-0.5.6

@roeap
Collaborator

roeap commented May 12, 2022

It seems like your system is choosing a source distribution in the case of 0.5.7, while using a pre-compiled wheel in the case of 0.5.6. As you do not seem to have cargo (i.e. the Rust toolchain) installed on your system, it is failing to build the package locally. As to why that is the case I am not sure, as I am not too familiar with how Python (or pip) chooses which install method / artifact to use.

@roeap
Collaborator

roeap commented May 13, 2022

@ganesh-gawande - I was able to dig into loading the table from Python. Turns out it was #602 which also caused the issue, since the trailing slash gets truncated when we initialize the file system internally.

I was able to load the table using the following workaround.

from deltalake import DeltaTable
from deltalake.fs import DeltaStorageHandler
import pyarrow.fs as pa_fs

path = "adls2://{StorageAccountName}/{ContainerName}/"

table = DeltaTable(path)
filesystem = pa_fs.PyFileSystem(DeltaStorageHandler(path))

ds = table.to_pyarrow_dataset(filesystem=filesystem)
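
From there the dataset can be consumed as usual, for example (a usage sketch):

df = ds.to_table().to_pandas()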

@roeap
Collaborator

roeap commented May 16, 2022

@ganesh-gawande is this still relevant, or did you manage to resolve this?

@ganesh-gawande
Author

ganesh-gawande commented May 17, 2022

@roeap

I was able to upgrade the release version to 0.5.7. (For that I needed to upgrade my Python version to 3.8.10 and then upgrade pip, and then I was able to upgrade to 0.5.7.)

After that, I am trying to use the code you gave in #600 (comment).

I have created a new Delta Lake at the root of the container, which has a _delta_log folder, and within that the 00000000000000000000.json file exists.

I am still getting the error: Not a Delta table: No snapshot or version 0 found. Please find the attached screenshot.


@ganesh-gawande
Author

@roeap - Please let me know if you need more information on this. I can share the code file and the associated storage account keys with you offline if you want to check it.

@roeap
Collaborator

roeap commented Jun 22, 2022

Should be very shortly - the release PR just needs to be updated and merged, but all the pending work that we wanted to include is on main.

@roeap
Collaborator

roeap commented Jun 24, 2022

@ganesh-gawande - the new Python bindings are released; could you check if that mitigates your error?

@ganesh-gawande
Author

@roeap - Unfortunately I am getting the same error as earlier. I have removed the earlier deltalake package version and installed the new 0.5.8. Sharing all the details again here for your quick reference.

Here is my Azure storage structure, with the storage account name and container name:
[Screenshot: storage structure]

Here are the contents of the _delta_log folder:
[Screenshot: _delta_log folder]

Here is the code snippet I am using, as shared by you.

from deltalake import DeltaTable
from deltalake.fs import DeltaStorageHandler
import pyarrow.fs as pa_fs
import os

path = "adls2://sampledeltalakestorage/sample/"
table = DeltaTable(path)

I am getting the following error:

File "C:\Users\g.gawande\AppData\Local\Programs\Python\Python38\lib\site-packages\deltalake\table.py", line 90, in __init__
    self._table = RawDeltaTable(
deltalake.PyDeltaTableError: Not a Delta table: No snapshot or version 0 found, perhaps adls2://sampledeltalakestorage/sample is an empty dir?

@s-suryakiran-sureshkumar

s-suryakiran-sureshkumar commented Jul 6, 2022

> (quoting @ganesh-gawande's comment above in full)

I am also getting the same issue.

Here is the code snippet that I used:

from deltalake import DeltaTable
import os

path = "adls2://{storage_account_name}/{container_name}/{delta_table}/"
table = DeltaTable(path)

I am getting the following error:

    self._table = RawDeltaTable(
deltalake.PyDeltaTableError: Not a Delta table: No snapshot or version 0 found

Is there any solution for the issue?

@roeap
Collaborator

roeap commented Jul 12, 2022

Hmm, this is a bit puzzling. Could you remind me which authorization mechanism you are using, and also validate that you can read and list on that account?

During tests, as well as at my work, we are successfully working with tables stored in Azure, and there really is not much more one can do other than provide proper authorization. Given your code snippet (and as probably discussed somewhere above :)) I assume you are using environment variables, right?

There were some issues in the Azure list operation that have been fixed recently, but reading a table with the initial version file present should not require any list operation... In any case, if you can build off main and see if that works, that is always helpful :).

@michaelenew

@roeap I am hitting this same error message today trying to write to S3 and local with deltalake.writer.write_deltalake. The error message appears when passing a DeltaTable object, but not when passing a string. Hopefully this helps with a repro case here.

# Error: deltalake.PyDeltaTableError: Not a Delta table: No snapshot or version 0 found...
import pyarrow
import deltalake
deltalake.writer.write_deltalake(
    deltalake.DeltaTable('/path/to/my/table'),
    pyarrow.RecordBatch.from_pylist([{f'col{i}': i for i in range(5)}]),
    mode = 'append'
)
# Runs as expected
import pyarrow
import deltalake
deltalake.writer.write_deltalake(
    '/path/to/my/table',
    pyarrow.RecordBatch.from_pylist([{f'col{i}': i for i in range(5)}]),
    mode = 'append'
)

@roeap
Collaborator

roeap commented Jul 31, 2022

@michaelenew - thanks for the report! Currently, writing to remote stores is not supported using write_deltalake. However, we are actively working on integrating a new object store implementation, also with the intent to enable this.

@cdena

cdena commented Sep 3, 2022

Update: I think something has changed in 0.6.0 and the docs aren't published yet. I re-installed with target version 0.5.8 and adls2 worked for reading the table. I'll wait for the 0.6.0 docs to see what may have changed and try with the newer version.

@roeap I am running the latest release 0.6.0 and am also running into the Azure error "Not a Delta table: No snapshot or version 0 found, perhaps adls2://accountname/sandbox/taxi_data is an empty dir?"

I have pulled the delta table directory down locally and run the DeltaTable call and it works fine.

I have double checked that the ENV entries are coming through. If I remove them then the error output complains about missing auth. I can also use azure.storage.blob with the key and account name and list every file. Below is a summary.

#works
path = r'C:/temp/taxi_data'
delta = DeltaTable(path)
dataFrames = delta.to_pyarrow_table().to_pandas()
print(dataFrames)

#Doesn't Work - gives  error:  Not a Delta table: No snapshot or version 0 found
delta = DeltaTable('adls2://accountname/sandbox/taxi_data/')
dataFrames = delta.to_pyarrow_table().to_pandas()

#Below sort of works - DeltaTable runs, but the subsequent to_pyarrow_table call fails:
#   Exception has occurred: PyDeltaTableError
#   Object at location abfss:/container@account.dfs.core.windows.net/taxi_data/part-00000-19f3445e-57f8-49ea-8de1-1c496515a52d-c000.snappy.parquet not found: response error "<?xml version="1.0" encoding="utf-8"?><Error><Code>BlobNotFound</Code><Message>The specified blob does not exist.
delta = DeltaTable('abfss://container@account.dfs.core.windows.net/taxi_data/')   # <---- runs here
dataFrames = delta.to_pyarrow_table().to_pandas()  # <------- fails on this line

@roeap
Collaborator

roeap commented Sep 7, 2022

@ganesh-gawande @cdena - we have released 0.6.1, could you check if this works for you now? Also, we now recommend different table identifiers: https://delta-io.github.io/delta-rs/python/usage.html#loading-a-delta-table
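
For example, the recommended identifier style from the linked docs looks roughly like this (a sketch with placeholder names; the account name and credentials go into storage_options rather than into the URI):

from deltalake import DeltaTable

storage_options = {"account_name": "<account>", "account_key": "<key>"}
dt = DeltaTable("az://<container>/<path-to-table>", storage_options=storage_options)
print(dt.version())
print(dt.files())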

@iamkk11

iamkk11 commented Sep 15, 2022

@michaelenew - thanks for the report! Currently writing to remote stores is not supported using write_deltalake. However we are actively working on integrating a new object store implementation also with the intent to enable this.

When will writing to remote stores be supported? @roeap

@roeap
Collaborator

roeap commented Sep 15, 2022

Current main already supports writing to remote stores, at least experimentally. We still have to iron out some bugs and do more integration testing, but the next release will have some initial support. I assume the next release will happen within the next few weeks.

@iamkk11

iamkk11 commented Sep 15, 2022

Current main already supports writing to remote stores, at least experimentally. We still have to iron out some bugs and do more integration testing, but the next release will have some initial support. I assume the next release will happen within the next few weeks.

May I please have an example of how this works? How and where do I pass in the auth parameters for remote Azure Blob Storage?

@roeap
Collaborator

roeap commented Sep 15, 2022

In the docs there is an example, as well as a link to the available Azure options.

The same storage options can also be passed to the write_deltalake function. If you pass the table itself, the table's storage should be used, and it is not required to pass in the options separately.
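
A minimal sketch of both variants, assuming the az:// layout and the option names used elsewhere in this thread (all names are placeholders):

import pyarrow as pa
from deltalake import DeltaTable
from deltalake.writer import write_deltalake

storage_options = {"account_name": "<account>", "account_key": "<key>"}
data = pa.table({"id": [1, 2, 3]})

# Variant 1: pass the URI plus storage_options directly to the writer.
write_deltalake("az://<container>/<path>", data, mode="append", storage_options=storage_options)

# Variant 2: pass an existing DeltaTable; its storage configuration is reused.
dt = DeltaTable("az://<container>/<path>", storage_options=storage_options)
write_deltalake(dt, data, mode="append")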

@iamkk11

iamkk11 commented Sep 16, 2022

@roeap I am doing the below:

path = "az://helios-poc/bronze/mulesoft_dev"
dt = DeltaTable(path, storage_options=storage_options)
write_deltalake(dt, df, mode='append')

And getting the below error:

deltalake.PyDeltaTableError: Failed to read delta log object: Generic MicrosoftAzure error: Account must be specified

@roeap
Collaborator

roeap commented Sep 16, 2022

You are missing a setting for AZURE_STORAGE_ACCOUNT_NAME :).

By the way, Azure unfortunately is somewhat "special" here. Usually, I would recommend using one of the pyarrow.fs filesystems, as they require less transmission of data across language boundaries. For Azure, though, no native Arrow implementation exists, due to C++ version incompatibilities.

In case you have the capacity, I would be really interested to see the difference in performance between the way you specified it (which uses the Rust filesystem wrapped in several translation layers) and using adlfs wrapped in a SubTreeFileSystem, which still needs to cross language boundaries, but fewer...

Eventually, though, we will identify the bottlenecks and bring up performance.
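
For anyone wanting to run that comparison, the adlfs-based variant mentioned here would look roughly like this (a sketch under the assumption that adlfs is installed; container, path and credentials are placeholders):

import pyarrow.fs as pa_fs
from adlfs import AzureBlobFileSystem
from deltalake import DeltaTable

# fsspec filesystem for the storage account, wrapped for pyarrow and
# scoped to the table root so that paths from the delta log resolve correctly.
abfs = AzureBlobFileSystem(account_name="<account>", account_key="<key>")
filesystem = pa_fs.SubTreeFileSystem("<container>/<path-to-table>", pa_fs.PyFileSystem(pa_fs.FSSpecHandler(abfs)))

dt = DeltaTable("az://<container>/<path-to-table>", storage_options={"account_name": "<account>", "account_key": "<key>"})
ds = dt.to_pyarrow_dataset(filesystem=filesystem)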

@iamkk11

iamkk11 commented Sep 16, 2022

I do not think AZURE_STORAGE_ACCOUNT_NAME is the issue because I have specified it and I can even get the delta table version. @roeap

storage_options = {}
storage_options['AZURE_STORAGE_ACCOUNT_KEY'] = "XXX"
storage_options['AZURE_STORAGE_ACCOUNT_NAME'] = "XX"
storage_options['AZURE_STORAGE_CONNECTION_STRING'] = "XX"
storage_options['AZURE_STORAGE_CLIENT_ID'] = "XX"
storage_options['AZURE_STORAGE_CLIENT_SECRET'] = "XX"
storage_options['AZURE_STORAGE_TENANT_ID'] = "XXX"

@roeap
Collaborator

roeap commented Sep 16, 2022

Just to clarify: are you using a build of main, or a released version? The released versions right now do not yet really support writing to remote stores.

If you are using main, this may be a bug, but you may be able to circumvent it by setting AZURE_STORAGE_ACCOUNT_NAME in the environment. The error itself originates in the file system builder and tells us that a specific config value is missing.

@iamkk11

iamkk11 commented Sep 16, 2022

I am using the released version, which I installed via pip, @roeap.
Alright, I will await this feature in the next release, I suppose.

@bkazour

bkazour commented Jan 26, 2023

Hello,
Any update on this? I am using 0.7.0. I migrated to this library because of the great features in batching. However, I am still receiving this error even with the workarounds that are explained here. I am using the path in the abfs format. The old library is able to read it normally, as is Databricks.
Any idea on this?

As an update: I downgraded to version 0.5.8.
I got an invalid URI error with the same URI.
My URI is as follows: abfs://xxxx@xxxxxx.dfs.core.windows.net/ilo/folder name/
NB: the folder name has a space in it.

@roeap
Collaborator

roeap commented Jan 26, 2023

Hmm, I and many others are successfully using deltalake with Azure storage. It could be that the space in the URL is a problem. The latest release also fixed a bug when using client id / secret.

Alternatively, could you try the recommended az://... URL format and set the account via parameters?

@bkazour

bkazour commented Jan 27, 2023

@roeap ,
Thank you for your reply. Can you please let me know what the az:// URL format is? I only found documentation on the abfs format, and Microsoft Azure Storage Explorer lists the files in https format.
I am using the keys AZURE_STORAGE_ACCOUNT_NAME and AZURE_STORAGE_ACCOUNT_KEY for authentication, the _delta_log contains 00000000000000000000.json and 00000000000000000001.json, and I am receiving the error PyDeltaTableError: Not a Delta table: No snapshot or version 0 found.

Any help would be appreciated.

@roeap
Collaborator

roeap commented Jan 27, 2023

@bkazour - sure :).

The format is az://<container>/<path>. As storage options you would then use:

storage_options = {
    "account_name": "<my-account>",
    "account_key": "my-key",
}

That said, the config you provided should work as well. It may be that there is an issue with a space within the paths, although the underlying object store crate does rigorous testing on all sorts of edge cases in how paths may be specified.

Let me know if that works; otherwise I'll have to do some investigating.

@bkazour

bkazour commented Jan 27, 2023

@roeap It is definitely the space in the name. Once I tried it with a path that has no space, it read normally.
My guess is that it is adding the %20 encoding in place of the space while trying to reach the folder and not finding it. I assume this because the returned error contains %20 while the string I sent doesn't.

@roeap
Collaborator

roeap commented Feb 18, 2023

I opened #1156 to keep track of the space character encoding. Closing this generic issue, since I validated various Azure auth methods.

Feel free to re-open if the issue persists.

@roeap roeap closed this as completed Feb 18, 2023
@ganesh-gawande
Author

@roeap - I confirm that the issue reported above is resolved in version 0.7.0. I am able to connect to the Azure Storage account after changing the path to az://{containerName}/path and passing the storage options parameter.
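
For readers landing here, the working pattern described in this closing comment looks roughly like this (a sketch with placeholder names):

from deltalake import DeltaTable

storage_options = {
    "account_name": "<storage-account-name>",
    "account_key": "<storage-account-key>",
}

dt = DeltaTable("az://<container-name>/<path-to-table>", storage_options=storage_options)
df = dt.to_pyarrow_table().to_pandas()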
