[load_from_hf_hub] Add dataset_length, set_index #339
Conversation
Thanks @NielsRogge! I tried looking around a bit to see if we can fetch the number of rows from the hf_hub API directly without downloading the dataset to disk. I only found this method, which returns some metadata, but the number of rows is not guaranteed to be in there. I would opt for a different approach to calculate the required partitions (pseudo code to be tested):

import logging
import dask.dataframe as dd

logger = logging.getLogger(__name__)

if self.n_rows_to_load is not None:
    partitions_length = 0
    # Walk over the partitions until enough rows have been accumulated;
    # len(partition) computes only that single partition.
    for npartitions, partition in enumerate(dask_df.partitions):
        if partitions_length >= self.n_rows_to_load:
            logger.info(f"Required number of partitions to load {self.n_rows_to_load} rows: {npartitions}")
            break
        partitions_length += len(partition)
    # head() returns a pandas DataFrame, so wrap it back into a Dask DataFrame
    dask_df = dask_df.head(self.n_rows_to_load, npartitions=npartitions)
    dask_df = dd.from_pandas(dask_df, npartitions=npartitions)

This has the following advantages:

The disadvantage is that you have to call compute; however, since we're computing only a few individual partitions and not the whole dataset, I expect the overhead to be minimal. Regarding the index, the approach sounds sensible. We can include it in the component for now, but I will move it to the backend in a separate PR.
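For context, here is a minimal, self-contained sketch (not part of the PR) of the behaviour the pseudo code relies on: `len()` on a single partition computes only that partition, `head(n, npartitions=k)` returns a pandas DataFrame built from the first k partitions, and `dd.from_pandas` wraps it back into a Dask DataFrame. The toy data and variable names are illustrative only.

```python
import pandas as pd
import dask.dataframe as dd

n_rows_to_load = 5

# Toy dataset: 10 rows spread over 4 partitions (illustrative only).
ddf = dd.from_pandas(pd.DataFrame({"x": range(10)}), npartitions=4)

# Count rows partition by partition until we have enough.
npartitions = ddf.npartitions
rows_seen = 0
for i, partition in enumerate(ddf.partitions, start=1):
    rows_seen += len(partition)  # computes only this partition
    if rows_seen >= n_rows_to_load:
        npartitions = i
        break

head_df = ddf.head(n_rows_to_load, npartitions=npartitions)  # pandas DataFrame
ddf = dd.from_pandas(head_df, npartitions=npartitions)
print(len(ddf))  # 5
```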
Thanks @PhilippeMoussalli, I've verified your solution and it seems to work. Feel free to approve.
Thanks! Some small comments to revert a few things, and I think you need to run pre-commit since some tests are failing.
This PR adds two things to the `load_from_hf_hub` reusable component:

- a `dataset_length` argument, which is required in case the user specifies `n_rows_to_load`. I added this because I hit an issue when `n_rows_to_load` was larger than the partition size: the current code loads only the first partition, so even though I specified `n_rows_to_load` to be 150k, I only got 69,000 rows. To solve this, I calculate the size of a single partition and then return approximately the requested `n_rows_to_load`.
- a monotonically increasing index, as suggested by this Stackoverflow post, to solve the issue of duplicate indices caused by every partition having indices that start at 0.
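As a rough illustration of the second point (not the component's actual code), this is the kind of cumulative-sum trick the Stackoverflow post describes for building a globally increasing index across Dask partitions; the column name `id` is just a placeholder:

```python
import pandas as pd
import dask.dataframe as dd

# Toy dataset: with the default index, every partition starts again at 0.
ddf = dd.from_pandas(pd.DataFrame({"x": range(10)}), npartitions=4)

# Assign a column of ones, take its cumulative sum, and use it as the index.
# The result is a monotonically increasing index (0..n-1 after subtracting 1);
# sorted=True avoids a shuffle because the new index is already ordered.
ddf = ddf.assign(id=1)
ddf["id"] = ddf["id"].cumsum() - 1
ddf = ddf.set_index("id", sorted=True)
```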