[load_from_hf_hub] Add dataset_length, set_index #339

Merged: 4 commits into main from add_dataset_length on Aug 8, 2023

Conversation

NielsRogge (Contributor)

This PR adds 2 things to the load_from_hf_hub reusable component:

  • a dataset_length argument, which is required when the user specifies n_rows_to_load. I added this because I hit an issue when n_rows_to_load was larger than the partition size: the current code loads only the first partition, so even though I specified n_rows_to_load to be 150k, I only got 69,000 rows. To solve this, I calculate the size of a single partition and then return approximately the requested n_rows_to_load.
  • a monotonically increasing index, as suggested by this Stack Overflow post, to solve the issue of duplicate indices caused by every partition having indices that start at 0 (a minimal sketch of this trick follows the list).
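A minimal sketch of the Stack Overflow-style index trick on a toy Dask DataFrame (names and data are illustrative, not the exact component code):

import pandas as pd
import dask.dataframe as dd

# Toy frame split into 2 partitions.
dask_df = dd.from_pandas(pd.DataFrame({"text": ["a", "b", "c", "d"]}), npartitions=2)

# Add a column of ones and cumulative-sum it: cumsum runs across partition
# boundaries, so the result is a globally monotonically increasing id (1, 2, 3, ...).
dask_df = dask_df.assign(id=1)
dask_df["id"] = dask_df["id"].cumsum()

# Use it as the index; sorted=True lets Dask set the divisions without a shuffle.
dask_df = dask_df.set_index("id", sorted=True)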

PhilippeMoussalli (Contributor) commented Aug 8, 2023

Thanks @NielsRogge!

I tried looking around a bit to see if we can fetch the number of rows from the hf_hub API directly, without downloading the dataset to disk. I only found this method, which returns some metadata, but the number of rows is not guaranteed to be in there.

I would opt for a different approach to calculate the required number of partitions (pseudocode, still to be tested):

# assumes `import dask.dataframe as dd` and a module-level `logger`
if n_rows_to_load is not None:
    partitions_length = 0
    npartitions = 1
    for npartitions, partition in enumerate(dask_df.partitions, start=1):
        if partitions_length >= n_rows_to_load:
            logger.info(f"Required number of partitions to load {n_rows_to_load} rows: {npartitions}")
            break
        # len(partition) computes only this partition, not the whole dataset
        partitions_length += len(partition)
    dask_df = dask_df.head(n_rows_to_load, npartitions=npartitions)
    dask_df = dd.from_pandas(dask_df, npartitions=npartitions)

This has the following advantages:

  • The user does not have to look up the number of rows on the hf hub and input it manually
  • It returns a count closer to the requested n_rows_to_load when the partitions are heavily imbalanced

The disadvantage is that you have to call compute; however, since we're computing only a few individual partitions and not the whole dataset, I expect the overhead to be minimal.
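For illustration, a minimal self-contained check of the head/from_pandas step on toy data (sizes and names are illustrative, not the component's actual code):

import pandas as pd
import dask.dataframe as dd

# Toy dataset: 300 rows split into 3 partitions of roughly 100 rows each.
dask_df = dd.from_pandas(pd.DataFrame({"text": range(300)}), npartitions=3)
n_rows_to_load = 150

# head() with npartitions=2 can pull rows from the first two partitions,
# so requesting more rows than a single partition holds still works.
pandas_df = dask_df.head(n_rows_to_load, npartitions=2)
dask_df = dd.from_pandas(pandas_df, npartitions=2)

print(len(pandas_df))       # 150
print(dask_df.npartitions)  # 2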

Regarding the index, the approach sounds sensible. We can include it in the component for now, but I will move it to the backend in a separate PR.

NielsRogge (Contributor, Author)

Thanks @PhilippeMoussalli, I've verified your solution and it seems to work.

Feel free to approve

PhilippeMoussalli (Contributor) left a comment


Thanks! Some small comments to revert a few things, and I think you need to run pre-commit since some tests are failing.

@NielsRogge NielsRogge merged commit dfdcee3 into main Aug 8, 2023
5 checks passed
@NielsRogge NielsRogge deleted the add_dataset_length branch August 8, 2023 12:19
Hakimovich99 pushed a commit that referenced this pull request Oct 16, 2023