
feat: depth estimation dataset guide. #5379

Merged — 6 commits, Jan 13, 2023

Conversation

@sayakpaul (Member) commented Dec 20, 2022:

This PR adds a guide for prepping datasets for depth estimation.

PR to add documentation images is up here: https://huggingface.co/datasets/huggingface/documentation-images/discussions/22

sayakpaul self-assigned this Dec 20, 2022
@HuggingFaceDocBuilderDev commented Dec 20, 2022:

The documentation is not available anymore as the PR was closed or merged.

@stevhliu (Member) left a comment:

Looks awesome, thanks for this guide!

Also, I think you can just add the documentation images without opening a PR 🙂

(6 resolved review threads on docs/source/depth_estimation.mdx)
```python
>>> train_dataset.set_transform(transforms)
```

You can verify the transformation worked by indexing into the `pixel_values` and `labels` of an example:
A Member commented:

I would combine the two code snippets below.

Suggested change:
- You can verify the transformation worked by indexing into the `pixel_values` and `labels` of an example:
+ You can verify the transformation worked by indexing into the `pixel_values` and `labels` of an example image:

@sayakpaul (Member Author) replied:

The succession made it visually simpler to read through, so I decided to keep it that way.
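
For context, a minimal sketch of the verification step under discussion, assuming the guide's `transforms` function produces `pixel_values` and `labels` entries (the index and the `.shape` checks are illustrative, not the guide's exact code):

```python
>>> # Quick check that the transform ran on access
>>> example = train_dataset[0]          # set_transform() is applied lazily when an example is indexed
>>> example["pixel_values"].shape       # transformed image
>>> example["labels"].shape             # transformed depth map
```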

(resolved review thread on docs/source/depth_estimation.mdx)
sayakpaul and others added 2 commits December 21, 2022 08:32
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
@stevhliu (Member) commented:

Thanks for the changes, looks good to me!

@sayakpaul (Member Author) commented:

@stevhliu I have pushed some quality improvements both in terms of code and content. Would you be able to re-review?

Comment on lines +127 to +130
* Random cropping
* Random brightness and contrast
* Random gamma correction
* Random hue saturation
@sayakpaul (Member Author) commented:

I had to drop horizontal flipping. Here's why.

The behavior seems to be unstable with `datasets` when the augmentations are applied via `set_transform()`. Consider the following as an example.

(attached collage: debug_augmentations)

You'll notice that the subplots tagged as "wrong" have flipped images but their corresponding depth maps are not flipped. This is not consistent, though: in the subplots tagged as "right", for example, both the images and their depth maps have been flipped (compare them to the originals and you should be able to spot the difference).

I checked whether there's a problem with albumentations, but that doesn't seem to be the case. Refer to this notebook: https://gist.github.com/sayakpaul/63ae5f7971f634894ab37e76e6b3879c. You'll notice that I took individual samples from the dataset, passed them through the aug chain, and the results were fine.

Have you experienced this before? Is there anything obviously wrong you can spot in the code?

Let me know if anything's unclear.

@nateraw @lhoestq
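
For reference, a rough sketch of an augmentation chain along the lines listed in the quoted diff above, applied through `set_transform()`. The column names (`image`, `depth_map`), crop size, probabilities, and the helper name `transforms` are assumptions for illustration, not the guide's exact code:

```python
import numpy as np
import albumentations as A

# Hypothetical chain mirroring the augmentations listed above
aug = A.Compose(
    [
        A.RandomCrop(height=425, width=560),   # random cropping (size is illustrative)
        A.RandomBrightnessContrast(p=0.5),     # random brightness and contrast
        A.RandomGamma(p=0.5),                  # random gamma correction
        A.HueSaturationValue(p=0.5),           # random hue saturation
    ],
    additional_targets={"depth": "mask"},      # spatial ops are also applied to the depth map
)

def transforms(batch):
    images, depths = [], []
    for image, depth in zip(batch["image"], batch["depth_map"]):
        out = aug(image=np.array(image), depth=np.array(depth))
        images.append(out["image"])
        depths.append(out["depth"])
    return {"pixel_values": images, "labels": depths}

# train_dataset.set_transform(transforms)
```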

@nateraw (Contributor) commented:

I'm unable to reproduce this issue 🙁. Take a look at this simplified notebook.

@sayakpaul (Member Author) replied:

Yes @nateraw. But it doesn't seem to be consistent, as I stated here:

You'll notice that subplots tagged as "wrong" have flipped images but their corresponding depth maps are not flipped. This is not a consistent case, though. Subplots tagged as "right", for example.

Sometimes the augmented depth maps are correct, i.e., the flip transformation has been applied correctly, and sometimes it hasn't. You'll notice this in the collage here.

@nateraw (Contributor) replied:

Yea I noticed that, but I ran it over and over and never got that behavior. Do you have a minimal reproduction of the behavior?

@sayakpaul (Member Author) replied:

I don't. Strangely, it only comes up when I try to put it inside a collection of images.

I will try to reproduce the behavior minimally and let you know here.

@lhoestq (Member) commented:

You have this behavior because you call `train_dataset[idx]["pixel_values"]` and `train_dataset[idx]["labels"]` separately when plotting. For each call, the example randomly gets flipped or not. To fix this, just do:

>>> for i, idx in enumerate(random_indices):
...     ax = plt.subplot(3, 3, i + 1)
...     example = train_dataset[idx]
...     input_image = example["pixel_values"]
...     depth_target = example["labels"]
...     image_viz = merge_into_row(input_image, depth_target)
...     plt.imshow(image_viz.astype("uint8"))
...     plt.axis("off")
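
For contrast, the access pattern that triggers the mismatch looks roughly like this: each separate indexing call re-runs the random transform, so the image and the depth map can come from two independent random draws (illustrative sketch):

```python
>>> input_image = train_dataset[idx]["pixel_values"]   # first random draw
>>> depth_target = train_dataset[idx]["labels"]        # second, independent random draw
```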

@nateraw (Contributor) replied:

Nice catch @lhoestq!!

@sayakpaul (Member Author) replied:

My god! Thanks for saving me!

@stevhliu (Member) left a comment:

Very cool, love the new show_depthmap function! :)

(resolved review thread on docs/source/depth_estimation.mdx)
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
@github-actions bot commented Jan 5, 2023:

(Automated benchmark report for PyArrow==6.0.0 and PyArrow==latest — benchmark_array_xd, benchmark_getitem_100B, benchmark_indices_mapping, benchmark_iterating, and benchmark_map_filter tables omitted.)

@sayakpaul (Member Author) commented:

@lhoestq @nateraw I made some changes as per the comments. PTAL and approve as necessary.

@github-actions bot commented Jan 5, 2023:

(Automated benchmark report for PyArrow==6.0.0 and PyArrow==latest — tables omitted.)

@lhoestq (Member) left a comment:

Awesome! LGTM :)

@sayakpaul (Member Author) commented:

Merging this PR with approvals from @stevhliu @lhoestq.

sayakpaul merged commit 1737015 into main Jan 13, 2023
sayakpaul deleted the feat/de-guide branch January 13, 2023 12:23
@github-actions bot commented:

(Automated benchmark report for PyArrow==6.0.0 and PyArrow==latest — tables omitted.)

Labels: None yet
Projects: None yet
5 participants