feat: depth estimation dataset guide. #5379
Conversation
Looks awesome, thanks for this guide!
Also, I think you can just add the documentation images without opening a PR 🙂
docs/source/depth_estimation.mdx (outdated)

```
>>> train_dataset.set_transform(transforms)
```
You can verify the transformation worked by indexing into the `pixel_values` and `labels` of an example:
I would combine the two code snippets below.
Suggested change:
- You can verify the transformation worked by indexing into the `pixel_values` and `labels` of an example:
+ You can verify the transformation worked by indexing into the `pixel_values` and `labels` of an example image:
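For illustration, a combined verification snippet could look like the sketch below. The `example` dict here is a hypothetical stand-in for what `train_dataset[index]` would return, and the shapes are assumptions, not the guide's actual values:

```python
import numpy as np

# Hypothetical stand-in for one transformed example, i.e. train_dataset[index];
# the shapes below are assumptions for illustration only.
example = {
    "pixel_values": np.zeros((3, 256, 256), dtype=np.float32),
    "labels": np.zeros((256, 256), dtype=np.float32),
}

# Index into both fields of the same example to verify the transform's output.
print(example["pixel_values"].shape)  # (3, 256, 256)
print(example["labels"].shape)        # (256, 256)
```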
Keeping the two snippets in succession made them visually simpler to read through, so I decided to keep it that way.
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Thanks for the changes, looks good to me!

@stevhliu I have pushed some quality improvements both in terms of code and content. Would you be able to re-review?
* Random cropping
* Random brightness and contrast
* Random gamma correction
* Random hue saturation
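For these augmentations, the geometric op (cropping) must be applied identically to the image and its depth map, while the photometric ops (brightness/contrast, gamma) should only touch the image. A minimal NumPy sketch of that idea, not the guide's actual code; the shapes and parameter ranges are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, depth):
    """Crop image and depth with the SAME random window so they stay aligned;
    photometric ops (brightness/contrast, gamma) modify only the image."""
    h, w = image.shape[:2]
    ch, cw = h // 2, w // 2                      # assumed crop size
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    image = image[top:top + ch, left:left + cw]
    depth = depth[top:top + ch, left:left + cw]  # identical window
    # random brightness/contrast on the image only
    alpha = rng.uniform(0.8, 1.2)                # contrast factor
    beta = rng.uniform(-20, 20)                  # brightness shift
    image = np.clip(alpha * image + beta, 0, 255)
    # random gamma correction on the image only
    gamma = rng.uniform(0.8, 1.2)
    image = 255.0 * (image / 255.0) ** gamma
    return image, depth

img = rng.uniform(0, 255, size=(64, 64, 3))
dep = rng.uniform(0, 10, size=(64, 64))
aug_img, aug_dep = augment(img, dep)
print(aug_img.shape, aug_dep.shape)  # (32, 32, 3) (32, 32)
```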
I had to drop horizontal flipping. Here's why.

The behavior seems to be unstable with `datasets` when the augmentations are applied via `set_transform()`. Consider the following as an example.

You'll notice that subplots tagged as "wrong" have flipped images but their corresponding depth maps are not flipped. This is not a consistent case, though: in the subplots tagged as "right", for example, you'll notice that the images and their depth maps have both been flipped (compare them to the originals and you should be able to spot the difference).

I have verified whether there's a problem with `albumentations`, but that doesn't seem to be the case. Refer to this notebook: https://gist.github.com/sayakpaul/63ae5f7971f634894ab37e76e6b3879c. You'll notice that I took individual samples from the dataset and passed them through the augmentation chain, and the results were fine.

Have you experienced this before? Is there anything obviously wrong you can spot in the code?

Let me know if anything's unclear.
I'm unable to reproduce this issue 🙁. Take a look at this simplified notebook.
Yes @nateraw. But it doesn't seem to be consistent, as I stated here:

> You'll notice that subplots tagged as "wrong" have flipped images but their corresponding depth maps are not flipped. This is not a consistent case, though. Subplots tagged as "right", for example.

Sometimes the augmented depth maps are correct, i.e., the flip transformation has been correctly applied, and sometimes that doesn't seem to be the case. You'll notice this in the collage here.
Yea I noticed that, but I ran it over and over and never got that behavior. Do you have a minimal reproduction of the behavior?
I don't. Strangely, it only comes up when I try to put it inside a collection of images.
I will try to reproduce the behavior minimally and let you know here.
You have this behavior because you call `train_dataset[idx]["pixel_values"]` and `train_dataset[idx]["labels"]` separately when plotting. Each indexing call re-runs the transform, so the random flip is drawn independently for the two accesses. To fix this, index the example once:

```
>>> for i, idx in enumerate(random_indices):
...     ax = plt.subplot(3, 3, i + 1)
...     example = train_dataset[idx]
...     input_image = example["pixel_values"]
...     depth_target = example["labels"]
...     image_viz = merge_into_row(input_image, depth_target)
...     plt.imshow(image_viz.astype("uint8"))
...     plt.axis("off")
```
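The failure mode can be reproduced with a toy sketch of on-the-fly transforms. The class below is a hypothetical mock, not the real `datasets` internals: the transform runs on every `__getitem__` call, so indexing twice can flip one field but not the other.

```python
import random

class OnTheFlyDataset:
    """Toy stand-in for a dataset with an on-access transform (a mock,
    not datasets' actual implementation): the random flip is re-drawn
    on every __getitem__ call."""

    def __init__(self, examples, seed=0):
        self.examples = examples
        self.rng = random.Random(seed)

    def __getitem__(self, idx):
        ex = dict(self.examples[idx])
        if self.rng.random() < 0.5:  # random horizontal flip
            ex = {k: v[::-1] for k, v in ex.items()}
        return ex

ds = OnTheFlyDataset([{"pixel_values": [1, 2, 3], "labels": [10, 20, 30]}])

# Wrong: two separate accesses draw the flip independently,
# so the image and depth map can end up misaligned.
img_wrong = ds[0]["pixel_values"]
dep_wrong = ds[0]["labels"]

# Right: index once; both fields come from the same transformed example,
# so they are either both flipped or both unflipped.
example = ds[0]
img, dep = example["pixel_values"], example["labels"]
assert (img == [1, 2, 3]) == (dep == [10, 20, 30])  # always aligned
```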
Nice catch @lhoestq !!
My god! Thanks for saving me!
Very cool, love the new `show_depthmap` function! :)
Awesome! LGTM :)
This PR adds a guide for preparing datasets for depth estimation.

The PR to add the documentation images is up here: https://huggingface.co/datasets/huggingface/documentation-images/discussions/22