Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #1102      +/-   ##
==========================================
+ Coverage   88.42%   88.49%   +0.06%
==========================================
  Files         284      284
  Lines       12876    12891      +15
==========================================
+ Hits        11386    11408      +22
+ Misses       1490     1483       -7
Limiting my review only to the docs, as the code is a different cup of tea lol.
In general, the docs are very good, much improved; just minor questions and suggestions 🐰
predict_transform: The :class:`~flash.core.data.io.input_transform.InputTransform` type to use when
    predicting.
input_cls: The :class:`~flash.core.data.io.input.Input` type to use for loading the data.
transform_kwargs: Dict of keyword arguments to be provided when instantiating the transforms.
Can you pass different arguments for training and validation? Maybe just overload train_transform with the spatial arguments, so we would fill the arguments ahead of time.
Currently it's a bit strange because you can pass a different transform class for train, val, test, predict, but they all share the same keyword arguments. We could think of just passing the instance, like this:
datamodule = ...from_x(
    train_transform=ImageInputTransform(image_size=64),
    ...
)
It has issues if you want to change e.g. the image size for all transforms at once, but it could be an option.
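The two designs being discussed can be contrasted with a small sketch. This is not the real Flash API; `ImageInputTransform`, `from_x_shared`, and `from_x_instances` are stand-in names purely for illustration:

```python
class ImageInputTransform:
    """Stand-in for a Flash InputTransform subclass (hypothetical)."""

    def __init__(self, image_size=32):
        self.image_size = image_size


# Design 1 (current): transform *classes* per stage, but one shared
# transform_kwargs dict, so every stage gets the same arguments.
def from_x_shared(train_transform_cls, val_transform_cls, transform_kwargs):
    return {
        "train": train_transform_cls(**transform_kwargs),
        "val": val_transform_cls(**transform_kwargs),
    }


# Design 2 (proposed): pre-built *instances* per stage, so each stage
# can be configured independently.
def from_x_instances(train_transform, val_transform):
    return {"train": train_transform, "val": val_transform}


shared = from_x_shared(ImageInputTransform, ImageInputTransform, {"image_size": 64})
per_stage = from_x_instances(
    ImageInputTransform(image_size=64),
    ImageInputTransform(image_size=128),  # val can now differ from train
)
```

The trade-off mentioned above shows up directly: with design 2, changing the image size for all stages means editing every instance, while design 1 changes it in one place.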
... {"file_name": "image_2.png", "height": 64, "width": 64, "id": 2},
... {"file_name": "image_3.png", "height": 64, "width": 64, "id": 3},
... ]}
>>> with open("train_annotations.json", "w") as annotation_file: |
Btw, do we have an internal check that the file extension is valid, or do we leave it to IceVision?
Currently it's left to IceVision, but the work we started in #889 would change that.
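An internal check along these lines could look roughly like the sketch below. This is purely illustrative; `VALID_EXTENSIONS` and `check_annotation_file` are hypothetical names, not existing Flash helpers, and the actual design in #889 may differ:

```python
from pathlib import Path

# Hypothetical whitelist of annotation-file extensions we'd accept
# before handing the file off to IceVision.
VALID_EXTENSIONS = {".json"}


def check_annotation_file(path):
    """Raise early with a clear message instead of letting the backend fail."""
    ext = Path(path).suffix.lower()
    if ext not in VALID_EXTENSIONS:
        raise ValueError(
            f"Unsupported annotation file extension {ext!r} for {path!r}; "
            f"expected one of {sorted(VALID_EXTENSIONS)}."
        )
    return path
```

The benefit over deferring to the backend is that the error message points at the user's file rather than at internals of the parsing library.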
├── image_3.png
...
The folder ``train_masks`` has the following contents: |
Let's write down what the annotation is: an image encoded as uint8 with 0 as background and 1, ... for the instances?
Yes, we should do this. I'll leave it for a follow-up just because this PR is getting too heavy haha, but I plan to come back and add more detail.
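The encoding described above can be illustrated with a tiny hypothetical mask (assuming NumPy; the shape, values, and variable names are made up for illustration, not taken from the Flash docs):

```python
import numpy as np

# A uint8 mask image: 0 is background, and 1, 2, ... index the instances.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[1:3, 1:3] = 1  # pixels belonging to instance 1
mask[5:7, 4:8] = 2  # pixels belonging to instance 2

# The set of instance ids present (background included) is then just:
instance_ids = np.unique(mask)
```

Saved as a single-channel PNG, such a mask stays lossless and keeps the instance ids intact, which is what makes uint8 a natural choice here.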
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
…ning/lightning-flash into docs/icevision_data
What does this PR do?
Part of #957
Fixes #1089
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃