All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- fix bug in unique labels
- add crop shape and overlap shape to filename when cropping ds
- Moved package building to pyproject.toml
- Fixed PyPI build
- Try to fix PyPI build
- `split_train_test` now filters samples with too few unique sources to prevent errors in sklearn
- refactored `to_abs` and `to_rel` methods for easier readability and better performance
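The filtering above can be sketched independently of lost_ds (the function name and data below are hypothetical): sklearn's stratified `train_test_split` raises a `ValueError` when the least populated class has fewer than two members, so such samples are dropped beforehand.

```python
from collections import Counter

def filter_rare_labels(samples, labels, min_count=2):
    # Drop samples whose label occurs fewer than min_count times,
    # so a downstream stratified split cannot fail on rare classes.
    counts = Counter(labels)
    keep = [i for i, lbl in enumerate(labels) if counts[lbl] >= min_count]
    return [samples[i] for i in keep], [labels[i] for i in keep]

samples = ["a.jpg", "b.jpg", "c.jpg", "d.jpg", "e.jpg"]
labels  = ["cat",   "cat",   "dog",   "dog",   "bird"]  # 'bird' occurs only once
X, y = filter_rare_labels(samples, labels)
# 'bird' is dropped; the remaining classes can be stratified safely
```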
- retry reading a parquet file without opening as buffer if it fails
- semantic_segmentation can store files with numeric names
- new function `split_train_test_multilabel` for fairly splitting multilabel datasets
- added requirements
- vis_and_store can handle multilabel, singlelabel and None correctly now
- polygon validation fixed: reduced the required number of points from 4 to 3
- fixed pandas `drop()` FutureWarning in `validate_unique_annos`
- `crop_dataset` does not produce empty duplicates of crop positions anymore
- `segmentation_to_lost`: fixed bug where some contours of different classes were merged accidentally
- `mask_dataset` won't overwrite the input dataframe anymore
- added a serialization case where a column has empty lists
- fixed `voc_eval` bug where gt_df bboxes weren't handled correctly
- fixed `coco_eval` bug: different dataframes now share the same uids for images and labels
- `to_coco` method has arguments `predef_img_mapping` and `predef_lbl_mapping` now to hand over predefined label and image uids
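The idea behind predefined uid mappings can be illustrated with a small sketch (function name hypothetical, not the library's implementation): a mapping handed in from a previous export is reused, and only unseen names receive fresh ids, so two exports stay consistent.

```python
def build_id_map(names, predefined=None):
    # Reuse predefined ids so two exports share uids; assign fresh ids
    # (continuing after the largest predefined one) to unseen names.
    id_map = dict(predefined or {})
    next_id = max(id_map.values(), default=0) + 1
    for name in names:
        if name not in id_map:
            id_map[name] = next_id
            next_id += 1
    return id_map

lbl_map = build_id_map(["person", "car"], predefined={"car": 1})
# 'car' keeps its predefined id, 'person' gets a new one
```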
- export method `crop_img` to `lost_ds` (`lost_ds.crop_img` and `ds.crop_img`, formerly `lost_ds.DSCropper.crop_img`)
- resolve FutureWarning at `transform_bbox_style`
- make `crop_dataset` compatible with pandas > 1.5.0
- `crop_component` indexing is image-wise instead of global
- improve some filter and transform functions
- Add parallel option for multiple functions
- fix typos in some methods' arguments
- method `split_train_test` which allows the stratify option for e.g. classification datasets
- `coco_eval` method for detection (mAP, Average Recall)
- `voc_eval` method for detection (tp, fp, fn, precision, recall, ap)
- `to_coco` improvements and bugfixes
- `voc_score_iou_multiplex` method: shifting bbox score and iou_thresholds to find optimal thresholds
- cropping methods return an additional column 'crop_position' now
- added arg 'random_state' for dataset-splitting
- added arg for optional parallelisation
- added color selection for `vis_and_store` - can take a column now
- improved detection metrics
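For reference, the precision/recall values reported by a VOC-style evaluation follow directly from the tp/fp/fn counts; a minimal sketch (not the library's code):

```python
def precision_recall(tp, fp, fn):
    # precision: fraction of predictions that are correct
    # recall:    fraction of ground-truth objects that were found
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

p, r = precision_recall(tp=8, fp=2, fn=4)
```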
- file_man: Allow other fsspec filesystems
- Progress callback for pack_ds
- Argument `fontscale` at `vis_and_store` to enable manual control of the text size
- Argument `cast_others` at `segmentation_to_lost` to allow ignoring unspecified pixel values
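The "cast others" idea can be sketched with a toy pixel remapping (function and argument names here are illustrative, not the library's API): known pixel values are mapped, while unspecified ones are cast to a fallback value instead of raising an error.

```python
def remap_pixels(mask, value_map, cast_others=None):
    # Map known pixel values; unknown values become cast_others
    # (or are kept as-is when cast_others is None).
    return [[value_map.get(v, v if cast_others is None else cast_others)
             for v in row] for row in mask]

mask = [[0, 1], [2, 7]]                       # 7 is not in the mapping
out = remap_pixels(mask, {0: 0, 1: 1, 2: 2}, cast_others=0)
```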
- Added zip support for pack_ds method
- `bbox_nms` method to directly apply non-max-suppression for bboxes
- Argument `path_col` in methods `to_abs` and `to_rel` to enable explicit specification of the column to look for image paths
- Argument `mode` for method `remap_img_path` to enable not only replacement but also prepending of root paths
- Set argument default of `inplace` of method `pack_ds` to False
- Enable visualization of annos without `anno_lbl` specified
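Non-max-suppression, as applied by `bbox_nms`, can be sketched in plain Python (this is the generic algorithm, not the library's implementation): boxes are visited in descending score order, and any remaining box overlapping the kept one above an IoU threshold is discarded.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    # Greedily keep the highest-scoring box, drop overlapping rivals.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
keep = nms(boxes, scores)  # the two overlapping boxes collapse to one
```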
- `validate_unique_annos` uses data-hash for dropping duplicates now
- `vis_and_store` text-boxes won't be outside the image anymore
- `vis_and_store` can determine optimal text-size for labeling when passing arg. line_thickness 'auto'
- dependencies for lost_ds in requirements.txt + setup.py
- examples dataset and code-snippets
- improved function to load datasets at LOSTDataset
- new function `segmentation_to_lost` to convert pixelmaps to lost-annotations
- new function `crop_components` to crop dataset based on annotations
- new function `vis_semantic_segmentation` to color sem-seg. datasets
- new function `to_coco` to generate and store coco datasets from LOSTDataset
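To illustrate the pixelmap-to-annotation idea behind `segmentation_to_lost`: the library extracts contours per class, but a reduced toy sketch (not the library's code) can show the principle by collecting one bounding box per pixel value.

```python
def class_bboxes(mask):
    # Toy pixelmap -> per-class bounding box (x1, y1, x2, y2);
    # pixel value 0 is treated as background and skipped.
    boxes = {}
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v == 0:
                continue
            x1, y1, x2, y2 = boxes.get(v, (x, y, x, y))
            boxes[v] = (min(x1, x), min(y1, y), max(x2, x), max(y2, y))
    return boxes

mask = [[0, 1, 1],
        [0, 1, 0],
        [2, 0, 0]]
boxes = class_bboxes(mask)
```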
- First version