Here you can post your trained models on different Datasets - 3 files: cfg, weights, names #3874
Comments
I would try … Hope to see comments on these.
MobileNet V2 Imagenet: https://github.com/AlexeyAB/darknet/files/3310574/mobilenet_v2.cfg.txt mobilenetv2_last.weights
@ntd94 Thanks. What mAP (accuracy) did you get on the VisDrone2019 val/test set?
Any yolo_v3_tiny_pan3 aa_ae_mixup scale_giou weights? @AlexeyAB
Dataset: Custom small dataset with 6 classes (crops and stems); images are approx. 2 MPix with a 4:3 ratio. For each class, both the whole plant and its stem are tagged. The stem annotation is a square box of approximately the same relative size for each class and image. There is large overlap for some bounding boxes; some plants are very small while others fill almost half the image.
Results: Models are trained for 10,000 steps using the original parameters, except for … I will update this post as soon as I have new results. I will also try to compare the AlexeyAB Darknet implementation with other real-time frameworks (CenterNet, Ultralytics, RetinaNet, ...).
* Using pre-trained weights for deep networks such as yolo v3 and CSR significantly improves training stability and speed, as well as accuracy. This effect is less significant for shallower networks such as Tiny Yolo v3 Pan3.
* Using pre-trained weights from …
Dataset: one-class custom data with 33,216 images for training and 4,324 images for validation. Experimental results will be updated continuously.
A summary:
@Kyuuki93 Hi, Thanks!
Just download …
In my application, …
I tried it on my own drone video and the result is really good. Please advise how to get similar performance. For my purposes, I need to separate the truck category into several truck types, plus person, car, bus, and motorcycle. Is there any dataset that could be used specifically for aerial-view images?
@laclouis5: What is the difference between mAP@0.5 and mAP@[0.5...0.95]?
From how I read it, the mAP is calculated at a 50% IoU threshold in both cases.
No, mAP@[0.5...0.95] is the average AP over ten different IoU thresholds ranging from 0.5 to 0.95 in 0.05 increments.
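To make the distinction concrete, here is a minimal sketch of the averaging step (the helper names `iou` and `coco_style_map` are illustrative, not functions from Darknet or pycocotools):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def coco_style_map(ap_at_iou):
    """mAP@[0.5...0.95]: mean AP over IoU thresholds 0.50, 0.55, ..., 0.95.

    `ap_at_iou` is a callable returning the AP computed at one IoU threshold.
    """
    thresholds = [0.5 + 0.05 * i for i in range(10)]
    return sum(ap_at_iou(t) for t in thresholds) / len(thresholds)
```

So mAP@0.5 counts a detection as correct whenever it overlaps the ground truth by at least 50% IoU, while mAP@[0.5...0.95] also rewards tighter localisation by re-evaluating at stricter thresholds.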
The mobilenetv2 cfg file has an error: the stride-16 stage should be repeated 7 times, but instead the stride-8 stage is repeated 7 times, which increases the BFLOPs.
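The BFLOPs impact follows from convolution cost scaling with feature-map area: a block repeated at stride 8 runs on a map four times larger than the same block at stride 16. A rough sketch, where the 3x3 kernel and the channel count of 96 are arbitrary placeholders, not values from the cfg:

```python
def conv_macs(h, w, c_in, c_out, k=3):
    """Approximate multiply-accumulates of a k x k convolution
    over an h x w feature map (stride 1, same padding)."""
    return h * w * c_in * c_out * k * k

# With a 224x224 input, the stride-8 map is 28x28 and the stride-16
# map is 14x14, so an identical block repeated at stride 8 costs
# 4x the FLOPs of the same block repeated at stride 16.
ratio = conv_macs(28, 28, 96, 96) / conv_macs(14, 14, 96, 96)
print(ratio)  # -> 4.0
```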
Pascal VOC 2007+2012
@laclouis5 Hello there, is there a list/compilation of all those custom configs you used? I cannot seem to find them in the cfg/ folder.
Hi @usamahjundia, all cfg files are attached in #3874 (comment), 2nd column.
Thanks, but that is unfortunately not what I meant. What I meant was: where did you discover them in the first place? Or are they custom-made? Sorry for the confusion.
@usamahjundia They are regular cfg files developed in this repo; I did not customise them. You can find them in the cfg folder of the repo.
@laclouis5 hello
@ShaneHsieh Error on my side, I updated the post with the correct image. "Pre-trained" refers to models trained starting from weights of models trained on MS COCO; see the training tutorial of this repo for more information on how to do that. "From scratch" means without pre-trained weights.
@laclouis5 Thanks.
MobileNetV2-YOLOv3-Lite&Nano Darknet: mobile inference frameworks benchmark (4× ARM CPU)
Here you can post your trained models on different Datasets - 3 files: cfg, weights, names, with:
* mAP@0.5 or/and mAP@[0.5...0.95]
* COCO test-dev
* ImageNet valid