
How to evaluate on new dataset? #9

Open

marziehoghbaie opened this issue May 30, 2020 · 15 comments

Comments

@marziehoghbaie

Hi,
How can I test on a new dataset? Should the files be in video format, or should I just provide the video frames as images?
Thanks in advance.

@clks-wzz
Owner

clks-wzz commented Jun 1, 2020

You should convert the videos to frames for evaluation. Generally speaking, the score of a video is the average of the scores of all (or sampled) frames of that video.
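For illustration, a minimal sketch of this averaging step (the function name and data layout here are hypothetical, just to show the idea):

```python
# Illustrative only: aggregate per-frame scores into per-video scores by averaging.
# The (video_id, score) pair layout is an assumption, not part of this repository.
from collections import defaultdict

def video_scores(frame_scores):
    """frame_scores: list of (video_id, score) pairs, one per frame."""
    per_video = defaultdict(list)
    for video_id, score in frame_scores:
        per_video[video_id].append(score)
    # The video-level score is the mean of its frame-level scores.
    return {vid: sum(s) / len(s) for vid, s in per_video.items()}

# Example: two frames of "video_01" and one frame of "video_02"
print(video_scores([("video_01", 0.9), ("video_01", 0.7), ("video_02", 0.2)]))
# -> roughly {'video_01': 0.8, 'video_02': 0.2}
```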

@marziehoghbaie
Author

Thanks, but I have another question: what is the structure of the test data folder? Do I need to put my video frames in a subfolder with a specific name?

@clks-wzz
Owner

clks-wzz commented Jun 4, 2020

Sure, it's better to put the frames in well-designed subfolders with specific names.
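For illustration only, since the expected layout is not spelled out in this thread, one common convention (purely a hypothetical example, not this repo's documented structure) is one subfolder per video with sequentially numbered frames:

```
test_data/
  video_001/
    0001.jpg
    0002.jpg
    ...
  video_002/
    0001.jpg
    ...
```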

@10LGUO

10LGUO commented Jul 8, 2020

You should convert the videos to frames for evaluation. Generally speaking, the score of a video is the average of the scores of all (or sampled) frames of that video.

I didn't find any code related to video-frame conversion in your repo. Do I need to write such code on my own?

@clks-wzz
Owner

Yes. You can use Python's cv2 (OpenCV) or ffmpeg to convert the videos to frames.
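For example, a minimal OpenCV sketch for this conversion (the paths and the 0001.jpg-style naming below are just placeholders, not a required convention):

```python
# Minimal example: dump every frame of a video to numbered JPEGs with OpenCV.
# Output directory and naming are placeholders; adapt them to whatever layout you use.
import os
import cv2

def video_to_frames(video_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video (or read error)
            break
        cv2.imwrite(os.path.join(out_dir, f"{idx:04d}.jpg"), frame)
        idx += 1
    cap.release()
    return idx  # number of frames written

# video_to_frames("test_video.avi", "frames/test_video")
```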

@10LGUO

10LGUO commented Jul 10, 2020

Thanks so much for the reply. May I ask what "scene.dat" corresponds to in your code, and how I should generate it from a raw image set?

@eesaeedkarimi

I have the same issue.

  1. What should the names of the frame images be?
  2. How can I pass the path of the directory that contains the frame images to the code?
  3. Should I compute the depth with PRNet when testing a video?
  4. Where should I put the depth output for the test mode, and what should the depth outputs be named?

Can anybody help me, please?

@mxbastidasr

Hello! Did you figure it out?
So far, from checking the code, I know that there is a function per dataset in util_dataset.py and that the file names are handled by the generate_existFaceLists_perfile function in generate_data_test.py, but I'm not sure whether the depth needs to be computed with PRNet for the test videos.
Thanks in advance!

@clks-wzz
Owner

Thanks so much for the reply. May I ask what "scene.dat" corresponds to in your code, and how I should generate it from a raw image set?

"scene.dat" is the file that contains the location (bounding box) of the face. You can write your own script or use a face detector to generate it.

@610821216

610821216 commented Nov 3, 2020

"scene.dat" is the file that contains the location (bounding box) of the face. You can write your own script or use a face detector to generate it.

May I ask whether scene.dat refers to the original image or the depth image? Does it store the face position or a feature? Are the original and depth input sizes 256x256 and 32x32? If possible, could you share a sample scene.dat?

@gabrielvannier

Has anyone found out how the dataset folder should be named and structured, more specifically? I cannot find anything about the path to the video frames for testing.

@cuiwenbing

When using the OULU-NPU dataset, what should the names of the frame images be?

@cuiwenbing

I have the same issue.

  1. What should the names of the frame images be?
  2. How can I pass the path of the directory that contains the frame images to the code?
  3. Should I compute the depth with PRNet when testing a video?
  4. Where should I put the depth output for the test mode, and what should the depth outputs be named?

Can anybody help me, please?

Can you answer these questions now?

@JavanehBahrami

@clks-wzz, can you please answer these questions?

@hollerback370

This project really wasn't put together with care. It's open source, sure, but the dataset format, how to lay it out, and which model is actually used for inference are never explained clearly. It's a mess.
