train issues #22
I think the only two things to do are: (1) write a data loader for the target AIGC dataset; (2) modify the training code to omit the two auxiliary tasks (scene classification and distortion type identification) if you only want to train with quality labels.
Thank you. One more question: which one is the quality-score fidelity loss function?
@ctxya1207 I have updated the code by adding a script that enables single-database training of LIQE with quality labels only. See the Readme: python train_liqe_single.py
def loss_m(y_pred, y):
In fact, I want to use the fidelity loss function when predicting the consistency score in AIGC image quality evaluation, but I don't know how to write this function.
We have several implementation variants of fidelity loss. By default, we use loss_m4 in our original implementation, which takes the predicted quality, the number of images sampled from each dataset, and the ground-truth quality as input, computes the fidelity loss on each dataset, and averages them into the final loss value.
If you only want the fidelity loss, loss_m3 would be fine. loss_m is an implementation of margin ranking loss, not fidelity loss. |
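As a rough illustration (not the repo's exact loss_m3, whose batching and constants may differ), a pairwise fidelity loss under a Thurstone model can be sketched as follows: for each pair of images, the score difference is mapped through the standard normal CDF to a preference probability, and the loss measures the fidelity between the predicted and ground-truth preference distributions.

```python
from itertools import combinations
from math import erf, sqrt

def phi(x):
    # Standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def fidelity_loss(y_pred, y):
    """Average pairwise fidelity loss over all pairs in a batch (sketch).

    y_pred, y: sequences of predicted and ground-truth quality scores.
    """
    losses = []
    for i, j in combinations(range(len(y)), 2):
        # Thurstone model: probability that image i is preferred over j
        p = phi((y[i] - y[j]) / sqrt(2.0))
        p_hat = phi((y_pred[i] - y_pred[j]) / sqrt(2.0))
        # Fidelity between the two Bernoulli distributions;
        # zero when p == p_hat, positive otherwise
        loss = 1.0 - sqrt(p * p_hat) - sqrt((1.0 - p) * (1.0 - p_hat))
        losses.append(loss)
    return sum(losses) / max(len(losses), 1)
```

When predictions agree with the ground-truth ranking the loss approaches zero; reversed rankings give a strictly positive loss.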
Thank you very much. Could you share a contact method so that we can communicate more easily?
Feel free to contact me via e-mail: zwx8981@sjtu.edu.cn
def loss_m3(y_pred, y):
Yes.
In train_liqe_single.py, there is total_loss = total_loss + 0.1*refine_loss. What does refine_loss mean, and why is its weight 0.1?
Sorry, that was uncleaned code. I've fixed it; please try again.
In running_loss = beta * running_loss + (1 - beta) * total_loss.data.item(), why is beta set to 0.9?
This is only a momentum factor for computing the moving-average loss; it does not affect the training result.
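The moving average above is a standard exponential moving average used purely for logging. A minimal sketch of the update (the function name is illustrative, not from the repo):

```python
def moving_average(losses, beta=0.9):
    """Exponential moving average of per-step losses.

    A larger beta gives a smoother curve; this only changes what is
    logged, never the gradients or the optimizer updates.
    """
    running = 0.0
    out = []
    for loss in losses:
        running = beta * running + (1 - beta) * loss
        out.append(running)
    return out
```

With beta = 0.9, each logged value mixes roughly the last ten steps, so the curve is readable without lagging too far behind the raw loss.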
num_steps_per_epoch = 200. May I ask whether this variable is equivalent to batch_size?
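If the training script follows the common convention (an assumption, not confirmed in this thread), num_steps_per_epoch is the number of mini-batches drawn per epoch, not the batch size; the two are independent settings. Illustrative values only:

```python
# Hypothetical illustration of the usual convention:
num_steps_per_epoch = 200  # mini-batches (optimizer updates) per epoch
batch_size = 32            # images per mini-batch (a separate setting)

# Samples seen in one epoch under this sampling scheme
samples_per_epoch = num_steps_per_epoch * batch_size
print(samples_per_epoch)  # 6400
```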
Can you help me with this? Why does running the code report: FileNotFoundError: [Errno 2] No such file or directory: '/ IQA_Database/databaserelease2/gblur/img143.bmp'
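One plausible cause (a guess from the reported path, which contains a space after the leading '/') is a stray space or wrong value in the configured data root. A quick sanity check before training, with data_root as a placeholder you should point at your actual download location:

```python
import os

# Hypothetical check: data_root must match where the LIVE release 2
# database was actually extracted; note there should be no stray spaces.
data_root = "/IQA_Database"
sample = os.path.join(data_root, "databaserelease2", "gblur", "img143.bmp")
print(sample, "exists:", os.path.exists(sample))
```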
How to train on the AIGC dataset?