Originally posted by kevinPoliPoli April 21, 2024
Hi everyone, I recently trained a ConvTranPlus model (default parameters) on a multi-class problem (21 classes, 4 features) and obtained 94% accuracy on the validation set (image below).
(Sorry for the image quality; on the y-axis the loss goes from 3.0 to 0.5.)
As you can see, the validation loss is below the training loss, so the model seems to generalize well...
The problem is that when I make predictions on the very same training set I used to train the model, I get 0-4% accuracy. How is that possible? I tried with both the training and validation sets, and I always get 0-4% accuracy.
Moreover, running inference on the same set multiple times gives different results each time (still in the 0-4% range).
PS: the dataset is private, so I can't share it here.
Thank you in advance :)
from tsai.all import *  # provides ConvTranPlus, ShowGraphCallback2, PredictionDynamics, the metric wrappers, etc.
from fastai.callback.tracker import EarlyStoppingCallback
from fastai.callback.schedule import *
# hyper-parameters
lr = 0.0006309573538601399 # found by some tests
n_epochs = 250
use_wandb = False
cbs = [EarlyStoppingCallback(min_delta=0.001, patience=3), ShowGraphCallback2(), PredictionDynamics(figsize=(6,5))]
metrics = [accuracy, Precision(average='macro'), Recall(average='macro'), F1Score(average='macro'), RocAuc()]
loss_func = CrossEntropyLossFlat()
model = ConvTranPlus(dls.vars, dls.c, dls.len)
# note: the cbs list defined above (early stopping, prediction dynamics) is not passed to the Learner here
learn = Learner(dls, model, loss_func=loss_func, metrics=metrics, cbs=ShowGraphCallback2())
learn.fit_one_cycle(n_epochs, lr)
save the model
# metadata
arch = "ConvTranPlus"
realorsim = 'simold'
# bs_t / bs_v are the train / valid batch sizes defined when the dataloaders were built
filename = f'{arch}-n_epochs{n_epochs}-bs_t{bs_t}-bs_v{bs_v}-lr{lr}-type{realorsim}-norm-acc94'
PATH = Path(f'./models/{arch}/inference/{filename}.pkl')
learn.export(PATH) # for inference only
PATH = Path(f'./{arch}/withstate/{filename}-state') # autosave into ./models.
learn.save(PATH) # save model and optimizer state -> used for finetuning or incremental learning
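make inference
To make the inference step concrete, here is a minimal sketch of the usual tsai flow with an exported learner (X and y stand for the same arrays used to build dls, PATH is the exported .pkl above, and the accuracy computation is an illustrative placeholder; my actual inference code may differ):
from tsai.all import *  # load_learner + the get_X_preds method tsai adds to Learner
import numpy as np
# reload the learner exported above (inference only)
learn = load_learner(PATH)
# get_X_preds builds a dataloader from the raw arrays and applies the batch_tfms
# stored with the learner (e.g. the standardization fitted at training time)
probas, _, preds = learn.get_X_preds(X, with_decoded=True)
# map predicted class indices back to labels and compare with the ground truth
pred_labels = np.asarray(learn.dls.vocab)[np.asarray(probas).argmax(-1)]
print('accuracy:', (pred_labels == np.asarray(y)).mean())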
Discussed in #895
Here is the overall structure of the code I wrote (only the training and saving parts are shown above):
- relabeling
- create the dataset and splits
- apply standardization and prepare the dataloaders
- build a learner to find the LR
- train
- save the model
- make inference
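The dataset / standardization / dataloader steps follow the usual tsai pattern, roughly like the sketch below (valid_size, by_var, the batch sizes and the variable names are placeholders rather than my exact settings):
from tsai.all import *
# X: array of shape (n_samples, 4 variables, seq_len); y: array of labels (21 classes)
splits = get_splits(y, valid_size=0.2, stratify=True, shuffle=True)
tfms = [None, TSClassification()]        # encode the labels as categories
batch_tfms = TSStandardize(by_var=True)  # standardize each variable on the fly
bs_t, bs_v = 64, 128                     # placeholder train / valid batch sizes
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[bs_t, bs_v])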