KAGGLE-FeedBack3-ELL

This is a write-up of the Feedback Prize - English Language Learning competition: https://www.kaggle.com/competitions/feedback-prize-english-language-learning

Table of contents

OVERVIEW

Train

During the competition I only used deberta-v3-base and deberta-v3-large with mean pooling and attention pooling.
This time I am going to train more models (a minimal sketch of both pooling heads follows the model list below).

Models used:

  1. deberta-v3-base
  2. deberta-v3-large
  3. deberta-v2-xlarge
  4. roberta-large
  5. distilbert-base-uncased
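
The mean pooling and attention pooling heads mentioned above are not spelled out in this README; below is a minimal sketch of the two heads as they are commonly implemented for this competition (class names and the attention MLP shape are my assumptions, not code from this repo).

```python
import torch
import torch.nn as nn

class MeanPooling(nn.Module):
    # Average the token embeddings, ignoring padding positions.
    def forward(self, last_hidden_state, attention_mask):
        mask = attention_mask.unsqueeze(-1).float()          # (B, T, 1)
        summed = (last_hidden_state * mask).sum(dim=1)       # (B, H)
        counts = mask.sum(dim=1).clamp(min=1e-9)             # (B, 1)
        return summed / counts

class AttentionPooling(nn.Module):
    # Learn a scalar score per token and take a softmax-weighted sum.
    def __init__(self, hidden_size):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, last_hidden_state, attention_mask):
        scores = self.attention(last_hidden_state)                           # (B, T, 1)
        scores = scores.masked_fill(attention_mask.unsqueeze(-1) == 0, -1e4)
        weights = torch.softmax(scores, dim=1)
        return (weights * last_hidden_state).sum(dim=1)                      # (B, H)
```

Both heads take the backbone's `last_hidden_state` and `attention_mask` and return one vector per essay, which then feeds a 6-way regression layer for the six scored targets.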

Inference

LOG

12.17

Applied different loss weights per target (a minimal sketch follows below).
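
A minimal sketch of what a per-target weighted loss can look like, assuming a smooth-L1 regression loss and the weight dictionary listed under Model-1/Model-2 below (the function name and the choice of loss are my assumptions):

```python
import torch
import torch.nn as nn

TARGETS = ["cohesion", "syntax", "vocabulary", "phraseology", "grammar", "conventions"]
# Per-target loss weights (values taken from the Model-1/Model-2 configs below).
LOSS_WEIGHTS = torch.tensor([0.21, 0.16, 0.10, 0.16, 0.21, 0.16])

def weighted_target_loss(preds, labels, weights=LOSS_WEIGHTS):
    """preds, labels: (batch, 6). Returns a scalar loss weighted over the 6 targets."""
    per_target = nn.functional.smooth_l1_loss(preds, labels, reduction="none").mean(dim=0)  # (6,)
    return (per_target * weights.to(per_target.device)).sum()
```

Since the weights sum to 1.0, the loss stays on the same scale as an unweighted average, so the learning-rate schedule does not need to change.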

Model-1

info:

  1. deberta-v3-base
  2. attention head
  3. layerwise learning rate decay (see the sketch after this list)
  4. last layer reinitialization (kaiming normal)
  5. Different loss weights per target: {'cohesion': 0.21, 'syntax': 0.16, 'vocabulary': 0.10, 'phraseology': 0.16, 'grammar': 0.21, 'conventions': 0.16}
  6. Finetuned with Optuna
    cv: 0.4502, public lb: 0.4408, private lb: 0.4396
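
Item 3 refers to layerwise learning rate decay: layers closer to the embeddings get geometrically smaller learning rates than the top layers and the task head. A minimal sketch of building the optimizer parameter groups, assuming a model with `backbone` (HuggingFace encoder) and `head` attributes; the attribute names, base learning rate, and decay factor are illustrative, not values from this repo:

```python
import torch

def get_llrd_param_groups(model, base_lr=2e-5, decay=0.9, head_lr=1e-4, weight_decay=0.01):
    # Assumes model.backbone.embeddings and model.backbone.encoder.layer (a list of blocks),
    # plus a custom pooling/regression head under model.head.
    groups = []
    layers = [model.backbone.embeddings] + list(model.backbone.encoder.layer)
    lr = base_lr
    # Walk from the top encoder layer down to the embeddings, shrinking the lr each step.
    for layer in reversed(layers):
        groups.append({"params": layer.parameters(), "lr": lr, "weight_decay": weight_decay})
        lr *= decay
    # The task head gets its own (usually larger) learning rate.
    groups.append({"params": model.head.parameters(), "lr": head_lr, "weight_decay": 0.0})
    return groups

# optimizer = torch.optim.AdamW(get_llrd_param_groups(model))
```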

Model-2

  1. deberta-v3-base
  2. mean head
  3. layerwise learning rate decay
  4. last layer reinitialization (kaiming normal; see the sketch after this list)
  5. Different loss weights per target: {'cohesion': 0.21, 'syntax': 0.16, 'vocabulary': 0.10, 'phraseology': 0.16, 'grammar': 0.21, 'conventions': 0.16}
  6. Finetuned with Optuna
    cv: 0.4501
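
Item 4 in both configs reinitializes the last encoder layer before fine-tuning. A minimal sketch using kaiming-normal init for linear weights, assuming a HuggingFace-style backbone with `encoder.layer`; the helper name and the exact set of modules reset are assumptions:

```python
import torch.nn as nn

def reinit_last_layers(backbone, num_layers=1):
    # Re-initialize the top `num_layers` encoder blocks so they forget their
    # pre-trained weights before fine-tuning.
    for layer in backbone.encoder.layer[-num_layers:]:
        for module in layer.modules():
            if isinstance(module, nn.Linear):
                nn.init.kaiming_normal_(module.weight)
                if module.bias is not None:
                    nn.init.zeros_(module.bias)
            elif isinstance(module, nn.LayerNorm):
                nn.init.ones_(module.weight)
                nn.init.zeros_(module.bias)
```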

Model-3

  1. base
  2. weighted layer pooling (see the sketch below)
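
"Weighted layer" here presumably means weighted layer pooling: a learned, softmax-normalized average over the hidden states of the last few transformer layers, applied before token pooling. A minimal sketch (the class name and the number of layers used are assumptions):

```python
import torch
import torch.nn as nn

class WeightedLayerPooling(nn.Module):
    # Learn one scalar weight per transformer layer and average the last few layers
    # with a softmax over those weights. Expects output_hidden_states=True.
    def __init__(self, num_layers_to_use=4):
        super().__init__()
        self.num_layers = num_layers_to_use
        self.layer_weights = nn.Parameter(torch.ones(num_layers_to_use))

    def forward(self, all_hidden_states):
        # all_hidden_states: tuple of (B, T, H) tensors, one per layer.
        stacked = torch.stack(all_hidden_states[-self.num_layers:], dim=0)   # (L, B, T, H)
        weights = torch.softmax(self.layer_weights, dim=0).view(-1, 1, 1, 1)
        return (weights * stacked).sum(dim=0)  # (B, T, H); then mean/attention pool over tokens
```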

Model-4

  1. large
  2. mean

Model-5

  1. large
  2. attention

Model-6

  1. large
  2. weighted layer

During the competition I placed 129/2654 on the public leaderboard but only 591/2654 on the private leaderboard, so I want to work through this competition again and make late submissions.
