This repository describes our experiments and stores our code for the Reverse Dictionary task, which
is one of the two tracks of SemEval-2022 Task 1.
Our paper was submitted to SemEval-2022 on February 28, 2022, and has been accepted.
Camera-ready version of our paper: 1Cademy at SemEval-2022 Task 1
Slides presenting our paper: SemEval2022_slides
All of our model code is adapted from and inspired by the organizers' baseline code.
We thank the organizers again for their effort.
Our RNN-based Monolingual Model
Our BiRNN-based Monolingual Model
Our LSTM-based Monolingual Model
Our ELMo-based Monolingual Model
Our Transformer-based Monolingual Model
Our Transformer-based Multilingual Model
Our Transformer-based Multilingual Model (RC)
Our Transformer-based Multilingual Model (ALT)
Our Transformer-based Multilingual Model (RC+ALT)
Our ELMo-based Multilingual Model
Our ELMo-based Multilingual Model (ALT)
Our ELMo-based Monolingual Multitask Model with DWA optimization
Tokenizers used in our experiments
Other Python code used in our experiments
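For readers unfamiliar with the task, here is a minimal sketch of the reverse-dictionary setup that all the models above address: encode a gloss (definition text) into a vector and rank candidate words by similarity to it. The toy vectors and function names below are purely illustrative (our actual models learn embeddings with RNN/ELMo/Transformer encoders); this is not our implementation, just the shape of the problem.

```python
# Toy illustration of the reverse-dictionary task: gloss -> embedding -> ranked words.
# TOY_VECTORS is a hypothetical 2-d embedding table for demonstration only.
import math

TOY_VECTORS = {
    "feline": [0.9, 0.1],
    "pet":    [0.7, 0.3],
    "cat":    [0.8, 0.2],
    "stone":  [0.1, 0.9],
}

def embed_gloss(gloss):
    """Average the vectors of known gloss tokens
    (a crude stand-in for a learned sequence encoder)."""
    vecs = [TOY_VECTORS[t] for t in gloss.lower().split() if t in TOY_VECTORS]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def rank_candidates(gloss, candidates):
    """Rank candidate words by cosine similarity to the gloss embedding."""
    g = embed_gloss(gloss)
    return sorted(candidates, key=lambda w: cosine(g, TOY_VECTORS[w]), reverse=True)

print(rank_candidates("a small feline pet", ["stone", "cat"]))  # 'cat' ranks first
```

In the shared task, the target is the word's pretrained embedding rather than the word itself, and ranking against the full vocabulary is how predictions are evaluated.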