PyTorch implementation of *Mimicking Word Embeddings using Subword RNNs* (EMNLP 2017).
The original model is implemented in DyNet, so this repo reimplements it in PyTorch.
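The core idea of the paper is to train a character-level BiLSTM that maps a word's spelling to its pretrained embedding, so that embeddings can be generated for out-of-vocabulary words. A minimal sketch of that architecture is below; the class and hyperparameter names (`Mimick`, `char_embed`, `hidden`) are illustrative, not the repo's actual API, and MSE is used here as a stand-in for the paper's squared-distance loss:

```python
import torch
import torch.nn as nn

class Mimick(nn.Module):
    """Char-level BiLSTM that predicts a word's embedding from its spelling.
    Names and sizes here are illustrative, not taken from this repo."""
    def __init__(self, n_chars, char_embed=20, hidden=50, word_dim=300):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_embed)
        self.lstm = nn.LSTM(char_embed, hidden,
                            bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, word_dim)

    def forward(self, chars):
        # chars: (batch, word_len) tensor of character ids
        _, (h, _) = self.lstm(self.embed(chars))
        # concatenate the final forward and backward hidden states
        return self.proj(torch.cat([h[0], h[1]], dim=-1))

model = Mimick(n_chars=128)
chars = torch.randint(0, 128, (4, 10))   # a batch of 4 words, 10 chars each
target = torch.randn(4, 300)             # pretrained embeddings to mimic
loss = nn.functional.mse_loss(model(chars), target)
loss.backward()
print(model(chars).shape)                # torch.Size([4, 300])
```

At inference time the trained network is simply run on the character sequence of any unseen word to produce a 300-dimensional embedding.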
python main.py [arguments]
-h, --help Show help information
--char_embed Character embedding size
--embedding File path of the pretrained word embeddings
Dataset | Vocabulary | Test Cosine Similarity
---|---|---
GloVe.6B.300 | 400K | 0.322