
AraBERT: Pre-training BERT for Arabic Language Understanding

**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT paper](https://arxiv.org/abs/2003.00104v2).

There are two versions of the model, AraBERTv0.1 and AraBERTv1, the difference being that AraBERTv1 uses pre-segmented text in which prefixes and suffixes are split using the Farasa Segmenter.

The model was trained on ~70M sentences, or ~23GB of Arabic text with ~3B words. The training corpora are a collection of publicly available large-scale raw Arabic text (Arabic Wikipedia dumps, the 1.5B words Arabic Corpus, the OSIAN Corpus, Assafir news articles, and 4 other manually crawled news websites (Al-Akhbar, Annahar, AL-Ahram, AL-Wafd) from the Wayback Machine).

We evaluate both AraBERT models on different downstream tasks and compare them to mBERT and other state-of-the-art models (to the extent of our knowledge). The tasks were Sentiment Analysis on 6 different datasets (HARD, ASTD-Balanced, ArsenTD-Lev, AJGT, LABR, ArSaS), Named Entity Recognition with the ANERcorp, and Arabic Question Answering on Arabic-SQuAD and ARCD.

Update 1 (21/4/2020): Fixed an issue with ARCD fine-tuning which drastically improved performance. Initially we didn't account for the change in the answer_start position during preprocessing.
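To make the issue concrete, here is a minimal sketch of the idea behind that fix. The function name and the offset-recovery strategy below are illustrative assumptions, not the exact code in `arcd_preprocessing.py`: once the context string is rewritten by preprocessing, the stored character offset is stale, so the answer span has to be located again in the new text.

```python
# Illustrative sketch only: recompute answer_start after preprocessing.
# `preprocess` is assumed to be any function that rewrites the context text
# (e.g. Farasa segmentation), invalidating the original character offsets.
def realign_answer_start(example, preprocess):
    """Recompute the SQuAD-style answer_start after the context is preprocessed."""
    answer = example["answers"][0]
    new_context = preprocess(example["context"])
    new_answer_text = preprocess(answer["text"])

    # The old character offset is meaningless in the rewritten string,
    # so search for the (preprocessed) answer text in the new context.
    new_start = new_context.find(new_answer_text)
    if new_start == -1:
        # If the span can no longer be located, drop the example
        # (or fall back to fuzzy matching).
        return None

    example["context"] = new_context
    answer["text"] = new_answer_text
    answer["answer_start"] = new_start
    return example
```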

Results (Acc.)

| Task | prev. SOTA | mBERT | AraBERTv0.1 | AraBERTv1 |
|:---|:---|:---|:---|:---|
| HARD | 95.7 (ElJundi et al.) | 95.7 | 96.2 | 96.1 |
| ASTD | 86.5 (ElJundi et al.) | 80.1 | 92.2 | 92.6 |
| ArsenTD-Lev | 52.4 (ElJundi et al.) | 51 | 58.9 | 59.4 |
| AJGT | 93 (Dahou et al.) | 83.6 | 93.1 | 93.8 |
| LABR | 87.5 (Dahou et al.) | 83 | 85.9 | 86.7 |
| ANERcorp | 81.7 (BiLSTM-CRF) | 78.4 | 84.2 | 81.9 |
| ARCD | mBERT | EM: 34.2, F1: 61.3 | EM: 51.14, F1: 82.13 | EM: 54.84, F1: 82.15 |

We would be extremely thankful if everyone contributed to the Results table by adding more scores on different datasets.

How to use

You can easily use AraBERT since it is almost fully compatible with existing codebases (you can use this repo instead of the official BERT one; the only difference is in the tokenization.py file, where we modify the _is_punctuation function to make it compatible with the "+" symbol and the "[" and "]" characters).
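For reference, such a change amounts to excluding the segmentation markers from BERT's punctuation test. The sketch below is illustrative, not the exact code in this repo; the surrounding logic follows Google's original tokenization.py, with the "+", "[" and "]" characters excluded as described above.

```python
import unicodedata

def _is_punctuation(char):
    """Checks whether `char` is a punctuation character, but keeps the
    characters used by AraBERT's segmentation markup ('+', '[', ']')."""
    if char in ("+", "[", "]"):
        return False
    cp = ord(char)
    # As in the original BERT code, treat all non-letter/number ASCII
    # as punctuation (e.g. "^", "$", "`").
    if (33 <= cp <= 47) or (58 <= cp <= 64) or (91 <= cp <= 96) or (123 <= cp <= 126):
        return True
    # Fall back to the Unicode punctuation categories.
    return unicodedata.category(char).startswith("P")
```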

To use HuggingFace's Transformers repository you only need to provide a list of tokens that the model must never split, and make sure that the text is pre-segmented. Note that not all libraries built on top of transformers support the never_split argument.

from transformers import AutoTokenizer, AutoModel
from preprocess_arabert import never_split_tokens, preprocess
from py4j.java_gateway import JavaGateway

arabert_tokenizer = AutoTokenizer.from_pretrained(
    "aubmindlab/bert-base-arabert",
    do_lower_case=False,
    do_basic_tokenize=True,
    never_split=never_split_tokens)
arabert_model = AutoModel.from_pretrained("aubmindlab/bert-base-arabert")

#Preprocess the text to make it compatible with AraBERT

gateway = JavaGateway.launch_gateway(classpath='./PATH_TO_FARASA/FarasaSegmenterJar.jar')
farasa = gateway.jvm.com.qcri.farasa.segmenter.Farasa()

text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
text_preprocessed = preprocess(text, do_farasa_tokenization=True, farasa=farasa)

>>>text_preprocessed: "و+ لن نبالغ إذا قل +نا إن هاتف أو كمبيوتر ال+ مكتب في زمن +نا هذا ضروري"

arabert_tokenizer.tokenize(text_preprocessed)

>>> ['و+', 'لن', 'نبال', '##غ', 'إذا', 'قل', '+نا', 'إن', 'هاتف', 'أو', 'كمبيوتر', 'ال+', 'مكتب', 'في', 'زمن', '+نا', 'هذا', 'ضروري']
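Once the text is preprocessed and tokenized, the model can be used like any other BERT checkpoint in transformers. A minimal sketch, assuming a recent transformers version where the tokenizer is callable; variable names are illustrative:

```python
import torch

# Encode the pre-segmented text and run a forward pass to get contextual embeddings.
inputs = arabert_tokenizer(text_preprocessed, return_tensors="pt")
with torch.no_grad():
    outputs = arabert_model(**inputs)

# First output is the last hidden state: (batch, seq_len, hidden_size).
last_hidden_state = outputs[0]
cls_embedding = last_hidden_state[:, 0, :]  # [CLS] token representation
```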

AraBERTv0.1 is compatible with all existing libraries, since it needs no pre-segmentation.

from transformers import AutoTokenizer, AutoModel

arabert_tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv01", do_lower_case=False)
arabert_model = AutoModel.from_pretrained("aubmindlab/bert-base-arabertv01")

text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_tokenizer.tokenize(text)

>>> ['ولن', 'ن', '##بالغ', 'إذا', 'قلنا', 'إن', 'هاتف', 'أو', 'كمبيوتر', 'المكتب', 'في', 'زمن', '##ن', '##ا', 'هذا', 'ضروري']

The araBERT_(Updated_Demo_TF).ipynb notebook is a small demo of fine-tuning on the AJGT dataset using TensorFlow (GPU and TPU compatible).
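If you prefer PyTorch over the notebook's TensorFlow setup, fine-tuning on a sentiment task like AJGT follows the usual transformers sequence-classification recipe. This is a minimal, illustrative sketch, not the notebook's exact settings; the two example sentences and the hyperparameters are assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv01", do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv01", num_labels=2)  # AJGT: positive / negative

# Placeholder examples, not the AJGT data itself.
texts = ["الخدمة ممتازة", "تجربة سيئة جدا"]
labels = torch.tensor([1, 0])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

# One illustrative training step.
model.train()
outputs = model(**batch, labels=labels)
loss = outputs[0]
loss.backward()
optimizer.step()
```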

AraBERT on ARCD

During the preprocessing step the answer_start character position needs to be recalculated. You can use arcd_preprocessing.py, as shown below, to clean and preprocess the ARCD dataset before running run_squad.py. A more detailed Colab notebook is available in the SOQAL repo.

python arcd_preprocessing.py \
    --input_file="/PATH_TO/arcd-test.json" \
    --output_file="arcd-test-pre.json" \
    --do_farasa_tokenization=True \
    --path_to_farasa="/PATH_TO/FarasaSegmenterJar.jar" 
python SOQAL/bert/run_squad.py \
  --vocab_file="/PATH_TO/tf_arabert/vocab.txt" \
  --bert_config_file="/PATH_TO/tf_arabert/config.json" \
  --init_checkpoint=$model_dir \
  --do_train=True \
  --train_file=turk_combined_all_pre.json \
  --do_predict=True \
  --predict_file=arcd-test-pre.json \
  --train_batch_size=32 \
  --predict_batch_size=24 \
  --learning_rate=3e-5 \
  --num_train_epochs=4 \
  --max_seq_length=384 \
  --doc_stride=128 \
  --do_lower_case=False \
  --output_dir=$output_dir \
  --use_tpu=True \
  --tpu_name=$TPU_ADDRESS

Model Weights and Vocab Download

| Model | AraBERTv0.1 | AraBERTv1 |
|:---|:---|:---|
| TensorFlow | Drive Link | Drive Link |
| PyTorch | Drive Link | Drive Link |

You can find the PyTorch models in HuggingFace's Transformers library under the aubmindlab username.

If you used this model, please cite us as:

@misc{antoun2020arabert,
    title={AraBERT: Transformer-based Model for Arabic Language Understanding},
    author={Wissam Antoun and Fady Baly and Hazem Hajj},
    year={2020},
    eprint={2003.00104},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Acknowledgments

Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, we couldn't have done it without this program, and to the AUB MIND Lab members for their continuous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.

Contacts

Wissam Antoun: Linkedin | Twitter | Github | wfa07@mail.aub.edu | wissam.antoun@gmail.com

Fady Baly: Linkedin | Twitter | Github | fgb06@mail.aub.edu | baly.fady@gmail.com

We are looking for sponsors to train BERT-Large and other Transformer models; the sponsor only needs to cover the data storage and compute costs of generating the pretraining data.
