%0 Conference Proceedings
%T MultiFiT: Efficient Multi-lingual Language Model Fine-tuning
%A Eisenschlos, Julian
%A Ruder, Sebastian
%A Czapla, Piotr
%A Kadras, Marcin
%A Gugger, Sylvain
%A Howard, Jeremy
%Y Inui, Kentaro
%Y Jiang, Jing
%Y Ng, Vincent
%Y Wan, Xiaojun
%S Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
%D 2019
%8 November
%I Association for Computational Linguistics
%C Hong Kong, China
%F eisenschlos-etal-2019-multifit
%X Pretrained language models are promising particularly for low-resource languages as they only require unlabelled data. However, training existing models requires huge amounts of compute, while pretrained cross-lingual models often underperform on low-resource languages. We propose Multi-lingual language model Fine-Tuning (MultiFiT) to enable practitioners to train and fine-tune language models efficiently in their own language. In addition, we propose a zero-shot method using an existing pretrained cross-lingual model. We evaluate our methods on two widely used cross-lingual classification datasets where they outperform models pretrained on orders of magnitude more data and compute. We release all models and code.
%R 10.18653/v1/D19-1572
%U https://aclanthology.org/D19-1572
%U https://doi.org/10.18653/v1/D19-1572
%P 5702-5707