Journal Natural Language Processing - Special Issue on Language Models for Low-Resource Languages

Important Dates

| Paper submission due | December 31, 2025 |
| First decision | March 31, 2026 - April 30, 2026 |
| Revised version submission | May 1, 2026 - June 1, 2026 |
| Final decision | August 30, 2026 |
Neural language models have revolutionised natural language processing (NLP) and have provided state-of-the-art results for many tasks. However, their effectiveness is largely dependent on the pre-training resources. Therefore, language models (LMs) often struggle with low-resource languages in both training and evaluation. Recently, there has been a growing trend in developing and adopting LMs for low-resource languages. This special issue aims to provide a forum for researchers to share and discuss their ongoing work on LMs for low-resource languages.
Topics
We invite submissions on a broad range of topics related to the development and evaluation of neural language models for low-resource languages, including but not limited to the following.
- Building language models for low-resource languages.
- Adapting/extending existing language models/large language models for low-resource languages.
- Corpora creation and curation technologies for training language models/large language models for low-resource languages.
- Benchmarks to evaluate language models/large language models in low-resource languages.
- Prompting/in-context learning strategies for low-resource languages with large language models.
- Review of available corpora to train/fine-tune language models/large language models for low-resource languages.
- Multilingual/cross-lingual language models/large language models for low-resource languages.
- Applications of language models/large language models for low-resource languages (e.g. machine translation, chatbots, content moderation, etc.).
Submission
Submissions should be formatted according to the journal guidelines and submitted through the manuscript submission system. To ensure your manuscript is considered for this special issue, please select “Language Models for Low-Resource Languages” under Special Issue Designation when uploading your manuscript.
Guest Editors
- Lancaster University, UK
- Lancaster University, UK
- Lancaster University, UK
- Lancaster University, UK
- Queensland University of Technology, Australia
Guest Editorial Board
- Gábor Bella - IMT Atlantique, France
- Ana-Maria Bucur - University of Bucharest, Romania
- Çağrı Çöltekin - University of Tübingen, Germany
- Vera Danilova - Uppsala University, Sweden
- Ona de Gibert - University of Helsinki, Finland
- Ignatius Ezeani - Lancaster University, UK
- Amal Htait - Aston University, UK
- Ali Hürriyetoğlu - Wageningen University & Research, Netherlands
- Danka Jokic - University of Belgrade, Serbia
- Diptesh Kanojia - University of Surrey, UK
- Muhidin Mohamed - Aston University, UK
- Alistair Plum - University of Luxembourg, Luxembourg
- Damith Premasiri - Lancaster University, UK
- Guokan Shang - Mohamed bin Zayed University of Artificial Intelligence, France
- Ravi Shekhar - University of Essex, UK
- Taro Watanabe - Nara Institute of Science and Technology, Japan
Contact us
Stay in touch to receive updates about LoResLM 2025