---
configs:
- config_name: en
  data_files: minimal_pair_mparalel_en.parquet
- config_name: fa
  data_files: minimal_pair_mparalel_fa.parquet
- config_name: is
  data_files: minimal_pair_mparalel_is.parquet
- config_name: et
  data_files: minimal_pair_mparalel_et.parquet
- config_name: sv
  data_files: minimal_pair_mparalel_sv.parquet
license: apache-2.0
language:
- fa
- en
- is
- sv
- et
---

# Minimal Pair mParalel (multilingual)

This combined dataset groups five language-specific minimal pair datasets
into a single repo with the following subsets:

- **en**
- **fa**
- **is**
- **et**
- **sv**

This dataset was used in:
```bibtex
  @misc{glocker2025growmergescalingstrategies,
        title={Grow Up and Merge: Scaling Strategies for Efficient Language Adaptation}, 
        author={Kevin Glocker and Kätriin Kukk and Romina Oji and Marcel Bollmann and Marco Kuhlmann and Jenny Kunz},
        year={2025},
        eprint={2512.10772},
        archivePrefix={arXiv},
        primaryClass={cs.CL},
        url={https://arxiv.org/abs/2512.10772}, 
  }
```

The data was originally introduced by:
```bibtex
@inproceedings{fierro-sogaard-2022-factual,
    title = "Factual Consistency of Multilingual Pretrained Language Models",
    author = "Fierro, Constanza  and
      S{\o}gaard, Anders",
    editor = "Muresan, Smaranda  and
      Nakov, Preslav  and
      Villavicencio, Aline",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-acl.240/",
    doi = "10.18653/v1/2022.findings-acl.240",
    pages = "3046--3052",
    abstract = "Pretrained language models can be queried for factual knowledge, with potential applications in knowledge base acquisition and tasks that require inference. However, for that, we need to know how reliable this knowledge is, and recent work has shown that monolingual English language models lack consistency when predicting factual knowledge, that is, they fill-in-the-blank differently for paraphrases describing the same fact. In this paper, we extend the analysis of consistency to a multilingual setting. We introduce a resource, mParaRel, and investigate (i) whether multilingual language models such as mBERT and XLM-R are more consistent than their monolingual counterparts;and (ii) if such models are equally consistent across languages. We find that mBERT is as inconsistent as English BERT in English paraphrases, but that both mBERT and XLM-R exhibit a high degree of inconsistency in English and even more so for all the other 45 languages."
}
```