---
dataset_info:
  features:
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: rationale
    dtype: string
  - name: label
    dtype: string
  - name: label_idx
    dtype: int64
  - name: dataset
    dtype: string
  splits:
  - name: train
    num_bytes: 203046319
    num_examples: 200000
  - name: validation
    num_bytes: 264310
    num_examples: 519
  download_size: 122985245
  dataset_size: 203310629
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
license: apache-2.0
task_categories:
- multiple-choice
language:
- en
size_categories:
- 100K<n<1M
---


# MNLP M2 MCQA Dataset

A unified multiple-choice question answering (MCQA) benchmark covering STEM subjects, combining samples from OpenBookQA, SciQ, MMLU-auxiliary, AQUA-Rat, and MedMCQA.

## Dataset Summary

This dataset merges five existing science and knowledge-based MCQA datasets into one standardized format:

| Source     | Train samples |
| ---------- | ------------: |
| OpenBookQA |         4,900 |
| SciQ       |        10,000 |
| MMLU-aux   |        85,100 |
| AQUA-Rat   |        50,000 |
| MedMCQA    |        50,000 |
| **Total**  |   **200,000** |

## Supported Tasks and Leaderboards

* **Task:** Multiple-Choice Question Answering (`multiple-choice`)
* **Metrics:** Accuracy

## Languages

* English

## Dataset Structure

Each example has the following fields:

| Name        | Type           | Description                                                             |
| ----------- | -------------- | ----------------------------------------------------------------------- |
| `question`  | `string`       | The question stem.                                                      |
| `options`   | `list[string]` | List of 4–5 answer choices.                                             |
| `label`     | `string`       | The correct answer letter, e.g. `"A"` or `"a"` (case varies by source). |
| `label_idx` | `int`          | Zero-based index of the correct answer (0–4).                           |
| `rationale` | `string`       | Supporting fact or rationale text (may be empty).                       |
| `dataset`   | `string`       | Source dataset name (`openbookqa`, `sciq`, etc.).                       |
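
For example, the answer text for a row `ex` is `ex["options"][ex["label_idx"]]`, and `label` is the same answer as a letter. A toy row (values invented for illustration):

```python
ex = {
    "question": "Which unit measures electric potential?",
    "options": ["ampere", "volt", "ohm", "watt"],
    "label": "B",
    "label_idx": 1,
}

answer_text = ex["options"][ex["label_idx"]]   # "volt"
assert "ABCDE"[ex["label_idx"]] == ex["label"].upper()
```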

### Splits

```
DatasetDict({
    train: Dataset(num_rows=200000),
    validation: Dataset(num_rows=519),
})
```

## Dataset Creation

1. **Source Datasets**

* OpenBookQA (`allenai/openbookqa`)
* SciQ (`allenai/sciq`)
* MMLU-auxiliary (`cais/mmlu`, config=`all`)
* AQUA-Rat (`deepmind/aqua_rat`)
* MedMCQA (`openlifescienceai/medmcqa`)

2. **Sampling**
We downsample each training split to a fixed size (4,900–85,100 examples; see the table above). For validation, we compute each source's original validation-to-train ratio, `len(validation) / len(train)`, take the minimum of these ratios across all sources (capped at 5 %), and hold out that single fraction from every source; a sketch follows.
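
The sketch below is illustrative rather than the original pipeline code: it assumes each source's `(train, validation)` splits are already loaded, and the function name, the seed, and the choice to draw held-out examples from each source's validation split are all assumptions.

```python
from datasets import Dataset

TRAIN_SIZES = {"openbookqa": 4_900, "sciq": 10_000, "mmlu_aux": 85_100,
               "aqua_rat": 50_000, "medmcqa": 50_000}

def sample_sources(splits: dict[str, tuple[Dataset, Dataset]], seed: int = 42):
    """Downsample each source and hold out a shared validation fraction.

    `splits` maps a source name to its original (train, validation) splits.
    """
    # One shared fraction: the smallest validation-to-train ratio
    # across all sources, capped at 5%.
    val_frac = min(min(len(v) / len(t) for t, v in splits.values()), 0.05)
    sampled = {}
    for name, (train, val) in splits.items():
        n_train = TRAIN_SIZES[name]
        n_val = max(1, round(val_frac * n_train))
        sampled[name] = (
            train.shuffle(seed=seed).select(range(n_train)),
            val.shuffle(seed=seed).select(range(n_val)),
        )
    return sampled
```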

3. **Unification**
All examples are mapped to a common schema (`question`, `options`, `label`, …) with minimal preprocessing, as illustrated below.
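
For example, a SciQ row could be mapped as follows (illustrative only: the actual preprocessing, in particular how options are ordered, may differ):

```python
import random

def sciq_to_schema(ex: dict, rng: random.Random) -> dict:
    """Map one SciQ row to the unified schema (illustrative only)."""
    options = [ex["correct_answer"], ex["distractor1"],
               ex["distractor2"], ex["distractor3"]]
    rng.shuffle(options)  # avoid the correct answer always landing at "A"
    idx = options.index(ex["correct_answer"])
    return {
        "question": ex["question"],
        "options": options,
        "label": "ABCD"[idx],
        "label_idx": idx,
        "rationale": ex.get("support", ""),
        "dataset": "sciq",
    }
```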

4. **Push to Hub**

```python
from datasets import DatasetDict, load_dataset, concatenate_datasets

# after loading, sampling, mapping…
ds = DatasetDict({"train": combined, "validation": val_combined})
ds.push_to_hub("NicoHelemon/MNLP_M2_mcqa_dataset", private=False)
```
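
Here `combined` and `val_combined` are the per-source splits from steps 2–3 concatenated together. A hedged sketch of that step, reusing the hypothetical `sampled` dict from the sampling sketch above:

```python
from datasets import concatenate_datasets

# `sampled` is assumed to hold the (train, validation) pairs produced in
# steps 2-3; shuffling interleaves the five sources in the final train split.
combined = concatenate_datasets([t for t, _ in sampled.values()]).shuffle(seed=42)
val_combined = concatenate_datasets([v for _, v in sampled.values()])
```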

## Usage

```python
from datasets import load_dataset

ds = load_dataset("NicoHelemon/MNLP_M2_mcqa_dataset")
print(ds["train"][0])
# {
#     "question": "What can genes do?",
#     "options": ["Give a young goat hair that looks like its mother's hair", ...],
#     "label": "A",
#     "label_idx": 0,
#     "rationale": "Key fact: genes are a vehicle for passing inherited…",
#     "dataset": "openbookqa"
# }
```
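
Accuracy against `label_idx` is the natural metric for this task. A minimal evaluation sketch, where `predict` is a hypothetical callable returning the zero-based index of the chosen option:

```python
from datasets import load_dataset

def evaluate(predict, split: str = "validation") -> float:
    """Accuracy of `predict(question, options) -> int` on a split."""
    ds = load_dataset("NicoHelemon/MNLP_M2_mcqa_dataset", split=split)
    correct = sum(
        predict(ex["question"], ex["options"]) == ex["label_idx"]
        for ex in ds
    )
    return correct / len(ds)

# e.g. a trivial baseline that always picks the first option:
print(evaluate(lambda question, options: 0))
```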

## Licensing

This collection is released under the **Apache-2.0** license.
The original source datasets may carry their own licenses; please review them and cite the sources appropriately.

## Citation

If you use this dataset, please cite:

```bibtex
@misc{helemon2025m2mcqa,
  title        = {MNLP M2 MCQA Dataset},
  author       = {Nicolas Gonzalez},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/NicoHelemon/MNLP_M2_mcqa_dataset}},
}
```

And please also cite the original datasets:

```bibtex
@misc{mihaylov2018suitarmorconductelectricity,
  title         = {Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},
  author        = {Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},
  year          = {2018},
  eprint        = {1809.02789},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/1809.02789},
}

@misc{welbl2017crowdsourcingmultiplechoicescience,
  title         = {Crowdsourcing Multiple Choice Science Questions},
  author        = {Johannes Welbl and Nelson F. Liu and Matt Gardner},
  year          = {2017},
  eprint        = {1707.06209},
  archivePrefix = {arXiv},
  primaryClass  = {cs.HC},
  url           = {https://arxiv.org/abs/1707.06209},
}

@misc{hendrycks2021measuringmassivemultitasklanguage,
  title         = {Measuring Massive Multitask Language Understanding},
  author        = {Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
  year          = {2021},
  eprint        = {2009.03300},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CY},
  url           = {https://arxiv.org/abs/2009.03300},
}

@misc{ling2017programinductionrationalegeneration,
  title         = {Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems},
  author        = {Wang Ling and Dani Yogatama and Chris Dyer and Phil Blunsom},
  year          = {2017},
  eprint        = {1705.04146},
  archivePrefix = {arXiv},
  primaryClass  = {cs.AI},
  url           = {https://arxiv.org/abs/1705.04146},
}

@misc{pal2022medmcqalargescalemultisubject,
  title         = {MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering},
  author        = {Ankit Pal and Logesh Kumar Umapathi and Malaikannan Sankarasubbu},
  year          = {2022},
  eprint        = {2203.14371},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2203.14371},
}
```