# Dataset Card for MarineEval
MarineEval is the first large-scale benchmark specifically designed to evaluate the marine understanding capabilities of Vision-Language Models (VLMs). It contains 2,000 expert-verified image-based question–answer pairs spanning 7 task dimensions and 20 domain-specific capacity dimensions, emphasizing specialized marine knowledge, visual reasoning, and real-world complexity. Comprehensive benchmarking of 17 state-of-the-art VLMs reveals that existing general-purpose models perform poorly on marine tasks, particularly in spatial reasoning, species identification, and ecological understanding, highlighting the need for domain-aware training and evaluation. This resource aims to foster progress toward domain-expert VLMs capable of advancing research and conservation in marine science.
## Dataset Structure
The dataset structure is as follows:
dataset/
├── dimension 1
│   ├── sub dimension 1
│   │   ├── images/
│   │   ├── data.json
│   ├── sub dimension 2
│   │   ├── images/
│   │   ├── data.json
├── dimension 2
│   ├── sub dimension 1
│   │   ├── images/
│   │   ├── data.json
...
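Given this layout, the annotation files can be gathered with a two-level glob. This is a minimal sketch, assuming the dataset has been extracted to a local directory (the `"dataset"` path in the usage comment is illustrative):

```python
from pathlib import Path

def collect_annotation_files(root: str) -> list[Path]:
    """Return every data.json under root, sorted for reproducibility.

    Matches the dimension/sub-dimension/data.json layout shown above.
    """
    return sorted(Path(root).glob("*/*/data.json"))

# Hypothetical usage -- replace "dataset" with your extraction path:
# for path in collect_annotation_files("dataset"):
#     dimension, sub_dimension = path.parent.parent.name, path.parent.name
#     print(dimension, sub_dimension, path)
```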
### JSON File Structure
Each data.json file follows this structure:
"data": [
{
"id": 0,
"question": "string",
"answers": [
{
"answer": "string",
}
],
"qusetion_format": 0
}
]
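A record shaped like this schema can be read with the standard library alone. The sketch below parses a placeholder record (the sample question and answer values are invented, and the `question_format` key spelling is assumed from the schema above):

```python
import json

# A minimal record shaped like the schema above (placeholder values).
sample = """
{
  "data": [
    {
      "id": 0,
      "question": "Is this a clownfish?",
      "answers": [{"answer": "Yes"}],
      "question_format": 0
    }
  ]
}
"""

records = json.loads(sample)["data"]
for record in records:
    # Collect all reference answers for this question.
    gold = [a["answer"] for a in record["answers"]]
    print(record["id"], record["question_format"], gold)
```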
### Question Formats
The MarineEval dataset includes six question formats:
| Code | Question Format | Description |
|---|---|---|
| 0 | Yes-No Question | Models make a binary decision on whether a statement is true or false. |
| 1 | Multiple Choice Question | Models select one or more correct options from at least four choices. |
| 2 | Summarization Question | Models summarize the insights of the given image in free form. |
| 3 | Localization Question | Models provide bounding boxes of target objects in COCO format. |
| 4 | Closed-Form (Loose) | Models respond in a restricted format, evaluated with flexible semantic matching by an LLM. |
| 5 | Closed-Form (Strict) | Models respond in a restricted format, requiring an exact match with the ground truth. |
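For evaluation code, the format codes above can be captured as a small enum; only the codes and names come from the table, while the `needs_llm_judge` helper is an illustrative assumption about how one might dispatch scoring:

```python
from enum import IntEnum

class QuestionFormat(IntEnum):
    """Question-format codes stored in each record's question_format field."""
    YES_NO = 0
    MULTIPLE_CHOICE = 1
    SUMMARIZATION = 2
    LOCALIZATION = 3
    CLOSED_FORM_LOOSE = 4   # flexible semantic matching by an LLM
    CLOSED_FORM_STRICT = 5  # exact match with the ground truth

def needs_llm_judge(fmt: QuestionFormat) -> bool:
    # Hypothetical dispatch: free-form and loosely constrained answers
    # cannot be scored by string comparison alone.
    return fmt in (QuestionFormat.SUMMARIZATION, QuestionFormat.CLOSED_FORM_LOOSE)
```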
## Citation
@misc{wong2025marineevalassessingmarineintelligence,
  title={MarineEval: Assessing the Marine Intelligence of Vision-Language Models},
  author={Yuk-Kwan Wong and Tuan-An To and Jipeng Zhang and Ziqiang Zheng and Sai-Kit Yeung},
  year={2025},
  eprint={2512.21126},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.21126},
}