---
license: mit
language:
- en
- zh
pretty_name: B2NERD
---

# B2NER

We present B2NERD, a cohesive and efficient dataset that can improve LLMs' generalization on the challenging Open NER task, refined from 54 existing English or Chinese datasets. Our B2NER models, trained on B2NERD, outperform GPT-4 by 6.8-12.0 F1 points and surpass previous methods in 3 out-of-domain benchmarks across 15 datasets and 6 languages.

- 📖 Paper: [Beyond Boundaries: Learning a Universal Entity Taxonomy across Datasets and Languages for Open Named Entity Recognition](http://arxiv.org/abs/2406.11192)
- 🎮 GitHub Repo: https://github.com/UmeanNever/B2NER
- 📀 Data: Download B2NERD_data.zip from the "Files and versions" tab. See the data section below for more information.
- 💾 Model (LoRA Adapters): See [7B model](https://huggingface.co/Umean/B2NER-Internlm2.5-7B-LoRA) and [20B model](https://huggingface.co/Umean/B2NER-Internlm2-20B-LoRA). Refer to the GitHub repo for quick demo usage.

**Feature Highlights:**

- Curated dataset (B2NERD) refined from the largest bilingual NER dataset collection to date for training Open NER models.
- Achieves SoTA out-of-domain NER performance across multiple benchmarks with lightweight LoRA adapters (<=50 MB).
- Uses a simple natural-language prompt format, achieving 4x faster inference than the previous SoTA, which uses complex prompts.
- Integrates easily with other IE tasks by adopting UIE-style instructions.
- Provides a universal entity taxonomy that guides the definition and label naming of new entities.
- Open-sourced data, code, and models with easy-to-follow usage instructions.

| Model | Avg. F1 on OOD English datasets | Avg. F1 on OOD Chinese datasets | Avg. F1 on OOD multilingual dataset |
|-------|---------------------------------|---------------------------------|-------------------------------------|
| Previous SoTA | 69.1 | 42.7 | 36.6 |
| GPT-4 | 60.1 | 54.7 | 31.8 |
| B2NER | **72.1** | **61.3** | **43.3** |

See our [GitHub Repo](https://github.com/UmeanNever/B2NER) for more information on data usage and this work.

# Data

One of the paper's core contributions is the construction of the B2NERD dataset. It is a cohesive and efficient collection refined from 54 English and Chinese datasets and designed for Open NER model training. **The preprocessed test datasets (7 for Chinese NER and 7 for English NER) used for Open NER OOD evaluation in our paper are also included in the released dataset** to facilitate convenient evaluation in future research. See the tables below for our train/test splits and dataset statistics.

We provide 3 versions of our dataset.

- `B2NERD` (**Recommended**): Contains ~52k samples from 54 Chinese or English datasets. This is the final version of our dataset, suitable for out-of-domain / zero-shot NER model training. It features standardized entity definitions and pruned, diverse training data, while also including separate unpruned test data.
- `B2NERD_all`: Contains ~1.4M samples from 54 datasets. The full-data version of our dataset, suitable for in-domain supervised evaluation. It has standardized entity definitions but does not undergo any data selection or pruning.
- `B2NERD_raw`: The raw collected datasets with raw entity labels. It goes through basic format preprocessing but no further standardization.
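All three versions share the JSON format shown in the next section, where each entity carries a `pos` field that, judging from the example below ("Barak" -> `[0, 5]`), appears to hold `[start, end)` character offsets into the sentence. As a quick sanity check after unzipping B2NERD_data.zip, here is a minimal Python sketch that loads one file and verifies the spans; the file path is a placeholder, not the actual layout of the archive:

```python
import json

# Placeholder path -- adjust to the actual directory layout inside B2NERD_data.zip.
DATA_FILE = "B2NERD/train/train_data.json"

with open(DATA_FILE, encoding="utf-8") as f:
    samples = json.load(f)  # each file is a JSON list of samples

for sample in samples:
    sentence = sample["sentence"]
    for ent in sample["entities"]:
        start, end = ent["pos"]
        # Assumption: "pos" is a [start, end) character span, as suggested by
        # the example entry below ("Barak" -> [0, 5]).
        if sentence[start:end] != ent["name"]:
            print(f"Span mismatch in: {sentence!r}")
        else:
            print(f"{ent['name']!r} ({ent['type']})")
```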
Example data format:

```json
[
  {
    "sentence": "Barak announced 2 weeks ago that he would call for early elections .",
    "entities": [
      {
        "name": "Barak",
        "type": "person",
        "pos": [0, 5]
      },
      {
        "name": "2 weeks ago",
        "type": "date or period",
        "pos": [16, 27]
      }
    ]
  }
]
```

You can download the data from here (the B2NERD_data.zip in the "Files and versions" tab). Please ensure that you have the proper licenses to access the raw datasets in our collection.

Below are the dataset statistics for the `B2NERD` dataset. "Num" counts samples after our data pruning; test sets are released unpruned, hence the dashes.

| Split | Lang. | Datasets | Types | Num | Raw Num |
|-------|-------|----------|-------|-----|---------|
| Train | En | 19 | 119 | 25,403 | 838,648 |
|       | Zh | 21 | 222 | 26,504 | 580,513 |
|       | Total | 40 | 341 | 51,907 | 1,419,161 |
| Test  | En | 7 | 85 | - | 6,466 |
|       | Zh | 7 | 60 | - | 14,257 |
|       | Total | 14 | 145 | - | 20,723 |

More information can be found in our paper.

# Cite

```
@inproceedings{yang-etal-2025-beyond,
    title = "Beyond Boundaries: Learning a Universal Entity Taxonomy across Datasets and Languages for Open Named Entity Recognition",
    author = "Yang, Yuming and Zhao, Wantong and Huang, Caishuang and Ye, Junjie and Wang, Xiao and Zheng, Huiyuan and Nan, Yang and Wang, Yuran and Xu, Xueying and Huang, Kaixin and Zhang, Yunke and Gui, Tao and Zhang, Qi and Huang, Xuanjing",
    editor = "Rambow, Owen and Wanner, Leo and Apidianaki, Marianna and Al-Khalifa, Hend and Eugenio, Barbara Di and Schockaert, Steven",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.coling-main.725/",
    pages = "10902--10923"
}
```