---
dataset_info:
  features:
  - name: _id
    dtype: int64
  - name: lai_yuan
    dtype: string
  - name: lai_yuan_id
    dtype: int64
  - name: biao_ti
    dtype: string
  - name: fa_yuan
    dtype: string
  - name: fa_ting_guan_dian
    dtype: string
  - name: shen_li_jie_duan
    dtype: string
  - name: ri_qi_s31
    dtype: string
  - name: ri_q1_s41
    dtype: string
  - name: fu_yan
    dtype: string
  - name: an_you
    sequence: string
  - name: dang_shi_ren
    sequence: string
  - name: an_jian_lei_xing
    dtype: string
  - name: wen_shu_bian_hao
    dtype: string
  - name: shen_li_qing_kuang
    dtype: string
  - name: guan_jian_ci
    sequence: string
  - name: ting_shen_guo_cheng
    dtype: string
  - name: ren_yuan
    dtype: string
  - name: pan_jue_jie_guo
    dtype: string
  - name: label
    struct:
    - name: classification
      dtype: string
    - name: reason
      dtype: string
    - name: charge
      dtype: string
  - name: concept
    dtype: string
  splits:
  - name: train
    num_bytes: 51203688
    num_examples: 2914
  - name: test
    num_bytes: 51902129
    num_examples: 2928
  download_size: 42728249
  dataset_size: 103105817
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: apache-2.0
task_categories:
- text-classification
- text-generation
language:
- zh
tags:
- legal
pretty_name: Legal Concept Entailment
size_categories:
- 1K<n<10K
---
# Legal Concept Entailment Dataset
## Dataset Description
This dataset is released as part of our ACL 2025 main conference paper: [Automating Legal Interpretation with LLMs: Retrieval, Generation, and Evaluation](https://arxiv.org/abs/2501.01743). It is designed for the **Legal Concept Entailment (LCE)** task, which evaluates the quality of legal interpretations by assessing a model's ability to understand and apply vague legal concepts to specific, unseen cases.
The core idea is that a high-quality interpretation of a legal concept should improve a model's performance in determining whether that concept applies to the facts of an unseen case. This dataset serves as a benchmark for that evaluation.
## Legal Concept Entailment
The LCE task is a two-part task designed to test a model's understanding of legal concepts.
* **Binary Classification**: Given the fact description of a case and a relevant vague legal concept, the model must predict a binary label (`Yes`/`No`) indicating whether the concept applies to the facts of the case.
* **Reason Generation**: The model must also generate a textual reason explaining its classification decision. The quality of this reason is evaluated for consistency against a "gold" reason derived from the court's actual judgment.
An example of the task is shown below:
![LCE Task Example](https://arxiv.org/html/2501.01743v3/x2.png)
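For illustration, below is a minimal sketch of how an instance could be turned into an LCE prompt. The field names follow the schema in the YAML header above; the prompt wording and the optional `interpretation` argument are assumptions for illustration, not the paper's exact template.

```python
def build_lce_prompt(example: dict, interpretation: str | None = None) -> str:
    """Assemble a Yes/No entailment prompt from one dataset instance.

    `interpretation` is an optional interpretation of the concept (e.g. retrieved
    or generated); the wording here is illustrative only.
    """
    parts = [
        f"Legal concept: {example['concept']}",
        f"Case facts: {example['ting_shen_guo_cheng']}",
    ]
    if interpretation:
        parts.append(f"Interpretation of the concept: {interpretation}")
    parts.append(
        "Question: Does the legal concept apply to the facts of this case? "
        "Answer Yes or No, then give a short reason."
    )
    return "\n\n".join(parts)
```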
## Languages
The data is in **Chinese (zh)**, as it is sourced from [China Judgments Online](https://wenshu.court.gov.cn/).
## Dataset Structure
### Data Splits
The dataset is divided into two splits (a loading sketch follows the list):
- `train`: 2914 instances used for generating legal concept interpretations.
- `test`: 2928 instances used for testing on the LCE task.
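Both splits can be loaded with the `datasets` library; the repository id below is a placeholder for wherever this card is hosted.

```python
from datasets import load_dataset

# Replace the placeholder with the actual Hugging Face repo id of this dataset.
ds = load_dataset("<org>/ATRIE")

print(ds["train"].num_rows)  # 2914
print(ds["test"].num_rows)   # 2928
```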
### Data Instances
Each instance in the dataset corresponds to a legal case paired with a specific legal concept. The key fields are listed below; a short inspection sketch follows the list.
- `label`: Contains the classification label (`Yes` or `No`) and the gold reason for the classification extracted from the court view.
- `concept`: The vague legal concept being evaluated.
- `ting_shen_guo_cheng`: The fact description of the case.
- `fa_ting_guan_dian`: The court view on the case, which includes the legal interpretation of the concept.
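Continuing from the loading sketch above, one way to inspect these fields (field names and nesting follow the schema in the YAML header; exact nesting of `label` is an assumption based on that schema):

```python
example = ds["test"][0]

print(example["concept"])                    # the vague legal concept under evaluation
print(example["label"]["classification"])    # gold Yes/No label
print(example["label"]["reason"])            # gold reason extracted from the court view
print(example["ting_shen_guo_cheng"][:200])  # start of the fact description
print(example["fa_ting_guan_dian"][:200])    # start of the court view
```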