
Introduction

This repo contains the LongEmbed benchmark proposed in the paper LongEmbed: Extending Embedding Models for Long Context Retrieval (Dawei Zhu, Liang Wang, Nan Yang, Yifan Song, Wenhao Wu, Furu Wei, Sujian Li; arXiv, April 2024). GitHub repo for LongEmbed: https://github.com/dwzhu-pku/LongEmbed.

LongEmbed is designed to benchmark long context retrieval. It includes two synthetic tasks and four real-world tasks, featuring documents of varying lengths and dispersed target information. It has been integrated into MTEB for the convenience of evaluation.

How to use it?

Loading Data

LongEmbed contains six datasets: NarrativeQA, QMSum, 2WikiMultihopQA, SummScreenFD, Passkey, and Needle. Each dataset has three splits: corpus, queries, and qrels. The corpus.jsonl file contains the documents, the queries.jsonl file contains the queries, and the qrels.jsonl file describes the relevance judgments. To load a specific split of a dataset, you may use:

from datasets import load_dataset

# dataset_name in ["narrativeqa", "summ_screen_fd", "qmsum", "2wikimqa", "passkey", "needle"]
# split_name in ["corpus", "queries", "qrels"]
data_list = load_dataset(path="dwzhu/LongEmbed", name=dataset_name, split=split_name)
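Once loaded, the three splits can be joined by ID to pair each query with its relevant documents. The field names below (`id`, `qid`, `doc_id`) are illustrative only; inspect the actual JSONL records to confirm the schema. A minimal sketch with in-memory records:

```python
# Sketch: joining corpus, queries, and qrels by ID.
# Field names are illustrative; check the real records to confirm the schema.
corpus = [{"id": "doc_0", "text": "A long document ..."},
          {"id": "doc_1", "text": "Another long document ..."}]
queries = [{"id": "q_0", "text": "What happens in chapter 3?"}]
qrels = [{"qid": "q_0", "doc_id": "doc_1"}]  # relevance judgments

doc_by_id = {d["id"]: d["text"] for d in corpus}

relevant = {}  # query id -> list of relevant document texts
for judgment in qrels:
    relevant.setdefault(judgment["qid"], []).append(doc_by_id[judgment["doc_id"]])

print(relevant["q_0"])  # ['Another long document ...']
```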

Evaluation

The evaluation of LongEmbed can be easily conducted using MTEB (>=1.6.22). For the four real-world tasks, you can evaluate as follows:

from mteb import MTEB
retrieval_task_list = ["LEMBSummScreenFDRetrieval", "LEMBQMSumRetrieval", "LEMBWikimQARetrieval", "LEMBNarrativeQARetrieval"]
output_dict = {}
evaluation = MTEB(tasks=retrieval_task_list)
# TODO: load the model before evaluation
results = evaluation.run(model, output_folder=args.output_dir, overwrite_results=True, batch_size=args.batch_size, verbosity=0)
for key, value in results.items():
    split = "test" if "test" in value else "validation"
    output_dict[key] = {"ndcg@1": value[split]["ndcg_at_1"], "ndcg@10": value[split]["ndcg_at_10"]}
print(output_dict)

For the two synthetic tasks, since we examine a broad context range of {256, 512, 1024, 2048, 4096, 8192, 16384, 32768} tokens, an additional context_length parameter is involved and results are reported per context length. You may evaluate as follows:

from mteb import MTEB
needle_passkey_task_list = ["LEMBNeedleRetrieval", "LEMBPasskeyRetrieval"]
output_dict = {}
context_length_list = [256, 512, 1024, 2048, 4096, 8192, 16384, 32768]
evaluation = MTEB(tasks=needle_passkey_task_list)
# TODO: load the model before evaluation
results = evaluation.run(model, output_folder=args.output_dir, overwrite_results=True, batch_size=args.batch_size, verbosity=0)
for key, value in results.items():
    needle_passkey_score_list = []
    for ctx_len in context_length_list:
        needle_passkey_score_list.append([ctx_len, value[f"test_{ctx_len}"]["ndcg_at_1"]])
    needle_passkey_score_list.append(["avg", sum([x[1] for x in needle_passkey_score_list])/len(context_length_list)])
    output_dict[key] = {item[0]: item[1] for item in needle_passkey_score_list}
print(output_dict)
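For reference, the nDCG@k metric reported above can be computed from a ranked list of binary relevance labels as in the self-contained sketch below (an illustrative implementation, not MTEB's own):

```python
import math

def dcg_at_k(rels, k):
    """Discounted cumulative gain over the top-k relevance labels."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    """nDCG@k for a single query with binary relevance labels."""
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

# Ranked labels for one query: the single relevant doc was retrieved at rank 2.
print(ndcg_at_k([0, 1, 0, 0], 1))   # 0.0 -- nDCG@1 is 0 unless rank 1 is relevant
print(round(ndcg_at_k([0, 1, 0, 0], 10), 4))  # 0.6309
```

This makes clear why the synthetic tasks report nDCG@1: with exactly one relevant document per query, nDCG@1 is simply whether the model ranked it first.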

Task Description

LongEmbed includes 4 real-world retrieval tasks curated from long-form QA and summarization. Note that for QA and summarization datasets, we use the questions and summaries as queries, respectively.

  • NarrativeQA: A QA dataset comprising long stories, averaging 50,474 words, and corresponding questions about specific content such as characters and events. We adopt the test set of the original dataset.
  • 2WikiMultihopQA: A multi-hop QA dataset featuring questions with up to 5 hops, synthesized through manually designed templates to prevent shortcut solutions. We use the test split of the length-uniformly sampled version from LongBench.
  • QMSum: A query-based meeting summarization dataset that requires selecting and summarizing relevant segments of meetings in response to queries. We use the version processed by SCROLLS. Since its test set does not include ground truth summaries, and its validation set only has 60 documents, which is too small for document retrieval, we include the train set in addition to the validation set.
  • SummScreenFD: A screenplay summarization dataset comprising pairs of TV series transcripts and human-written summaries. Similar to QMSum, its plot details are scattered throughout the transcript and must be integrated to form succinct descriptions in the summary. We use the validation set of the version processed by SCROLLS.

We also include two synthetic tasks, namely needle and passkey retrieval. The former is tailored from Needle-in-a-Haystack Retrieval for LLMs. The latter is adapted from Personalized Passkey Retrieval, with slight changes for evaluation efficiency. The advantage of synthetic data is that we can flexibly control the context length and the distribution of target information. For both tasks, we evaluate a broad context range of {256, 512, 1024, 2048, 4096, 8192, 16384, 32768} tokens. For each context length, we include 50 test samples, each comprising 1 query and 100 candidate documents.
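To illustrate the structure of such synthetic samples, the sketch below builds a passkey-style document by hiding a random key in repeated filler text. The filler sentences and phrasing are illustrative, not the dataset's actual templates:

```python
import random

FILLER = "The grass is green. The sky is blue. The sun is yellow."

def make_passkey_sample(n_filler=20, seed=0):
    """Illustrative passkey-style sample: a random key hidden in filler text."""
    rng = random.Random(seed)
    passkey = str(rng.randint(10000, 99999))
    sentences = [FILLER] * n_filler
    # Insert the key at a random position, dispersing the target information.
    pos = rng.randrange(len(sentences))
    sentences.insert(pos, f"The passkey is {passkey}. Remember it.")
    document = " ".join(sentences)
    query = "What is the passkey?"
    return query, document, passkey

query, document, passkey = make_passkey_sample()
print(passkey in document)  # True
```

Scaling `n_filler` (or measuring length in tokens rather than sentences) yields the different context lengths evaluated above, while `pos` controls where the target sits in the document.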

Task Statistics

| Dataset | Domain | # Queries | # Docs | Avg. Query Words | Avg. Doc Words |
|---|---|---|---|---|---|
| NarrativeQA | Literature, Film | 10,449 | 355 | 9 | 50,474 |
| QMSum | Meeting | 1,527 | 197 | 71 | 10,058 |
| 2WikimQA | Wikipedia | 300 | 300 | 12 | 6,132 |
| SummScreenFD | ScreenWriting | 336 | 336 | 102 | 5,582 |
| Passkey | Synthetic | 400 | 800 | 11 | - |
| Needle | Synthetic | 400 | 800 | 7 | - |

Citation

If you find our paper helpful, please consider citing it as follows:

@article{zhu2024longembed,
  title={LongEmbed: Extending Embedding Models for Long Context Retrieval},
  author={Zhu, Dawei and Wang, Liang and Yang, Nan and Song, Yifan and Wu, Wenhao and Wei, Furu and Li, Sujian},
  journal={arXiv preprint arXiv:2404.12096},
  year={2024}
}