SentenceTransformer based on intfloat/e5-small-v2

This is a sentence-transformers model finetuned from intfloat/e5-small-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/e5-small-v2
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 33.4M parameters (F32)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
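
The three modules correspond to a BERT encoder, mean pooling over the token embeddings, and L2 normalization. As a rough sketch of what this pipeline computes, the embedding can be reproduced with the transformers library directly (assuming the checkpoint loads as a plain BertModel, as the architecture above indicates):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Krelle/e5-small-v2-imo-pairs")
encoder = AutoModel.from_pretrained("Krelle/e5-small-v2-imo-pairs")

batch = tokenizer(["query: an example sentence"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state           # (0): Transformer
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)  # (1): mean pooling
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)   # (2): Normalize
print(embeddings.shape)  # torch.Size([1, 384])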

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Krelle/e5-small-v2-imo-pairs")
# Run inference
sentences = [
    'In Exercise\u202f1.41 we are asked to find identities for $(a+b)^3$ and $(a+b)^4$. What are the correct expanded forms, and how do they relate to the binomial theorem?',
    'Section 1.6: More on sets\n\n\n\nPropositions are important, but are confined by the binary values\nof true and false. We would like to work mathematically with \nobjects like integers, floating point numbers, neural networks,\ncomputer programs and so on.\n\nSubsection 1.6.1: Objects and equality\n\n\n\nOne of the cornerstones of modern mathematics is\ndeciding when two objects are the same i.e.,\ngiven two objects $A$ and $B$, deciding whether\nthe proposition $A=B$ is true of false. Oftentimes\nan algorithm for evaluating $A=B$ is needed.\n\nYou may laugh here, but this is\nnot always that easy. Even though objects appear different they are the same as\nin, for example the propositions\n$$\n\\frac{105}{189} = \\frac{35}{63}\\qquad\\text{and}\\qquad \\sin\\left(\\frac{\\pi}{2}\\right) = 1.\n$$\nThe first proposition above is an identity of fractions (rational numbers). The second is\nan identity, which calls for knowledge of the sine function and real numbers. Each of these\nidentities calls for some rather advanced mathematics. The first proposition is true in\na very precise way, since $105\\cdot 63 = 189 \\cdot 35$.\n\n\nExercise 1.40:\n\n\n\n\nUse the Sage window above to reason \nabout equality in the quiz below. In each case describe the objects i.e.,\nare they numbers, symbols, etc.? Also, please check your computations\nby hand with the old fashioned paper and pencil, especially $(a+b)(a-b)$.\n\n\\begin{quiz}\n\\question\nClick on the right equalities below.\n\\answer{T}\n$$a + b - 2 b = a - b$$\n\\answer{F}\n$$(a+b)^2 = a^2 + b^2$$\n\\answer{T}\n$$(a + b)(a - b) = a^2 - b^2$$\n\\answer{T}\n$$(a + b)^2 = a^2 + 2 a b +  b^2$$\n\\answer{F}\n$$(a+b)^3 = a^3 + 2 a^2 b + 2 a b^2 + b^3$$\n\\answer{F}\n$$\\frac{3}{8} = \\frac{5}{13}$$ \n\\answer{F}\n$$\n\\pi = \\frac{22}{7}\n$$\n\\answer{T}\n$$\n\\cos^2(\\pi) + \\sin^2(\\pi) = 1\n$$\n\\end{quiz}\n\n/Exercise\n\n\nExercise 1.41:\n\nYou know that $(a+ b)^2 = a^2 + 2 a b + b^2$. Use Sage to find a similar identities\nfor $(a + b)^3$ and $(a + b)^4$.\n\n\\begin{hint}\n  Go back and look at (the beginning of) Exercise (1.40).\n\\end{hint}\n\n/Exercise\n\nFor two objects $A$ and $B$ we will use the notation $A \\neq B$ for the proposition $\\neg (A = B)$.\n\nWe have already defined a set (informally) as a collection of distinct objects or *elements*.\nWe introduce some more set theory here.\nA set\nis also an object as described in section (1.6.1) and it makes sense to\nask when two sets are equal.\n\n\nDefinition 1.42:\n\nTwo sets $A$ and $B$ are equal i.e., $A = B$ if they contain the same elements.\n\n/Definition\n\nAn example of a set could be \nthe set $\\{1,2,3\\}$ of natural numbers between $0$ and $4$. Notice again that we use the symbol\n"$\\{$" to start the listing of elements in a set and the symbol "$\\}$" to denote the end of the listing.\nNotice also that (by our definition of equality between sets), the order of the elements in the listing does not matter i.e.,\n$$\n\\{1, 2, 3\\} = \\{2, 3, 1\\}.\n$$\nWe are also not allowing duplicates like for\nexample in the listing $\\{1, 2, 2, 3, 3, 3\\}$ (such a thing is called a multiset: https://en.m.wikipedia.org/wiki/Multiset).\n\nAn example of a set not involving numbers could be the set of letters \n$$\nS=\\{A, n, e, x, a, m, p, l, c, o, u, d, b, t, h, s, r, i\\}\n$$ \nused in this sentence. 
The number of elements in a set $S$ is called the *cardinality* of the set.\nWe will denote it by $|S|$.\n\nTo convince someone beyond a doubt (we will talk about this formally later in this chapter) that two sets $A$ and $B$ are equal, one needs to argue that if $x$ is an element of $A$, then $x$ is an element of $B$ and the other way round, if $y$ is an element of $B$, then $y$ is an element of $A$. If this is true, then\n$A$ and $B$ must contain the same elements.\n\n\nExercise 1.43:\n\nGive a precise reason as to why the two sets $\\{1, 2, 3\\}$ and $\\{1, 2, 4\\}$ are not equal.\nIs it possible for a set with $5$ elements to be equal to a set with $7$ elements?\n\n/Exercise  \n\nSets may be explored using (only) python. This is illustrated in the snippet below. \n\n<a href="#a314f450-54ad-4acd-bbf0-475e00ac5949" class ="btn btn-default Sagebutton" data-toggle="collapse"></a><div id=a314f450-54ad-4acd-bbf0-475e00ac5949 class = "collapse Sage envbuttons"><div class=sagepython><script type="text/x-sage">\nX = {1, 2, 3}\nY = {2, 3, 1}\nprint("X=Y is ", X==Y)\n\nS = {\'A\',\'n\',\'e\',\'x\',\'a\',\'m\',\'p\',\'l\',\'c\',\'o\',\'u\',\'d\',\'b\',\'t\',\'h\',\'s\',\'r\',\'i\'}\nprint("S = ", S) \nprint("The number of elements in S is |S|=", len(S))\n</script></div></div>\n\n\n\nExercise 1.44:\n\nCome up with three lines of Sage code that verifies $\\{1, 2, 3\\} \\neq \\{1, 2, 4\\}$. Try it out.\n\n/Exercise',
    'Chapter 1 on the language of mathematics is an introduction to the fundamental mathematics used in the notes.\nWithout understanding the basic concepts in it, you do not have the background to understand\nthe rest of the notes. Important highlights from the chapter are\n\n- Introduction to prompting. This is your ticket to using large language models effectively\n- How to use computer algebra (Sage). Sage can be very helpful in understanding the mathematics\n- Introduction of the numbers we use. Here the natural numbers, integers, rationals and real numbers are defined. Also the arithmetic rules for using them are given\n- Logic is the framework for reasoning in mathematics. Study this! First comes propositional logic. This is basic logic involving true and false statements with and, or etc as seen in truth tables. Then comes predicate logic, where variables are used. Here you must learn the meaning of "for every" and "there exists"\n- Proofs are described. Proof by contradiction is a must here! Do not skip it\n- The language of sets. Learn the operations on sets. Especially focus on the set builder notation and products of sets\n- Ordering of numbers. This is the formal definition of comparing numbers\n- Proof by induction. How to prove infinitely many propositions involving the natural numbers with one hack\n- The concept of a function. This is extremely important. Notice that a function is defined not by a rule. Also, in its definition enters crucially where it is defined\n- Functions from and into products\n- The preimage. This will become very important working with continuous functions',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.3078, 0.0796],
#         [0.3078, 1.0000, 0.2794],
#         [0.0796, 0.2794, 1.0000]])
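
The model was trained with e5-style prefixes (see the prompts entry under Training Hyperparameters). For retrieval-style usage it is safest to prepend these prefixes yourself, unless you have verified that the prompts are stored in the saved model configuration. A minimal sketch, assuming the "query:" / "passage:" prefixes from the training setup (toy texts for illustration only):

# Prefixes taken from the training prompts; texts are hypothetical examples.
queries = ["query: What is the cardinality of a set?"]
passages = [
    "passage: The number of elements in a set S is called the cardinality of the set.",
    "passage: Proof by induction proves infinitely many propositions at once.",
]
query_embeddings = model.encode(queries)
passage_embeddings = model.encode(passages)
print(model.similarity(query_embeddings, passage_embeddings))
# Expect a higher score for the first passage.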

Evaluation

Metrics

Information Retrieval

Metric                Value
cosine_accuracy@1     0.6908
cosine_accuracy@3     0.8348
cosine_accuracy@5     0.8881
cosine_accuracy@10    0.9254
cosine_precision@1    0.6908
cosine_precision@3    0.2783
cosine_precision@5    0.1776
cosine_precision@10   0.0925
cosine_recall@1       0.6908
cosine_recall@3       0.8348
cosine_recall@5       0.8881
cosine_recall@10      0.9254
cosine_ndcg@3         0.7763
cosine_ndcg@5         0.798
cosine_ndcg@10        0.81
cosine_mrr@3          0.756
cosine_mrr@5          0.7679
cosine_mrr@10         0.7728
cosine_map@100        0.7764

Information Retrieval

Metric                Value
cosine_accuracy@1     0.6663
cosine_accuracy@3     0.8116
cosine_accuracy@5     0.8698
cosine_accuracy@10    0.9117
cosine_precision@1    0.6663
cosine_precision@3    0.2705
cosine_precision@5    0.174
cosine_precision@10   0.0912
cosine_recall@1       0.6663
cosine_recall@3       0.8116
cosine_recall@5       0.8698
cosine_recall@10      0.9117
cosine_ndcg@3         0.7519
cosine_ndcg@5         0.7762
cosine_ndcg@10        0.7897
cosine_mrr@3          0.7313
cosine_mrr@5          0.7449
cosine_mrr@10         0.7504
cosine_map@100        0.7542
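
Metrics in this format are what sentence_transformers.evaluation.InformationRetrievalEvaluator produces. As a minimal sketch of rerunning such an evaluation (the query and corpus entries below are hypothetical placeholders, not the actual evaluation data):

from sentence_transformers.evaluation import InformationRetrievalEvaluator

queries = {"q1": "query: When is the Hessian matrix symmetric?"}
corpus = {
    "d1": "passage: The Hessian is symmetric if F satisfies the condition in Theorem 7.13.",
    "d2": "passage: Chapter 1 introduces propositional logic and sets.",
}
relevant_docs = {"q1": {"d1"}}  # maps each query id to its relevant corpus ids

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="toy-ir")
results = evaluator(model)  # accuracy@k, precision@k, recall@k, ndcg@k, mrr@k, map@100
print(results)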

Training Details

Training Dataset

Unnamed Dataset

  • Size: 2,778 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:

                  anchor          positive
    type          string          string
    min tokens    14              37
    mean tokens   41.25           351.42
    max tokens    125             512
  • Samples (three anchors, each paired with the same positive passage):

    anchor:
    1. In Definition 8.2, why is the Hessian matrix defined with second partial derivatives evaluated at the point $v$?
    2. The definition shows the entry $\frac{\partial^2 F}{\partial x_i \partial x_j}(v)$. Does the order of differentiation matter for the Hessian?
    3. The text says the Hessian is symmetric if $F$ satisfies the condition in the last part of Theorem 7.13. What is that condition exactly?

    positive (shared by all three anchors):

    Definition 8.2:

    The Hessian matrix of $F$ at the point $v \in \mathbb{R}^n$ is defined by

    $$
    \nabla^2 F(v) :=
    \begin{pmatrix}
    \dfrac{\partial^2 F}{\partial x_1 \partial x_1}(v) & \cdots & \dfrac{\partial^2 F}{\partial x_1 \partial x_n}(v) \\
    \vdots & \ddots & \vdots \\
    \dfrac{\partial^2 F}{\partial x_n \partial x_1}(v) & \cdots & \dfrac{\partial^2 F}{\partial x_n \partial x_n}(v)
    \end{pmatrix}.
    $$

    /Definition

    A very important observation is that $\nabla^2 F(v)$ above is a symmetric matrix if $F$ satisfies the condition in the last part of Theorem 7.13.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
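
MultipleNegativesRankingLoss treats every other positive in a batch as an in-batch negative for a given anchor, which is also why the no_duplicates batch sampler listed under Training Hyperparameters matters. A minimal sketch of constructing the loss with the parameters listed above:

from sentence_transformers import util
from sentence_transformers.losses import MultipleNegativesRankingLoss

# scale=20.0 and cosine similarity mirror the parameters listed above;
# `model` is the SentenceTransformer being fine-tuned.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)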
    

Evaluation Dataset

Unnamed Dataset

  • Size: 929 evaluation samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 929 samples:

                  anchor          positive
    type          string          string
    min tokens    6               36
    mean tokens   29.95           383.98
    max tokens    96              512
  • Samples (three anchors, each paired with the same positive passage):

    anchor:
    1. In Section 1.1, why does the author warn that prompting without any knowledge of the mathematics can be disastrous?
    2. The first prompting block asks for "two examples of good prompts" - how should I include LaTeX code in such a prompt according to the example?
    3. In the second prompting block, the equation $x^2 - x - 1 = 0$ is given; what level of detail does "Guide me through the steps" expect from the chatbot?

    positive (shared by all three anchors):

    Chapter 1: The language of mathematics and prompting

    Section 1.1: The art of prompting

    As of August 2024, there is a multitude of chatbots available on the internet. Some of them, like ChatGPT: https://chatgpt.com, Claude: https://claude.ai and Gemini: https://gemini.google.com (and Llama 3.1, Mistral, ... the list goes on) have quite impressive reasoning capabilities. These models are now multimodal i.e., they even accept non-textual input, such as images, sound and video. In principle you can upload a picture of a math exercise and the chatbot will provide a solution. Well, that is, on a good day and for a not too difficult exercise.

    The use of chatbots is encouraged throughout this course. In fact, they are even allowed during the exam. It is my hope that you will learn mathematics on a deeper level by communicating with the machine using carefully designed prompts - see the OpenAI guide: https://platform.openai.com/docs/guides/prompt-engineering on prompt engineering.
    ...
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • learning_rate: 2e-05
  • num_train_epochs: 8
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • prompts: {'anchor': 'query:', 'positive': 'passage:', 'negative': 'passage:'}
  • batch_sampler: no_duplicates
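
Putting the non-default values together, the fine-tuning run can be reconstructed roughly as follows. This is a sketch, assuming train_ds and eval_ds are anchor/positive datasets as described above; output_dir is a placeholder:

from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("intfloat/e5-small-v2")
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="e5-small-v2-imo-pairs",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=2e-5,
    num_train_epochs=8,
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    prompts={"anchor": "query:", "positive": "passage:"},
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # assumed: Dataset with "anchor" and "positive" columns
    eval_dataset=eval_ds,
    loss=loss,
)
trainer.train()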

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 8
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: {'anchor': 'query:', 'positive': 'passage:', 'negative': 'passage:'}
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch    Step   Training Loss   Validation Loss   cosine_ndcg@10
-1       -1     -               -                 0.4709
1.1494   100    1.2817          0.7786            0.7818
2.2989   200    0.3207          0.7569            0.7762
3.4483   300    0.2454          0.7324            0.7823
4.5977   400    0.1875          0.7012            0.7948
5.7471   500    0.1479          0.7016            0.7897
6.8966   600    0.1325          0.6992            0.7897
-1       -1     -               -                 0.8100
  • The saved checkpoint corresponds to the best-performing evaluation step (load_best_model_at_end).

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.1.2
  • Transformers: 4.57.1
  • PyTorch: 2.8.0+cu126
  • Accelerate: 1.11.0
  • Datasets: 4.0.0
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}