SentenceTransformer based on intfloat/e5-small-v2
This is a sentence-transformers model finetuned from intfloat/e5-small-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: intfloat/e5-small-v2
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
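Because the final Normalize() module L2-normalizes the output, cosine similarity and dot product coincide for this model. A minimal sketch to check this (using the library installed in the Usage section below; the example sentence is arbitrary):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Krelle/e5-small-v2-imo-pairs")
emb = model.encode(["An example sentence."])
print(emb.shape)               # (1, 384): one 384-dimensional vector
print(np.linalg.norm(emb[0]))  # ~1.0: the Normalize module scales embeddings to unit length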
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Krelle/e5-small-v2-imo-pairs")
# Run inference
sentences = [
'In Exercise\u202f1.41 we are asked to find identities for $(a + b)^3$ and $(a + b)^4$. What are the correct expanded forms, and how do they relate to the binomial theorem?',
'Section 1.6: More on sets\n\n\n\nPropositions are important, but are confined by the binary values\nof true and false. We would like to work mathematically with \nobjects like integers, floating point numbers, neural networks,\ncomputer programs and so on.\n\nSubsection 1.6.1: Objects and equality\n\n\n\nOne of the cornerstones of modern mathematics is\ndeciding when two objects are the same i.e.,\ngiven two objects $A$ and $B$, deciding whether\nthe proposition $A=B$ is true of false. Oftentimes\nan algorithm for evaluating $A=B$ is needed.\n\nYou may laugh here, but this is\nnot always that easy. Even though objects appear different they are the same as\nin, for example the propositions\n$$\n\\frac{105}{189} = \\frac{35}{63}\\qquad\\text{and}\\qquad \\sin\\left(\\frac{\\pi}{2}\\right) = 1.\n$$\nThe first proposition above is an identity of fractions (rational numbers). The second is\nan identity, which calls for knowledge of the sine function and real numbers. Each of these\nidentities calls for some rather advanced mathematics. The first proposition is true in\na very precise way, since $105\\cdot 63 = 189 \\cdot 35$.\n\n\nExercise 1.40:\n\n\n\n\nUse the Sage window above to reason \nabout equality in the quiz below. In each case describe the objects i.e.,\nare they numbers, symbols, etc.? Also, please check your computations\nby hand with the old fashioned paper and pencil, especially $(a+b)(a-b)$.\n\n\\begin{quiz}\n\\question\nClick on the right equalities below.\n\\answer{T}\n$$a + b - 2 b = a - b$$\n\\answer{F}\n$$(a+b)^2 = a^2 + b^2$$\n\\answer{T}\n$$(a + b)(a - b) = a^2 - b^2$$\n\\answer{T}\n$$(a + b)^2 = a^2 + 2 a b + b^2$$\n\\answer{F}\n$$(a+b)^3 = a^3 + 2 a^2 b + 2 a b^2 + b^3$$\n\\answer{F}\n$$\\frac{3}{8} = \\frac{5}{13}$$ \n\\answer{F}\n$$\n\\pi = \\frac{22}{7}\n$$\n\\answer{T}\n$$\n\\cos^2(\\pi) + \\sin^2(\\pi) = 1\n$$\n\\end{quiz}\n\n/Exercise\n\n\nExercise 1.41:\n\nYou know that $(a+ b)^2 = a^2 + 2 a b + b^2$. Use Sage to find a similar identities\nfor $(a + b)^3$ and $(a + b)^4$.\n\n\\begin{hint}\n Go back and look at (the beginning of) Exercise (1.40).\n\\end{hint}\n\n/Exercise\n\nFor two objects $A$ and $B$ we will use the notation $A \\neq B$ for the proposition $\\neg (A = B)$.\n\nWe have already defined a set (informally) as a collection of distinct objects or *elements*.\nWe introduce some more set theory here.\nA set\nis also an object as described in section (1.6.1) and it makes sense to\nask when two sets are equal.\n\n\nDefinition 1.42:\n\nTwo sets $A$ and $B$ are equal i.e., $A = B$ if they contain the same elements.\n\n/Definition\n\nAn example of a set could be \nthe set $\\{1,2,3\\}$ of natural numbers between $0$ and $4$. Notice again that we use the symbol\n"$\\{$" to start the listing of elements in a set and the symbol "$\\}$" to denote the end of the listing.\nNotice also that (by our definition of equality between sets), the order of the elements in the listing does not matter i.e.,\n$$\n\\{1, 2, 3\\} = \\{2, 3, 1\\}.\n$$\nWe are also not allowing duplicates like for\nexample in the listing $\\{1, 2, 2, 3, 3, 3\\}$ (such a thing is called a multiset: https://en.m.wikipedia.org/wiki/Multiset).\n\nAn example of a set not involving numbers could be the set of letters \n$$\nS=\\{A, n, e, x, a, m, p, l, c, o, u, d, b, t, h, s, r, i\\}\n$$ \nused in this sentence. The number of elements in a set $S$ is called the *cardinality* of the set.\nWe will denote it by $|S|$.\n\nTo convince someone beyond a doubt (we will talk about this formally later in this chapter) that two sets $A$ and $B$ are equal, one needs to argue that if $x$ is an element of $A$, then $x$ is an element of $B$ and the other way round, if $y$ is an element of $B$, then $y$ is an element of $A$. If this is true, then\n$A$ and $B$ must contain the same elements.\n\n\nExercise 1.43:\n\nGive a precise reason as to why the two sets $\\{1, 2, 3\\}$ and $\\{1, 2, 4\\}$ are not equal.\nIs it possible for a set with $5$ elements to be equal to a set with $7$ elements?\n\n/Exercise \n\nSets may be explored using (only) python. This is illustrated in the snippet below. \n\n<a href="#a314f450-54ad-4acd-bbf0-475e00ac5949" class ="btn btn-default Sagebutton" data-toggle="collapse"></a><div id=a314f450-54ad-4acd-bbf0-475e00ac5949 class = "collapse Sage envbuttons"><div class=sagepython><script type="text/x-sage">\nX = {1, 2, 3}\nY = {2, 3, 1}\nprint("X=Y is ", X==Y)\n\nS = {\'A\',\'n\',\'e\',\'x\',\'a\',\'m\',\'p\',\'l\',\'c\',\'o\',\'u\',\'d\',\'b\',\'t\',\'h\',\'s\',\'r\',\'i\'}\nprint("S = ", S) \nprint("The number of elements in S is |S|=", len(S))\n</script></div></div>\n\n\n\nExercise 1.44:\n\nCome up with three lines of Sage code that verifies $\\{1, 2, 3\\} \\neq \\{1, 2, 4\\}$. Try it out.\n\n/Exercise',
'Chapter 1 on the language of mathematics is an introduction to the fundamental mathematics used in the notes.\nWithout understanding the basic concepts in it, you do not have the background to understand\nthe rest of the notes. Important highlights from the chapter are\n\n- Introduction to prompting. This is your ticket to using large language models effectively\n- How to use computer algebra (Sage). Sage can be very helpful in understanding the mathematics\n- Introduction of the numbers we use. Here the natural numbers, integers, rationals and real numbers are defined. Also the arithmetic rules for using them are given\n- Logic is the framework for reasoning in mathematics. Study this! First comes propositional logic. This is basic logic involving true and false statements with and, or etc as seen in truth tables. Then comes predicate logic, where variables are used. Here you must learn the meaning of "for every" and "there exists"\n- Proofs are described. Proof by contradiction is a must here! Do not skip it\n- The language of sets. Learn the operations on sets. Especially focus on the set builder notation and products of sets\n- Ordering of numbers. This is the formal definition of comparing numbers\n- Proof by induction. How to prove infinitely many propositions involving the natural numbers with one hack\n- The concept of a function. This is extremely important. Notice that a function is defined not by a rule. Also, in its definition enters crucially where it is defined\n- Functions from and into products\n- The preimage. This will become very important working with continuous functions',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.3078, 0.0796],
# [0.3078, 1.0000, 0.2794],
# [0.0796, 0.2794, 1.0000]])
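The model was trained with the prompts "query:" and "passage:" (see Training Details below), so retrieval-style usage generally benefits from applying them at inference time. A minimal sketch, using a hypothetical query and corpus; note that the canonical E5 format uses a trailing space after the prefix ("query: "), so adjust the prompt string if your results look off:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Krelle/e5-small-v2-imo-pairs")

# Hypothetical query and corpus, for illustration only
query = "What is the cardinality of a set?"
passages = [
    "The number of elements in a set S is called the cardinality of the set.",
    "Proof by induction proves infinitely many propositions about the natural numbers.",
]

# Apply the same prompts that were used during training
query_emb = model.encode([query], prompt="query: ")
passage_embs = model.encode(passages, prompt="passage: ")

# model.similarity computes cosine similarity by default
scores = model.similarity(query_emb, passage_embs)
print(scores)  # higher score = more relevant passage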
Evaluation
Metrics
Information Retrieval
- Evaluated with `InformationRetrievalEvaluator` with these parameters: { "query_prompt": "query:", "corpus_prompt": "passage:" }
| Metric | Value |
|---|---|
| cosine_accuracy@1 | 0.6908 |
| cosine_accuracy@3 | 0.8348 |
| cosine_accuracy@5 | 0.8881 |
| cosine_accuracy@10 | 0.9254 |
| cosine_precision@1 | 0.6908 |
| cosine_precision@3 | 0.2783 |
| cosine_precision@5 | 0.1776 |
| cosine_precision@10 | 0.0925 |
| cosine_recall@1 | 0.6908 |
| cosine_recall@3 | 0.8348 |
| cosine_recall@5 | 0.8881 |
| cosine_recall@10 | 0.9254 |
| cosine_ndcg@3 | 0.7763 |
| cosine_ndcg@5 | 0.7980 |
| cosine_ndcg@10 | 0.8100 |
| cosine_mrr@3 | 0.7560 |
| cosine_mrr@5 | 0.7679 |
| cosine_mrr@10 | 0.7728 |
| cosine_map@100 | 0.7764 |
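For reference, metrics like the ones above can be recomputed with the evaluator. A minimal sketch with a hypothetical toy corpus (the actual evaluation data is not published with this card); the prompt arguments mirror the parameters listed:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Krelle/e5-small-v2-imo-pairs")

# Hypothetical toy data: mappings from ids to text
queries = {"q1": "What is the cardinality of a set?"}
corpus = {
    "d1": "The number of elements in a set S is called the cardinality of the set.",
    "d2": "Proof by induction proves infinitely many propositions.",
}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant document ids

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    query_prompt="query: ",
    corpus_prompt="passage: ",
)
print(evaluator(model))  # dict of accuracy@k, precision@k, recall@k, ndcg@k, mrr@k, map@100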
Information Retrieval
- Evaluated with `InformationRetrievalEvaluator` with these parameters: { "query_prompt": "query:", "corpus_prompt": "passage:" }
| Metric | Value |
|---|---|
| cosine_accuracy@1 | 0.6663 |
| cosine_accuracy@3 | 0.8116 |
| cosine_accuracy@5 | 0.8698 |
| cosine_accuracy@10 | 0.9117 |
| cosine_precision@1 | 0.6663 |
| cosine_precision@3 | 0.2705 |
| cosine_precision@5 | 0.1740 |
| cosine_precision@10 | 0.0912 |
| cosine_recall@1 | 0.6663 |
| cosine_recall@3 | 0.8116 |
| cosine_recall@5 | 0.8698 |
| cosine_recall@10 | 0.9117 |
| cosine_ndcg@3 | 0.7519 |
| cosine_ndcg@5 | 0.7762 |
| cosine_ndcg@10 | 0.7897 |
| cosine_mrr@3 | 0.7313 |
| cosine_mrr@5 | 0.7449 |
| cosine_mrr@10 | 0.7504 |
| cosine_map@100 | 0.7542 |
Training Details
Training Dataset
Unnamed Dataset
- Size: 2,778 training samples
- Columns: `anchor` and `positive`
- Approximate statistics based on the first 1000 samples:

| | anchor | positive |
|---|---|---|
| type | string | string |
| details | min: 14 tokens, mean: 41.25 tokens, max: 125 tokens | min: 37 tokens, mean: 351.42 tokens, max: 512 tokens |
- Samples:
| anchor | positive |
|---|---|
| In Definition 8.2, why is the Hessian matrix defined with second partial derivatives evaluated at the point $v$? | Definition 8.2: The Hessian matrix of $F$ at the point $v \in \mathbb{R}^n$ is defined by $\nabla^2 F(v) := \begin{pmatrix} \dfrac{\partial^2 F}{\partial x_1 \partial x_1}(v) & \cdots & \dfrac{\partial^2 F}{\partial x_1 \partial x_n}(v) \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2 F}{\partial x_n \partial x_1}(v) & \cdots & \dfrac{\partial^2 F}{\partial x_n \partial x_n}(v) \end{pmatrix}$. /Definition A very important observation is that $\nabla^2 F(v)$ above is a symmetric matrix if $F$ satisfies the condition in the last part of Theorem 7.13. |
| The definition shows the entry $\frac{\partial^2 F}{\partial x_i \partial x_j}(v)$. Does the order of differentiation matter for the Hessian? | (same Definition 8.2 passage as above) |
| The text says the Hessian is symmetric if $F$ satisfies the condition in the last part of Theorem 7.13. What is that condition exactly? | (same Definition 8.2 passage as above) |
- Loss: `MultipleNegativesRankingLoss` with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim", "gather_across_devices": false }
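A minimal sketch of instantiating this loss with the listed parameters (gather_across_devices is left at its default of False):

from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("intfloat/e5-small-v2")
# scale=20.0 and cosine similarity match the parameters listed above
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)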
Evaluation Dataset
Unnamed Dataset
- Size: 929 evaluation samples
- Columns: `anchor` and `positive`
- Approximate statistics based on the first 929 samples:

| | anchor | positive |
|---|---|---|
| type | string | string |
| details | min: 6 tokens, mean: 29.95 tokens, max: 96 tokens | min: 36 tokens, mean: 383.98 tokens, max: 512 tokens |
- Samples:
| anchor | positive |
|---|---|
| In Section 1.1, why does the author warn that prompting without any knowledge of the mathematics can be disastrous? | Chapter 1: The language of mathematics and prompting. Section 1.1: The art of prompting. As of August 2024, there is a multitude of chatbots available on the internet. Some of them, like ChatGPT: https://chatgpt.com, Claude: https://claude.ai and Gemini: https://gemini.google.com (and Llama 3.1, Mistral, ... the list goes on) have quite impressive reasoning capabilities. These models are now multimodal i.e., they even accept non-textual input, such as images, sound and video. In principle you can upload a picture of a math exercise and the chatbot will provide a solution. Well, that is, on a good day and for a not too difficult exercise. The use of chatbots is encouraged throughout this course. In fact, they are even allowed during the exam. It is my hope that you will learn mathematics on a deeper level by communicating with the machine using carefully designed prompts - see the OpenAI guide: https://platform.openai.com/docs/guides/prompt-engineering on prompt engineering. ... |
| The first prompting block asks for "two examples of good prompts"—how should I include LaTeX code in such a prompt according to the example? | (same Section 1.1 passage as above) |
| In the second prompting block, the equation $x^2 - x - 1 = 0$ is given; what level of detail does "Guide me through the steps" expect from the chatbot? | (same Section 1.1 passage as above) |
- Loss: `MultipleNegativesRankingLoss` with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim", "gather_across_devices": false }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- learning_rate: 2e-05
- num_train_epochs: 8
- warmup_ratio: 0.1
- fp16: True
- load_best_model_at_end: True
- prompts: {'anchor': 'query:', 'positive': 'passage:', 'negative': 'passage:'}
- batch_sampler: no_duplicates
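A minimal training sketch wiring these non-default hyperparameters together. This is a reconstruction rather than the exact training script; train_dataset and eval_dataset are placeholders for datasets with anchor/positive columns:

from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

model = SentenceTransformer("intfloat/e5-small-v2")
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="e5-small-v2-imo-pairs",
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=2e-5,
    num_train_epochs=8,
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    prompts={"anchor": "query:", "positive": "passage:", "negative": "passage:"},
    batch_sampler="no_duplicates",  # keeps duplicate texts out of the in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # placeholder: Dataset with "anchor" and "positive" columns
    eval_dataset=eval_dataset,    # placeholder
    loss=loss,
)
trainer.train()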
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 8
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- parallelism_config: None
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- project: huggingface
- trackio_space_id: trackio
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: no
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: True
- prompts: {'anchor': 'query:', 'positive': 'passage:', 'negative': 'passage:'}
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
Training Logs
| Epoch | Step | Training Loss | Validation Loss | cosine_ndcg@10 |
|---|---|---|---|---|
| -1 | -1 | - | - | 0.4709 |
| 1.1494 | 100 | 1.2817 | 0.7786 | 0.7818 |
| 2.2989 | 200 | 0.3207 | 0.7569 | 0.7762 |
| 3.4483 | 300 | 0.2454 | 0.7324 | 0.7823 |
| 4.5977 | 400 | 0.1875 | 0.7012 | 0.7948 |
| 5.7471 | 500 | 0.1479 | 0.7016 | 0.7897 |
| **6.8966** | **600** | **0.1325** | **0.6992** | **0.7897** |
| -1 | -1 | - | - | 0.8100 |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.1.2
- Transformers: 4.57.1
- PyTorch: 2.8.0+cu126
- Accelerate: 1.11.0
- Datasets: 4.0.0
- Tokenizers: 0.22.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}