
SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 (Physics)

This is a sentence-transformers model finetuned from
sentence-transformers/all-MiniLM-L6-v2.

It maps sentences & paragraphs to a 384-dimensional dense vector space and is optimised for:

  • semantic textual similarity in Physics
  • semantic search over NCERT Physics-style content
  • paraphrase mining, clustering, and downstream classification for physics questions and explanations.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Domain: CBSE / NCERT Class 11–12 Physics text (questions, explanations, summaries)
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False,
                'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False,
                'pooling_mode_mean_sqrt_len_tokens': False,
                'pooling_mode_weightedmean_tokens': False,
                'pooling_mode_lasttoken': False,
                'include_prompt': True})
  (2): Normalize()
)
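
Because the final Normalize() module L2-normalizes every embedding, cosine similarity reduces to a plain dot product. Below is a minimal sketch that verifies this, assuming the model id shown on this Hub page:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("mihir0009/MiniLM_L6_V2-Physics-Finetuned")
emb = model.encode(["A body in uniform circular motion moves at constant speed."])

print(emb.shape)                    # (1, 384)
print(np.linalg.norm(emb, axis=1))  # ~[1.0], i.e. unit-length vectors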

Usage

Direct Usage (Sentence Transformers)

Install the library:

pip install -U sentence-transformers

Then load the model and run inference:

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("mihir0009/MiniLM_L6_V2-Physics-Finetuned")

sentences = [
    "Interference of light in Young's double-slit experiment leads to a pattern of bright and dark fringes.",
    "The superposition of coherent light waves from two slits produces constructive and destructive interference on a screen.",
    "Centripetal force is required to keep an object in uniform circular motion and acts towards the centre of the circle.",
]

embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 384)

# Pairwise similarity
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.7669, -0.0300],
#         [ 0.7669,  1.0000,  0.0155],
#         [-0.0300,  0.0155,  1.0000]])
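
For semantic search over a chunked corpus (e.g. NCERT-style passages), util.semantic_search ranks chunks against a query embedding. A small sketch with illustrative placeholder passages (not the actual NCERT chunks):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("mihir0009/MiniLM_L6_V2-Physics-Finetuned")

corpus = [
    "Newton's second law states that force equals the rate of change of momentum.",
    "In Young's double-slit experiment, coherent sources produce an interference pattern.",
    "Escape velocity is the minimum speed needed to leave a planet's gravitational field.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = "Why do two slits produce bright and dark fringes?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank the corpus chunks by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), corpus[hit["corpus_id"]])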

Training Details

The model was trained in two phases, with a focus on Physics-specific text.

Phase 1 – Contrastive Fine-Tuning (Physics Positives)

Training Dataset

Unnamed internal dataset of Physics pairs:

  • Size: 33,038 training samples

  • Columns: sentence_0 and sentence_1 (semantically similar Physics sentences)

  • Approximate token statistics (first 1000 samples):

Column       Type     Min Tokens   Mean Tokens   Max Tokens
sentence_0   string   4            18.77         75
sentence_1   string   6            31.75         67

  • Example training pairs (Physics):

    sentence_0: In an isolated system, the total linear momentum remains conserved during any type of collision.
    sentence_1: For a system with no external force, the vector sum of momenta before and after a collision stays the same, allowing final velocities to be computed using conservation of momentum.

    sentence_0: Simple harmonic motion
    sentence_1: A periodic motion in which the restoring force is directly proportional to the displacement from the mean position and always directed towards that position, e.g., a mass attached to a spring.

    sentence_0: Resonance in forced oscillations
    sentence_1: When a system is driven by an external periodic force whose frequency matches the system's natural frequency, the amplitude of oscillation becomes maximum; this effect is called resonance.

  • Loss: MultipleNegativesRankingLoss with these parameters:
{
  "scale": 20.0,
  "similarity_fct": "cos_sim",
  "gather_across_devices": false
}
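
A minimal sketch of the Phase-1 setup, pairing the (sentence_0, sentence_1) columns with MultipleNegativesRankingLoss so that every other in-batch sentence_1 acts as a negative. The tiny inline dataset and output directory are illustrative; the actual 33,038-pair dataset is internal:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Two-column dataset of semantically similar Physics sentences
train_dataset = Dataset.from_dict({
    "sentence_0": ["Simple harmonic motion"],
    "sentence_1": ["A periodic motion in which the restoring force is proportional to the displacement from the mean position."],
})

loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # in-batch negatives, cosine similarity

args = SentenceTransformerTrainingArguments(
    output_dir="minilm-physics-phase1",   # illustrative path
    per_device_train_batch_size=64,
    num_train_epochs=10,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()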

Phase 2 – Hard-Negative Training (NCE on NCERT Physics)

This phase focuses on making the embedding model more discriminative for Retrieval-Augmented Generation (RAG) over NCERT Physics by introducing hard-negative mining and Noise-Contrastive Estimation (NCE) training.


1. Hard-Negative Mining

To expose the model to challenging non-answer passages, we performed systematic hard-negative mining.

Data Preparation

  • 810 NCERT Physics chunks embedded and indexed using FAISS.
  • 1,859 Physics queries collected as phase-2 seed questions.

Retrieval & Re-Ranking

  1. For each query, top-k nearest neighbors were retrieved using FAISS.
  2. Retrieved candidates were re-ranked using a cross-encoder to select challenging non-answer passages as hard negatives (see the sketch below).
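
A minimal sketch of this mining pipeline, assuming a FAISS inner-product index over normalized embeddings. The cross-encoder name and the tiny corpus below are illustrative assumptions, not the actual NCERT data or re-ranker:

import faiss
import numpy as np
from sentence_transformers import SentenceTransformer, CrossEncoder

encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # the Phase-1 encoder in practice
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")          # assumed re-ranker

chunks = [  # stand-ins for the 810 NCERT Physics chunks
    "Escape velocity is the minimum speed needed to leave a planet's gravitational field.",
    "The moment of inertia of a solid sphere about its diameter is (2/5) M R^2.",
    "In an adiabatic process no heat is exchanged with the surroundings.",
]
queries = ["What minimum speed must a rocket reach to escape Earth's gravity?"]

# Build a cosine-similarity index (inner product over unit-length vectors)
chunk_emb = encoder.encode(chunks, normalize_embeddings=True).astype(np.float32)
index = faiss.IndexFlatIP(chunk_emb.shape[1])
index.add(chunk_emb)

# 1. Retrieve top-k candidates per query
query_emb = encoder.encode(queries, normalize_embeddings=True).astype(np.float32)
scores, ids = index.search(query_emb, 3)

# 2. Re-rank candidates with the cross-encoder; confusable but wrong
#    passages become hard negatives for Phase 2
for query, cand_ids in zip(queries, ids):
    pairs = [(query, chunks[i]) for i in cand_ids]
    ce_scores = reranker.predict(pairs)
    print(sorted(zip(ce_scores, cand_ids), reverse=True))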

2. Noise-Contrastive Estimation (NCE) Training

Initialization

  • Training started from the Phase-1 encoder checkpoint.

Objective

The model is optimized to:

  • Maximize similarity between the query and the gold NCERT chunk.
  • Minimize similarity between the query and the hard negative in the embedding space.

This effectively teaches the model to distinguish very similar but incorrect Physics passages from the correct one.
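
A minimal sketch of this objective as an InfoNCE-style cross-entropy over (query, gold chunk, hard negative) triplets; the temperature value is an assumption, since the card does not state it:

import torch
import torch.nn.functional as F

def nce_loss(q_emb, pos_emb, neg_emb, temperature=0.05):
    """q_emb, pos_emb, neg_emb: (batch, dim) L2-normalized embeddings."""
    pos_sim = (q_emb * pos_emb).sum(dim=-1) / temperature   # query vs gold NCERT chunk
    neg_sim = (q_emb * neg_emb).sum(dim=-1) / temperature   # query vs mined hard negative
    logits = torch.stack([pos_sim, neg_sim], dim=1)         # (batch, 2)
    labels = torch.zeros(q_emb.size(0), dtype=torch.long)   # the gold chunk is always class 0
    return F.cross_entropy(logits, labels)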


3. Training Performance

Training Metrics

  • Train Loss: ≈ 2.18 → 0.13 / 0.03
  • Train Acc@1: ≈ 0.40 → 0.98–1.00

Validation Metrics

  • Validation Acc@1 (3 epochs): 0.81
  • Validation Acc@1 (5 epochs): 0.76

Checkpoint Selection

  • The best checkpoint was selected using the lowest validation loss
    (an early-stopping-style criterion).

4. Outcome

This phase significantly sharpens the embedding space so that for NCERT Physics questions:

  • The correct Physics chunk is ranked above passages that are:
      • semantically similar,
      • conceptually misleading, or
      • structurally related but incorrect.

This directly improves RAG retrieval precision for downstream QA and tutoring systems.


5. Summary

Component            Result
FAISS Index Size     810 chunks
Queries              1,859
Train Records        1,674
Validation Records   185
Training Method      NCE with Hard Negatives
Best Val Acc@1       ~0.81
Encoder Init         Phase-1 Checkpoint

✅ This phase enables high-precision Physics retrieval under domain-specific confusion, which is critical for exam-grade question answering systems. Evaluation is reported as top-1 classification accuracy on (query, gold chunk, hard negative) triplets; a sketch of the metric follows.
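
A minimal sketch of this metric, assuming triplets of (query, gold chunk, hard negative) strings and the model id shown on this Hub page:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("mihir0009/MiniLM_L6_V2-Physics-Finetuned")

def acc_at_1(triplets):
    """triplets: list of (query, gold_chunk, hard_negative) strings."""
    correct = 0
    for query, gold, negative in triplets:
        embs = model.encode([query, gold, negative])
        sims = model.similarity(embs[0:1], embs[1:3])[0]  # cosine sim of query vs (gold, negative)
        correct += int(float(sims[0]) > float(sims[1]))
    return correct / len(triplets)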

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 100
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.1.2
  • Transformers: 4.57.2
  • PyTorch: 2.9.0+cu126
  • Accelerate: 1.12.0
  • Datasets: 4.0.0
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}