📝 myX-Semantic-Light: An Efficient Burmese Sentence Embedding Model

Model Description

myX-Semantic-Light is a lightweight sentence-transformer model optimized for the Burmese (Myanmar 🇲🇲) language. It is designed for high-speed inference and low-resource environments while maintaining robust semantic understanding.

This model was trained using Knowledge Distillation from a multilingual teacher model. It maps Burmese sentences into a 384-dimensional dense vector space, giving its embeddings half the memory footprint of standard 768-dimensional models.

Key Applications

  • Real-time Semantic Search: Ideal for mobile or edge applications requiring fast retrieval.
  • Efficient Clustering: Grouping large-scale Burmese datasets with reduced memory overhead.
  • Similarity Scoring: Determining the relationship between short phrases and sentences.

Development & Distribution

Technical Specifications

  • Base Model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
  • Max Sequence Length: 128 tokens (Optimized for short-to-medium text)
  • Output Dimension: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Loss Function: MSELoss

Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_mean_tokens': True})
)
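Module (1) above mean-pools the token embeddings into a single sentence vector, skipping padding positions. A minimal pure-Python sketch of that pooling step (toy 4-dimensional token vectors, not real BERT outputs):

```python
def mean_pool(token_embeddings, attention_mask):
    """Average token vectors where attention_mask == 1 (i.e. ignore padding)."""
    kept = [vec for vec, m in zip(token_embeddings, attention_mask) if m == 1]
    dim = len(token_embeddings[0])
    return [sum(vec[d] for vec in kept) / len(kept) for d in range(dim)]

# Toy sequence: 3 real tokens + 1 padding position.
tokens = [[1.0, 0.0, 2.0, 0.0],
          [3.0, 0.0, 0.0, 4.0],
          [2.0, 3.0, 1.0, 2.0],
          [9.0, 9.0, 9.0, 9.0]]  # padding row; must not affect the result
mask = [1, 1, 1, 0]

print(mean_pool(tokens, mask))  # [2.0, 1.0, 1.0, 2.0]
```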

Usage

Installation

pip install -U sentence-transformers

Direct Usage (Inference)

from sentence_transformers import SentenceTransformer

# Load the lightweight model
model = SentenceTransformer("DatarrX/myX-Semantic-Light")

sentences = [
    "ဝက်ခြံ ပျောက်ကင်းအောင် ဘယ်လိုလုပ်ရမလဲ။",  # "How can I get rid of acne?"
    "မျက်နှာ အသားအရေ ထိန်းသိမ်းနည်းများ",  # "Facial skincare tips"
    "နည်းပညာ သတင်းများ ဖတ်ရှုရန်"  # "Read technology news"
]

embeddings = model.encode(sentences)  # shape: (3, 384)
similarities = model.similarity(embeddings, embeddings)  # 3x3 cosine matrix
print(similarities)
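The similarity call above reduces to cosine similarity between embedding vectors, and the same score can rank candidates for retrieval. A minimal pure-Python sketch with toy 3-dimensional vectors standing in for `model.encode(...)` outputs:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_emb, corpus_embs, k=2):
    """Return (index, score) pairs for the k most similar corpus vectors."""
    scored = [(i, cosine(query_emb, e)) for i, e in enumerate(corpus_embs)]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

# Toy embeddings; real usage would encode Burmese sentences with the model.
corpus = [[0.9, 0.1, 0.0], [0.0, 1.0, 0.1], [0.8, 0.2, 0.1]]
query = [1.0, 0.0, 0.0]

print(top_k(query, corpus))  # most similar corpus vectors first
```

For large corpora, the same ranking is typically done with batched matrix operations or a vector index rather than a Python loop.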

Implementation Guidelines (Thresholds)

Because this model is a lightweight variant trained on a smaller subset (500K rows), its score distribution differs slightly from the 1M SOTA version.

  • Recommended Threshold: A Cosine Similarity score of 0.40 or higher is generally sufficient to indicate a semantic relationship.
  • Note: For tasks requiring higher precision and deeper contextual reasoning, we recommend using the larger myX-Semantic (1M) version with a threshold of 0.60.
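Applying the recommended cutoff is a simple post-processing step over the similarity scores. A minimal sketch (the 0.40 default reflects the guideline above; the pairs and scores are toy values):

```python
def related_pairs(pairs, scores, threshold=0.40):
    """Keep sentence pairs whose cosine similarity meets the threshold."""
    return [pair for pair, score in zip(pairs, scores) if score >= threshold]

pairs = [("acne remedy", "skincare tips"), ("acne remedy", "tech news")]
scores = [0.55, 0.12]  # toy cosine scores

print(related_pairs(pairs, scores))  # [('acne remedy', 'skincare tips')]
```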

Training Details

  • Samples: 500,000 training pairs.
  • Batch Size: 64
  • Epochs: 1
  • Optimizer: AdamW (adamw_torch_fused)
  • Training Time: ~37 minutes on a multi-GPU setup.

Training Logs

| Epoch | Step | Training Loss |
|-------|------|---------------|
| 0.13  | 500  | 0.0035        |
| 0.51  | 2000 | 0.0029        |
| 0.90  | 3500 | 0.0027        |

Limitations & Bias

  • Encoding: Optimized for Unicode Burmese. Zawgyi encoding is not supported.
  • Sequence Length: Performance may degrade for documents longer than 128 tokens due to the sequence length constraint during training.
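A common workaround for the 128-token limit is to split long documents into overlapping windows, embed each window separately, and aggregate the scores. A minimal word-level sketch (the real limit applies to subword tokens, so word counts are only a rough proxy; window sizes here are illustrative):

```python
def chunk_words(text, max_words=100, overlap=20):
    """Split text into overlapping word windows below the model's token limit."""
    words = text.split()
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # the final window already reaches the end of the text
    return chunks

# A synthetic 250-word document.
doc = " ".join(f"w{i}" for i in range(250))
chunks = chunk_words(doc)
print(len(chunks))  # 3 overlapping windows cover all 250 words
```

Each chunk can then be passed to `model.encode`, with a max or mean over the per-chunk similarities serving as the document-level score.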

License

This model is licensed under the Apache License 2.0.

Citation

@software{khantsintheinn2026myxsemantic_light,
  author = {Khant Sint Heinn},
  title = {myX-Semantic-Light: An Efficient Burmese Sentence Embedding Model},
  year = {2026},
  publisher = {DatarrX},
  url = {https://huggingface.co/DatarrX/myX-Semantic-Light}
}

About the Author

Khant Sint Heinn, working under the name Kalix Louis, is a Machine Learning Engineer focused on Natural Language Processing (NLP), data foundations, and open-source AI development. His work is centered on improving support for the Burmese (Myanmar) language in modern AI systems by building high-quality datasets, practical tools, and scalable infrastructure for language technology.

He is currently the Lead Developer at DatarrX, where he develops data pipelines, manages large-scale data collection workflows, and helps create open-source resources for researchers, developers, and organizations. His experience includes data engineering, web scripting, dataset curation, and building systems that support real-world machine learning applications.

Khant Sint Heinn is especially interested in advancing low-resource languages and making AI more accessible to underrepresented communities. Through his open-source contributions, he works to strengthen the Burmese (Myanmar) tech ecosystem and provide reliable building blocks for future language models, search systems, and intelligent applications.

His goal is simple: to turn limited language resources into practical opportunities through clean data, useful tools, and community-driven innovation.

Connect with the Author:
GitHub | Hugging Face | Kaggle
