Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
This is a sentence-transformers model finetuned from ahmed11emad/cv-job-matching-model. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
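The Pooling module above uses mean pooling (`pooling_mode_mean_tokens: True`) over non-padding tokens, and the final `Normalize()` module scales the result to unit length. A minimal sketch of those two steps with toy token vectors (illustrative only, not the library's internals):

```python
# Sketch: mean pooling over non-padding tokens, then L2-normalization,
# as performed by the Pooling and Normalize() modules above.
# Token embeddings and the attention mask below are toy values.
import math

def mean_pool(token_embeddings, attention_mask):
    """Average the embeddings of tokens whose mask value is 1."""
    dim = len(token_embeddings[0])
    summed = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask == 1:
            count += 1
            for i in range(dim):
                summed[i] += vec[i]
    return [s / count for s in summed]

def l2_normalize(vec):
    """Scale the vector to unit length, like the Normalize() module."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

tokens = [[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]  # last token is padding
mask = [1, 1, 0]
pooled = mean_pool(tokens, mask)       # [2.0, 3.0]
embedding = l2_normalize(pooled)       # unit-length sentence embedding
```

In the real model the token embeddings come from the 384-dimensional BERT output, but the arithmetic is the same.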
First install the Sentence Transformers library:
```shell
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")

# Run inference
sentences = [
    'Python Developer',
    'Experienced Frontend Developer with a passion for creating responsive and user-friendly web applications. Proficient in HTML5, CSS3, and JavaScript, with expertise in modern frontend frameworks such as React.js and Vue.js. Skilled in building dynamic user interfaces, optimizing web performance, and ensuring cross-browser compatibility. Led projects such as developing e-commerce platforms with React.js and Redux, building interactive data visualizations with D3.js, and implementing responsive designs with CSS Grid and Flexbox. Also contributed to projects including integrating APIs for data fetching and authentication, and implementing state management with Redux and Vuex. Actively staying updated with the latest frontend technologies and best practices through online courses and workshops.',
    'Dynamic Full Stack Developer with a diverse background in frontend and backend technologies. Proficient in HTML, CSS, JavaScript, Python, and SQL, with expertise in frameworks such as React.js, Vue.js, Flask, and Django. Skilled in designing and implementing scalable architectures, optimizing application performance, and conducting code reviews. Led projects such as developing fintech platforms, building CRM systems, and implementing data-driven dashboards. Also contributed to projects including designing GraphQL APIs, integrating with third-party services such as Twilio and SendGrid, and implementing CI/CD pipelines for automated deployment. Actively participating in tech communities and contributing to open-source projects to foster collaboration and innovation.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, -0.0039, -0.0078],
#         [-0.0039,  1.0000, -0.0648],
#         [-0.0078, -0.0648,  1.0000]])
```
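Because the model ends with a `Normalize()` module, the embeddings it returns are unit-length, so cosine similarity reduces to a plain dot product. A minimal sketch with toy 2-dimensional vectors (not real model outputs) illustrating that equivalence:

```python
# Sketch: for unit-length vectors (as produced by the Normalize() module),
# cosine similarity is just a dot product. Vectors here are toy values.
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u = normalize([3.0, 4.0])
v = normalize([4.0, 3.0])

sim = dot(u, v)   # cosine similarity: 0.96
self_sim = dot(u, u)  # a vector is maximally similar to itself: 1.0
```

This is why the diagonal of the similarity matrix printed above is exactly 1.0000: each sentence compared with itself.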
The training data has the columns sentence_0, sentence_1, and label:

| | sentence_0 | sentence_1 | label |
|---|---|---|---|
| type | string | string | float |

Sample pairs:

| sentence_0 | sentence_1 | label |
|---|---|---|
| Machine Learning Engineer | Passionate Full Stack Developer with a focus on delivering high-quality and user-centric web applications. Proficient in frontend technologies such as HTML5, CSS3, JavaScript, and React.js, as well as backend technologies including Node.js and Django. Skilled in database management systems including MySQL and MongoDB. Led projects such as building full stack applications for content management systems, implementing responsive designs and cross-browser compatibility, and optimizing website performance through lazy loading and code splitting techniques. Also contributed to projects including integrating authentication and authorization mechanisms, and setting up automated testing frameworks for frontend and backend components. Actively attending full stack conferences and workshops to stay updated with the latest technologies and best practices in web development. | 0.0 |
| Python Developer | Dynamic Data Scientist with expertise in statistical modeling and predictive analytics. Proficient in programming languages such as Python and MATLAB, with expertise in libraries such as SciPy and StatsModels. Skilled in designing and conducting experiments for hypothesis testing and causal inference. Led projects such as building forecasting models for financial markets, analyzing healthcare data for disease prediction, and developing churn prediction models for customer retention. Also contributed to projects including implementing Bayesian methods for uncertainty estimation, and developing recommendation systems for content personalization. Actively exploring new data science techniques and methodologies to tackle challenging problems. | 0.0 |
| Machine Learning Engineer | Passionate Full Stack Developer with a focus on delivering innovative and user-friendly software solutions. Proficient in frontend technologies such as HTML, CSS, and JavaScript, with expertise in frameworks like React.js and Vue.js for building responsive and interactive web applications. Skilled in backend development using Node.js, Express.js, and Django for creating RESTful APIs and server-side logic. Led projects such as developing online booking systems with calendar integration, building task management applications with real-time updates, and deploying applications to cloud platforms like AWS and Google Cloud. Also contributed to projects including implementing OAuth authentication for secure login, and setting up monitoring and logging with tools like Prometheus and ELK stack. Actively participating in open-source projects and contributing to developer communities to share knowledge and collaborate with others. | 0.0 |
The model was trained with CosineSimilarityLoss with these parameters:

```json
{
    "loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
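With an MSE inner loss, CosineSimilarityLoss penalizes the squared gap between the cosine similarity of an embedding pair and its gold label. A minimal numeric sketch of that objective (illustrative only, not the sentence-transformers implementation):

```python
# Sketch of the objective above: CosineSimilarityLoss with MSELoss
# computes (cos_sim(u, v) - label)^2 for each sentence pair.
import math

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cosine_similarity_mse_loss(u, v, label):
    return (cos_sim(u, v) - label) ** 2

# A mismatched pair labelled 0.0 (as in the samples above): identical
# embeddings would give cos_sim = 1.0, i.e. the maximal loss of 1.0.
loss = cosine_similarity_mse_loss([1.0, 0.0], [1.0, 0.0], 0.0)
```

Training therefore pushes embeddings of pairs labelled 0.0 towards orthogonality and pairs labelled 1.0 towards alignment.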
Non-default training hyperparameters:

```
per_device_train_batch_size: 16
per_device_eval_batch_size: 16
num_train_epochs: 2
multi_dataset_batch_sampler: round_robin
```

All hyperparameters:

```
do_predict: False
eval_strategy: no
prediction_loss_only: True
per_device_train_batch_size: 16
per_device_eval_batch_size: 16
gradient_accumulation_steps: 1
eval_accumulation_steps: None
torch_empty_cache_steps: None
learning_rate: 5e-05
weight_decay: 0.0
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
max_grad_norm: 1
num_train_epochs: 2
max_steps: -1
lr_scheduler_type: linear
lr_scheduler_kwargs: None
warmup_ratio: None
warmup_steps: 0
log_level: passive
log_level_replica: warning
log_on_each_node: True
logging_nan_inf_filter: True
enable_jit_checkpoint: False
save_on_each_node: False
save_only_model: False
restore_callback_states_from_checkpoint: False
use_cpu: False
seed: 42
data_seed: None
bf16: False
fp16: False
bf16_full_eval: False
fp16_full_eval: False
tf32: None
local_rank: -1
ddp_backend: None
debug: []
dataloader_drop_last: False
dataloader_num_workers: 0
dataloader_prefetch_factor: None
disable_tqdm: False
remove_unused_columns: True
label_names: None
load_best_model_at_end: False
ignore_data_skip: False
fsdp: []
fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
parallelism_config: None
deepspeed: None
label_smoothing_factor: 0.0
optim: adamw_torch_fused
optim_args: None
group_by_length: False
length_column_name: length
project: huggingface
trackio_space_id: trackio
ddp_find_unused_parameters: None
ddp_bucket_cap_mb: None
ddp_broadcast_buffers: False
dataloader_pin_memory: True
dataloader_persistent_workers: False
skip_memory_metrics: True
push_to_hub: False
resume_from_checkpoint: None
hub_model_id: None
hub_strategy: every_save
hub_private_repo: None
hub_always_push: False
hub_revision: None
gradient_checkpointing: False
gradient_checkpointing_kwargs: None
include_for_metrics: []
eval_do_concat_batches: True
auto_find_batch_size: False
full_determinism: False
ddp_timeout: 1800
torch_compile: False
torch_compile_backend: None
torch_compile_mode: None
include_num_input_tokens_seen: no
neftune_noise_alpha: None
optim_target_modules: None
batch_eval_metrics: False
eval_on_start: False
use_liger_kernel: False
liger_kernel_config: None
eval_use_gather_object: False
average_tokens_across_devices: True
use_cache: False
prompts: None
batch_sampler: batch_sampler
multi_dataset_batch_sampler: round_robin
router_mapping: {}
learning_rate_mapping: {}
```

Citation (BibTeX):

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```