Model Card for BERT-Wikt-large-verb

Model Description

This model is an English language model based on BERT-large, fine-tuned with supervised contrastive learning on verb usage examples from the English Wiktionary. The fine-tuning improves token-level semantic representations, particularly for tasks such as Word-in-Context (WiC) and Word Sense Disambiguation (WSD).

Although trained on verbs, the model shows enhanced representation quality across the lexicon.
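
The fine-tuning objective groups occurrences of a verb by their Wiktionary sense: embeddings of same-sense occurrences are pulled together, and embeddings of different-sense occurrences are pushed apart. Below is a minimal PyTorch sketch of such a SupCon-style loss (Khosla et al., 2020); the exact loss formulation, batch construction, and hyper-parameters (e.g. the temperature) used for this model are assumptions here, not the authors' released training code.

import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    # embeddings: (N, d) token embeddings of target-verb occurrences
    # labels:     (N,)   integer sense ids from Wiktionary examples
    z = F.normalize(embeddings, dim=-1)          # unit vectors
    sim = z @ z.T / temperature                  # (N, N) scaled cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    # Positives: distinct occurrences sharing the same sense label
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # log P(p | i): similarity to p, normalized over all non-self candidates
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts.clamp(min=1)
    # Average over anchors that have at least one positive
    return per_anchor[pos_counts > 0].mean()

# Toy batch: 6 occurrences of a verb covering 3 senses
loss = supervised_contrastive_loss(torch.randn(6, 1024), torch.tensor([0, 0, 1, 1, 2, 2]))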

  • Developed by: Anna Mosolova, Marie Candito, Carlos Ramisch
  • Funded by: ANR Selexini
  • Model type: BERT-based transformer (BERT-large)
  • Language: English
  • License: MIT
  • Finetuned from model: google-bert/bert-large-uncased

Uses

The model is intended for extracting token-level contextual embeddings for English text, with improved separation between word senses.

How to Get Started with the Model

import torch
from transformers import AutoTokenizer, AutoModel

# The model keeps the original bert-large-uncased tokenizer
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-large-uncased")
model = AutoModel.from_pretrained("annamos/BERT-Wikt-large-verb")

sentence = "You should knock before you enter"
tokenized = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**tokenized)
embeddings = outputs.last_hidden_state  # shape: (1, seq_len, 1024)
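
To compare senses of a specific verb, the hidden states of its subword pieces can be pooled into a single vector. The helper below is an illustration, not part of the model's API: `word_embedding` is a hypothetical name, and mean-pooling the last layer is an assumption rather than necessarily the pooling used in the authors' evaluation. It performs a WiC-style comparison, where a lower cosine similarity is expected when the two occurrences of "knock" carry different senses.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-large-uncased")
model = AutoModel.from_pretrained("annamos/BERT-Wikt-large-verb")

def word_embedding(sentence, word_index):
    # Mean-pool the subword vectors of the word at `word_index`
    # (index into the whitespace-split sentence)
    enc = tokenizer(sentence.split(), is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, 1024)
    piece_ids = [i for i, w in enumerate(enc.word_ids()) if w == word_index]
    return hidden[piece_ids].mean(dim=0)

# Same verb, two senses: rap on a door vs. criticize
a = word_embedding("You should knock before you enter", 2)
b = word_embedding("Critics love to knock his latest film", 3)
print(F.cosine_similarity(a, b, dim=0).item())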