Model Card for BERT-Wikt-large-verb
Model Description
This model is an English language model based on BERT-large, fine-tuned using verb examples from English Wiktionary via supervised contrastive learning. The fine-tuning improves token-level semantic representations, particularly for tasks like Word-in-Context (WiC) and Word Sense Disambiguation (WSD).
Although fine-tuned only on verb examples, the model shows improved representation quality across the rest of the lexicon as well.
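As a rough, illustrative sketch of the supervised contrastive objective described above (not the authors' exact implementation; the function name, temperature, and batching are assumptions), token embeddings of verb occurrences sharing the same Wiktionary sense are pulled together while the rest of the batch is pushed away:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(token_embeddings, sense_labels, temperature=0.07):
    """Illustrative SupCon-style loss over a batch of target-verb embeddings.

    token_embeddings: (N, d) contextual embeddings of the target verbs
    sense_labels:     (N,)   Wiktionary sense ids; same id = positive pair
    """
    z = F.normalize(token_embeddings, dim=-1)
    sim = z @ z.T / temperature                          # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))      # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (sense_labels.unsqueeze(0) == sense_labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(1)
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(1)
    valid = pos_counts > 0                               # anchors with at least one positive
    return (per_anchor[valid] / pos_counts[valid]).mean()
```

The actual training code is available in the repository listed under Model Sources below.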
- Developed by: Anna Mosolova, Marie Candito, Carlos Ramisch
- Funded by: ANR Selexini
- Model type: BERT-based transformer (BERT-large)
- Language: English
- License: MIT
- Finetuned from model: google-bert/bert-large-uncased
Model Sources
- Repository: https://github.com/anya-bel/contrastive_learning_transfer
- Paper: Refining token representations in pre-trained language models with contrastive learning: a cross-model and cross-lingual study (published in French as « Raffinage des représentations des tokens dans les modèles de langue pré-entraînés avec l’apprentissage contrastif : une étude entre modèles et entre langues »)
Uses
The model is intended for extracting token-level embeddings for English, with improved sense separation.
How to Get Started with the Model
```python
import torch
from transformers import AutoTokenizer, AutoModel

# The fine-tuned model keeps the original BERT-large-uncased tokenizer
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-large-uncased")
model = AutoModel.from_pretrained("annamos/BERT-Wikt-large-verb")

sentence = 'You should knock before you enter'
tokenized = tokenizer(sentence, return_tensors='pt')
with torch.no_grad():
    # last hidden state: one contextual embedding per sub-token
    embeddings = model(**tokenized)[0]
```
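As an illustration of the intended sense-separation use (a hypothetical downstream sketch, not the paper's evaluation code; the helper name and mean-pooling choice are assumptions), one can compare the contextual embedding of the same verb in two different contexts:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-large-uncased")
model = AutoModel.from_pretrained("annamos/BERT-Wikt-large-verb")

def verb_embedding(sentence, verb):
    """Mean-pool the sub-token embeddings of the target verb (illustrative helper)."""
    enc = tokenizer(sentence, return_tensors='pt')
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]        # (seq_len, 1024)
    verb_ids = tokenizer(verb, add_special_tokens=False)['input_ids']
    ids = enc['input_ids'][0].tolist()
    for start in range(len(ids) - len(verb_ids) + 1):     # locate the verb's sub-token span
        if ids[start:start + len(verb_ids)] == verb_ids:
            return hidden[start:start + len(verb_ids)].mean(0)
    raise ValueError(f"'{verb}' not found in: {sentence}")

knock_enter = verb_embedding('You should knock before you enter', 'knock')
knock_criticize = verb_embedding("Don't knock it until you have tried it", 'knock')
# Different senses of 'knock' should yield a lower cosine similarity
similarity = torch.cosine_similarity(knock_enter, knock_criticize, dim=0)
```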