EMNLP 2025 (Main Conference)

"Bridging the Gap Between Molecule and Textual Descriptions via Substructure-aware Alignment"

GitHub | Paper

Example usage of the MolBridge-Gen-Base model on the ChEBI-20 molecule-captioning task: load the checkpoint and generate a textual description for a SMILES string.

from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load the MolT5 tokenizer and the MolBridge-Gen-Base checkpoint.
tokenizer = AutoTokenizer.from_pretrained("laituan245/molt5-base", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained("PhTae/MolBridge-Gen-Base")

# Canonical SMILES of the input molecule, prefixed with the captioning prompt.
canonicalized_smiles = "CC(=O)N[C@@H](CCCN=C(N)N)C(=O)[O-]"
input_text = "Provide a whole descriptions of this molecule: " + canonicalized_smiles

# Tokenize the prompt and generate a description with beam search.
tokens = tokenizer(input_text, return_tensors="pt", padding="longest", truncation=True)
gen_results = model.generate(input_ids=tokens["input_ids"],
                             attention_mask=tokens["attention_mask"],
                             num_beams=5,
                             max_new_tokens=512)

# Decode the generated token IDs into text.
description = tokenizer.decode(gen_results[0], skip_special_tokens=True)
print(description)
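
The input is stored in a variable named canonicalized_smiles, which suggests the model expects canonical SMILES. If your SMILES come from another source, below is a minimal sketch of canonicalizing them first; it assumes RDKit (not part of this card), and raw_smiles is a hypothetical input used only for illustration.

from rdkit import Chem

# Hypothetical, non-canonical SMILES for the same molecule (illustration only).
raw_smiles = "O=C([O-])[C@@H](NC(C)=O)CCCN=C(N)N"
mol = Chem.MolFromSmiles(raw_smiles)             # returns None if the SMILES cannot be parsed
if mol is not None:
    canonicalized_smiles = Chem.MolToSmiles(mol)  # RDKit canonical SMILES

The canonical string can then be prefixed with the prompt shown above and passed to the tokenizer exactly as in the example.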
Model size: 0.2B parameters (F32, Safetensors)