How to use lannelin/bert-imdb-1hidden with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-classification", model="lannelin/bert-imdb-1hidden")
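The pipeline can then be called directly on raw text. A minimal sanity check (an assumption, not part of the original card: because the label mapping further down is defined manually, the pipeline may return the default LABEL_0/LABEL_1 names rather than negative/positive):

result = pipe("What a terrible film!")
print(result)  # e.g. [{'label': 'LABEL_0', 'score': ...}]

# Load model directly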
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("lannelin/bert-imdb-1hidden")
model = AutoModelForSequenceClassification.from_pretrained("lannelin/bert-imdb-1hidden")

A bert-base-uncased model was restricted to 1 hidden layer and fine-tuned for sequence classification on the imdb dataset, loaded using the datasets library.
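The card does not include the layer-truncation code itself; a minimal sketch of one way to do it, assuming the standard num_hidden_layers config override (this is an illustration, not the author's exact recipe):

from transformers import BertConfig, BertForSequenceClassification

# Assumption: restrict bert-base-uncased to a single transformer layer by
# overriding num_hidden_layers; from_pretrained then loads only the weights
# that fit the smaller architecture, and the classification head is freshly
# initialised before fine-tuning.
config = BertConfig.from_pretrained("bert-base-uncased", num_hidden_layers=1)
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", config=config)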
from transformers import AutoTokenizer, AutoModelForSequenceClassification

pretrained = "lannelin/bert-imdb-1hidden"

tokenizer = AutoTokenizer.from_pretrained(pretrained)
model = AutoModelForSequenceClassification.from_pretrained(pretrained)

# The model config keeps the default label names, so map class indices manually
LABELS = ["negative", "positive"]

def get_sentiment(text: str):
    # Tokenize and run a forward pass; logits have shape (1, 2)
    inputs = tokenizer(text, return_tensors="pt")
    output = model(**inputs).logits.squeeze()
    # Return the label of the highest-scoring class
    return LABELS[output.argmax().item()]

print(get_sentiment("What a terrible film!"))
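For scoring several reviews at once, a sketch of batched inference (assumes the tokenizer, model, and LABELS defined above; the padding and truncation settings are illustrative):

import torch

texts = ["What a terrible film!", "An absolute delight from start to finish."]
# Pad to the longest sequence in the batch and cap at BERT's 512-token limit
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():  # no gradients needed at inference time
    logits = model(**batch).logits
print([LABELS[i] for i in logits.argmax(dim=-1).tolist()])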
No special consideration has been given to limitations and bias; any bias present in the imdb dataset may be reflected in the model's output.
Initialised with bert-base-uncased
Fine-tuned on imdb
The model was fine-tuned for 1 epoch with a batch size of 64, a learning rate of 5e-5, and a maximum sequence length of 512.
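The training script itself is not part of this card; a hedged reconstruction of the stated setup using the datasets library and the Trainer API (output_dir and the use of Trainer are assumptions) might look like this:

from datasets import load_dataset
from transformers import Trainer, TrainingArguments

dataset = load_dataset("imdb")

def tokenize(batch):
    # Truncate to the stated maximum sequence length of 512
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-imdb-1hidden",  # assumed name
    num_train_epochs=1,              # 1 epoch
    per_device_train_batch_size=64,  # batch size 64
    learning_rate=5e-5,              # learning rate 5e-5
)
trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"], tokenizer=tokenizer)
trainer.train()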
Accuracy on imdb test set: 0.87132
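A rough way to reproduce that figure (illustrative only; this is a full pass over the 25k-example test split, so expect it to be slow on CPU):

import torch
from datasets import load_dataset

test = load_dataset("imdb", split="test")
correct = 0
for start in range(0, len(test), 64):
    rows = test[start:start + 64]  # slicing a Dataset yields a dict of column lists
    enc = tokenizer(rows["text"], padding=True, truncation=True,
                    max_length=512, return_tensors="pt")
    with torch.no_grad():
        preds = model(**enc).logits.argmax(dim=-1)
    correct += (preds == torch.tensor(rows["label"])).sum().item()
print(correct / len(test))  # reported: 0.87132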