# Detoxify Original Model
Mirror of the Detoxify original model checkpoint for offline use.
Original source: `unitary/detoxify`
## Model Info

- File: `toxic_original-c1212f89.ckpt` (see the checksum sketch below)
- Size: 418 MB
- Model Type: original
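The `c1212f89` suffix follows the `torch.hub` convention of embedding the first eight hex characters of the file's SHA-256 digest in the filename, so a downloaded copy can be checked for corruption. A minimal sketch, assuming the checkpoint already sits in the default cache path:

```python
import hashlib
from pathlib import Path

ckpt = Path.home() / ".cache/torch/hub/checkpoints/toxic_original-c1212f89.ckpt"

# Hash in 1 MiB chunks so the 418 MB file is not read into memory at once
h = hashlib.sha256()
with ckpt.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

# The filename suffix should match the first 8 hex chars of the digest
assert h.hexdigest().startswith("c1212f89"), "checksum mismatch"
print("checkpoint OK")
```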
## Usage

### Online (Normal)

```python
from detoxify import Detoxify

model = Detoxify('original')
result = model.predict("Your text here")
print(result)
```
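`predict` returns a dictionary mapping each category name to a score. Passing a list of strings instead of a single string scores the whole batch at once, with each category then mapping to a list of scores. A short example (the input strings are illustrative):

```python
from detoxify import Detoxify

model = Detoxify('original')

# Batch prediction: one score per input string for each category
results = model.predict(["first comment", "second comment"])
print(results["toxicity"])  # list of two floats in [0, 1]
```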
### Offline Setup

1. Download `toxic_original-c1212f89.ckpt` from this repo
2. Place it in `~/.cache/torch/hub/checkpoints/`
3. Use normally (Detoxify will find it automatically)
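Steps 1 and 2 can also be scripted. A minimal sketch using `huggingface_hub`, where the `repo_id` is a placeholder for wherever this mirror is hosted:

```python
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

# Placeholder repo_id: substitute the actual id of this mirror
local = hf_hub_download(
    repo_id="your-org/detoxify-original-mirror",
    filename="toxic_original-c1212f89.ckpt",
)

# Copy into the torch.hub cache, where Detoxify looks for checkpoints
cache = Path.home() / ".cache/torch/hub/checkpoints"
cache.mkdir(parents=True, exist_ok=True)
shutil.copy(local, cache / "toxic_original-c1212f89.ckpt")
```

With the checkpoint in place, loading works without network access: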
```python
from detoxify import Detoxify

# Works offline if checkpoint is in cache
model = Detoxify('original', device='cpu')
result = model.predict("Your text here")
```
## Toxicity Categories
The model predicts scores for:
- `toxicity` - Overall toxicity
- `severe_toxicity` - Severe toxic content
- `obscene` - Obscene language
- `threat` - Threatening language
- `insult` - Insults
- `identity_attack` - Attacks on identity
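A common way to consume these scores is to flag text whose score in any category crosses a threshold. A minimal sketch, where the 0.5 cutoff is an arbitrary choice rather than anything prescribed by the model:

```python
from detoxify import Detoxify

THRESHOLD = 0.5  # arbitrary cutoff; tune for your precision/recall needs

model = Detoxify('original')
scores = model.predict("Your text here")

# Collect every category whose score crosses the threshold
flagged = [label for label, score in scores.items() if score > THRESHOLD]
print(flagged if flagged else "clean")
```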
## Citation

```bibtex
@misc{detoxify,
  title={Detoxify},
  author={Hanu, Laura and Unitary team},
  howpublished={Github. https://github.com/unitaryai/detoxify},
  year={2020}
}
```