Update README.md
README.md CHANGED
@@ -7,7 +7,7 @@ tags:
 - speech
 ---
 
-### Contrastive user encoder
+### Contrastive user encoder (multi-post)
 This model is a `DistilBertModel` trained by fine-tuning `distilbert-base-uncased` on author-based triplet loss.
 
 #### Details
@@ -17,7 +17,7 @@ Training and evaluation details are provided in our EMNLP Findings paper:
 
 #### Training
 We fine-tuned DistilBERT on triplets consisting of:
-- a set of Reddit submissions from a given user (10 posts, called "anchors");
+- a set of Reddit submissions from a given user (10 posts, called "anchors") - see ```rbroc/contrastive-user-encoder-singlepost``` for an equivalent model trained on a single anchor;
 - an additional post from the same user (a "positive example");
 - a post from a different, randomly selected user (the "negative example")
 
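The README describes a `DistilBertModel` checkpoint obtained by fine-tuning `distilbert-base-uncased`. As a rough illustration of how such an encoder could be used to embed a user from several posts, the sketch below loads a checkpoint with `transformers` and mean-pools per-post `[CLS]` vectors; the hub id `rbroc/contrastive-user-encoder-multipost` and the pooling strategy are assumptions, not details stated in the README.

```python
# Illustrative sketch only: the hub id and the [CLS] mean-pooling below are
# assumptions; the README does not specify how post vectors are aggregated.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "rbroc/contrastive-user-encoder-multipost"  # hypothetical hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)  # a DistilBertModel checkpoint

posts = [
    "First Reddit submission by the user ...",
    "Another submission by the same user ...",
]

with torch.no_grad():
    batch = tokenizer(posts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state   # (n_posts, seq_len, hidden)
    post_vecs = hidden[:, 0]                      # [CLS] vector for each post
    user_vec = post_vecs.mean(dim=0)              # single embedding for the user
```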
|
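The Training section lists the three elements of each triplet (a set of anchor posts, a positive post, and a negative post). A minimal sketch of that objective, assuming `[CLS]` pooling, mean-aggregation over the anchor set, and a standard margin-based triplet loss (the margin value is also an assumption), could look like the following; it is not the authors' released training code.

```python
# Minimal sketch of the triplet objective described in the README; pooling,
# anchor aggregation, and the margin value are assumptions.
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")

def embed(texts):
    """Return one [CLS] vector per input post."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]

anchor_posts = ["anchor post 1 ...", "anchor post 2 ..."]  # 10 posts per user in the README
positive_post = ["held-out post by the same user ..."]
negative_post = ["post by a randomly sampled other user ..."]

anchor = embed(anchor_posts).mean(dim=0, keepdim=True)     # aggregate the anchor set
positive = embed(positive_post)
negative = embed(negative_post)

loss = F.triplet_margin_loss(anchor, positive, negative, margin=1.0)
loss.backward()
```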