Add example compute function
README.md
CHANGED
@@ -115,7 +115,18 @@ TBA
 
 *Give code examples of the metric being used. Try to include examples that clear up any potential ambiguity left from the metric description above. If possible, provide a range of examples that show both typical and atypical results, as well as examples where a variety of input parameters are passed.*
 
-
+```python
+def compute_metrics(p: EvalPrediction):
+    preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
+    preds = np.argmax(preds, axis=1)
+    # Use labels instead of ids for hierarchical ISCO-08 classification
+    preds = [model.config.id2label[p] for p in preds]
+    refs = [model.config.id2label[r] for r in p.label_ids]
+    result = metric.compute(predictions=preds, references=refs)
+    return result
+```
+
+More TBA
 
 ## Limitations and Bias
 
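The committed `compute_metrics` relies on objects that only exist inside a live `transformers` run (`model.config.id2label`, a `metric` object, and an `EvalPrediction`). The sketch below mirrors the same flow in isolation: argmax over logits, mapping integer ids to label strings, then scoring predictions against references. The `id2label` mapping, the logits, and the plain-accuracy scoring are hypothetical stand-ins, not the actual model, data, or `metric.compute()` from this repository.

```python
import numpy as np

# Hypothetical id-to-label mapping standing in for model.config.id2label;
# the strings imitate ISCO-08 occupation codes.
id2label = {0: "1111", 1: "2111", 2: "3111"}

def compute_metrics_sketch(logits, label_ids):
    # Pick the highest-scoring class per example, as in the committed code.
    preds = np.argmax(logits, axis=1)
    # Map integer ids to label strings for both predictions and references.
    preds = [id2label[i] for i in preds]
    refs = [id2label[i] for i in label_ids]
    # Plain accuracy stands in for metric.compute(predictions=..., references=...).
    accuracy = sum(p == r for p, r in zip(preds, refs)) / len(refs)
    return {"accuracy": accuracy}

# Three examples; the third reference id differs from the argmax prediction.
logits = np.array([[2.0, 0.1, 0.3],
                   [0.2, 1.5, 0.1],
                   [0.1, 0.2, 0.9]])
result = compute_metrics_sketch(logits, [0, 1, 0])  # two of three match
```

In a real evaluation this function would be passed to a `Trainer` via its `compute_metrics` argument, receiving an `EvalPrediction` rather than raw arrays.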