Slim Frikha committed
Update README.md
README.md CHANGED

```diff
@@ -91,7 +91,7 @@ print(response)
 ## Benchmarks
 We report in the following table our internal pipeline benchmarks.
 - We use [lm-evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness).
-- We report **raw scores** obtained by applying chat template
+- We report **raw scores** obtained by applying chat template and fewshot_as_multiturn.
 - We use same batch-size across all models.
 
 <table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
```
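For readers reproducing numbers under the setup described in these bullets, the sketch below shows what such an lm-evaluation-harness run could look like. It is a minimal illustration under assumptions: it uses the v0.4+ Python API (`lm_eval.simple_evaluate`), and the model ID, task list, and batch size are placeholders rather than the values behind the reported benchmarks.

```python
# Minimal sketch of an evaluation run matching the settings listed above:
# chat template applied, few-shot examples sent as multi-turn messages, and a
# fixed batch size. Assumes lm-evaluation-harness v0.4+ (pip install lm-eval).
import lm_eval

MODEL_ID = "your-org/your-model"  # placeholder, not the actual benchmarked checkpoint

results = lm_eval.simple_evaluate(
    model="hf",                                    # Hugging Face transformers backend
    model_args=f"pretrained={MODEL_ID},dtype=bfloat16",
    tasks=["mmlu", "gsm8k"],                       # placeholder task list
    batch_size=8,                                  # same batch size across all models
    apply_chat_template=True,                      # raw scores with the chat template applied
    fewshot_as_multiturn=True,                     # few-shot examples as a multi-turn dialogue
)
print(results["results"])                          # per-task metric dictionary
```

On the command line, the corresponding switches are `--apply_chat_template` and `--fewshot_as_multiturn`, alongside the usual `--model`, `--model_args`, `--tasks`, and `--batch_size` options.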