Cohere releases Aya 8B & 32B: SOTA multilingual models for 23 languages!
How did they manage to beat top contenders while also adding 23 languages?
⚡️ Train on synthetic data:
• Synthetic data has been said to cause model collapse after too much training.
• Cohere introduces "data arbitrage" to prevent this by strategically sampling from a pool of several teacher models instead of a single teacher.
• First, train a model pool for each group of languages, then employ an internal reward model named "Arbiter" to evaluate and select the optimal generation. Only the best generation is kept as the final completion for each prompt (a rough sketch of this loop is below).
⚡️ This process is particularly effective in the multilingual setting, where no single teacher model performs best in all languages: here "Multilingual Arbitrage" single-handedly improves win rates of the 8B model vs Gemma 2 9B by 10 points!
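To make the idea concrete, here is a minimal Python sketch of that arbitrage loop, assuming per-language-group teacher pools and an "Arbiter" reward model exposed as simple callables. All names (`teacher_pools`, `arbiter`, `multilingual_arbitrage`) are illustrative assumptions, not Cohere's actual code.

```python
# Hypothetical sketch of "multilingual arbitrage": sample from several teachers,
# score with a reward model ("Arbiter"), keep only the best completion per prompt.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Completion:
    teacher: str   # which teacher model produced this completion
    text: str
    score: float   # reward assigned by the Arbiter


def multilingual_arbitrage(
    prompts_by_language: Dict[str, List[str]],
    teacher_pools: Dict[str, List[Callable[[str], str]]],   # teachers per language group
    arbiter: Callable[[str, str], float],                    # (prompt, completion) -> reward
) -> List[dict]:
    """Build a synthetic SFT dataset by arbitraging across teacher models."""
    dataset = []
    for language, prompts in prompts_by_language.items():
        teachers = teacher_pools[language]                   # pool trained for this language group
        for prompt in prompts:
            candidates = []
            for i, generate in enumerate(teachers):
                text = generate(prompt)                      # one candidate per teacher
                candidates.append(Completion(f"teacher_{i}", text, arbiter(prompt, text)))
            best = max(candidates, key=lambda c: c.score)    # Arbiter picks the winner
            dataset.append({
                "prompt": prompt,
                "completion": best.text,
                "language": language,
                "source": best.teacher,
            })
    return dataset
```

The key design point is that the "teacher" is not one fixed model but whichever model in the pool the Arbiter prefers for that specific prompt, which is what lets the approach cover languages where any single teacher would be weak.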
⚡️ Great performance: automatic evaluations on the Arena-Hard-Auto dataset:
⚡️ Aya Expanse 8B beats models in its weight class such as Gemma 2 9B, Llama 3.1 8B, and the recent Ministral 8B, with win rates ranging from 60.4% to 70.6%.
⚡️ Aya Expanse 32B outperforms Gemma 2 27B, Mixtral 8x22B, and Llama 3.1 70B (2x its size).
• ⚠️ But this performance eval comes from only one benchmark! Let's wait for Open LLM Leaderboard evals.
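For readers unfamiliar with the metric: a win rate here is just the share of benchmark prompts where a judge prefers the candidate model's answer over the baseline's. A tiny illustrative sketch, with made-up judgments and one common tie-handling convention (not the actual Arena-Hard-Auto scoring code):

```python
# Hypothetical win-rate computation over pairwise LLM-judge verdicts.
def win_rate(judgments: list[str]) -> float:
    """judgments: 'win', 'tie', or 'loss' per prompt, comparing the candidate
    model's answer against the baseline's. Ties count as half a win here."""
    points = {"win": 1.0, "tie": 0.5, "loss": 0.0}
    return 100 * sum(points[j] for j in judgments) / len(judgments)


# Example: 6 wins, 1 tie, 3 losses over 10 prompts -> 65.0% win rate
print(win_rate(["win"] * 6 + ["tie"] + ["loss"] * 3))
```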