One of the most popular AI benchmarking sites is lmarena.ai.
It ranks models by showing people two anonymous answers and asking which one they prefer (crowd voting).
But there’s a problem: contamination.
New models are often trained on data that overlaps with the benchmark's test material, so they score artificially high because they have effectively seen the answers before.
A study from MIT and Stanford explains how this gives unfair advantages, especially to models from big tech companies.
That’s why I don’t use LM Arena to judge AIs.
Instead, I use livebench.ai, which releases new, unseen questions every month and focuses on harder tasks that really test intelligence.
submitted by /u/deen1802