There are many other cases, extending well beyond LMArena, where benchmark gains by the major US labs turned out to be attributable to over-optimizing for the specific benchmarks, some in ways that cannot be explained by the benchmark tests merely contaminating the training corpus.
There are cases where merely rewording the questions or assigning different letters to the answers dropped models like Llama by 30% on the evaluations while other models were unchanged.
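The letter-reassignment check is easy to sketch. A minimal harness (hypothetical, not from any of the papers below) shuffles the answer options so the correct answer lands on a different letter; a model that understands the content keeps scoring, while a model that memorized "the answer is C" collapses:

```python
import random

def permute_choices(choices, answer_idx, seed=0):
    """Shuffle MCQ options so the correct answer moves to a new position.

    Returns the reordered choices and the new index of the correct answer.
    """
    rng = random.Random(seed)
    order = list(range(len(choices)))
    rng.shuffle(order)
    new_choices = [choices[i] for i in order]
    new_answer_idx = order.index(answer_idx)
    return new_choices, new_answer_idx

def accuracy_under_permutation(items, model, trials=5):
    """items: list of (question, choices, answer_idx) tuples.

    model: any function (question, choices) -> predicted index.
    Averages accuracy over several random reorderings per question;
    a large drop versus the canonical ordering suggests label bias.
    """
    correct = total = 0
    for question, choices, answer_idx in items:
        for seed in range(trials):
            shuffled, new_idx = permute_choices(choices, answer_idx, seed=seed)
            correct += int(model(question, shuffled) == new_idx)
            total += 1
    return correct / total
```

A content-robust model scores the same under every permutation; a letter-biased one (e.g. one that always answers "C") drops to roughly chance.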
Open-LLM-Leaderboard had to impose rate limits because a "handful of labs" were running so many evals in a single day that they hogged the entire eval cluster.
“Coding Benchmarks Are Already Contaminated” (Ortiz et al., 2025)
“GSM-PLUS: A Re-translation Reveals Data Contamination” (Shi et al., ACL 2024)
“Prompt-Tuning Can Add 30 Points to TruthfulQA” (Perez et al., 2023)
“HellaSwag Can Be Gamed by a Linear Probe” (Rajpurohit & Berg-Kirkpatrick, EMNLP 2024)
“Label Bias Explains MMLU Jumps” (Hassan et al., arXiv 2025)
“HumanEval-Revival: A Re-typed Test for LLM Coding Ability” (Yang & Liu, ICML 2024 workshop)
“Data Contamination or Over-fitting? Detecting MMLU Memorisation in Open LLMs” (IBM, 2024)
And yes, I relied on an LLM to summarize these instead of reading the full papers.