ZeroSumEval has moved! Find its new home at Meta Research: github.com/facebookresearch/ZeroSumEval
ZeroSumEval: An extensible framework for evaluating LLMs using games!
ZeroSumEval is a dynamic evaluation benchmark for LLMs built on competitive scenarios that scale with model capabilities (i.e., as models get better, the benchmark gets harder). Instead of fixed evaluation benchmarks or subjective judging criteria, ZeroSumEval uses multi-agent simulations with clear win conditions to pit models against each other.
The framework tests various model capabilities, including knowledge, reasoning, and planning. In addition, ZeroSumEval uses DSPy optimization to test the self-improvement capability of models and ensure the competition between models is fair.
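To illustrate what this optimization step looks like, here is a minimal, hypothetical sketch using DSPy's `BootstrapFewShot` optimizer. The signature, metric, and training example below are illustrative placeholders, not ZeroSumEval's actual internals; see the configuration section below for how the framework wires this up.

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Hypothetical setup: any LiteLLM-style model string works here.
dspy.configure(lm=dspy.LM("openai/gpt-4o"))

class MakeMove(dspy.Signature):
    """Given a chess position, propose the next move."""
    board_state: str = dspy.InputField()
    move: str = dspy.OutputField()

def move_is_legal(example, pred, trace=None) -> bool:
    # Hypothetical metric: a prediction scores 1 if the proposed move is legal.
    return pred.move in example.legal_moves

# Hypothetical training example; a real run would load many from a dataset.
train_examples = [
    dspy.Example(
        board_state="rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
        legal_moves=["e2e4", "d2d4", "g1f3"],
        move="e2e4",
    ).with_inputs("board_state"),
]

# Compile few-shot demos for the player's MakeMove action before the match.
optimizer = BootstrapFewShot(metric=move_is_legal, max_bootstrapped_demos=1)
player_module = optimizer.compile(dspy.Predict(MakeMove), trainset=train_examples)
```

Because every player can be optimized against the same metric and dataset before a match, no model is advantaged by hand-tuned prompts.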
The eval suite consists of a growing number of simulations, including text-based challenges, board games, and Capture The Flag (CTF) competitions.
Key features:
- One-click evals on the existing suite of games
- Easily extendable abstractions for new game implementations (see the sketch after this list)
- Integration with DSPy for automated prompt optimization
- Comprehensive logging and analysis tools
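To give a flavor of the extensibility, the sketch below shows the general shape of a game implementation: a state object with turn order, move application, and a clear win condition. The class and method names are illustrative only, not the framework's actual API; real games subclass the core abstractions in `zero_sum_eval/core/`, and the modules in `zero_sum_eval/games/` are the authoritative reference.

```python
# Hypothetical sketch; illustrative names, not ZeroSumEval's actual API.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class NumberGuessGame:
    """A toy two-player game: one player hides a number, the other guesses it."""
    secret: int
    max_rounds: int = 10
    guesses: list[int] = field(default_factory=list)

    def current_player(self) -> str:
        # In this toy game, the guesser moves every round.
        return "guesser"

    def apply_move(self, move: str) -> None:
        self.guesses.append(int(move))

    def is_over(self) -> bool:
        won = bool(self.guesses) and self.guesses[-1] == self.secret
        return won or len(self.guesses) >= self.max_rounds

    def winner(self) -> str | None:
        if self.guesses and self.guesses[-1] == self.secret:
            return "guesser"
        return "hider" if self.is_over() else None
```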
The project is organized as follows:
- `zero_sum_eval/`: Main package containing the core framework
  - `analysis/`: Modules for analyzing game performance and calculating ratings
  - `core/`: Core game-related components, including player and game state management
  - `games/`: Individual game implementations
  - `managers/`: Game and match management classes
  - `utils/`: Utility functions for logging, configuration, checkpointing, and type definitions
  - `main.py`: Entry point for running games and matches
- `data/`: Game-specific data and examples
- `configs/`: Configuration files for different games and scenarios
- Use `pip` to install ZeroSumEval:

  ```
  pip install zero-sum-eval
  ```

- Test the installation:

  ```
  zseval --help
  ```
You can run a single game or a pool of matches, either directly from the command line or with a detailed config file.

Without a config file:

```
# single game
zseval -g chess -p "white=openai/gpt-4o" "black=openai/gpt-4o"

# pool of matches
zseval --pool -g chess -p "white=openai/gpt-4o" "black=openai/gpt-4o"
```

With a config file:

```
# single game
zseval -c configs/chess.yaml

# pool of matches
zseval --pool -c configs/pool/chess.yaml
```
Add the `--calculate_ratings` flag to output Elo ratings for the models after a pool of matches:

```
zseval --pool -c configs/pool/chess.yaml --calculate_ratings
```

Or calculate ratings directly from a given match pool log directory:

```
zseval --calculate_ratings --output_dir match_pool_log/
```
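For context, the Elo system updates each model's rating after every game according to the gap between its actual and expected results. A minimal sketch of the standard update rule follows (illustrative only; ZeroSumEval's rating computation lives in `zero_sum_eval/analysis/` and may differ in details such as the K-factor):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_elo(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Return updated ratings after one game; score_a is 1 (A wins), 0.5 (draw), or 0 (A loses)."""
    e_a = expected_score(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1.0 - score_a) - (1.0 - e_a))

# Two models start at 1200; the first wins one game:
print(update_elo(1200.0, 1200.0, 1.0))  # -> (1216.0, 1184.0)
```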
ZeroSumEval currently supports the following games:
- Chess
- Debate
- Gandalf (Password Guessing)
- Liar's Dice
- Math Quiz
- Poker (Simple Texas Hold'em)
- PyJail (Capture The Flag)
Each game is implemented as a separate module in the `zero_sum_eval/games/` directory.
Game configurations are defined in YAML files located in the `configs/` directory. These files specify:
- Logging settings
- Manager settings
- Game parameters
- Player configurations
- LLM settings
Example configuration (`chess.yaml`):

```yaml
logging:
  output_dir: ../output/chess_game
manager:
  args:
    max_player_attempts: 5
    max_rounds: 200
game:
  name: chess
  args:
    players:
      white:
        class: chess_player
        args:
          id: llama3.3 70b white
          actions:
            - name: MakeMove
              optimize: true
              metric: chess_move_validation_metric
              dataset: chess_dataset
              dataset_args:
                filename: ./data/chess/stockfish_examples.jsonl
                player_key: white
                num_examples: 10
          lm:
            model: openrouter/meta-llama/llama-3.3-70b-instruct
          optimizer: BootstrapFewshot
          optimizer_args:
            max_bootstrapped_demos: 1
          max_tries: 5
      black:
        class: chess_player
        args:
          id: llama3.3 70b black
          lm:
            model: openrouter/meta-llama/llama-3.3-70b-instruct
          max_tries: 5
```
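Once saved, a config like this can be passed straight to the CLI, e.g. `zseval -c configs/chess.yaml` for a single game, or combined with `--pool` for a pool of matches as shown above.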
If you use ZeroSumEval in your work, please cite it as follows:
```bibtex
@article{khanzerosumeval,
  title={ZeroSumEval: Scaling LLM Evaluation with Inter-Model Competition},
  author={Khan, Haidar and Alyahya, Hisham Abdullah and Ritchie, Colton and Alnumay, Yazeed and Bari, M Saiful and Yener, Bulent}
}
```
Contributions to ZeroSumEval are welcome! Please follow the contribution guidelines and open a pull request or issue on the GitHub repository.
This project is licensed under the Apache License 2.0. See the LICENSE file for details.