This repository provides the supporting code for the WildFrame dataset, originally introduced in the paper WildFrame: Comparing Framing in Humans and LLMs on Naturally Occurring Texts.
This repository includes:
- **Data** (`data/`): Includes human annotations collected from Mechanical Turk.
- **LLM Inference Scripts** (`model_predictions/`): Code for running sentiment inference using various LLMs. Includes outputs from our runs in `model_predictions/inference/`.
- **Analysis Scripts** (`analysis/`): Tools for evaluating and comparing sentiment shifts in both humans and LLMs.
- **Analysis Outputs** (`_output/`): Results generated by the analysis scripts.
To set up the environment and install dependencies, run:
```bash
git clone https://github.com/SLAB-NLP/WildFrame-Eval.git
cd WildFrame-Eval
pip install -r requirements.txt
```
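
After installation, the sketch below shows one way to load the human annotations alongside a set of LLM predictions for a quick side-by-side look. The file names (`human_annotations.csv`, `gpt-4o.json`), their formats, and the use of pandas are assumptions for illustration, not the repository's actual layout; adjust them to the real contents of `data/` and `model_predictions/inference/`.

```python
import json
from pathlib import Path

import pandas as pd

# NOTE: the file names below are hypothetical placeholders -- check the
# actual contents of data/ and model_predictions/inference/ in this repo.
ANNOTATIONS_PATH = Path("data") / "human_annotations.csv"
PREDICTIONS_PATH = Path("model_predictions") / "inference" / "gpt-4o.json"

# Human sentiment annotations collected on Mechanical Turk.
annotations = pd.read_csv(ANNOTATIONS_PATH)
print(annotations.head())

# Sentiment predictions from one LLM inference run.
with open(PREDICTIONS_PATH) as f:
    llm_predictions = json.load(f)
print(f"Loaded {len(llm_predictions)} LLM predictions")
```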