
Open Thoughts GitHub Repository

Curating the best open reasoning datasets
A collaboration led by Bespoke Labs and the DataComp community


Our first goal is to curate a reasoning dataset to train state-of-the-art small reasoning models that surpass DeepSeek-R1-Distill-Qwen-32B and DeepSeek-R1-Distill-Qwen-7B on math and code reasoning benchmarks.


Results

Our OpenThinker3-7B model trained on OpenThoughts3-1.2M is the state-of-the-art open-data 7B reasoning model. The numbers reported in the table below are evaluated with our open-source tool Evalchemy.

| Model | Open Data | AIME24 | AIME25 | AMC23 | MATH500 | HMMT 02/25 | LCB 06/24-01/25 | CodeElo | CodeForces | GPQA-D | JEEBench |
|---|---|---|---|---|---|---|---|---|---|---|---|
| OpenThinker-7B | ✅ | 30.7 | 22.0 | 72.5 | 82.8 | 15.7 | 26.1 | 11.1 | 14.9 | 38.6 | 45.3 |
| OpenThinker2-7B | ✅ | 60.7 | 38.7 | 89.8 | 87.6 | 24.7 | 40.6 | 22.8 | 26.6 | 47.0 | 65.1 |
| OpenThinker3-7B | ✅ | 69.0 | 53.3 | 93.5 | 90.0 | 42.7 | 51.7 | 31.0 | 32.2 | 53.7 | 72.4 |
| DeepSeek-R1-Distill-Qwen-32B | ❌ | 51.3 | 38.0 | 92.0 | 88.0 | 25.0 | 34.5 | 19.9 | 21.1 | 33.2 | 50.4 |
| OpenR1-Distill-7B | ✅ | 57.7 | 39.7 | 87.0 | 88.0 | 25.7 | 30.7 | 30.1 | 29.3 | 58.9 | 68.7 |
| Llama-3.1-Nemotron-Nano-8B-v1 | ✅ | 62.0 | 48.0 | 94.0 | 89.4 | 26.7 | 50.9 | 30.9 | 32.9 | 52.9 | 70.7 |
| AceReason-Nemotron-7B | ✅ | 71.0 | 50.7 | 93.8 | 89.8 | 33.3 | 44.3 | 32.9 | 30.9 | 52.9 | 64.3 |

To mitigate variance in evaluation accuracy, we compute average scores over multiple evaluation runs with different seeds. More details can be found in our OpenThoughts paper.
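For intuition, the averaging amounts to the sketch below. The `run_eval` callable and the seed list are hypothetical stand-ins, not Evalchemy's actual interface:

```python
import statistics

def average_over_seeds(run_eval, benchmark: str, seeds=(0, 1, 2)) -> float:
    """Average a benchmark score over several seeded evaluation runs.

    `run_eval` is a hypothetical callable (benchmark, seed) -> score;
    the real evaluation logic lives in Evalchemy.
    """
    scores = [run_eval(benchmark, seed) for seed in seeds]
    return statistics.mean(scores)
```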

We are fully open-source. Our model weights, datasets, data generation code, evaluation code, and training code are all publicly available.

Installation

```sh
make install
poetry shell
```

Set the DeepSeek API key:

```sh
export DEEPSEEK_API_KEY=your_api_key
```

Set HF_ORG to your Hugging Face organization ID, and set HF_PRIVATE=true if you want to push to a private repo.

```sh
export HF_ORG=your_org_id
export HF_PRIVATE=false
```
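A quick sanity check (illustrative only, not a script in this repo) that the environment is configured before launching generation:

```python
import os

# Required for annotation with the DeepSeek API.
if not os.environ.get("DEEPSEEK_API_KEY"):
    raise SystemExit("DEEPSEEK_API_KEY is not set")

# Hugging Face organization to push datasets to; HF_PRIVATE toggles visibility.
hf_org = os.environ.get("HF_ORG")
hf_private = os.environ.get("HF_PRIVATE", "false").lower() == "true"
print(f"Pushing to org {hf_org!r} (private={hf_private})")
```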

OpenThoughts3-1.2M Data Generation

The OpenThoughts3-1.2M dataset consists of 850,000 math questions, 250,000 code questions, and 100,000 science questions. Unlike previous OpenThoughts datasets, which used DeepSeek-R1 annotations, OpenThoughts3's reasoning traces are generated with QwQ-32B. The dataset is the result of more than 1,000 experiments testing the design choices involved in dataset curation. More details can be found in our OpenThoughts paper.
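At its core, the annotation step sends each question to QwQ-32B and stores the full reasoning trace. A minimal sketch, assuming an OpenAI-compatible serving endpoint; the base URL, model identifier, and sampling settings are placeholders, not the pipeline's actual configuration:

```python
from openai import OpenAI  # assumes an OpenAI-compatible QwQ-32B endpoint

# Placeholder endpoint and credentials; the real generation code differs.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def annotate(question: str) -> str:
    """Return the model's reasoning trace plus final answer for one question."""
    response = client.chat.completions.create(
        model="Qwen/QwQ-32B",
        messages=[{"role": "user", "content": question}],
        temperature=0.6,
    )
    return response.choices[0].message.content
```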

[Figure: OpenThoughts3-1.2M data curation recipe]

OpenThoughts2-1M Data Generation

The OpenThoughts2-1M dataset is a combination of OpenThoughts-114k, OpenR1-Math, and our newly generated math and code reasoning data. We generate the additional math and code data by ablating 26 different question-generation methodologies and sampling from the highest-performing ones.

The recipe is outlined below.

[Figure: OpenThoughts2-1M data curation recipe]

More details can be found in our blog post.
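Conceptually, the selection step works like the sketch below: score each question-generation methodology in a small ablation, then sample new questions only from the top performers. The strategy names and scores here are invented for illustration:

```python
# Hypothetical ablation results: methodology name -> downstream benchmark score.
ablation_scores = {
    "rewrite_seed_problems": 61.2,
    "generate_from_topics": 58.9,
    "translate_competition_sets": 55.1,
    # ... one entry per methodology ablated (26 in total)
}

TOP_K = 2  # keep only the best-performing methodologies

best = sorted(ablation_scores, key=ablation_scores.get, reverse=True)[:TOP_K]
print(f"Sampling new questions from: {best}")
```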

OpenThoughts-114k Data Generation

For OpenThoughts-114k, we generate data for the following domains:

  1. Code
  2. Math
  3. Science
  4. Puzzle

The recipe is outlined below.

[Figure: OpenThoughts-114k data curation recipe]

More instructions are in open_thoughts/README.md.
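The released data can also be pulled directly from Hugging Face. A minimal sketch: the dataset name is real, but the filtering column is an assumption, so check the dataset card for the actual schema:

```python
from datasets import load_dataset

# Dataset name is real; the column used for filtering is an assumption --
# consult the dataset card for the actual fields.
ds = load_dataset("open-thoughts/OpenThoughts-114k", split="train")
print(ds)  # inspect the available columns first

# Hypothetical domain filter once the schema is known:
# math_only = ds.filter(lambda row: row["domain"] == "math")
```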

Training and Evaluation

Training and evaluation code coming soon.


About Us

We are a team of researchers and engineers from Bespoke Labs, Stanford, University of California, Berkeley, University of Washington, UT Austin, Juelich Supercomputing Center (JSC), LAION, UCLA, UNC Chapel Hill, and Toyota Research Institute, united around building the best datasets (and thus the best models). See our previous works at datacomp.ai and mlfoundations.

Sponsors

Open Thoughts is supported by our sponsors.

Community

Join our Discord community to discuss OpenThoughts and connect with other users!

What the open-source community is building with OpenThoughts: make an edit to this README to add your project!

Citation

```bibtex
@misc{guha2025openthoughtsdatarecipesreasoning,
  title={OpenThoughts: Data Recipes for Reasoning Models},
  author={Etash Guha and Ryan Marten and Sedrick Keh and Negin Raoof and Georgios Smyrnis and Hritik Bansal and Marianna Nezhurina and Jean Mercat and Trung Vu and Zayne Sprague and Ashima Suvarna and Benjamin Feuer and Liangyu Chen and Zaid Khan and Eric Frankel and Sachin Grover and Caroline Choi and Niklas Muennighoff and Shiye Su and Wanjia Zhao and John Yang and Shreyas Pimpalgaonkar and Kartik Sharma and Charlie Cheng-Jie Ji and Yichuan Deng and Sarah Pratt and Vivek Ramanujan and Jon Saad-Falcon and Jeffrey Li and Achal Dave and Alon Albalak and Kushal Arora and Blake Wulfe and Chinmay Hegde and Greg Durrett and Sewoong Oh and Mohit Bansal and Saadia Gabriel and Aditya Grover and Kai-Wei Chang and Vaishaal Shankar and Aaron Gokaslan and Mike A. Merrill and Tatsunori Hashimoto and Yejin Choi and Jenia Jitsev and Reinhard Heckel and Maheswaran Sathiamoorthy and Alexandros G. Dimakis and Ludwig Schmidt},
  year={2025},
  eprint={2506.04178},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2506.04178},
}
```
