GitHub - open-thoughts/open-thoughts: Fully open data curation for reasoning models
Extract
Fully open data curation for reasoning models.
Summary
A detailed overview of the Open Thoughts project.
Main Summary
The Open Thoughts project, a collaboration led by Bespoke Labs and the DataComp community, has positioned itself as a reference point for open-source reasoning dataset curation. Its primary goal is to develop state-of-the-art small reasoning models that outperform models such as DeepSeek-R1-Distill-Qwen-32B and -7B on demanding math and code reasoning benchmarks. Through a series of releases, such as OpenThinker3 and the OpenThoughts3-1.2M dataset, the team has shown steady progress, culminating in OpenThinker3-7B, the strongest open-data 7B reasoning model to date. This success rests on a rigorous data generation methodology and a firm commitment to open source, offering substantial value to the AI community.
Key Elements
- Goal and Strategic Collaboration: The project specifically aims to curate reasoning datasets for training small models that surpass the DeepSeek distillations on math and code benchmarks. This effort is driven by a collaboration between Bespoke Labs and the DataComp community, bringing together expertise from several academic institutions and research centers.
- High-Impact Releases: There have been multiple significant releases, most notably OpenThinker3 and the OpenThoughts paper, both on June 4, 2025. Earlier, the OpenThoughts2-1M dataset became the #1 trending dataset on Hugging Face, and OpenThinker-32B was released as the best open-data reasoning model, demonstrating a continuous cycle of innovation and community recognition.
- Superior Benchmark Performance: OpenThinker3-7B, trained on OpenThoughts3-1.2M, sets a new state of the art for open-data 7B reasoning models. Results evaluated with the open-source tool Evalchemy show substantial gains on benchmarks such as AIME24, CodeElo, and JEEBench, consistently surpassing its predecessors and competing models such as DeepSeek-R1-Distill-Qwen-32B and Llama-3.1-Nemotron-Nano-8B-v1.
- Advanced Data Generation Methodology: Model quality rests on careful data curation. OpenThoughts3-1.2M comprises 850,000 math questions, 250,000 code questions, and 100,000 science questions. Unlike earlier versions, the reasoning traces in OpenThoughts3 are generated with QwQ-32B and are the result of more than 1,000 experiments, reflecting a highly systematic, experimental approach to dataset construction.
Analysis and Implications
This project marks a significant step toward accessible, high-performing reasoning models, democratizing the ability to build more capable AI systems. Its emphasis on transparency and open source encourages collaborative innovation and lets the community replicate and extend these results, accelerating AI research. The new performance standards have direct implications for fields such as education, software development, and scientific research by providing more powerful and reliable reasoning tools.
Additional Context
The team behind Open Thoughts consists of researchers and engineers from leading institutions such as Stanford and UC Berkeley, backed by key sponsors including Bespoke Labs and the Toyota Research Institute, providing a solid base of expertise and resources.
Content
Curating the best open reasoning datasets
A collaboration led by Bespoke Labs and the DataComp community
Our first goal is to curate a reasoning dataset to train state-of-the-art small reasoning models that surpass DeepSeek-R1-Distill-Qwen-32B and DeepSeek-R1-Distill-Qwen-7B on math and code reasoning benchmarks.
News
- [2025/06/04] 🎉🎉🎉 We release our OpenThoughts paper!
- [2025/06/04] 🎉🎉🎉 OpenThinker3 is released!
- [2025/05/09] 🎉 Join our Discord community to discuss OpenThoughts and connect with other users!
- [2025/04/07] 🎉 OpenThoughts2-1M dataset is the #1 trending dataset on Hugging Face.
- [2025/04/03] 🎉 OpenThinker2 has arrived: OpenThoughts2-1M, OpenThinker2-7B, OpenThinker2-32B.
- [2025/03/13] 🎉 We release an analysis of reasoning models on Alice in Wonderland.
- [2025/02/16] 🎉 OpenThinker on Ollama reaches 400k downloads.
- [2025/02/14] 🎉 Chat with OpenThinker in the online playground.
- [2025/02/13] 🎉 OpenThinker is now available on Ollama for easy local inference.
- [2025/02/12] 🎉 We release OpenThinker-32B, the best open-data reasoning model.
- [2025/02/02] 🎉 OpenThoughts-114k dataset is the #1 trending dataset on Hugging Face.
- [2025/01/30] 🎉 Reasoning benchmarks are added to Evalchemy and compared to publicly reported scores.
- [2025/01/28] 🎉 Open Thoughts launches with OpenThoughts-114k dataset and OpenThinker-7B model.
- [2025/01/27] 🎉 Bespoke-Stratos-17k dataset is the #2 trending dataset on Hugging Face.
- [2025/01/22] 🎉 Bespoke-Stratos-17k dataset and Bespoke-Stratos-32B model are announced.
Results
Our OpenThinker3-7B model trained on OpenThoughts3-1.2M is the state-of-the-art open-data 7B reasoning model. The numbers reported in the table below are evaluated with our open-source tool Evalchemy.
| Model | Open Data | AIME24 | AIME25 | AMC23 | MATH500 | HMMT 02/25 | LCB 06/24-01/25 | CodeElo | CodeForces | GPQA-D | JEEBench |
|---|---|---|---|---|---|---|---|---|---|---|---|
| OpenThinker-7B | ✅ | 30.7 | 22.0 | 72.5 | 82.8 | 15.7 | 26.1 | 11.1 | 14.9 | 38.6 | 45.3 |
| OpenThinker2-7B | ✅ | 60.7 | 38.7 | 89.8 | 87.6 | 24.7 | 40.6 | 22.8 | 26.6 | 47.0 | 65.1 |
| OpenThinker3-7B | ✅ | 69.0 | 53.3 | 93.5 | 90.0 | 42.7 | 51.7 | 31.0 | 32.2 | 53.7 | 72.4 |
| DeepSeek-R1-Distill-Qwen-32B | ❌ | 51.3 | 38.0 | 92.0 | 88.0 | 25.0 | 34.5 | 19.9 | 21.1 | 33.2 | 50.4 |
| OpenR1-Distill-7B | ✅ | 57.7 | 39.7 | 87.0 | 88.0 | 25.7 | 30.7 | 30.1 | 29.3 | 58.9 | 68.7 |
| Llama-3.1-Nemotron-Nano-8B-v1 | ✅ | 62.0 | 48.0 | 94.0 | 89.4 | 26.7 | 50.9 | 30.9 | 32.9 | 52.9 | 70.7 |
| AceReason-Nemotron-7B | ✅ | 71.0 | 50.7 | 93.8 | 89.8 | 33.3 | 44.3 | 32.9 | 30.9 | 52.9 | 64.3 |
To mitigate variance in evaluation accuracy, we compute average scores over multiple evaluation runs with different seeds. More details can be found in our OpenThoughts paper.
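To make the averaging step concrete, here is a minimal sketch of multi-seed aggregation. It is not Evalchemy's actual API or output format; the per-seed result dictionaries and numbers below are hypothetical.

```python
# Minimal sketch: average benchmark scores across evaluation runs with different seeds.
from statistics import mean, stdev

# Hypothetical per-seed results for one model (percent accuracy); Evalchemy's
# real output layout differs.
runs = [
    {"AIME24": 68.3, "MATH500": 89.8},  # seed 0
    {"AIME24": 70.0, "MATH500": 90.2},  # seed 1
    {"AIME24": 68.7, "MATH500": 90.0},  # seed 2
]

for bench in runs[0]:
    scores = [run[bench] for run in runs]
    print(f"{bench}: mean {mean(scores):.1f}, std {stdev(scores):.1f} over {len(scores)} seeds")
```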
We are fully open-source. Our model weights, datasets, data generation code, evaluation code, and training code are all publicly available.
Installation
```bash
make install
poetry shell
```
Set the DeepSeek API key:
```bash
export DEEPSEEK_API_KEY=your_api_key
```
Set HF_ORG to your organization id. Set HF_PRIVATE=true if you want to push to a private repo.
```bash
export HF_ORG=your_org_id
export HF_PRIVATE=false
```
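For reference, a data generation script would presumably read these variables from the environment. The sketch below uses a hypothetical `load_config` helper; it is not part of the repository's code.

```python
# Sketch of consuming the environment configuration above; load_config is hypothetical.
import os

def load_config() -> dict:
    api_key = os.environ.get("DEEPSEEK_API_KEY")
    if not api_key:
        raise RuntimeError("DEEPSEEK_API_KEY is not set (see the export command above).")
    return {
        "deepseek_api_key": api_key,
        "hf_org": os.environ.get("HF_ORG", ""),  # Hugging Face organization id
        "hf_private": os.environ.get("HF_PRIVATE", "false").lower() == "true",
    }

if __name__ == "__main__":
    print(load_config())
```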
OpenThoughts3-1.2M Data Generation
The OpenThoughts3-1.2M dataset consists of 850,000 math questions, 250,000 code questions, and 100,000 science questions. Unlike previous OpenThoughts datasets, which used R1 annotations, OpenThoughts3's reasoning traces are generated with QwQ-32B. The dataset is the result of 1000+ experiments testing various design choices involved in dataset curation. More details can be found in our OpenThoughts paper.
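One way to inspect the domain mix is to stream the dataset with the `datasets` library. The repository id `open-thoughts/OpenThoughts3-1.2M` and the `domain` column name are assumptions here; check the dataset card for the actual schema.

```python
# Sketch: sample the OpenThoughts3-1.2M stream and count domains.
from collections import Counter
from datasets import load_dataset

# Stream rather than downloading all 1.2M rows up front (repo id assumed).
ds = load_dataset("open-thoughts/OpenThoughts3-1.2M", split="train", streaming=True)

counts = Counter()
for row in ds.take(10_000):  # inspect a sample of the stream
    counts[row.get("domain", "unknown")] += 1  # "domain" field name is an assumption

print(counts)  # should roughly reflect the 850k math / 250k code / 100k science mix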
OpenThoughts2-1M Data Generation
The OpenThoughts2-1M dataset is a combination of OpenThoughts-114k, OpenR1-Math, and our newly generated math and code reasoning data. We generate the additional math and code data by ablating on 26 different question generation methodologies and sampling from the highest performing ones.
The recipe is outlined in the diagram in the repository README; more details can be found in our blog post.
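As a rough illustration of the "sample from the highest-performing methodologies" step, here is a hedged sketch. The strategy names, scores, and selection rule are invented for illustration and do not reflect the actual 26 methodologies or their rankings.

```python
# Schematic sketch: keep the top-scoring question-generation strategies and
# sample questions from their pools. All names and numbers are illustrative.
import random

strategy_scores = {  # hypothetical downstream scores per strategy
    "strategy_a": 0.62,
    "strategy_b": 0.55,
    "strategy_c": 0.48,
    "strategy_d": 0.41,
}

def sample_from_top_strategies(pools, scores, top_k, n_questions):
    """Keep the top_k strategies by score and draw questions uniformly from their pools."""
    best = sorted(scores, key=scores.get, reverse=True)[:top_k]
    candidates = [q for name in best for q in pools[name]]
    return random.sample(candidates, min(n_questions, len(candidates)))

pools = {name: [f"{name}-question-{i}" for i in range(100)] for name in strategy_scores}
print(sample_from_top_strategies(pools, strategy_scores, top_k=2, n_questions=5))
```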
OpenThoughts-114k Data Generation
For OpenThoughts-114k, we generate data for the following domains:
- Code
- Math
- Science
- Puzzle
The recipe is outlined in the diagram in the repository README. More instructions are in open_thoughts/README.md.
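For orientation only, the skeleton below shows what a per-domain generate-and-verify flow could look like. Every helper is a hypothetical placeholder; the actual recipe lives in open_thoughts/README.md and the diagram referenced above.

```python
# Rough, hypothetical skeleton of a per-domain source -> annotate -> verify flow;
# none of these helpers are the actual open_thoughts pipeline.
DOMAINS = ["code", "math", "science", "puzzle"]

def source_questions(domain: str) -> list[dict]:
    """Placeholder: collect candidate questions for one domain."""
    return [{"domain": domain, "question": f"example {domain} question"}]

def annotate(example: dict) -> dict:
    """Placeholder: attach a reasoning trace and answer from the teacher model."""
    return {**example, "reasoning": "...", "answer": "42"}

def verify(example: dict) -> bool:
    """Placeholder verification, e.g. answer checking or running test cases."""
    return bool(example["answer"])

dataset = []
for domain in DOMAINS:
    for example in source_questions(domain):
        annotated = annotate(example)
        if verify(annotated):
            dataset.append(annotated)

print(len(dataset))  # 4 placeholder examples, one per domain
```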
Training and Evaluation
Training and evaluation code coming soon.
Links
- 📝 OpenThoughts Paper
- 📊 OpenThoughts3-1.2M and OpenThinker3-7B Blog Post
- 💻 Open Thoughts GitHub Repository
- 🧠 OpenThoughts3-1.2M dataset
- 🤖 OpenThinker3-7B model
About Us
We are a team of researchers and engineers from Bespoke Labs, Stanford, University of California, Berkeley, University of Washington, UT Austin, Juelich Supercomputing Center (JSC), LAION, UCLA, UNC Chapel Hill, and Toyota Research Institute united around building the best datasets (and thus the best models). See our previous works at datacomp.ai and mlfoundations.
Sponsors
Open Thoughts is supported by
- Bespoke Labs
- Toyota Research Institute
- Lambda Labs
- NSF IFML
- UT Austin Machine Learning Lab
- Juelich Supercomputing Center
Community
Make an edit to add your project!
Join our Discord community to discuss OpenThoughts and connect with other users!
What the open source community is building with OpenThoughts:
- Light-R1-SFT includes examples from OpenThoughts-114k and is used to train Light-R1-14B-DS, Light-R1-32B, Light-R1-7B-DS, Light-R1-32B-DS
- Traceback-12B is a reasoning model trained on a dataset that includes OpenThoughts-114k and Bespoke-Stratos-17k
- 190+ public models on Hugging Face have been trained using OpenThoughts-114k
- 100+ public models on Hugging Face have been trained using Bespoke-Stratos-17k
- Sky-T1 uses Bespoke-Stratos-17k for their R1 SFT experiments
- Ollama has created quantized versions of the OpenThinker-7B and OpenThinker-32B models, for running locally on your laptop
- CuratedThoughts is a filtered version of OpenThoughts-114k to make it suitable for RL training
- OpenThoughts-114k-math is a filtered version of the math subset in OpenThoughts-114k using Math-Verify verification on top of our LLM Judge with GT verification
- SmallThoughts regenerates a 50k version of OpenThoughts-114k using a fork of this repo
- AM-DeepSeek-R1-Distilled-1.4M is a state-of-the-art reasoning dataset mix containing OpenThoughts-114k and Bespoke-Stratos-17k
- Marin 8B of the Stanford Marin Project, a collaborative effort to develop open-source foundation models, is trained on Bespoke-Stratos-17k.
Citation
```bibtex
@misc{guha2025openthoughtsdatarecipesreasoning,
  title={OpenThoughts: Data Recipes for Reasoning Models},
  author={Etash Guha and Ryan Marten and Sedrick Keh and Negin Raoof and Georgios Smyrnis and Hritik Bansal and Marianna Nezhurina and Jean Mercat and Trung Vu and Zayne Sprague and Ashima Suvarna and Benjamin Feuer and Liangyu Chen and Zaid Khan and Eric Frankel and Sachin Grover and Caroline Choi and Niklas Muennighoff and Shiye Su and Wanjia Zhao and John Yang and Shreyas Pimpalgaonkar and Kartik Sharma and Charlie Cheng-Jie Ji and Yichuan Deng and Sarah Pratt and Vivek Ramanujan and Jon Saad-Falcon and Jeffrey Li and Achal Dave and Alon Albalak and Kushal Arora and Blake Wulfe and Chinmay Hegde and Greg Durrett and Sewoong Oh and Mohit Bansal and Saadia Gabriel and Aditya Grover and Kai-Wei Chang and Vaishaal Shankar and Aaron Gokaslan and Mike A. Merrill and Tatsunori Hashimoto and Yejin Choi and Jenia Jitsev and Reinhard Heckel and Maheswaran Sathiamoorthy and Alexandros G. Dimakis and Ludwig Schmidt},
  year={2025},
  eprint={2506.04178},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2506.04178},
}
```
Source: GitHub
