This is a collection of research papers and blog posts on OpenAI Strawberry (o1) and LLM reasoning.
The repository is continuously updated to track the frontier of LLM reasoning.
- [OpenAI] o3 preview & o3 mini
- [OpenAI] Introducing ChatGPT Pro
- [Google DeepMind] Gemini 2.0 Flash Thinking
- [Ilya Sutskever] AI with reasoning power will be less predictable
- [SemiAnalysis] Scaling Laws – O1 Pro Architecture, Reasoning Training Infrastructure, Orion and Claude 3.5 Opus “Failures”
- [DeepSeek] DeepSeek-R1-Lite-Preview is now live: unleashing supercharged reasoning power!
- [Moonshot] Math on par with the o1 series and search evolved again: Kimi's new reasoning model expands the boundary of intelligence with you
- [Moonshot] Kimi releases k1, a visual thinking model leading the industry on multiple science benchmarks
- [InternLM] The strong reasoning model InternThinker opens for trial: autonomously generating high-intelligence-density data, with meta-action thinking capability
- [新智元] A 10,000-word exclusive exposé, the first reveal of the o1 pro architecture! Stunning twist: did Claude 3.5 Opus not fail after all?
- [OpenAI] Learning to Reason with LLMs
- [OpenAI] OpenAI o1-mini Advancing cost-efficient reasoning
- [OpenAI] Finding GPT-4’s mistakes with GPT-4
- [ARC-AGI] OpenAI o3 Breakthrough High Score on ARC-AGI-Pub
- [Tibor Blaho] Summary of what we have learned during AMA hour with the OpenAI o1 team
- [hijkzzz] Exploring OpenAI O1 Model Replication
- [hijkzzz] A Survey of Reinforcement Learning from Human Feedback (RLHF)
- [Nathan Lambert] OpenAI’s Strawberry, LM self-talk, inference scaling laws, and spending more on inference
- [Nathan Lambert] Reverse engineering OpenAI’s o1
- [Andreas Stuhlmüller, jungofthewon] Supervise Process, not Outcomes
- [Nouha Dziri] Have o1 Models Cracked Human Reasoning?
- [Rishabh Agarwal] Improving LLM Reasoning using Self-generated data: RL and Verifiers
- [Wei Shen] Generalization Progress in RLHF: Insights into the Impact of Reward Models and PPO
- [Dominater069] Codeforces - Analyzing how good O1-Mini actually is
- [hijkzzz] Reproducing o1, with musings on REINFORCE & GRPO
- [Noam Brown] Parables on the Power of Planning in AI: From Poker to Diplomacy
- [Noam Brown] OpenAI o1 and Teaching LLMs to Reason Better
- [Hyung Won Chung] Don't teach. Incentivize.
  - OpenAI Developers
- [OpenO1 Team] Open-Source O1
- [Alibaba Qwen Team] QwQ
- [Alibaba Qwen Team] QvQ
- [GAIR-NLP] O1 Replication Journey: A Strategic Progress Report
- [Skywork] Skywork o1 Open model series
- [Steiner] A Small Step Towards Reproducing OpenAI o1: Progress Report on the Steiner Open Source Models
- [Alibaba] Marco-o1
- [openreasoner] OpenR
- [OpenRLHF Team] OpenRLHF
- [Maitrix.org] LLM Reasoners
- [bklieger-groq] g1: Using Llama-3.1 70b on Groq to create o1-like reasoning chains
- [o1-chain-of-thought] Transcription of o1 Reasoning Traces from OpenAI blog post
Format:
- [title](paper link) [links]
- author1, author2, and author3...
- publisher
- code
- experimental environments and datasets
Relevant Papers from OpenAI o1 Contributors
- Deliberative alignment: reasoning enables safer language models
- OpenAI
- MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering
- Jun Shern Chan, Neil Chowdhury, Oliver Jaffe, James Aung, Dane Sherburn, Evan Mays, Giulio Starace, Kevin Liu, Leon Maksin, Tejal Patwardhan, Lilian Weng, Aleksander Mądry
- Training Verifiers to Solve Math Word Problems
- Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, John Schulman
- Generative Language Modeling for Automated Theorem Proving
- Stanislas Polu, Ilya Sutskever
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
- Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou
- Self-Consistency Improves Chain of Thought Reasoning in Language Models
- Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
- Let's Verify Step by Step
- Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, Karl Cobbe
- LLM Critics Help Catch LLM Bugs
- Nat McAleese, Rai Michael Pokorny, Juan Felipe Ceron Uribe, Evgenia Nitishinskaya, Maja Trebacz, Jan Leike
- Self-critiquing models for assisting human evaluators
- William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, Jan Leike
- Scalable Online Planning via Reinforcement Learning Fine-Tuning
  - Arnaud Fickinger, Hengyuan Hu, Brandon Amos, Stuart Russell, Noam Brown
- Improving Policies via Search in Cooperative Partially Observable Games
  - Adam Lerer, Hengyuan Hu, Jakob Foerster, Noam Brown
- From Medprompt to o1: Exploration of Run-Time Strategies for Medical Challenge Problems and Beyond
- Scott McKinney
- Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters
- Charlie Snell, Jaehoon Lee, Kelvin Xu, Aviral Kumar
- An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models
- Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, Yiming Yang
- Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling
- Hritik Bansal, Arian Hosseini, Rishabh Agarwal, Vinh Q. Tran, Mehran Kazemi
- Large Language Monkeys: Scaling Inference Compute with Repeated Sampling
- Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V. Le, Christopher Ré, Azalia Mirhoseini
- Imitate, Explore, and Self-Improve: A Reproduction Report on Slow-thinking Reasoning Systems
- Yingqian Min, Zhipeng Chen, Jinhao Jiang, Jie Chen, Jia Deng, Yiwen Hu, Yiru Tang, Jiapeng Wang, Xiaoxue Cheng, Huatong Song, Wayne Xin Zhao, Zheng Liu, Zhongyuan Wang, Ji-Rong Wen
- Training Language Models to Self-Correct via Reinforcement Learning
- Aviral Kumar, Vincent Zhuang, Rishabh Agarwal, Yi Su, John D Co-Reyes, Avi Singh, Kate Baumli, Shariq Iqbal, Colton Bishop, Rebecca Roelofs, Lei M Zhang, Kay McKinney, Disha Shrivastava, Cosmin Paduraru, George Tucker, Doina Precup, Feryal Behbahani, Aleksandra Faust
- REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models
- Jian Hu
- Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement
- An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu, Xingzhang Ren, Zhenru Zhang
- Does RLHF Scale? Exploring the Impacts From Data, Model, and Method
- Zhenyu Hou, Pengfan Du, Yilin Niu, Zhengxiao Du, Aohan Zeng, Xiao Liu, Minlie Huang, Hongning Wang, Jie Tang, Yuxiao Dong
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering
- Xinyan Guan, Yanjiang Liu, Xinyu Lu, Boxi Cao, Ben He, Xianpei Han, Le Sun, Jie Lou, Bowen Yu, Yaojie Lu, Hongyu Lin
- Scaling of Search and Learning: A Roadmap to Reproduce o1 from Reinforcement Learning Perspective
- Zhiyuan Zeng, Qinyuan Cheng, Zhangyue Yin, Bo Wang, Shimin Li, Yunhua Zhou, Qipeng Guo, Xuanjing Huang, Xipeng Qiu
- Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
- Eric Zelikman, Georges Harik, Yijia Shao, Varuna Jayasiri, Nick Haber, Noah D. Goodman
- https://github.com/ezelikman/quiet-star
- Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision
- Zhiheng Xi, Dingwen Yang, Jixuan Huang, Jiafu Tang, Guanyu Li, Yiwen Ding, Wei He, Boyang Hong, Shihan Do, Wenyu Zhan, Xiao Wang, Rui Zheng, Tao Ji, Xiaowei Shi, Yitao Zhai, Rongxiang Weng, Jingang Wang, Xunliang Cai, Tao Gui, Zuxuan Wu, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Yu-Gang Jiang
- https://mathcritique.github.io/
- On Designing Effective RL Reward at Training Time for LLM Reasoning
- Jiaxuan Gao, Shusheng Xu, Wenjie Ye, Weilin Liu, Chuyi He, Wei Fu, Zhiyu Mei, Guangju Wang, Yi Wu
- Generative Verifiers: Reward Modeling as Next-Token Prediction
- Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, Rishabh Agarwal
- Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning
- Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh Agarwal, Jonathan Berant, Aviral Kumar
- Improve Mathematical Reasoning in Language Models by Automated Process Supervision
- Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, Jiao Sun, Abhinav Rastogi
- Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations
- Peiyi Wang, Lei Li, Zhihong Shao, R.X. Xu, Damai Dai, Yifei Li, Deli Chen, Y.Wu, Zhifang Sui
- Planning In Natural Language Improves LLM Search For Code Generation
- Evan Wang, Federico Cassano, Catherine Wu, Yunfeng Bai, Will Song, Vaskar Nath, Ziwen Han, Sean Hendryx, Summer Yue, Hugh Zhang
- Agent Q: Advanced Reasoning and Learning for Autonomous AI Agents
- Pranav Putta, Edmund Mills, Naman Garg, Sumeet Motwani, Chelsea Finn, Divyansh Garg, Rafael Rafailov
- Mixture-of-Agents Enhances Large Language Model Capabilities
- Junlin Wang, Jue Wang, Ben Athiwaratkun, Ce Zhang, James Zou
- Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models
- Zhiyuan Hu, Chumin Liu, Xidong Feng, Yilun Zhao, See-Kiong Ng, Anh Tuan Luu, Junxian He, Pang Wei Koh, Bryan Hooi
- Advancing LLM Reasoning Generalists with Preference Trees
- Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding, Xingyao Wang, Jia Deng, Boji Shan et al.
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing
  - Ye Tian, Baolin Peng, Linfeng Song, Lifeng Jin, Dian Yu, Haitao Mi, Dong Yu
- AlphaMath Almost Zero: Process Supervision Without Process
  - Guoxin Chen, Minpeng Liao, Chengxi Li, Kai Fan
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search
  - Dan Zhang, Sining Zhoubian, Yisong Yue, Yuxiao Dong, Jie Tang
- MindStar: Enhancing Math Reasoning in Pre-trained LLMs at Inference Time
  - Jikun Kang, Xin Zhe Li, Xi Chen, Amirreza Kazemi, Qianyi Sun, Boxing Chen, Dong Li, Xu He, Quan He, Feng Wen, Jianye Hao, Jun Yao
- Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning
  - Yuxi Xie, Anirudh Goyal, Wenyue Zheng, Min-Yen Kan, Timothy P. Lillicrap, Kenji Kawaguchi, Michael Shieh
- When is Tree Search Useful for LLM Planning? It Depends on the Discriminator
- Ziru Chen, Michael White, Raymond Mooney, Ali Payani, Yu Su, Huan Sun
- Chain of Thought Empowers Transformers to Solve Inherently Serial Problems
  - Zhiyuan Li, Hong Liu, Denny Zhou, Tengyu Ma
- To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning
- Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, Greg Durrett
- Do Large Language Models Latently Perform Multi-Hop Reasoning?
- Sohee Yang, Elena Gribovskaya, Nora Kassner, Mor Geva, Sebastian Riedel
- Chain-of-Thought Reasoning Without Prompting
- Xuezhi Wang, Denny Zhou
- Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers
- Zhenting Qi, Mingyuan Ma, Jiahang Xu, Li Lyna Zhang, Fan Yang, Mao Yang
- Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs
- Xuan Zhang, Chao Du, Tianyu Pang, Qian Liu, Wei Gao, Min Lin
- ReFT: Reasoning with Reinforced Fine-Tuning
- Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, Hang Li
- VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment
- Amirhossein Kazemnejad, Milad Aghajohari, Eva Portelance, Alessandro Sordoni, Siva Reddy, Aaron Courville, Nicolas Le Roux
- Stream of Search (SoS): Learning to Search in Language
- Kanishk Gandhi, Denise Lee, Gabriel Grand, Muxin Liu, Winson Cheng, Archit Sharma, Noah D. Goodman
- GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
- Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, Mehrdad Farajtabar
- Evaluation of OpenAI o1: Opportunities and Challenges of AGI
  - Tianyang Zhong, Zhengliang Liu, Yi Pan, Yutong Zhang, Yifan Zhou, Shizhe Liang, Zihao Wu, Yanjun Lyu, Peng Shu, Xiaowei Yu, Chao Cao, Hanqi Jiang, Hanxu Chen, Yiwei Li, Junhao Chen, et al.
- Evaluating LLMs at Detecting Errors in LLM Responses
- Ryo Kamoi, Sarkar Snigdha Sarathi Das, Renze Lou, Jihyun Janice Ahn, Yilun Zhao, Xiaoxin Lu, Nan Zhang, Yusen Zhang, Ranran Haoran Zhang, Sujeeth Reddy Vummanthala, Salika Dave, Shaobo Qin, Arman Cohan, Wenpeng Yin, Rui Zhang
- On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability
- Kevin Wang, Junbo Li, Neel P. Bhatt, Yihan Xi, Qiang Liu, Ufuk Topcu, Zhangyang Wang
- Not All LLM Reasoners Are Created Equal
- Arian Hosseini, Alessandro Sordoni, Daniel Toyama, Aaron Courville, Rishabh Agarwal
- LLMs Still Can't Plan; Can LRMs? A Preliminary Evaluation of OpenAI's o1 on PlanBench
- Karthik Valmeekam, Kaya Stechly, Subbarao Kambhampati
- A Comparative Study on Reasoning Patterns of OpenAI's o1 Model
- Siwei Wu, Zhongyuan Peng, Xinrun Du, Tuney Zheng, Minghao Liu, Jialong Wu, Jiachen Ma, Yizhi Li, Jian Yang, Wangchunshu Zhou, Qunshu Lin, Junbo Zhao, Zhaoxiang Zhang, Wenhao Huang, Ge Zhang, Chenghua Lin, J.H. Liu
- Thinking LLMs: General Instruction Following with Thought Generation
- Tianhao Wu, Janice Lan, Weizhe Yuan, Jiantao Jiao, Jason Weston, Sainbayar Sukhbaatar
- Exploring the Compositional Deficiency of Large Language Models in Mathematical Reasoning Through Trap Problems
- Jun Zhao, Jingqi Tong, Yurong Mou, Ming Zhang, Qi Zhang, Xuanjing Huang
- V-STaR: Training Verifiers for Self-Taught Reasoners
- Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, Rishabh Agarwal
- CPL: Critical Plan Step Learning Boosts LLM Generalization in Reasoning Tasks
- Tianlong Wang, Junzhe Chen, Xueting Han, Jing Bai
- RLEF: Grounding Code LLMs in Execution Feedback with Reinforcement Learning
- Tianhao Wu, Janice Lan, Weizhe Yuan, Jiantao Jiao, Jason Weston, Sainbayar Sukhbaatar
- Q*: Improving Multi-step Reasoning for LLMs with Deliberative Planning
- Chaojie Wang, Yanchen Deng, Zhiyi Lyu, Liang Zeng, Jujie He, Shuicheng Yan, Bo An
- Training Chain-of-Thought via Latent-Variable Inference
- Du Phan, Matthew D. Hoffman, David Dohan, Sholto Douglas, Tuan Anh Le, Aaron Parisi, Pavel Sountsov, Charles Sutton, Sharad Vikram, Rif A. Saurous
- Alphazero-like Tree-Search can Guide Large Language Model Decoding and Training
- Xidong Feng, Ziyu Wan, Muning Wen, Stephen Marcus McAleer, Ying Wen, Weinan Zhang, Jun Wang
- OVM, Outcome-supervised Value Models for Planning in Mathematical Reasoning
- Fei Yu, Anningzhe Gao, Benyou Wang
- Reasoning with Language Model is Planning with World Model
- Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, Zhiting Hu
- Don’t throw away your value model! Generating more preferable text with Value-Guided Monte-Carlo Tree Search decoding
  - Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru, Yejin Choi, Hannaneh Hajishirzi, Asli Celikyilmaz
- Certified reasoning with language models
- Gabriel Poesia, Kanishk Gandhi, Eric Zelikman, Noah D. Goodman
- Large Language Models Cannot Self-Correct Reasoning Yet
- Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, Denny Zhou
- Chain of Thought Imitation with Procedure Cloning
  - Mengjiao Yang, Dale Schuurmans, Pieter Abbeel, Ofir Nachum
- STaR: Bootstrapping Reasoning With Reasoning
- Eric Zelikman, Yuhuai Wu, Jesse Mu, Noah D. Goodman
- Solving math word problems with process- and outcome-based feedback
- Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, Irina Higgins
- Scaling Scaling Laws with Board Games
  - Andy L. Jones
- Show Your Work: Scratchpads for Intermediate Computation with Language Models
- Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, Augustus Odena
- Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm
  - David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis