
🤔 Skywork-OR1 (Open Reasoner 1)

✊ Unleashing the Power of Reinforcement Learning for Math and Code Reasoners 🤖


🔥 News

  • April 15, 2025: We release our RL training dataset Skywork-OR1-RL-Data.

  • April 13, 2025: We release the Skywork-OR1 (Open Reasoner 1) series of models, including Skywork-OR1-Math-7B, Skywork-OR1-32B-Preview, and Skywork-OR1-7B-Preview. We open-source the model weights along with our training scripts and data.

📖 Overview

The AIME24 scores versus training steps of Skywork-OR1-Math-7B in our multi-stage training pipeline.

The Skywork-OR1 (Open Reasoner 1) model series consists of powerful math and code reasoning models trained using large-scale rule-based reinforcement learning with carefully designed datasets and training recipes. This series includes two general-purpose reasoning models, Skywork-OR1-7B-Preview and Skywork-OR1-32B-Preview, along with a math-specialized model, Skywork-OR1-Math-7B.

  • Skywork-OR1-Math-7B is specifically optimized for mathematical reasoning, scoring 69.8 on AIME24 and 52.3 on AIME25, well ahead of all models of similar size.
  • Skywork-OR1-32B-Preview delivers performance comparable to the 671B-parameter DeepSeek-R1 on math tasks (AIME24 and AIME25) and coding tasks (LiveCodeBench).
  • Skywork-OR1-7B-Preview outperforms all similarly sized models in both math and coding scenarios.

The final release version will be available in two weeks.

📊 Evaluation


We evaluate our models on AIME24, AIME25, and LiveCodeBench. Instead of using Pass@1, which is common in prior work, we introduce Avg@K as the primary metric. This metric robustly measures a model's average performance across K independent attempts, reducing the impact of randomness and enhancing the reliability of the results. We believe that Avg@K provides a better reflection of a model's stability and reasoning consistency.
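Concretely, Avg@K is the fraction of correct answers over K independent sampled attempts per problem, averaged across the benchmark. A minimal sketch of the metric (a hypothetical helper, not code from the repo):

```python
def avg_at_k(attempts):
    """Avg@K: mean correctness over K independent attempts per problem.

    `attempts` maps each problem id to a list of K booleans
    (True = the sampled solution was judged correct).
    Returns the benchmark-level score as a percentage.
    """
    per_problem = [sum(a) / len(a) for a in attempts.values()]
    return 100.0 * sum(per_problem) / len(per_problem)

# Two problems, K = 4 attempts each: 3/4 and 1/4 correct -> 50.0
score = avg_at_k({
    "p1": [True, True, False, True],
    "p2": [False, True, False, False],
})
```

Averaging over K samples in this way smooths out the sampling variance that makes single-sample Pass@1 noisy at these benchmark sizes.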

We include the detailed results in the following table.

| Model | AIME24 (Avg@32) | AIME25 (Avg@32) | LiveCodeBench (8/1/24-2/1/25) (Avg@4) |
| --- | --- | --- | --- |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.2 | 37.6 |
| Light-R1-7B-DS | 59.1 | 44.3 | 39.5 |
| DeepSeek-R1-Distill-Qwen-32B | 72.9 | 59.0 | 57.2 |
| TinyR1-32B-Preview | 78.1 | 65.3 | 61.6 |
| QwQ-32B | 79.5 | 65.3 | 61.6 |
| DeepSeek-R1 | 79.8 | 70.0 | 65.9 |
| Skywork-OR1-Math-7B | 69.8 | 52.3 | 43.6 |
| Skywork-OR1-7B-Preview | 63.6 | 45.8 | 43.9 |
| Skywork-OR1-32B-Preview | 79.7 | 69.0 | 63.9 |

🎯 Getting Started

Installation

Docker environment:

docker pull whatcanyousee/verl:vemlp-th2.4.0-cu124-vllm0.6.3-ray2.10-te2.0-megatron0.11.0-v0.0.6

# Launch the desired Docker image (substitute the image:tag pulled above):
docker run --runtime=nvidia -it --rm --shm-size="10g" --cap-add=SYS_ADMIN <image:tag>

# Inside the container, install Skywork-OR1
git clone https://github.com/SkyworkAI/Skywork-OR1.git && cd Skywork-OR1 && pip3 install -e .

Conda environment:

# Create a Python 3.10 environment.
conda create -n verl python=3.10
conda activate verl

# Install training dependencies.
pip3 install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu124
pip3 install flash-attn --no-build-isolation
git clone https://github.com/SkyworkAI/Skywork-OR1.git
cd Skywork-OR1
pip3 install -e .

Training ⚙️

We provide training scripts and data to reproduce the results of the Skywork-OR1 series.

Training Data Preparation

To prepare the training data, we provide a script to download the data from Hugging Face and filter the problems based on the difficulty level with respect to a particular model (i.e., DeepSeek-R1-Distill-Qwen-{1.5,7,32}B).

model_size=32b  # or 1p5b, 7b
python ./or1_scripts/data_preprocess/download_and_filter_data_${model_size}.py --local_dir ./or1_data/train

This will generate the training data in the following format:

./or1_data/train/train_${model_size}_math.pkl
./or1_data/train/train_${model_size}_code.pkl
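The exact schema of these pickle files is defined by the download-and-filter script; assuming they deserialize to standard Python containers, a quick hypothetical way to inspect one (the record fields shown are stand-ins, not the repo's actual schema):

```python
import pickle

def inspect_pickle(path):
    """Load a training-data pickle and report (record count, container type)."""
    with open(path, "rb") as f:
        data = pickle.load(f)
    return len(data), type(data).__name__

# Round-trip demo with a stand-in record; the real files contain whatever
# the download script serializes, which may differ in structure.
with open("demo_train.pkl", "wb") as f:
    pickle.dump([{"prompt": "1+1=?", "answer": "2"}], f)

n_records, container = inspect_pickle("demo_train.pkl")
```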

Train Script

By default, we only provide evaluation on AIME datasets. If you would like to evaluate on LiveCodeBench, please refer to the section Evaluation Data Preparation and set LIVECODEBENCH_DATA_PATH to ./or1_data/eval/livecodebench/livecodebench_2408_2502.

# Note: You must provide CODE_PATH and MODEL_PATH
model_size=7b # or 32b
train_seq_len=8 # or 16, 32
export CODE_PATH=./
export MODEL_PATH=
bash ./or1_scripts/train/${model_size}_${train_seq_len}k.sh
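The launch command resolves to a script named from the two variables above. A small hypothetical helper to sanity-check the combination and required environment before launching (the valid values are taken from the comments in the snippet above):

```python
import os

# Combinations suggested by the comments in the launch snippet above.
VALID_SEQ_LENS = {"7b": {8, 16, 32}, "32b": {8, 16, 32}}

def train_script(model_size, train_seq_len):
    """Return the training-script path for a (model size, context length) pair."""
    if train_seq_len not in VALID_SEQ_LENS.get(model_size, set()):
        raise ValueError(f"unsupported combination: {model_size}, {train_seq_len}k")
    return f"./or1_scripts/train/{model_size}_{train_seq_len}k.sh"

def missing_env():
    """CODE_PATH and MODEL_PATH must both be set before launching."""
    return [v for v in ("CODE_PATH", "MODEL_PATH") if not os.environ.get(v)]

script = train_script("7b", 8)
```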

Using Ray for Multi-Node Training

If you plan to perform multi-node training, you need to start and connect all nodes using Ray before launching the training script. Here's a quick guide to set up Ray across machines:

Step 1: Start Ray on the Head Node (node0)

On the first node (typically called node0), run:

ray start --head --dashboard-host=0.0.0.0

After running the command, you will see a message like:

Ray runtime started.
Next steps
To add another node to this Ray cluster, run
    ray start --address='10.94.16.4:6379'

Note down the IP address (in this example, 10.94.16.4).

Step 2: Connect Other Nodes (e.g., node1)

On each additional worker node (e.g., node1), run the following, replacing the IP with that of your head node:

ray start --address='10.94.16.4:6379'

Step 3: Check Cluster Status

On node0, run:

ray status

You should see output showing all connected nodes and available resources (e.g., CPUs, GPUs, memory). For example:

Resources
---------------------------------------------------------------
Usage:
 0.0/360.0 CPU
 0.0/16.0 GPU
...

Once the Ray cluster is up and running, you can launch the training script as usual. The script will automatically utilize the connected nodes.
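If you want to verify cluster capacity programmatically before launching (for example, that all expected GPUs have joined), the `ray status` output above can be parsed. A rough sketch against that sample layout, which may vary across Ray versions:

```python
import re

def parse_ray_usage(status_text):
    """Extract total resource counts from `ray status` Usage lines
    such as ' 0.0/360.0 CPU' and ' 0.0/16.0 GPU'."""
    totals = {}
    for _used, total, name in re.findall(r"([\d.]+)/([\d.]+) (CPU|GPU)", status_text):
        totals[name] = float(total)
    return totals

# In practice you would feed in the real output, e.g.:
#   subprocess.run(["ray", "status"], capture_output=True, text=True).stdout
sample = """Usage:
 0.0/360.0 CPU
 0.0/16.0 GPU
"""
totals = parse_ray_usage(sample)
```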

Evaluation ⚖️

We provide evaluation scripts to reproduce the results of the Skywork-OR1-Series.

Evaluation Data Preparation

Evaluation data for AIME24 and AIME25 is already available in our GitHub repository.

For LiveCodeBench, please download the data from Hugging Face.

# Download LiveCodeBench
huggingface-cli download Skywork/LiveCodeBench --repo-type=dataset --local-dir ./or1_data/eval/livecodebench
unzip ./or1_data/eval/livecodebench/livecodebench.zip -d ./or1_data/eval/livecodebench/
mv ./or1_data/eval/livecodebench/livecodebench/* ./or1_data/eval/livecodebench/

Evaluation Start

bash ./or1_scripts/eval/eval_7b.sh

bash ./or1_scripts/eval/eval_32b.sh

The evaluation results will be saved automatically to outputs/evalation/pass.csv.

📄 Technical Report

Our technical report will be released soon. Stay tuned!

πŸ™ Acknowledgements

📚 Citation

We will update the citation once the technical report is released. In the meantime, please cite the following:

@misc{skywork-or1-2025,
  title={Skywork Open Reasoner Series},
  author = {He, Jujie and Liu, Jiacai and Liu, Chris Yuhao and Yan, Rui and Wang, Chaojie and Cheng, Peng and Zhang, Xiaoyu and Zhang, Fuxiang and Xu, Jiacheng and Shen, Wei and Li, Siyuan and Zeng, Liang and Wei, Tianwen and Cheng, Cheng and An, Bo and Liu, Yang and Zhou, Yahui},
  howpublished={\url{https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680}},
  note={Notion Blog},
  year={2025}
}
