Add Benchmax part 1 #1863

Open · wants to merge 4 commits into main · Changes from all commits

20 changes: 20 additions & 0 deletions examples/eval_benchmax.py
@@ -0,0 +1,20 @@
from mmengine.config import read_base

with read_base():
    from opencompass.configs.datasets.xlivecodebench.xlivecodebench_gen import \
        LCB_datasets
    from opencompass.configs.datasets.xgpqa.xgpqa_gen import \
        gpqa_datasets
    from opencompass.configs.datasets.xIFEval.xIFeval_gen import \
        xifeval_datasets
    from opencompass.configs.models.hf_llama.hf_llama3_8b_instruct import \
        models as hf_llama3_8b_instruct_models

datasets = [
    *LCB_datasets,
    *gpqa_datasets,
    *xifeval_datasets
]
models = [
    *hf_llama3_8b_instruct_models
]
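
Assuming OpenCompass's standard `run.py` entry point, a config like this one is typically launched from the repository root (exact flags vary by version):

```bash
python3 run.py examples/eval_benchmax.py --debug
```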
55 changes: 55 additions & 0 deletions opencompass/configs/datasets/xIFEval/IFEval.md
@@ -0,0 +1,55 @@
# IFEval
## Introduction
The following introduction comes from the abstract of [Instruction-Following Evaluation for Large Language Models](https://arxiv.org/abs/2311.07911):

```
IFEval is a straightforward and easy-to-reproduce evaluation benchmark. It focuses on a set of "verifiable instructions" such as "write in more than 400 words" and "mention the keyword of AI at least 3 times". We identified 25 types of those verifiable instructions and constructed around 500 prompts, with each prompt containing one or more verifiable instructions.
```
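
What makes these instructions "verifiable" is that compliance can be checked with plain code rather than a judge model. Below is a minimal sketch of two such checks, assuming whitespace tokenization and whole-word matching are acceptable; the function names are illustrative, not part of the benchmark's API:

```python
import re

def check_min_words(response: str, n: int = 400) -> bool:
    # 'write in more than 400 words': count whitespace-separated tokens.
    return len(response.split()) > n

def check_keyword_count(response: str, keyword: str = 'AI', n: int = 3) -> bool:
    # 'mention the keyword of AI at least 3 times': count whole-word matches.
    return len(re.findall(rf'\b{re.escape(keyword)}\b', response)) >= n
```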

## Official link

### Paper

[Instruction-Following Evaluation for Large Language Models](https://arxiv.org/abs/2311.07911)

### Repository

[instruction_following_eval](https://github.com/google-research/google-research/tree/master/instruction_following_eval)

## Examples
Input example I:
```
Write a 300+ word summary of the wikipedia page \"https://en.wikipedia.org/wiki/Raymond_III,_Count_of_Tripoli\". Do not use any commas and highlight at least 3 sections that has titles in markdown format, for example *highlighted section part 1*, *highlighted section part 2*, *highlighted section part 3*.
```
Output example I (from GPT-4):
```
Raymond III was the Count of Tripoli from 1152 to 1187 and Prince of Galilee and Tiberias in the Kingdom of Jerusalem. He was born in 1140 to Raymond II of Tripoli and Hodierna of Jerusalem. His mother was the daughter of Baldwin II of Jerusalem. \n\n*Early Life and Succession*\n\nRaymond III was only a child when his father was murdered. His mother Hodierna was regent until Raymond came of age. In 1155 Raymond married Eschiva the daughter of Walter I of Beirut. They had three children: Raymond IV Bertrand and a daughter who married Guy of Lusignan. \n\n*Reign*\n\nRaymond III's reign was marked by a series of conflicts and alliances with the Muslim world. He was involved in the defense of the Kingdom of Jerusalem against Nur ad-Din and later Saladin. He was also a key figure in the internal politics of the kingdom. He was a supporter of the queen mother Amalric of Jerusalem and opposed the succession of Guy of Lusignan. \n\n*Later Life and Death*\n\nIn 1187 Raymond III was part of the disastrous Battle of Hattin where the Christian forces were defeated by Saladin. He was one of the few to escape the battlefield but died later that year. His son Raymond IV succeeded him as Count of Tripoli. \n\nRaymond III's life and reign were marked by the complex politics and warfare of the Crusader states. He was a key figure in the defense of the Kingdom of Jerusalem and his death marked a significant loss for the Christian forces in the region. His legacy is a testament to the turbulent times in which he lived and the challenges faced by the Crusader states in their relations with the Muslim world.
```
Input example II:
```
"I am planning a trip to Japan, and I would like thee to write an itinerary for my journey in a Shakespearean style. You are not allowed to use any commas in your response."
```
Output example II (from GPT-4):
```
Hark! Hearken to the tale of thy journey to the land of the rising sun, Japan. \n\nUpon the morrow of thy arrival, thou shalt find thyself in Tokyo, the city of lights and towers. In the morn, thou shalt explore the grandeur of the Imperial Palace, a sight to behold. As the sun reaches its zenith, thou shalt partake in a feast of sushi, a delicacy of the land. As the day wanes, thou shalt wander through the electric streets of Akihabara, a realm of technology and animation.\n\nOn the second day, thou shalt journey to the ancient city of Kyoto. In the soft light of dawn, thou shalt visit the golden pavilion of Kinkaku-ji, a vision of beauty and tranquility. As the day unfolds, thou shalt wander through the bamboo grove of Arashiyama, a place of peace and serenity. As the sun sets, thou shalt partake in a traditional tea ceremony, a ritual of grace and harmony.\n\nOn the third day, thou shalt travel to the city of Hiroshima. In the morning, thou shalt pay thy respects at the Peace Memorial Park, a place of remembrance and hope. As the day progresses, thou shalt journey to the island of Miyajima, home to the floating torii gate, a sight of wonder and awe. As the day ends, thou shalt feast on okonomiyaki, a dish of delight and satisfaction.\n\nOn the fourth day, thou shalt return to Tokyo. In the morning, thou shalt ascend the Tokyo Skytree, a tower of height and grandeur. As the day unfolds, thou shalt explore the bustling streets of Shibuya, a realm of fashion and culture. As the sun sets, thou shalt dine in the vibrant district of Roppongi, a place of gastronomy and nightlife.\n\nOn the fifth and final day, thou shalt bid farewell to the land of the rising sun. With memories of beauty and wonder, thou shalt embark on thy journey home, carrying with thee the spirit of Japan. \n\nThus ends the tale of thy journey, a tale of exploration and discovery. May thy travels be filled with joy and adventure, and may the land of the rising sun forever hold a place in thy heart.
```
## Evaluation results

| dataset | version | metric                       | mode | baichuan2-7b-chat-hf | baichuan2-13b-chat-hf | internlm2-chat-7b-hf | internlm2-chat-20b-hf | llama-2-7b-chat-hf | llama-2-13b-chat-hf |
|---------|---------|------------------------------|------|---------------------:|----------------------:|---------------------:|----------------------:|-------------------:|--------------------:|
| IFEval  | 3321a3  | Prompt-level-strict-accuracy | gen  | 36.04                | 35.49                 | 38.26                | 33.09                 | 33.46              | 33.64               |
| IFEval  | 3321a3  | Inst-level-strict-accuracy   | gen  | 46.76                | 46.76                 | 49.16                | 45.32                 | 45.68              | 45.44               |
| IFEval  | 3321a3  | Prompt-level-loose-accuracy  | gen  | 37.52                | 37.71                 | 42.51                | 39.37                 | 43.81              | 47.32               |
| IFEval  | 3321a3  | Inst-level-loose-accuracy    | gen  | 48.44                | 49.16                 | 53.72                | 51.08                 | 55.64              | 58.03               |
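
The four metrics vary along two axes: prompt-level accuracy credits a prompt only when every instruction in it is followed, while inst-level accuracy credits each instruction independently; the "loose" variants, broadly, re-check after mild response transformations (e.g., stripping markdown emphasis or the response's first and last lines). A sketch of the strict variants under that reading:

```python
def strict_accuracies(results):
    # results: one inner list per prompt, one boolean per verifiable
    # instruction in that prompt (True = instruction followed).
    n_prompts = len(results)
    n_insts = sum(len(r) for r in results)
    prompt_level = sum(all(r) for r in results) / n_prompts
    inst_level = sum(sum(r) for r in results) / n_insts
    return prompt_level, inst_level

# Two prompts: the first fully followed, the second only partially.
print(strict_accuracies([[True, True], [True, False]]))  # (0.5, 0.75)
```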

## Reference
```
@article{zhou2023instruction,
  title={Instruction-Following Evaluation for Large Language Models},
  author={Zhou, Jeffrey and Lu, Tianjian and Mishra, Swaroop and Brahma, Siddhartha and Basu, Sujoy and Luan, Yi and Zhou, Denny and Hou, Le},
  journal={arXiv preprint arXiv:2311.07911},
  year={2023}
}
```
4 changes: 4 additions & 0 deletions opencompass/configs/datasets/xIFEval/IFEval_gen.py
@@ -0,0 +1,4 @@
from mmengine.config import read_base

with read_base():
    from .IFEval_gen_3321a3 import xifeval_datasets  # noqa: F401, F403
34 changes: 34 additions & 0 deletions opencompass/configs/datasets/xIFEval/IFEval_gen_3321a3.py
@@ -0,0 +1,34 @@
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets import xIFEvalDataset, xIFEvaluator

xifeval_reader_cfg = dict(
    input_columns=['prompt'], output_column='reference')

xifeval_infer_cfg = dict(
    prompt_template=dict(
        type=PromptTemplate,
        template=dict(round=[
            dict(
                role='HUMAN',
                prompt='{prompt}'),
        ])),
    retriever=dict(type=ZeroRetriever),
    inferencer=dict(type=GenInferencer, max_out_len=2048))

xifeval_eval_cfg = dict(
    evaluator=dict(type=xIFEvaluator),
    pred_role='BOT',
)

LANG = 'de'

xifeval_datasets = [
    dict(
        abbr='xIFEval',
        type=xIFEvalDataset,
        path=f'data/ifeval/input_data_google_{LANG}.jsonl',
        reader_cfg=xifeval_reader_cfg,
        infer_cfg=xifeval_infer_cfg,
        eval_cfg=xifeval_eval_cfg)
]
31 changes: 31 additions & 0 deletions opencompass/configs/datasets/xIFEval/README.md
@@ -0,0 +1,31 @@
# IFEval

```bash
python3 run.py --models hf_internlm2_chat_7b --datasets IFEval_gen_3321a3 --debug
```

## Chat Models

| model | Prompt-level-strict-accuracy | Inst-level-strict-accuracy | Prompt-level-loose-accuracy | Inst-level-loose-accuracy |
|:-----------------------------:|-------------------------------:|-----------------------------:|------------------------------:|----------------------------:|
| qwen1.5-0.5b-chat-hf | 13.12 | 23.26 | 15.71 | 26.38 |
| qwen1.5-1.8b-chat-hf | 16.08 | 26.26 | 18.30 | 29.02 |
| qwen1.5-4b-chat-hf | 25.51 | 35.97 | 28.84 | 39.81 |
| qwen1.5-7b-chat-hf | 38.82 | 50.00 | 42.70 | 53.48 |
| qwen1.5-14b-chat-hf | 42.51 | 54.20 | 49.17 | 59.95 |
| qwen1.5-32b-chat-hf | 49.54 | 60.43 | 53.97 | 64.39 |
| qwen1.5-72b-chat-hf | 51.02 | 61.99 | 57.12 | 67.27 |
| qwen1.5-110b-chat-hf | 55.08 | 65.59 | 61.18 | 70.86 |
| internlm2-chat-1.8b-hf | 18.30 | 28.78 | 21.44 | 32.01 |
| internlm2-chat-1.8b-sft-hf | 18.67 | 31.18 | 19.78 | 32.85 |
| internlm2-chat-7b-hf | 34.75 | 46.28 | 40.48 | 51.44 |
| internlm2-chat-7b-sft-hf | 39.19 | 50.12 | 42.33 | 52.76 |
| internlm2-chat-20b-hf | 36.41 | 48.68 | 40.67 | 53.24 |
| internlm2-chat-20b-sft-hf | 44.55 | 55.64 | 46.77 | 58.03 |
| llama-3-8b-instruct-hf | 68.02 | 76.74 | 75.42 | 82.85 |
| llama-3-70b-instruct-hf | 78.00 | 84.65 | 84.29 | 89.21 |
| llama-3-8b-instruct-lmdeploy | 69.13 | 77.46 | 77.26 | 83.93 |
| llama-3-70b-instruct-lmdeploy | 75.97 | 82.97 | 83.18 | 88.37 |
| mistral-7b-instruct-v0.1-hf | 40.30 | 50.96 | 41.96 | 53.48 |
| mistral-7b-instruct-v0.2-hf | 49.17 | 60.43 | 51.94 | 64.03 |
| mixtral-8x7b-instruct-v0.1-hf | 50.09 | 60.67 | 55.64 | 65.83 |
37 changes: 37 additions & 0 deletions opencompass/configs/datasets/xIFEval/xIFeval_gen.py
@@ -0,0 +1,37 @@
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets import xIFEvalDataset, xIFEvaluator

xifeval_reader_cfg = dict(
    input_columns=['prompt'], output_column='reference')

xifeval_infer_cfg = dict(
    prompt_template=dict(
        type=PromptTemplate,
        template=dict(round=[
            dict(
                role='HUMAN',
                prompt='{prompt}'),
        ])),
    retriever=dict(type=ZeroRetriever),
    inferencer=dict(type=GenInferencer, max_out_len=4096))

xifeval_eval_cfg = dict(
    evaluator=dict(type=xIFEvaluator),
    pred_role='BOT',
)

xifeval_datasets = []

LANGS = ['ar', 'bn', 'cs', 'de', 'es', 'fr', 'hu', 'ja', 'ko', 'ru', 'sr', 'sw', 'te', 'th', 'vi', 'zh']
for LANG in LANGS:
    path = f'data/xifeval/input_data_google_{LANG}.jsonl'
    xifeval_datasets.append(dict(
        type=xIFEvalDataset,
        abbr=f'xIFEval_{LANG}',
        path=path,
        reader_cfg=xifeval_reader_cfg,
        infer_cfg=xifeval_infer_cfg,
        eval_cfg=xifeval_eval_cfg,
    ))
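
Since each entry gets an `abbr` of the form `xIFEval_{LANG}`, a downstream config can evaluate a language subset by filtering on it; an illustrative one-liner:

```python
# Illustrative: keep only the German and Chinese splits.
datasets = [d for d in xifeval_datasets if d['abbr'] in {'xIFEval_de', 'xIFEval_zh'}]
```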
69 changes: 69 additions & 0 deletions opencompass/configs/datasets/xgpqa/README.md
@@ -0,0 +1,69 @@
# GPQA

```bash
python3 run.py --models hf_internlm2_7b --datasets gpqa_ppl_6bf57a --debug
python3 run.py --models hf_internlm2_chat_7b --datasets gpqa_gen_4baadb --debug
```

## Base Models

| model | GPQA_diamond |
|:------------------------:|---------------:|
| llama-7b-turbomind | 24.24 |
| llama-13b-turbomind | 25.25 |
| llama-30b-turbomind | 22.73 |
| llama-65b-turbomind | 21.72 |
| llama-2-7b-turbomind | 25.25 |
| llama-2-13b-turbomind | 23.74 |
| llama-2-70b-turbomind | 28.28 |
| llama-3-8b-turbomind | 31.82 |
| llama-3-70b-turbomind | 40.91 |
| internlm2-1.8b-turbomind | 24.24 |
| internlm2-7b-turbomind | 28.28 |
| internlm2-20b-turbomind | 31.31 |
| qwen-1.8b-turbomind | 28.79 |
| qwen-7b-turbomind | 24.75 |
| qwen-14b-turbomind | 27.78 |
| qwen-72b-turbomind | 31.31 |
| qwen1.5-0.5b-hf | 23.74 |
| qwen1.5-1.8b-hf | 28.79 |
| qwen1.5-4b-hf | 23.23 |
| qwen1.5-7b-hf | 20.71 |
| qwen1.5-14b-hf | 32.32 |
| qwen1.5-32b-hf | 30.81 |
| qwen1.5-72b-hf | 31.82 |
| qwen1.5-moe-a2-7b-hf | 28.79 |
| mistral-7b-v0.1-hf | 24.75 |
| mistral-7b-v0.2-hf | 23.74 |
| mixtral-8x7b-v0.1-hf | 28.79 |
| mixtral-8x22b-v0.1-hf | 36.36 |
| yi-6b-hf | 28.28 |
| yi-34b-hf | 35.86 |
| deepseek-7b-base-hf | 20.71 |
| deepseek-67b-base-hf | 25.25 |

## Chat Models

| model | GPQA_diamond |
|:-----------------------------:|---------------:|
| qwen1.5-0.5b-chat-hf | 19.70 |
| qwen1.5-1.8b-chat-hf | 29.80 |
| qwen1.5-4b-chat-hf | 25.25 |
| qwen1.5-7b-chat-hf | 31.82 |
| qwen1.5-14b-chat-hf | 30.30 |
| qwen1.5-32b-chat-hf | 31.31 |
| qwen1.5-72b-chat-hf | 32.83 |
| qwen1.5-110b-chat-hf | 35.86 |
| internlm2-chat-1.8b-hf | 25.76 |
| internlm2-chat-1.8b-sft-hf | 26.26 |
| internlm2-chat-7b-hf | 28.28 |
| internlm2-chat-7b-sft-hf | 27.27 |
| internlm2-chat-20b-hf | 30.30 |
| internlm2-chat-20b-sft-hf | 29.29 |
| llama-3-8b-instruct-hf | 25.76 |
| llama-3-70b-instruct-hf | 37.88 |
| llama-3-8b-instruct-lmdeploy | 25.76 |
| llama-3-70b-instruct-lmdeploy | 37.88 |
| mistral-7b-instruct-v0.1-hf | 30.30 |
| mistral-7b-instruct-v0.2-hf | 25.25 |
| mixtral-8x7b-instruct-v0.1-hf | 30.30 |
52 changes: 52 additions & 0 deletions opencompass/configs/datasets/xgpqa/gpqa_0shot_nocot_gen_772ea0.py
@@ -0,0 +1,52 @@
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets import GPQADataset, GPQA_Simple_Eval_postprocess, GPQAEvaluator

# openai_simple_eval prompt
align_prompt = """
Answer the following multiple choice question. The last line of your response should be of the following format: 'ANSWER: $LETTER' (without quotes) where LETTER is one of ABCD.

{question}

A) {A}
B) {B}
C) {C}
D) {D}
""".strip()

gpqa_reader_cfg = dict(
    input_columns=['question', 'A', 'B', 'C', 'D'],
    output_column='answer')

gpqa_infer_cfg = dict(
    prompt_template=dict(
        type=PromptTemplate,
        template=dict(
            round=[
                dict(role='HUMAN', prompt=align_prompt),
            ], )),
    retriever=dict(type=ZeroRetriever),
    inferencer=dict(type=GenInferencer))

gpqa_eval_cfg = dict(evaluator=dict(type=GPQAEvaluator),
                     pred_postprocessor=dict(type=GPQA_Simple_Eval_postprocess))

gpqa_datasets = []
gpqa_subsets = {
    # 'extended': 'gpqa_extended.csv',
    # 'main': 'gpqa_main.csv',
    'diamond': 'gpqa_diamond.csv'
}

for split in list(gpqa_subsets.keys()):
    gpqa_datasets.append(
        dict(
            abbr='GPQA_' + split,
            type=GPQADataset,
            path='./data/gpqa/',
            name=gpqa_subsets[split],
            reader_cfg=gpqa_reader_cfg,
            infer_cfg=gpqa_infer_cfg,
            eval_cfg=gpqa_eval_cfg)
    )
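
The prompt pins the final answer to a fixed `ANSWER: $LETTER` line, which is what makes a simple string postprocessor sufficient. A minimal sketch of the kind of extraction a postprocessor such as `GPQA_Simple_Eval_postprocess` performs (an illustration, not the actual implementation):

```python
import re

def extract_answer(text: str) -> str:
    # Take the last 'ANSWER: X' occurrence, X in A-D, case-insensitive.
    matches = re.findall(r'ANSWER:\s*([ABCD])', text, flags=re.IGNORECASE)
    return matches[-1] if matches else ''
```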
53 changes: 53 additions & 0 deletions opencompass/configs/datasets/xgpqa/gpqa_few_shot_ppl_4b5a83.py
@@ -0,0 +1,53 @@
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import FixKRetriever
from opencompass.openicl.icl_inferencer import PPLInferencer
from opencompass.openicl.icl_evaluator import AccwithDetailsEvaluator
from opencompass.datasets import GPQADataset

gpqa_reader_cfg = dict(
    input_columns=['question', 'A', 'B', 'C', 'D'],
    output_column='answer')

hint = 'For the multiple choice question below, please provide the correct answer option directly.'
question_and_options = 'Question: {question}\n(A){A}\n(B){B}\n(C){C}\n(D){D}\n'
gpqa_infer_cfg = dict(
    ice_template=dict(
        type=PromptTemplate,
        template={
            opt: f'{question_and_options}\nAnswer: {opt}' for opt in ['A', 'B', 'C', 'D']},
    ),
    prompt_template=dict(
        type=PromptTemplate,
        template={
            opt: f'{hint}\n</E>{question_and_options}\nAnswer: {opt}' for opt in ['A', 'B', 'C', 'D']
        },
        ice_token='</E>'
    ),
    retriever=dict(type=FixKRetriever, fix_id_list=[0, 1, 2, 3, 4]),
    inferencer=dict(type=PPLInferencer))

gpqa_eval_cfg = dict(evaluator=dict(type=AccwithDetailsEvaluator))

gpqa_datasets = []
LANGS = ['de']
gpqa_subsets = {
    lang: f'gpqa_main_{lang}.csv' for lang in LANGS
}
# gpqa_subsets = {
#     # 'extended': 'gpqa_extended.csv',
#     # 'main': 'gpqa_main.csv',
#     'diamond': 'gpqa_main_{}.csv'
# }

for split in list(gpqa_subsets.keys()):
    gpqa_datasets.append(
        dict(
            abbr='GPQA_' + split,
            type=GPQADataset,
            path='./data/gpqa/',
            name=gpqa_subsets[split],
            reader_cfg=gpqa_reader_cfg,
            infer_cfg=gpqa_infer_cfg,
            eval_cfg=gpqa_eval_cfg)
    )
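
Unlike the generative config above, `PPLInferencer` renders one prompt per option and selects the option whose text the model assigns the lowest loss (i.e., lowest perplexity). A schematic sketch of that selection rule, with `loss_per_token` standing in for a real model call:

```python
def ppl_choice(loss_per_token, prompts_by_option):
    # prompts_by_option maps 'A'..'D' to the fully rendered prompt ending
    # in that option; loss_per_token returns a model's average token loss.
    scores = {opt: loss_per_token(text) for opt, text in prompts_by_option.items()}
    # Lower average loss means lower perplexity, i.e. the likelier option.
    return min(scores, key=scores.get)
```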