
Commit 777e9e4

Author: 竹沥
Commit message: update
1 parent 25355f6 commit 777e9e4

File tree

6 files changed: +127 additions, -66 deletions

.DS_Store (6 KB, binary file not shown)

index.html

Lines changed: 127 additions & 66 deletions
@@ -37,6 +37,28 @@
     <script src="static/js/index.js"></script>
   </head>
   <body>
+
+  <style>
+    table {
+      width: 100%;
+      border-collapse: collapse;
+    }
+    th, td {
+      border: 1px solid #ddd;
+      padding: 8px;
+      text-align: center;
+    }
+    th {
+      background-color: #f2f2f2; /* Light color for headers */
+    }
+    .merged-row {
+      background-color: #e0e0e0; /* Light color for merged row */
+    }
+    .link-block a {
+      margin: 0 5px; /* adjust to a suitable value */
+    }
+
+  </style>

   <section class="hero">
     <div class="hero-body">
@@ -77,45 +99,52 @@ <h1 class="title is-1 publication-title">Chinese SimpleQA</h1>
                   <span class="author-block">
                     <span>Boren Zheng,</span>
                   </span>
+                  <span class="author-block">
+                    <span>Xuepeng Liu,</span>
+                  </span>
+                  <span class="author-block">
+                    <span>Dekai Sun,</span>
+                  </span>
                   <span class="author-block">
                     <span>Wenbo Su,</span>
                   </span>
                   <span class="author-block">
                     <span>Bo Zheng</span>
                   </span>
+

                 </div>

                 <div class="is-size-5 publication-authors">
-                  <span class="author-block">Taobao & Tmall Group of Alibaba<br> </span>
+                  <span class="author-block" style="color: rgb(181, 44, 44);">Taobao & Tmall Group of Alibaba<br> </span>
                   <span class="eql-cntrb"><small><br><sup>*</sup>Indicates Equal Contribution</small></span>
                   <span class="eql-cntrb"><small><br><sup>&dagger;</sup>Corresponding Author</small></span>

                 </div>

+
+
                 <div class="column has-text-centered">
                   <div class="publication-links">
-                    <!-- Arxiv PDF link -->
+
+                    <!-- ArXiv abstract Link -->
+                    <span class="link-block">
+                      <a href="https://arxiv.org/abs/<ARXIV PAPER ID>" target="_blank"
+                        class="external-link button is-normal is-rounded is-dark">
+                        <span class="icon">
+                          <i class="ai ai-arxiv"></i>
+                        </span>
+                        <span>arXiv</span>
+
                     <span class="link-block">
-                      <a href="https://arxiv.org/pdf/<ARXIV PAPER ID>.pdf" target="_blank"
+                      <a href="YOUR_HUGGING_FACE_DATASET_URL" target="_blank"
                         class="external-link button is-normal is-rounded is-dark">
                         <span class="icon">
-                          <i class="fas fa-file-pdf"></i>
+                          <img src="static/images/hf-logo.png" alt="Hugging Face Logo" style="width: 15px; height: 20px;"/>
                         </span>
-                        <span>Paper</span>
-                      </a>
-                    </span>
-
-                    <!-- Supplementary PDF link -->
-                    <span class="link-block">
-                      <a href="static/pdfs/supplementary_material.pdf" target="_blank"
-                        class="external-link button is-normal is-rounded is-dark">
-                        <span class="icon">
-                          <i class="fas fa-file-pdf"></i>
+                        <span>Dataset</span>
+                      </a>
                     </span>
-                      <span>Supplementary</span>
-                    </a>
-                  </span>

                     <!-- Github link -->
                     <span class="link-block">
@@ -128,14 +157,7 @@ <h1 class="title is-1 publication-title">Chinese SimpleQA</h1>
                       </a>
                     </span>

-                    <!-- ArXiv abstract Link -->
-                    <span class="link-block">
-                      <a href="https://arxiv.org/abs/<ARXIV PAPER ID>" target="_blank"
-                        class="external-link button is-normal is-rounded is-dark">
-                        <span class="icon">
-                          <i class="ai ai-arxiv"></i>
-                        </span>
-                        <span>arXiv</span>
+
                       </a>
                     </span>
                   </div>
@@ -165,9 +187,8 @@ <h2 class="title is-3">Abstract</h2>
       <!-- End paper abstract -->


-
       <!-- <div id="container" style="height: 100%"></div> -->
-      <div id="container" style="width: 100%; height: 1200px;"></div>
+      <div id="container" style="width: 100%; height: 1200px; margin-top: 50px;"></div>
       <script type="text/javascript" src="https://registry.npmmirror.com/echarts-nightly/5.6.0-dev.20241105/files/dist/echarts.min.js"></script>

       <script type="text/javascript">
@@ -1203,7 +1224,7 @@ <h2 class="title is-3">Abstract</h2>

         window.addEventListener('resize', myChart.resize);
       </script>
-
+

       <style>
         .description2 p {
@@ -1222,34 +1243,56 @@ <h2 class="title is-3">Data Construction Pipline</h2>
           <img src="static/images/data_construct.jpg" alt="An overview of the data construction, filtering, verification, and quality control processes of Chinese SimpleQA." style="max-width: 100%; height: auto;">
         </div>
         <div class="description2", style="margin-top: 30px;">
-          <p>The data construction process for Chinese SimpleQA includes both an automated process and a manual verification process. The automated part involves knowledge content extraction and filtering, automatic generation of question-answer pairs, LLM automatic validation based on criteria, answer factual correctness verification based on RAG (Retrieval-Augmented Generation), and question difficulty filtering.</p>
+          <!-- <p>The data construction process for Chinese SimpleQA includes both an automated process and a manual verification process. The automated part involves knowledge content extraction and filtering, automatic generation of question-answer pairs, LLM automatic validation based on criteria, answer factual correctness verification based on RAG (Retrieval-Augmented Generation), and question difficulty filtering.</p>
           <p>Initially, we collected a large amount of knowledge-rich text content from various knowledge fields, primarily derived from Wikipedia. This content was then processed through a quality assessment model to filter out low-quality data. Based on this, we guided the LLM to generate question-answer pairs according to predefined criteria using these high-quality knowledge contents. To ensure that the generated question-answer pairs met these criteria, we utilized the LLM again for rule-based validation to remove non-conforming data. In this way, we obtained a large set of initially filtered knowledge question-answer pairs. However, relying on a single data source for generation can potentially lead to inaccurate answers. To mitigate this risk, we deployed external retrieval tools to gather more diverse information, guiding the LLM in evaluating the factual correctness of answers based on information from different sources. In this process, incorrect question-answer pairs were discarded. Specifically, we used LlamaIndex as the retrieval method, with search results from Google and Bing as data sources, further enhancing the quality of the dataset.</p>
-          <p>In addition, we filtered the dataset for difficulty to better probe the knowledge boundaries of the LLMs, removing overly simple questions. Specifically, if a question could be correctly answered by all four powerful models, Meta-Llama-3-70B-Instruct, Qwen2.5-72B-Instruct, and GLM-4-Plus, it was deemed too simple and thus discarded. Through this approach, Chinese SimpleQA becomes more challenging.</p>
+          <p>In addition, we filtered the dataset for difficulty to better probe the knowledge boundaries of the LLMs, removing overly simple questions. Specifically, if a question could be correctly answered by all of the powerful models Meta-Llama-3-70B-Instruct, Qwen2.5-72B-Instruct, and GLM-4-Plus, it was deemed too simple and thus discarded. Through this approach, Chinese SimpleQA becomes more challenging.</p> -->
+          <p style="font-weight: bold;color: rgb(200, 26, 151);">Chinese SimpleQA's Features</p>
+          <ul style="margin-left: 20px; text-align: left; text-indent: 2em;">
+            <li style="margin-bottom: 1em;">
+              <strong>Chinese</strong>: Our Chinese SimpleQA focuses on the Chinese language, which provides a comprehensive evaluation of the factuality abilities of existing LLMs in Chinese.
+            </li>
+            <li style="margin-bottom: 1em;">
+              <strong>Diverse</strong>: Chinese SimpleQA covers 6 topics (i.e., "Chinese Culture", "Humanities", "Engineering, Technology, and Applied Sciences", "Life, Art, and Culture", "Society", and "Natural Science"), and these topics include 99 fine-grained subtopics in total, which demonstrates the diversity of our Chinese SimpleQA.
+            </li>
+            <li style="margin-bottom: 1em;">
+              <strong>High-quality</strong>: We conduct a comprehensive and rigorous quality control process to ensure the quality and accuracy of our Chinese SimpleQA.
+            </li>
+            <li style="margin-bottom: 1em;">
+              <strong>Static</strong>: Following SimpleQA, to preserve the evergreen property of Chinese SimpleQA, all reference answers will not change over time.
+            </li>
+            <li style="margin-bottom: 1em;">
+              <strong>Easy-to-evaluate</strong>: Following SimpleQA, as the questions and answers are very short, the grading procedure is fast to run via existing LLMs (e.g., the OpenAI API).
+            </li>
+          </ul>
+          <p style="font-weight: bold;color: rgb(200, 26, 151);">Key Observations</p>
+          <!-- <p style="margin-left: 20px; text-align: left; text-indent: 2em; color: rgb(200, 26, 151);font-weight: bold;">Key observations from our analysis:</p> -->
+          <ul style="margin-left: 20px; text-align: left; text-indent: 2em;">
+            <li style="margin-bottom: 1em;">
+              <strong>Chinese SimpleQA is challenging</strong>. Only o1-preview and Doubao-pro-32k achieve a passing score (63.8% and 61.9% on the correct metric), and many closed-source and open-source LLMs still have substantial room for improvement.
+            </li>
+            <li style="margin-bottom: 1em;">
+              <strong>Larger models lead to better results</strong>. Based on the results of the Qwen2.5, InternLM, and Yi-1.5 series, among others, we observe that larger models obtain better performance.
+            </li>
+            <li style="margin-bottom: 1em;">
+              <strong>Larger models are more calibrated</strong>. We observe that o1-preview is more calibrated than o1-mini, and GPT-4o is more calibrated than GPT-4o-mini.
+            </li>
+            <li style="margin-bottom: 1em;">
+              <strong>RAG matters</strong>. When the RAG strategy is introduced into existing LLMs, the performance gaps between different LLMs decrease substantially. For example, the gap between GPT-4o and Qwen2.5-3B decreases from 42.4% to 9.3% when using RAG.
+            </li>
+            <li style="margin-bottom: 1em;">
+              <strong>Alignment tax exists</strong>. Existing alignment or post-training strategies usually decrease the factuality of language models.
+            </li>
+            <li style="margin-bottom: 1em;">
+              <strong>Rankings on SimpleQA and Chinese SimpleQA differ</strong>. The performance of several LLMs focusing on Chinese (Doubao-pro-32k and GLM-4-Plus) is close to that of the high-performing o1-preview. In particular, on the "Chinese Culture" topic, these Chinese community LLMs are significantly better than the GPT and o1 series models.
+            </li>
+          </ul>
+
         </div>
       </div>
     </div>
   </div>
 </section>

-  <style>
-    table {
-      width: 100%;
-      border-collapse: collapse;
-    }
-    th, td {
-      border: 1px solid #ddd;
-      padding: 8px;
-      text-align: center;
-    }
-    th {
-      background-color: #f2f2f2; /* Light color for headers */
-    }
-    .merged-row {
-      background-color: #e0e0e0; /* Light color for merged row */
-    }
-  </style>
-
-

 <section class="section hero is-light">
   <div class="container is-max-desktop">
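The commented-out pipeline description and the "Easy-to-evaluate" bullet in the hunk above come down to two small routines: a difficulty filter (discard a question if every strong reference model already answers it correctly) and SimpleQA-style grading by a judge LLM. A minimal sketch of both follows, assuming an OpenAI-compatible endpoint; the chat() helper, the judge model, and the grading prompt wording are illustrative assumptions, not the authors' released code.

# Sketch of the two automated steps described above (illustrative only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chat(model: str, prompt: str) -> str:
    # Single-turn completion against an OpenAI-compatible endpoint.
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content.strip()

# SimpleQA-style grading: a judge LLM labels each short answer.
# Judge model and prompt wording are assumptions, not from the page text.
GRADER_PROMPT = """Question: {q}
Reference answer: {ref}
Model answer: {pred}
Reply with exactly one word: CORRECT, INCORRECT, or NOT_ATTEMPTED."""

def grade(q: str, ref: str, pred: str, judge: str = "gpt-4o") -> str:
    return chat(judge, GRADER_PROMPT.format(q=q, ref=ref, pred=pred))

# Difficulty filter: drop a question if every strong reference model
# answers it correctly. Model names follow the page text; how each
# model is actually served is not specified there.
REFERENCE_MODELS = [
    "Meta-Llama-3-70B-Instruct",
    "Qwen2.5-72B-Instruct",
    "GLM-4-Plus",
]

def is_too_easy(question: str, reference_answer: str) -> bool:
    return all(
        grade(question, reference_answer, chat(m, question)) == "CORRECT"
        for m in REFERENCE_MODELS
    )

Because questions and reference answers are each only a sentence or two, one judge call per item suffices, which is what keeps the grading loop cheap to run.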
@@ -1337,39 +1380,58 @@ <h2 class="title is-3">LeaderBoard</h2>
     </div>
 </section>

+<section class="section hero is-light">
+  <div class="container is-max-desktop">
+    <div class="columns is-centered has-text-centered">
+      <div class="column is-four-fifths">
+        <h2 class="title is-3">Rankings on Chinese SimpleQA vs. SimpleQA</h2>
+        <div class="image-container">
+          <img src="static/images/exp6.jpg" alt="" style="max-width: 100%; height: auto;">
+        </div>
+        <div class="description2", style="margin-top: 30px;">
+          <p>Model rankings differ significantly between the SimpleQA and Chinese SimpleQA benchmarks. For example, Doubao-pro-32k rises from 12th to 2nd on the Chinese version, while GPT-4 drops from 3rd to 9th. This highlights the importance of evaluating models in multilingual environments. Notably, o1-preview consistently holds the top position across both datasets. Many Chinese community-developed models perform better on Chinese SimpleQA than on SimpleQA.</p>
+        </div>
+      </div>
+    </div>
+  </div>
+</section>


 <section class="section hero is-light">
   <div class="container is-max-desktop">
     <div class="columns is-centered has-text-centered">
       <div class="column is-four-fifths">
-        <h2 class="title is-3">Detailed Results On Subtopics.</h2>
+        <h2 class="title is-3">Detailed Results on Subtopics</h2>
         <div class="image-container">
-          <img src="static/images/exp4.jpg" alt="An overview of the data construction, filtering, verification, and quality control processes of Chinese SimpleQA." style="max-width: 100%; height: auto;">
+          <img src="static/images/exp4.jpg" alt="" style="max-width: 100%; height: auto;">
         </div>
         <div class="description2", style="margin-top: 30px;">
-          <p>As mentioned in our paper, the benchmark covers a total of 99 subtopics, which can comprehensively detect the knowledge level of the model in various fields. The upper figure illustrates the performance comparison between the o1 model and seven notable Chinese community models within several common domains. </p>
-          <p>Firstly, from an overall perspective, the o1-preview model exhibits the most comprehensive performance across these domains, with the Doubao model following closely. In contrast, the Moonshot model demonstrates the weakest overall performance.
-          Secondly, when examining specific domains, a significant disparity emerges between the Chinese community models and the o1 model in areas such as Computer Science and Medicine. However, this gap is minimal in domains like Education and Economics. Notably, in Education, some Chinese community models outperform the o1-preview, highlighting their potential for achieving success in specific vertical domains.
-          Lastly, when examining specific models, the Moonshot model is notably weaker in Mathematics, Law, and Entertainment, while the Baichuan model also underperforms in Entertainment. The Yi-Large model excels in Education, and the o1 model maintains the strongest performance across other domains. </p>
-          <p>Evaluating the performance of the models across diverse domains within the benchmark dataset enables users to identify the most suitable model for their specific needs.</p>
-
+          <p>The benchmark covers 99 subtopics to assess model knowledge across various fields. Overall, the o1-preview model performs most comprehensively, followed by Doubao, while Moonshot is the weakest. There is a noticeable gap between Chinese community models and the o1 model in Computer Science and Medicine, but less so in Education and Economics. Notably, some Chinese models outperform o1-preview in Education. Moonshot struggles in Mathematics, Law, and Entertainment, while Baichuan also underperforms in Entertainment. Yi-Large excels in Education, and o1 maintains strong performance in other domains. Evaluating models across diverse domains helps users choose the best fit for their needs.</p>
         </div>
       </div>
     </div>
   </div>
 </section>


-
-<!--BibTex citation -->
-<section class="section" id="BibTeX">
-  <div class="container is-max-desktop content">
-    <h2 class="title">BibTeX</h2>
-    <pre><code>BibTex Code Here</code></pre>
+<section class="section hero is-light">
+  <div class="container is-max-desktop">
+    <div class="columns is-centered has-text-centered">
+      <div class="column is-four-fifths">
+        <h2 class="title is-3">Calibration and Test-Time Compute</h2>
+        <div class="image-container">
+          <img src="static/images/calibration_and_inference.png" alt="" style="max-width: 100%; height: auto;">
+        </div>
+        <div class="description2", style="margin-top: 30px;">
+          <p style="font-weight: bold;color: rgb(200, 26, 151);">Calibration Analysis</p>
+          <p>We analyzed the calibration of different LLMs on Chinese SimpleQA. Models were instructed to provide a confidence level from 0 to 100 when answering questions. Ideally, confidence should match actual accuracy. The results show that GPT-4o is better calibrated than GPT-4o-mini, and o1-preview is better calibrated than o1-mini. In the Qwen2.5 series, larger models show better calibration. All models tend to be overconfident, especially when their stated confidence is above 50.</p>
+          <p style="font-weight: bold;color: rgb(200, 26, 151);">Test-Time Compute Analysis</p>
+          <p>We evaluated the relationship between increased test-time compute and accuracy. On random samples from Chinese SimpleQA, response accuracy improves as the number of inference passes increases and eventually reaches a ceiling. This aligns with the dataset's purpose of probing model knowledge boundaries.</p>
+        </div>
+      </div>
     </div>
+  </div>
 </section>
-<!--End BibTex citation -->


 <footer class="footer">
@@ -1379,8 +1441,7 @@ <h2 class="title">BibTeX</h2>
       <div class="content">

         <p>
-          This page was built using the <a href="https://github.com/eliahuhorwitz/Academic-project-page-template" target="_blank">Chinese SimpleQA Template</a> which was adopted from the <a href="https://nerfies.github.io" target="_blank">Nerfies</a> project page.
-          You are free to borrow the source code of this website, we just ask that you link back to this page in the footer. <br> This website is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/" target="_blank">Creative
+          This site is created based on the <a href="https://github.com/eliahuhorwitz/Academic-project-page-template" target="_blank">Academic Project Page Template</a> and is licensed under the <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/" target="_blank">Creative
           Commons Attribution-ShareAlike 4.0 International License</a>.
         </p>

static/.DS_Store (6 KB, binary file not shown)

static/images/calibration_and_inference.png (1.09 MB)

static/images/exp6.jpg (154 KB)

static/images/hf-logo.png (181 KB)
