Add model-index with benchmark evaluations

#2
by davidlms - opened
Files changed (1)
  1. README.md +271 -157
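
The `model-index` block added in this change is machine-readable metadata: once merged, the reported scores can be queried programmatically. A rough sketch, assuming the standard `huggingface_hub` client (attribute names per `huggingface_hub.ModelCard`):

```python
# Rough sketch: read the model-index metrics back from the Hub model card.
# Assumes this change is merged so the card contains the model-index block.
from huggingface_hub import ModelCard

card = ModelCard.load("zai-org/GLM-4.6V-Flash")
for result in card.data.eval_results or []:
    print(f"{result.metric_name}: {result.metric_value}")
```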
README.md CHANGED
@@ -1,157 +1,271 @@
1
- ---
2
- language:
3
- - zh
4
- - en
5
- library_name: transformers
6
- license: mit
7
- pipeline_tag: image-text-to-text
8
- ---
9
-
10
- # GLM-4.6V
11
-
12
- <div align="center">
13
- <img src="https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/logo.svg" width="40%"/>
14
- </div>
15
-
16
- This model is part of the GLM-V family of models, introduced in the paper [GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning](https://huggingface.co/papers/2507.01006).
17
-
18
- - **GLM-4.6V Blog**: [https://z.ai/blog/glm-4.6v](https://z.ai/blog/glm-4.6v)
19
- - **Paper**: [https://huggingface.co/papers/2507.01006](https://huggingface.co/papers/2507.01006)
20
- - **GitHub Repository**: [https://github.com/zai-org/GLM-V](https://github.com/zai-org/GLM-V)
21
- - **Online Demo**: [https://chat.z.ai/](https://chat.z.ai/)
22
- - **API Access**: [Z.ai Open Platform](https://docs.z.ai/guides/vlm/glm-4.6v)
23
- - **Desktop Assistant App**: [https://huggingface.co/spaces/zai-org/GLM-4.5V-Demo-App](https://huggingface.co/spaces/zai-org/GLM-4.5V-Demo-App)
24
-
25
- ## Introduction
26
-
27
- The GLM-4.6V series includes two versions: GLM-4.6V (106B), a foundation model designed for cloud and high-performance
28
- cluster scenarios,
29
- and GLM-4.6V-Flash (9B), a lightweight model optimized for local deployment and low-latency applications.
30
- GLM-4.6V scales its context window to 128k tokens in training,
31
- and achieves SoTA performance in visual understanding among models of similar parameter scales.
32
- Crucially, we integrate native Function Calling capabilities for the first time.
33
- This effectively bridges the gap between "visual perception" and "executable action",
34
- providing a unified technical foundation for multimodal agents in real-world business scenarios.
35
-
36
- ![GLM-4.6V Benchmarks](https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/bench_46v.jpeg)
37
-
38
- Beyond achieving SoTA performance across major multimodal benchmarks at comparable model scales, GLM-4.6V introduces
39
- several key features:
40
-
41
- - **Native Multimodal Function Calling**
42
- Enables native vision-driven tool use. Images, screenshots, and document pages can be passed directly as tool inputs without text conversion, while visual outputs (charts, search images, rendered pages) are interpreted and integrated into the reasoning chain. This closes the loop from perception to understanding to execution.
43
-
44
- - **Interleaved Image-Text Content Generation**
45
- Supports high-quality mixed media creation from complex multimodal inputs. GLM-4.6V takes a multimodal context—spanning documents, user inputs, and tool-retrieved images—and synthesizes coherent, interleaved image-text content tailored to the task. During generation it can actively call search and retrieval tools to gather and curate additional text and visuals, producing rich, visually grounded content.
46
-
47
-
48
- - **Multimodal Document Understanding**
49
- GLM-4.6V can process up to 128K tokens of multi-document or long-document input, directly interpreting richly formatted pages as images. It understands text, layout, charts, tables, and figures jointly, enabling accurate comprehension of complex, image-heavy documents without requiring prior conversion to plain text.
50
-
51
- - **Frontend Replication & Visual Editing**
52
- Reconstructs pixel-accurate HTML/CSS from UI screenshots and supports natural-language-driven edits. It detects layout, components, and styles visually, generates clean code, and applies iterative visual modifications through simple user instructions.
53
-
54
-
55
- **This Hugging Face repository hosts the `GLM-4.6V-Flash` model, part of the `GLM-V` series.**
56
-
57
- ## Usage
58
-
59
- ### Environment Installation
60
-
61
- For `SGLang`:
62
-
63
- ```bash
64
- pip install "sglang>=0.5.6post1"
65
- pip install "transformers>=5.0.0rc0"
66
- ```
67
-
68
- For `vLLM`:
69
-
70
- ```bash
71
- pip install "vllm>=0.12.0"
72
- pip install "transformers>=5.0.0rc0"
73
- ```
74
-
75
- ### Quick Start with Transformers
76
-
77
- ```python
78
- from transformers import AutoProcessor, Glm4vMoeForConditionalGeneration
79
- import torch
80
-
81
- MODEL_PATH = "zai-org/GLM-4.6V-Flash"
82
- messages = [
83
- {
84
- "role": "user",
85
- "content": [
86
- {
87
- "type": "image",
88
- "url": "https://upload.wikimedia.org/wikipedia/commons/f/fa/Grayscale_8bits_palette_sample_image.png"
89
- },
90
- {
91
- "type": "text",
92
- "text": "describe this image"
93
- }
94
- ],
95
- }
96
- ]
97
- processor = AutoProcessor.from_pretrained(MODEL_PATH)
98
- model = Glm4vMoeForConditionalGeneration.from_pretrained(
99
- pretrained_model_name_or_path=MODEL_PATH,
100
- torch_dtype="auto",
101
- device_map="auto",
102
- )
103
- inputs = processor.apply_chat_template(
104
- messages,
105
- tokenize=True,
106
- add_generation_prompt=True,
107
- return_dict=True,
108
- return_tensors="pt"
109
- ).to(model.device)
110
- inputs.pop("token_type_ids", None)
111
- generated_ids = model.generate(**inputs, max_new_tokens=8192)
112
- output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
113
- print(output_text)
114
- ```
115
-
116
- ## Evaluation Settings
117
-
118
- We primarily use vLLM as the backend for model inference. For faster and more reliable performance on video tasks, we employ SGLang. To reproduce our leaderboard results, we recommend the following decoding parameters:
119
-
120
- + top_p: 0.6
121
- + top_k: 2
122
- + temperature: 0.8
123
- + repetition_penalty: 1.1
124
- + max_generate_tokens: 16K
125
-
126
- For more usage details, please refer to our [GitHub repository](https://github.com/zai-org/GLM-V).
127
-
128
-
129
-
130
- ## Fixed and Remaining Issues
131
-
132
- Since the open-sourcing of GLM-4.1V, we have received extensive feedback from the community and are well aware that the model still has many shortcomings. In subsequent iterations, we attempted to address several common issues — such as repetitive thinking outputs and formatting errors — which have been mitigated to some extent in this new version.
133
-
134
- However, the model still has several limitations and issues that we will fix as soon as possible:
135
-
136
- 1. Pure text QA capabilities still have significant room for improvement. In this development cycle, our primary focus was on visual multimodal scenarios, and we will enhance pure text abilities in upcoming updates.
137
- 2. The model may still overthink or even repeat itself in certain cases, especially when dealing with complex prompts.
138
- 3. In some situations, the model may restate the answer at the end.
139
- 4. There remain certain perception limitations, such as counting accuracy and identifying specific individuals, which still require improvement.
140
-
141
- Thank you for your patience and understanding. We also welcome feedback and suggestions in the issue section — we will respond and improve as much as we can!
142
-
143
- ## Citation
144
-
145
- If you use this model, please cite the following paper:
146
-
147
- ```bibtex
148
- @misc{vteam2025glm45vglm41vthinkingversatilemultimodal,
149
- title={GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning},
150
- author={V Team and Wenyi Hong and Wenmeng Yu and Xiaotao Gu and Guo Wang and Guobing Gan and Haomiao Tang and Jiale Cheng and Ji Qi and Junhui Ji and Lihang Pan and Shuaiqi Duan and Weihan Wang and Yan Wang and Yean Cheng and Zehai He and Zhe Su and Zhen Yang and Ziyang Pan and Aohan Zeng and Baoxu Wang and Bin Chen and Boyan Shi and Changyu Pang and Chenhui Zhang and Da Yin and Fan Yang and Guoqing Chen and Jiazheng Xu and Jiale Zhu and Jiali Chen and Jing Chen and Jinhao Chen and Jinghao Lin and Jinjiang Wang and Junjie Chen and Leqi Lei and Letian Gong and Leyi Pan and Mingdao Liu and Mingde Xu and Mingzhi Zhang and Qinkai Zheng and Sheng Yang and Shi Zhong and Shiyu Huang and Shuyuan Zhao and Siyan Xue and Shangqin Tu and Shengbiao Meng and Tianshu Zhang and Tianwei Luo and Tianxiang Hao and Tianyu Tong and Wenkai Li and Wei Jia and Xiao Liu and Xiaohan Zhang and Xin Lyu and Xinyue Fan and Xuancheng Huang and Yanling Wang and Yadong Xue and Yanfeng Wang and Yanzi Wang and Yifan An and Yifan Du and Yiming Shi and Yiheng Huang and Yilin Niu and Yuan Wang and Yuanchang Yue and Yuchen Li and Yutao Zhang and Yuting Wang and Yu Wang and Yuxuan Zhang and Zhao Xue and Zhenyu Hou and Zhengxiao Du and Zihan Wang and Peng Zhang and Debing Liu and Bin Xu and Juanzi Li and Minlie Huang and Yuxiao Dong and Jie Tang},
151
- year={2025},
152
- eprint={2507.01006},
153
- archivePrefix={arXiv},
154
- primaryClass={cs.CV},
155
- url={https://arxiv.org/abs/2507.01006},
156
- }
157
- ```
1
+ ---
2
+ language:
3
+ - zh
4
+ - en
5
+ library_name: transformers
6
+ license: mit
7
+ pipeline_tag: image-text-to-text
8
+ model-index:
9
+ - name: GLM-4.6V-Flash
10
+ results:
11
+ - task:
12
+ type: image-text-to-text
13
+ dataset:
14
+ name: Multimodal Benchmarks
15
+ type: benchmark
16
+ metrics:
17
+ - name: MMBench V1.1
18
+ type: mmbench_v1.1
19
+ value: 86.9
20
+ - name: MMBench V1.1 (CN)
21
+ type: mmbench_v1.1_cn
22
+ value: 85.9
23
+ - name: MMStar
24
+ type: mmstar
25
+ value: 74.7
26
+ - name: BLINK (Val)
27
+ type: blink_val
28
+ value: 65.5
29
+ - name: MUIRBENCH
30
+ type: muirbench
31
+ value: 75.7
32
+ - name: MMMU (Val)
33
+ type: mmmu_val
34
+ value: 71.1
35
+ - name: MMMU_Pro
36
+ type: mmmu_pro
37
+ value: 60.6
38
+ - name: VideoMMU
39
+ type: videommu
40
+ value: 70.1
41
+ - name: MathVista
42
+ type: mathvista
43
+ value: 82.7
44
+ - name: AI2D
45
+ type: ai2d
46
+ value: 89.2
47
+ - name: DynaMath
48
+ type: dynamath
49
+ value: 43.7
50
+ - name: WeMath
51
+ type: wemath
52
+ value: 60.0
53
+ - name: ZeroBench (sub)
54
+ type: zerobench_sub
55
+ value: 22.5
56
+ - name: MMBrowseComp
57
+ type: mmbrowsecomp
58
+ value: 7.1
59
+ - name: Design2Code
60
+ type: design2code
61
+ value: 69.8
62
+ - name: Flame-React-Eval
63
+ type: flame_react_eval
64
+ value: 78.8
65
+ - name: OSWorld
66
+ type: osworld
67
+ value: 21.1
68
+ - name: AndroidWorld
69
+ type: androidworld
70
+ value: 42.7
71
+ - name: WebVoyager
72
+ type: webvoyager
73
+ value: 71.8
74
+ - name: Webquest-SingleQA
75
+ type: webquest_singleqa
76
+ value: 75.1
77
+ - name: Webquest-MultiQA
78
+ type: webquest_multiqa
79
+ value: 53.4
80
+ - name: MMLongBench-Doc
81
+ type: mmlongbench_doc
82
+ value: 53.0
83
+ - name: MMLongBench-128K
84
+ type: mmlongbench_128k
85
+ value: 63.4
86
+ - name: LVBench
87
+ type: lvbench
88
+ value: 49.5
89
+ - name: OCRBench
90
+ type: ocrbench
91
+ value: 84.7
92
+ - name: OCR-Bench_v2 (EN)
93
+ type: ocr_bench_v2_en
94
+ value: 63.5
95
+ - name: OCR-Bench_v2 (CN)
96
+ type: ocr_bench_v2_cn
97
+ value: 59.5
98
+ - name: ChartQAPro
99
+ type: chartqapro
100
+ value: 62.6
101
+ - name: ChartMuseum
102
+ type: chartmuseum
103
+ value: 49.8
104
+ - name: CharXiv_Val-Reasoning
105
+ type: charxiv_val_reasoning
106
+ value: 59.6
107
+ - name: OmniSpatial
108
+ type: omnispatial
109
+ value: 50.6
110
+ - name: RefCOCO-avg (val)
111
+ type: refcoco_avg_val
112
+ value: 85.6
113
+ - name: TreeBench
114
+ type: treebench
115
+ value: 45.7
116
+ - name: Ref-L4-test
117
+ type: ref_l4_test
118
+ value: 87.7
119
+ source:
120
+ name: Model Card
121
+ url: https://huggingface.co/zai-org/GLM-4.6V-Flash
122
+ ---
123
+
124
+ # GLM-4.6V
125
+
126
+ <div align="center">
127
+ <img src="https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/logo.svg" width="40%"/>
128
+ </div>
129
+
130
+ This model is part of the GLM-V family of models, introduced in the paper [GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning](https://huggingface.co/papers/2507.01006).
131
+
132
+ - **GLM-4.6V Blog**: [https://z.ai/blog/glm-4.6v](https://z.ai/blog/glm-4.6v)
133
+ - **Paper**: [https://huggingface.co/papers/2507.01006](https://huggingface.co/papers/2507.01006)
134
+ - **GitHub Repository**: [https://github.com/zai-org/GLM-V](https://github.com/zai-org/GLM-V)
135
+ - **Online Demo**: [https://chat.z.ai/](https://chat.z.ai/)
136
+ - **API Access**: [Z.ai Open Platform](https://docs.z.ai/guides/vlm/glm-4.6v)
137
+ - **Desktop Assistant App**: [https://huggingface.co/spaces/zai-org/GLM-4.5V-Demo-App](https://huggingface.co/spaces/zai-org/GLM-4.5V-Demo-App)
138
+
139
+ ## Introduction
140
+
141
+ The GLM-4.6V series includes two versions: GLM-4.6V (106B), a foundation model designed for cloud and high-performance
142
+ cluster scenarios,
143
+ and GLM-4.6V-Flash (9B), a lightweight model optimized for local deployment and low-latency applications.
144
+ GLM-4.6V scales its context window to 128k tokens in training,
145
+ and achieves SoTA performance in visual understanding among models of similar parameter scales.
146
+ Crucially, we integrate native Function Calling capabilities for the first time.
147
+ This effectively bridges the gap between "visual perception" and "executable action",
148
+ providing a unified technical foundation for multimodal agents in real-world business scenarios.
149
+
150
+ ![GLM-4.6V Benchmarks](https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/bench_46v.jpeg)
151
+
152
+ Beyond achieving SoTA performance across major multimodal benchmarks at comparable model scales, GLM-4.6V introduces
153
+ several key features:
154
+
155
+ - **Native Multimodal Function Calling**
156
+ Enables native vision-driven tool use. Images, screenshots, and document pages can be passed directly as tool inputs without text conversion, while visual outputs (charts, search images, rendered pages) are interpreted and integrated into the reasoning chain. This closes the loop from perception to understanding to execution.
157
+
158
+ - **Interleaved Image-Text Content Generation**
159
+ Supports high-quality mixed media creation from complex multimodal inputs. GLM-4.6V takes a multimodal context—spanning documents, user inputs, and tool-retrieved images—and synthesizes coherent, interleaved image-text content tailored to the task. During generation it can actively call search and retrieval tools to gather and curate additional text and visuals, producing rich, visually grounded content.
160
+
161
+
162
+ - **Multimodal Document Understanding**
163
+ GLM-4.6V can process up to 128K tokens of multi-document or long-document input, directly interpreting richly formatted pages as images. It understands text, layout, charts, tables, and figures jointly, enabling accurate comprehension of complex, image-heavy documents without requiring prior conversion to plain text.
164
+
165
+ - **Frontend Replication & Visual Editing**
166
+ Reconstructs pixel-accurate HTML/CSS from UI screenshots and supports natural-language-driven edits. It detects layout, components, and styles visually, generates clean code, and applies iterative visual modifications through simple user instructions.
167
+
168
+
169
+ **This Hugging Face repository hosts the `GLM-4.6V-Flash` model, part of the `GLM-V` series.**
170
+
171
+ ## Usage
172
+
173
+ ### Environment Installation
174
+
175
+ For `SGLang`:
176
+
177
+ ```bash
178
+ pip install "sglang>=0.5.6post1"
179
+ pip install "transformers>=5.0.0rc0"
180
+ ```
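+
+ A server can then be launched with SGLang's standard entry point. This is a minimal sketch; the port is illustrative and memory or parallelism flags should be tuned to your hardware:
+
+ ```bash
+ # Minimal sketch: launch an SGLang server for GLM-4.6V-Flash.
+ python -m sglang.launch_server --model-path zai-org/GLM-4.6V-Flash --port 30000
+ ```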
181
+
182
+ For `vLLM`:
183
+
184
+ ```bash
185
+ pip install "vllm>=0.12.0"
186
+ pip install "transformers>=5.0.0rc0"
187
+ ```
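+
+ Similarly, the model can be served through vLLM's OpenAI-compatible server. A minimal sketch; the `--max-model-len` value is illustrative and should match your context and memory budget:
+
+ ```bash
+ # Minimal sketch: start an OpenAI-compatible vLLM server for GLM-4.6V-Flash.
+ vllm serve zai-org/GLM-4.6V-Flash --max-model-len 65536
+ ```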
188
+
189
+ ### Quick Start with Transformers
190
+
191
+ ```python
192
+ from transformers import AutoProcessor, Glm4vMoeForConditionalGeneration
193
+ import torch
194
+
195
+ MODEL_PATH = "zai-org/GLM-4.6V-Flash"
196
+ messages = [
197
+ {
198
+ "role": "user",
199
+ "content": [
200
+ {
201
+ "type": "image",
202
+ "url": "https://upload.wikimedia.org/wikipedia/commons/f/fa/Grayscale_8bits_palette_sample_image.png"
203
+ },
204
+ {
205
+ "type": "text",
206
+ "text": "describe this image"
207
+ }
208
+ ],
209
+ }
210
+ ]
211
+ processor = AutoProcessor.from_pretrained(MODEL_PATH)
212
+ model = Glm4vMoeForConditionalGeneration.from_pretrained(
213
+ pretrained_model_name_or_path=MODEL_PATH,
214
+ torch_dtype="auto",
215
+ device_map="auto",
216
+ )
217
+ inputs = processor.apply_chat_template(
218
+ messages,
219
+ tokenize=True,
220
+ add_generation_prompt=True,
221
+ return_dict=True,
222
+ return_tensors="pt"
223
+ ).to(model.device)
224
+ inputs.pop("token_type_ids", None)
225
+ generated_ids = model.generate(**inputs, max_new_tokens=8192)
226
+ output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
227
+ print(output_text)
228
+ ```
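+
+ The same message format extends to the multimodal document understanding scenario described above. As a rough illustration (the page URLs below are placeholders), several page images can be passed in a single user turn and run through the same processor, model, and generation code:
+
+ ```python
+ # Illustrative only: multiple document pages passed as images in one user turn.
+ # Reuse `processor`, `model`, and the generation call from the example above.
+ messages = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "image", "url": "https://example.com/report_page_1.png"},
+             {"type": "image", "url": "https://example.com/report_page_2.png"},
+             {"type": "text", "text": "Summarize the table on page 2 and relate it to the chart on page 1."},
+         ],
+     }
+ ]
+ ```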
229
+
230
+ ## Evaluation Settings
231
+
232
+ We primarily use vLLM as the backend for model inference. For faster and more reliable performance on video tasks, we employ SGLang. To reproduce our leaderboard results, we recommend the following decoding parameters:
233
+
234
+ + top_p: 0.6
235
+ + top_k: 2
236
+ + temperature: 0.8
237
+ + repetition_penalty: 1.1
238
+ + max_generate_tokens: 16K
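+
+ A minimal sketch of applying these values with vLLM's offline Python API (assuming `vllm.SamplingParams`; 16K is written out as 16384 tokens):
+
+ ```python
+ # Sketch: map the recommended decoding parameters onto vllm.SamplingParams.
+ from vllm import LLM, SamplingParams
+
+ sampling = SamplingParams(
+     temperature=0.8,
+     top_p=0.6,
+     top_k=2,
+     repetition_penalty=1.1,
+     max_tokens=16384,  # max_generate_tokens: 16K
+ )
+ llm = LLM(model="zai-org/GLM-4.6V-Flash")
+ outputs = llm.chat(
+     [{"role": "user", "content": "Briefly introduce GLM-4.6V-Flash."}],
+     sampling_params=sampling,
+ )
+ print(outputs[0].outputs[0].text)
+ ```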
239
+
240
+ For more usage details, please refer to our [GitHub repository](https://github.com/zai-org/GLM-V).
241
+
242
+
243
+
244
+ ## Fixed and Remaining Issues
245
+
246
+ Since the open-sourcing of GLM-4.1V, we have received extensive feedback from the community and are well aware that the model still has many shortcomings. In subsequent iterations, we attempted to address several common issues — such as repetitive thinking outputs and formatting errors — which have been mitigated to some extent in this new version.
247
+
248
+ However, the model still has several limitations and issues that we will fix as soon as possible:
249
+
250
+ 1. Pure text QA capabilities still have significant room for improvement. In this development cycle, our primary focus was on visual multimodal scenarios, and we will enhance pure text abilities in upcoming updates.
251
+ 2. The model may still overthink or even repeat itself in certain cases, especially when dealing with complex prompts.
252
+ 3. In some situations, the model may restate the answer at the end.
253
+ 4. There remain certain perception limitations, such as counting accuracy and identifying specific individuals, which still require improvement.
254
+
255
+ Thank you for your patience and understanding. We also welcome feedback and suggestions in the issue section — we will respond and improve as much as we can!
256
+
257
+ ## Citation
258
+
259
+ If you use this model, please cite the following paper:
260
+
261
+ ```bibtex
262
+ @misc{vteam2025glm45vglm41vthinkingversatilemultimodal,
263
+ title={GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning},
264
+ author={V Team and Wenyi Hong and Wenmeng Yu and Xiaotao Gu and Guo Wang and Guobing Gan and Haomiao Tang and Jiale Cheng and Ji Qi and Junhui Ji and Lihang Pan and Shuaiqi Duan and Weihan Wang and Yan Wang and Yean Cheng and Zehai He and Zhe Su and Zhen Yang and Ziyang Pan and Aohan Zeng and Baoxu Wang and Bin Chen and Boyan Shi and Changyu Pang and Chenhui Zhang and Da Yin and Fan Yang and Guoqing Chen and Jiazheng Xu and Jiale Zhu and Jiali Chen and Jing Chen and Jinhao Chen and Jinghao Lin and Jinjiang Wang and Junjie Chen and Leqi Lei and Letian Gong and Leyi Pan and Mingdao Liu and Mingde Xu and Mingzhi Zhang and Qinkai Zheng and Sheng Yang and Shi Zhong and Shiyu Huang and Shuyuan Zhao and Siyan Xue and Shangqin Tu and Shengbiao Meng and Tianshu Zhang and Tianwei Luo and Tianxiang Hao and Tianyu Tong and Wenkai Li and Wei Jia and Xiao Liu and Xiaohan Zhang and Xin Lyu and Xinyue Fan and Xuancheng Huang and Yanling Wang and Yadong Xue and Yanfeng Wang and Yanzi Wang and Yifan An and Yifan Du and Yiming Shi and Yiheng Huang and Yilin Niu and Yuan Wang and Yuanchang Yue and Yuchen Li and Yutao Zhang and Yuting Wang and Yu Wang and Yuxuan Zhang and Zhao Xue and Zhenyu Hou and Zhengxiao Du and Zihan Wang and Peng Zhang and Debing Liu and Bin Xu and Juanzi Li and Minlie Huang and Yuxiao Dong and Jie Tang},
265
+ year={2025},
266
+ eprint={2507.01006},
267
+ archivePrefix={arXiv},
268
+ primaryClass={cs.CV},
269
+ url={https://arxiv.org/abs/2507.01006},
270
+ }
271
+ ```