---
license: mit
datasets:
- mjjung/ActivityNet-VTune
language:
- en
---

# TimeChat-7B-ActivityNet-VTune Model

## Model details

We trained [VideoLLaMA](https://arxiv.org/abs/2306.02858) with VTune, an instruction-tuning method we developed specifically to account for consistency in temporal comprehension.

For tuning, we used 10K training videos from ActivityNet-Captions together with 205K automatically generated annotations.
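
Below is a minimal sketch of fetching the checkpoint files with `huggingface_hub`. The repo id is assumed from this card's title, and actual inference should follow the environment setup in the code repository linked below.

```python
# Sketch: download the checkpoint locally with huggingface_hub.
# The repo id below is assumed from the model card title; adjust if needed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mjjung/TimeChat-7B-ActivityNet-VTune",  # assumed repo id
)
print(f"Checkpoint downloaded to: {local_dir}")

# For inference and evaluation, follow the scripts in
# https://github.com/minjoong507/consistency-of-video-llm
```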

## Evaluation
We evaluated the model on ActivityNet-CON and ActivityNet-Captions (a short sketch of the grounding metrics follows the tables below).

- ActivityNet-CON

| Metric   | Value       |
|----------|-------------|
| Ground   | 33.0        |
| R-Ground | 24.7 (74.8) |
| S-Ground | 10.0 (30.2) |
| H-Verify | 20.2 (61.1) |
| C-Verify | 17.7 (53.7) |

- ActivityNet-Captions

| Metric      | Value |
|-------------|-------|
| R@1 IoU=0.3 | 51.58 |
| R@1 IoU=0.5 | 34.38 |
| R@1 IoU=0.7 | 19.18 |
| mIoU        | 36.16 |
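
For reference, here is an illustrative sketch of how these standard temporal-grounding metrics (R@1 at IoU thresholds and mIoU) are typically computed; the reported numbers come from the evaluation scripts in the linked repository, which may differ in details.

```python
# Illustrative sketch of temporal-grounding metrics, not the official eval code.
from typing import List, Tuple


def temporal_iou(pred: Tuple[float, float], gt: Tuple[float, float]) -> float:
    """IoU between two time spans (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0


def evaluate(preds: List[Tuple[float, float]],
             gts: List[Tuple[float, float]],
             thresholds=(0.3, 0.5, 0.7)):
    """Return R@1 at each IoU threshold (as %) and mean IoU (as %)."""
    ious = [temporal_iou(p, g) for p, g in zip(preds, gts)]
    recalls = {t: 100.0 * sum(iou >= t for iou in ious) / len(ious)
               for t in thresholds}
    miou = 100.0 * sum(ious) / len(ious)
    return recalls, miou


# Example with two dummy predictions (times in seconds)
recalls, miou = evaluate(preds=[(2.0, 10.0), (15.0, 30.0)],
                         gts=[(3.0, 9.5), (20.0, 28.0)])
print(recalls, round(miou, 2))
```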

**For more information, see the paper and code:**
[Paper](https://arxiv.org/abs/2411.12951), [Code](https://github.com/minjoong507/consistency-of-video-llm)

## Citation
If you find our research and code useful, please consider starring our repository and citing our paper:

```
@article{jung2024consistency,
  title={On the Consistency of Video Large Language Models in Temporal Comprehension},
  author={Jung, Minjoon and Xiao, Junbin and Zhang, Byoung-Tak and Yao, Angela},
  journal={arXiv preprint arXiv:2411.12951},
  year={2024}
}
```