Are Vision-Language Models Safe in the Wild? A Meme-Based Benchmark Study • Paper • 2505.15389 • Published May 21
REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models • Paper • 2502.13622 • Published Feb 19