Neural Uncertainty Principle: A Unified View of Adversarial Fragility and LLM Hallucination (arXiv 2026) 2026-03-28 #深度学习 #大模型
DynHD: Hallucination Detection for Diffusion Large Language Models via Denoising Dynamics Deviation Learning (arXiv 2026) 2026-03-27 #深度学习 #大模型 #扩散模型
Locate-then-Sparsify: Attribution Guided Sparse Strategy for Visual Hallucination Mitigation (CVPR 2026) 2026-03-22 #深度学习 #多模态 #大模型
Fighting Hallucinations with Counterfactuals: Diffusion-Guided Perturbations for LVLM Hallucination Suppression (CVPR 2026) 2026-03-17 #深度学习 #大模型
Efficient Refusal Ablation in LLM through Optimal Transport (ICLR 2026 Trustworthy AI) 2026-03-16 #深度学习 #大模型
The Assistant Axis: Situating and Stabilizing the Default Persona of Language Models (arXiv 2026) From Anthropic. 2026-03-13 #深度学习 #大模型
Why LVLMs Are More Prone to Hallucinations in Longer Responses: The Role of Context (ICCV 2025) 2026-03-12 #深度学习 #多模态 #大模型
HALP: Detecting Hallucinations in Vision-Language Models without Generating a Single Token (arXiv 2026) 2026-03-09 #深度学习 #多模态 #大模型
Lyapunov Probes for Hallucination Detection in Large Foundation Models (CVPR 2026) 2026-03-08 #深度学习 #多模态 #大模型