175 articles in total
2026
Building a Minimal AI Agent from Scratch
Learning to Reason in 13 Parameters
Look Closer! An Adversarial Parametric Editing Framework for Hallucination Mitigation in VLMs
CRoPS: A Training-Free Hallucination Mitigation Framework for Vision-Language Models
REVIS: Sparse Latent Steering to Mitigate Object Hallucination in Large Vision-Language Models
Light Alignment Improves LLM Safety via Model Self-Reflection with a Single Neuron
FraudShield: Knowledge Graph Empowered Defense for LLMs against Fraud Attacks
PoisonedEye: Knowledge Poisoning Attack on Retrieval-Augmented Generation based Large Vision-Language Models
TraceDet: Hallucination Detection from the Decoding Trace of Diffusion Large Language Models
Hallucination Begins Where Saliency Drops