AAAI 2026 Conference Paper
Injection Without Distortion: Geometrically Constrained Knowledge Enhancement for Vision-Language Models
- Zhongze Wu
- Xiu Su
- Feng Yang
- Shan You
- Jun Long
- Yueyi Luo
Vision-Language Models (VLMs) are widely used in tasks such as open-vocabulary object detection and zero-shot classification, owing to their strong generalization. However, recent research reveals that VLMs exhibit significant performance instability when recognizing concepts at different granularities (e.g., "animal" vs. "dog"). Prevailing methods inject external knowledge from Large Language Models (LLMs), but this unconstrained injection distorts the VLM's inherent hierarchical orthogonal geometry, causing performance collapse on general concepts. To address this, we introduce GeCoin, a Geometrically Constrained framework that safely enhances existing VLMs with external knowledge for improved hierarchical understanding, without additional training. By projecting knowledge into the null-space of a query concept's feature subspace, GeCoin mathematically guarantees that general knowledge is preserved while specialized information is integrated. Extensive experiments across large-scale benchmarks, diverse VLMs, and knowledge from various LLMs (e.g., GPT-3.5, Claude-3, Gemini-Pro) show that GeCoin improves performance by an average of 3.9% over the strongest baseline and, crucially, eliminates performance collapse on general concepts.
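The core geometric constraint described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the `alpha` injection-strength parameter, and the SVD-based construction of the concept subspace basis are all illustrative assumptions.

```python
import numpy as np

def inject_without_distortion(concept_feats, knowledge, alpha=1.0):
    """Sketch of geometrically constrained knowledge injection.

    concept_feats: (k, d) array whose rows span the query concept's
                   feature subspace (e.g., VLM text embeddings).
    knowledge:     (d,) external knowledge embedding from an LLM.
    alpha:         hypothetical injection-strength hyperparameter.

    Returns the component of `knowledge` lying in the null-space
    (orthogonal complement) of the concept subspace, so adding it
    cannot perturb directions the VLM already uses for the concept.
    """
    # Orthonormal basis of the concept subspace: the leading right
    # singular vectors (rows of vt) of the feature matrix.
    _, s, vt = np.linalg.svd(concept_feats, full_matrices=False)
    rank = int(np.sum(s > 1e-10))
    basis = vt[:rank]                          # (rank, d)
    # Null-space projection: v - B^T (B v), i.e., (I - B^T B) v.
    in_subspace = basis.T @ (basis @ knowledge)
    safe_knowledge = knowledge - in_subspace
    return alpha * safe_knowledge
```

By construction, the returned vector is orthogonal to every row of `concept_feats`, which is what mathematically guarantees that the original concept geometry is untouched when the knowledge is added.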