
ECAI 2025

Latent Knowledge Scalpel: Precise and Massive Knowledge Editing for Large Language Models

Conference Paper · Accepted Paper · Artificial Intelligence

Abstract

Large Language Models (LLMs) often retain inaccurate or outdated information from pre-training, leading to incorrect predictions or biased outputs during inference. While existing model editing methods can address this challenge, they struggle to edit large amounts of factual information simultaneously and may compromise the general capabilities of the models. In this paper, our empirical study demonstrates that it is feasible to edit the internal representations of LLMs and replace the entities in a manner similar to editing natural language inputs. Based on this insight, we introduce the Latent Knowledge Scalpel (LKS), an LLM editor that manipulates the latent knowledge of specific entities via a lightweight hypernetwork to enable precise and large-scale editing. Experiments conducted on Llama-2 and Mistral show that even with the number of simultaneous edits reaching 10,000, LKS effectively performs knowledge editing while preserving the general abilities of the edited LLMs. Code is available at: https://github.com/Linuxin-xxx/LKS.
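The abstract's core idea can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' implementation (see the linked repository for that): a lightweight "hypernetwork" maps an entity's latent vector to an edited one, which is then substituted into the hidden states at the entity's token positions. All names, shapes, and the linear form of the hypernetwork are hypothetical.

```python
# Toy sketch of the LKS idea (hypothetical names/shapes, not the paper's code):
# a small hypernetwork produces an edit to an entity's latent representation,
# which replaces the original hidden state at that entity's token positions.

import random

DIM = 8  # toy hidden size (assumption)


def make_hypernetwork(dim, seed=0):
    """Return a toy linear hypernetwork (W, b), randomly initialized."""
    rng = random.Random(seed)
    W = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(dim)]
    b = [0.0] * dim
    return W, b


def edit_latent(entity_vec, hyper):
    """Apply the hypernetwork as a residual edit: v + (W @ v + b)."""
    W, b = hyper
    delta = [
        sum(W[i][j] * entity_vec[j] for j in range(len(entity_vec))) + b[i]
        for i in range(len(W))
    ]
    return [x + d for x, d in zip(entity_vec, delta)]


def apply_edit(hidden_states, entity_positions, hyper):
    """Swap in edited latent vectors at the entity's token positions only,
    leaving all other positions (and thus unrelated knowledge) untouched."""
    out = list(hidden_states)
    for pos in entity_positions:
        out[pos] = edit_latent(hidden_states[pos], hyper)
    return out


# Usage: edit only the latent vector at position 1 (the "entity" token).
hyper = make_hypernetwork(DIM)
hidden = [[float(i + j) for j in range(DIM)] for i in range(3)]
edited = apply_edit(hidden, [1], hyper)
print(edited[0] == hidden[0], edited[1] == hidden[1])  # True False
```

The locality of the substitution (only the entity's positions change) is one plausible reason such an approach could scale to many simultaneous edits while preserving general abilities, as the abstract claims.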

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
European Conference on Artificial Intelligence
Archive span
1982-2025
Indexed papers
5223
Paper id
567062350657200459