Arrow Research search

Author name cluster

Junseo Park

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.
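A minimal sketch of the grouping rule described above, case-insensitive exact matching on author name strings. The row structure and field names here are hypothetical, not Arrow's actual schema:

from collections import defaultdict

# Hypothetical author rows; only the "name" field matters for this rule.
rows = [
    {"name": "Junseo Park", "paper": "I2AM"},
    {"name": "junseo park", "paper": "LMD"},
]

clusters = defaultdict(list)
for row in rows:
    # Exact match after lowercasing: no transliteration, initials,
    # or affiliation-based identity disambiguation.
    clusters[row["name"].lower()].append(row)

print(clusters["junseo park"])  # both rows land in one cluster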

2 papers
2 author rows

Possible papers (2)

ICLR 2025 Conference Paper

I2AM: Interpreting Image-to-Image Latent Diffusion Models via Bi-Attribution Maps

  • Junseo Park
  • Hyeryung Jang

Large-scale diffusion models have made significant advances in image generation, particularly through cross-attention mechanisms. While cross-attention has been well studied in text-to-image tasks, its interpretability in image-to-image (I2I) diffusion models remains underexplored. This paper introduces Image-to-Image Attribution Maps $(\textbf{I}^2\textbf{AM})$, a method that enhances the interpretability of I2I models by visualizing bidirectional attribution maps, from the reference image to the generated image and vice versa. $\text{I}^2\text{AM}$ aggregates cross-attention scores across time steps, attention heads, and layers, offering insights into how critical features are transferred between images. We evaluate the effectiveness of $\text{I}^2\text{AM}$ across object detection, inpainting, and super-resolution tasks. Our results demonstrate that $\text{I}^2\text{AM}$ successfully identifies key regions responsible for generating the output, even in complex scenes. Additionally, we introduce the Inpainting Mask Attention Consistency Score (IMACS), a novel evaluation metric that assesses the alignment between attribution maps and inpainting masks and correlates strongly with existing performance metrics. Through extensive experiments, we show that $\text{I}^2\text{AM}$ enables model debugging and refinement, providing practical tools for improving the performance and interpretability of I2I models.
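The aggregation step the abstract describes, collapsing cross-attention scores over time steps, attention heads, and layers into one attribution map, can be sketched in a few lines. This is an illustration only, not the paper's implementation: the tensor layout, the uniform mean as the aggregation rule, and the toy mask-consistency score standing in for IMACS are all assumptions.

import torch

def aggregate_attribution(attn_maps):
    # attn_maps: (timesteps, layers, heads, H, W) cross-attention scores
    # already reshaped to spatial maps. Uniform averaging over steps,
    # layers, and heads is an assumed rule; the paper may weight them.
    attr = attn_maps.mean(dim=(0, 1, 2))
    # Min-max normalize to [0, 1] for visualization.
    return (attr - attr.min()) / (attr.max() - attr.min() + 1e-8)

def mask_consistency(attr_map, mask):
    # Toy stand-in for IMACS: mean attribution inside the inpainting
    # mask minus mean attribution outside it. The real metric is
    # defined in the paper; this only illustrates mask/map alignment.
    m = mask.bool()
    return (attr_map[m].mean() - attr_map[~m].mean()).item()

# Random tensors stand in for attention maps collected during sampling.
maps = torch.rand(50, 16, 8, 64, 64)  # (steps, layers, heads, H, W)
mask = torch.zeros(64, 64)
mask[16:48, 16:48] = 1.0              # hypothetical inpainting region
print("consistency score:", mask_consistency(aggregate_attribution(maps), mask))

Under this toy definition, a score near 1 means attribution mass concentrates inside the masked region, which is the intuition behind correlating such a metric with inpainting quality.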

NeurIPS 2025 Conference Paper

Layer-Wise Modality Decomposition for Interpretable Multimodal Sensor Fusion

  • Jaehyun Park
  • Konyul Park
  • Daehun Kim
  • Junseo Park
  • Jun Won Choi

In autonomous driving, transparency in the decision-making of perception models is critical, as even a single misperception can be catastrophic. Yet with multi-sensor inputs, it is difficult to determine how each modality contributes to a prediction, because sensor information becomes entangled within the fusion network. We introduce Layer-Wise Modality Decomposition (LMD), a post-hoc, model-agnostic interpretability method that disentangles modality-specific information across all layers of a pretrained fusion model. To our knowledge, LMD is the first approach to attribute the predictions of a perception model to individual input modalities in a sensor-fusion system for autonomous driving. We evaluate LMD on pretrained fusion models under camera–radar, camera–LiDAR, and camera–radar–LiDAR settings. Its effectiveness is validated using structured perturbation-based metrics and modality-wise visual decompositions, demonstrating practical applicability to interpreting high-capacity multimodal architectures. Code is available at https://github.com/detxter-jvb/Layer-Wise-Modality-Decomposition.
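The abstract describes LMD only at a high level, so it cannot be reimplemented from this page alone. The sketch below is a much simpler input-level modality-ablation baseline in the same spirit, attributing a fusion model's prediction to each modality; the model and all names are hypothetical, and this is explicitly not LMD's layer-wise decomposition.

import torch
import torch.nn as nn

class TinyFusion(nn.Module):
    # Hypothetical two-branch camera+LiDAR fusion model used only to
    # make the example runnable; not an autonomous-driving network.
    def __init__(self, d_cam=32, d_lidar=32, d_out=4):
        super().__init__()
        self.cam = nn.Linear(d_cam, 64)
        self.lidar = nn.Linear(d_lidar, 64)
        self.head = nn.Linear(64, d_out)

    def forward(self, cam_x, lidar_x):
        return self.head(torch.relu(self.cam(cam_x) + self.lidar(lidar_x)))

@torch.no_grad()
def modality_ablation(model, cam_x, lidar_x):
    # Zero out one modality at a time and measure how far the prediction
    # moves (L2 norm). A crude input-level baseline: LMD instead
    # disentangles modality contributions layer by layer inside the net.
    full = model(cam_x, lidar_x)
    return {
        "camera": (full - model(torch.zeros_like(cam_x), lidar_x)).norm().item(),
        "lidar": (full - model(cam_x, torch.zeros_like(lidar_x))).norm().item(),
    }

model = TinyFusion()
print(modality_ablation(model, torch.randn(1, 32), torch.randn(1, 32)))

Unlike this ablation, which only perturbs inputs, the paper's method attributes predictions by disentangling modality-specific information across every layer of the pretrained fusion model.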