Arrow Research search

Author name cluster

Hongwei Yan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
2 author rows

Possible papers

2

AAAI 2025 · Conference Paper

MindPainter: Efficient Brain-Conditioned Painting of Natural Images via Cross-Modal Self-Supervised Learning

  • Muzhou Yu
  • Shuyun Lin
  • Hongwei Yan
  • Kaisheng Ma

Despite significant advancements in image- and text-conditional image editing, the exploration of using brain signals, which more directly and personally reflect user intentions, remains limited. An intuitive method is to convert implicit brain signals into explicit representations such as images, which can then serve as prompts for editing. However, such a two-stage method suffers from low inference efficiency, inaccurate brain interpretation, and unnatural editing results. In this paper, we apply brain signals of visual perception as prompts and propose a cross-modal self-supervised learning method for natural image painting (MindPainter). This method achieves efficient and natural brain-conditioned image editing in a straightforward manner. MindPainter is trained for reconstruction from masked images directly with pseudo-brain signals, which are simulated by the proposed Pseudo Brain Generator. This facilitates efficient cross-modal integration. The proposed Brain Adapter further eliminates the gap in implicit space between modalities, ensuring accurate semantic interpretation of brain signals and coherent consolidation. Besides, the designed Multi-Mask Generation Policy enhances generalization, realizing high-quality editing in various painting scenarios, including inpainting and outpainting. To the best of our knowledge, MindPainter is the first work to achieve efficient brain-conditioned image painting, providing potential for direct brain control in creative AI. The code and the link to the extended version will be available on GitHub.

ICML 2025 · Conference Paper

Right Time to Learn: Promoting Generalization via Bio-inspired Spacing Effect in Knowledge Distillation

  • Guanglong Sun
  • Hongwei Yan
  • Liyuan Wang
  • Qian Li 0040
  • Bo Lei
  • Yi Zhong

Knowledge distillation (KD) is a powerful strategy for training deep neural networks (DNNs). While it was originally proposed to train a more compact “student” model from a large “teacher” model, many recent efforts have focused on adapting it as an effective way to promote generalization of the model itself, such as online KD and self KD. Here, we propose an easy-to-use and compatible strategy named Spaced KD to improve the effectiveness of both online KD and self KD, in which the student model distills knowledge from a teacher model trained with a space interval ahead. This strategy is inspired by a prominent theory named the spacing effect in the field of biological learning and memory, positing that appropriate intervals between learning trials can significantly enhance learning performance. We provide an in-depth theoretical and empirical analysis showing that the benefits of the proposed spacing effect in KD stem from seeking flat minima during stochastic gradient descent (SGD). We perform extensive experiments to demonstrate the effectiveness of our Spaced KD in improving the learning performance of DNNs (e.g., the additional performance gain is up to 2.31% and 3.34% on Tiny-ImageNet over online KD and self KD, respectively). Our code has been released at https://github.com/SunGL001/Spaced-KD.
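The core idea of the abstract (the student distills from a teacher whose training runs a fixed interval ahead) can be sketched with a toy self-KD loop. All specifics here are illustrative assumptions, not the paper's setup: a bias-free linear softmax classifier on 2-D points, a space interval of S = 20 steps, temperature 2.0, and an equal mix of hard labels and softened teacher targets.

```python
import math
import random

def softmax(z, T=1.0):
    m = max(v / T for v in z)
    e = [math.exp(v / T - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def logits(W, x):
    return [sum(wk[j] * x[j] for j in range(len(x))) for wk in W]

def sgd_step(W, x, target, lr=0.1):
    # Cross-entropy gradient w.r.t. the logits is (p - target).
    p = softmax(logits(W, x))
    for k in range(len(W)):
        g = p[k] - target[k]
        for j in range(len(x)):
            W[k][j] -= lr * g * x[j]

random.seed(0)
# Toy binary task: class 0 iff x0 > x1.
data = []
for _ in range(200):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    data.append((x, 0 if x[0] > x[1] else 1))

S = 20    # space interval: the teacher stays S steps ahead (illustrative value)
T = 2.0   # temperature for softened teacher targets (illustrative value)
teacher = [[0.0, 0.0], [0.0, 0.0]]
student = [[0.0, 0.0], [0.0, 0.0]]

stream = iter(data * 5)
# Warm-up: train the teacher alone for S steps so it stays S steps ahead.
for _ in range(S):
    x, y = next(stream)
    sgd_step(teacher, x, [1.0 - y, float(y)])

for x, y in stream:
    hard = [1.0 - y, float(y)]
    sgd_step(teacher, x, hard)              # teacher keeps advancing
    soft = softmax(logits(teacher, x), T)   # softened prediction from S steps ahead
    # Student target: equal mix of hard label and the teacher's soft output.
    mixed = [0.5 * h + 0.5 * s for h, s in zip(hard, soft)]
    sgd_step(student, x, mixed)

acc = sum((logits(student, x)[0] > logits(student, x)[1]) == (y == 0)
          for x, y in data) / len(data)
print(f"student accuracy: {acc:.2f}")
```

In genuine self-KD the teacher would be a delayed or snapshot copy of the same network rather than an independently initialized twin; the sketch only shows the scheduling pattern, where every student update consumes targets produced by a model that is S optimization steps further along.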