Arrow Research search

Author name cluster

Yiming Hu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
2 author rows

Possible papers

3

IS Journal 2026 Journal Article

A vehicle lateral stability criterion fusing phase plane and RBF neural network

  • Dequan Zeng
  • Lixiong Rao
  • Yiming Hu
  • Peizhi Zhang
  • Lu Xiong
  • Jun Lu
  • Giuseppe Carbone
  • Yinquan Yu

Precise stability criteria are essential for vehicle handling control, but conventional methods based on tire adhesion limits or linear models often lack robustness across diverse scenarios. To address this issue, this paper proposes a novel lateral stability criterion fusing phase plane analysis and RBF neural networks. The approach begins with an analysis of the vehicle’s stable state using the phase plane, followed by the division of the vehicle stability region employing the diamond method to generate a phase plane stability region database. Subsequently, the proposed phase plane-RBF stability criterion is constructed by leveraging the RBF neural network for nonlinear fitting of the stability region data, which is further refined through multiple rounds of optimization. Compared to traditional tire force and linear single-track model criteria, the proposed criterion demonstrates superior accuracy in identifying extreme conditions and enhanced adaptability across operational scenarios.
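The central fitting step this abstract describes (an RBF network nonlinearly fitting a diamond-shaped phase-plane stability region) can be sketched in miniature. Everything below is an illustrative assumption, not the paper's implementation: the phase plane is pre-normalised, the stable region is the diamond |x| + |y| ≤ 1, and the RBF centers, kernel width, and least-squares readout are arbitrary choices standing in for the paper's trained and optimised network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the diamond-method stability region: normalised
# phase-plane coordinates (sideslip angle, sideslip rate) are "stable"
# when |x| + |y| <= 1.
def diamond_stable(P):
    return (np.abs(P).sum(axis=1) <= 1.0).astype(float)

X = rng.uniform(-2.0, 2.0, size=(800, 2))  # sampled stability-region database
y = diamond_stable(X)

# Gaussian RBF features on a fixed grid of centers (assumed, not tuned)
gx, gy = np.meshgrid(np.linspace(-2, 2, 9), np.linspace(-2, 2, 9))
centers = np.column_stack([gx.ravel(), gy.ravel()])

def rbf(P, gamma=2.0):
    d2 = ((P[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Linear readout fitted by least squares: the nonlinear-fitting step
w, *_ = np.linalg.lstsq(rbf(X), y, rcond=None)

def is_stable(points):
    """Fitted criterion: RBF stability score thresholded at 0.5."""
    return (rbf(np.asarray(points, dtype=float)) @ w > 0.5).astype(int)
```

In the paper the fitted criterion replaces fixed tire-force or linear single-track thresholds; here the diamond region and grid of centers are purely illustrative.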

AAAI Conference 2026 Conference Paper

Where and What Matters: Sensitivity-Aware Task Vectors for Many-Shot Multimodal In-Context Learning

  • Ziyu Ma
  • Chenhui Gou
  • Yiming Hu
  • Yong Wang
  • Bohan Zhuang
  • Jianfei Cai

Large Multimodal Models (LMMs) have shown promising in-context learning (ICL) capabilities, but scaling to many-shot settings remains difficult due to limited context length and high inference cost. To address these challenges, task-vector-based methods have been explored that insert compact representations of many-shot in-context demonstrations into model activations. However, existing task-vector-based methods either overlook the importance of where to insert task vectors or struggle to determine suitable values for each location. To this end, we propose a novel Sensitivity-aware Task Vector insertion framework (STV) to determine where and what to insert. Our key insight is that activation deltas across query-context pairs exhibit consistent structural patterns, providing a reliable cue for insertion. Based on the identified sensitivity-aware locations, we construct a pre-clustered activation bank for each location by clustering the activation values, and then apply reinforcement learning to choose the most suitable one to insert. We evaluate STV across a range of multimodal models (e.g., Qwen-VL, Idefics-2) and tasks (e.g., VizWiz, OK-VQA), demonstrating its effectiveness and showing consistent improvements over previous task-vector-based methods with strong generalization.
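The "where and what" pipeline in this abstract can be caricatured with synthetic data. This is a hypothetical sketch, not the paper's code: the activation deltas are simulated, the sensitivity score (mean-delta norm over per-pair spread) is an assumed proxy for the paper's criterion, and a plain 2-means loop stands in for its clustering and reinforcement-learning selection.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs, n_locations, dim = 32, 6, 16

# Simulated activation deltas between many-shot and zero-shot runs,
# one per (query-context pair, candidate insertion location).
deltas = rng.normal(0.0, 1.0, (n_pairs, n_locations, dim))
deltas[:, 2] += 3.0  # location 2 carries a consistent structural pattern

# "Where": rank locations by how consistent the deltas are across pairs
mean_delta = deltas.mean(axis=0)                  # (locations, dim)
spread = deltas.std(axis=0).mean(axis=1) + 1e-8   # (locations,)
sensitivity = np.linalg.norm(mean_delta, axis=1) / spread
top = int(np.argmax(sensitivity))                 # most sensitive location

# "What": cluster that location's deltas into a small activation bank
pts = deltas[:, top, :]
bank = pts[:2].copy()                             # 2-means, crude init
for _ in range(10):
    assign = np.argmin(((pts[:, None] - bank[None]) ** 2).sum(-1), axis=1)
    for j in range(2):
        if (assign == j).any():
            bank[j] = pts[assign == j].mean(axis=0)
# `bank` now holds candidate task vectors to insert at location `top`;
# the paper selects among such candidates with reinforcement learning.
```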

ICRA Conference 2020 Conference Paper

Natural Scene Facial Expression Recognition with Dimension Reduction Network

  • Shenhua Hu
  • Yiming Hu
  • Jianquan Li
  • Xianlei Long
  • Mengjuan Chen
  • Qingyi Gu

As facial expressions are an external manifestation of human emotions, expression recognition plays an important role in human-computer interaction. Although existing expression recognition methods perform well on constrained frontal faces, recognition in natural scenes remains challenging due to unrestricted conditions. Expression classification is a pattern recognition problem in which intra-class distance can exceed inter-class distance, which leads to severe over-fitting when neural networks are used for expression recognition. This paper proposes a novel network structure called Dimension Reduction Network that effectively reduces generalization error. By adding a data dimension reduction module before the general classification network, redundant information is filtered out and only useful information is kept. This reduces interference from irrelevant information during classification and lowers generalization error. The proposed method requires no modification to the classification network; only a small dimension reduction module is added in front of it, yet it effectively reduces generalization error. We designed big and tiny versions of Dimension Reduction Network, both of which exceed our baseline on the AffectNet dataset. The big version of our proposed method surpassed the state-of-the-art methods by more than 1.2% on the AffectNet dataset. Our code will be open-sourced when the paper is accepted.
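The architectural idea in this abstract, a reduction module prepended to an otherwise unchanged classifier, can be sketched on synthetic features. All of the below is an illustrative assumption rather than the paper's setup: plain PCA stands in for the learned dimension-reduction module, a nearest-centroid rule stands in for the classification network, and the feature dimensions and class signal are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy features: 64 dims, only the first 4 carry class signal; the rest is
# redundant noise standing in for irrelevant image information.
n, d, k = 200, 64, 4
labels = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, d))
X[:, :k] += labels[:, None] * 4.0  # class-dependent signal

def reduce_dim(X, out_dim=8):
    """Stand-in dimension reduction module (plain PCA, not the paper's
    learned module): keep the top-variance directions, drop the rest."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:out_dim].T

Z = reduce_dim(X)  # filtered features fed to the unchanged classifier

def centroid_accuracy(feats, labels):
    """Unmodified downstream classifier: nearest class centroid
    (training accuracy only, for illustration)."""
    c0 = feats[labels == 0].mean(axis=0)
    c1 = feats[labels == 1].mean(axis=0)
    pred = (np.linalg.norm(feats - c1, axis=1)
            < np.linalg.norm(feats - c0, axis=1)).astype(int)
    return (pred == labels).mean()

acc = centroid_accuracy(Z, labels)
```

The design point the abstract makes is that the classifier itself needs no modification; only the small module in front changes what information reaches it.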