Arrow Research search

Author name cluster

Xinyu Luo

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
2 author rows

Possible papers (5)

ICRA 2025 Conference Paper

MonoLDP: LED Assisted Indoor Mobile Bot Monocular Depth Prediction and Pose Estimation System

  • Chenxin Liang
  • Jingyang Wang
  • Shoujie Li
  • Kit Wa Sou
  • Xinyu Luo
  • Wenbo Ding 0001

Multi-robot clusters are increasingly deployed in indoor environments, where effective communication and 3D perception are critical for coordinated operations. Monocular cameras, known for their lightweight design, cost-effectiveness, and versatility, present a promising solution for these tasks. However, relying solely on monocular cameras for comprehensive perception and communication presents significant challenges. To address this, we introduce MonoLDP, a novel system that leverages monocular cameras for depth estimation, mutual pose estimation, and visible light communication in indoor environments, providing an integrated framework to overcome these limitations. MonoLDP features a two-stage network: (1) a depth estimation module that infers depth from monocular images, and (2) a depth-guided 3D object recognition network for agent-relative localization and pose estimation. We created a custom dataset to validate the accuracy of MonoLDP. On our indoor dataset, MonoLDP outperforms the baseline by 43.39% in 3D detection and 42.39% in bird's-eye view detection, with an average localization error of 0.104 m and an orientation error of 1.66 degrees. Moreover, the depth estimation network demonstrates excellent performance on the NYU v2 dataset. Additionally, the system achieves a communication rate of 1.2 Kbps with a bit error rate below 10⁻² at a distance of up to 4 m using LED arrays. Our code will be released at https://github.com/RavenLiang1005/MonoLDP.git.
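The localization and orientation errors quoted in this abstract are standard pose metrics; a minimal sketch of how they are typically computed is below. The function name, array shapes, and yaw-only orientation convention are assumptions for illustration, not the authors' evaluation code.

```python
import numpy as np

def pose_errors(pred_xyz, gt_xyz, pred_yaw_deg, gt_yaw_deg):
    """Mean localization error (m) and mean absolute yaw error (deg),
    the two quantities the MonoLDP abstract reports (0.104 m, 1.66 deg)."""
    loc_err = np.linalg.norm(pred_xyz - gt_xyz, axis=1).mean()
    # Wrap angle differences into [-180, 180) before averaging,
    # so 359 deg vs 1 deg counts as a 2 deg error, not 358 deg.
    dyaw = (pred_yaw_deg - gt_yaw_deg + 180.0) % 360.0 - 180.0
    return loc_err, np.abs(dyaw).mean()

pred = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
gt   = np.array([[1.1, 0.0, 0.0], [0.0, 2.0, 0.0]])
loc, ang = pose_errors(pred, gt,
                       np.array([359.0, 10.0]), np.array([1.0, 10.0]))
# loc is 0.05 m; ang is 1.0 deg (the 359/1 wrap-around gives a 2 deg error)
```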

NeurIPS 2025 Conference Paper

PSMBench: A Benchmark and Dataset for Evaluating LLMs Extraction of Protocol State Machines from RFC Specifications

  • Zilin Shen
  • Xinyu Luo
  • Imtiaz Karim
  • Elisa Bertino

Accurately extracting protocol-state machines (PSMs) from the long, densely written Request-for-Comments (RFC) standards that govern Internet-scale communication remains a bottleneck for automated security analysis and protocol testing. In this paper, we introduce RFC2PSM, the first large-scale dataset that pairs 1,580 pages of cleaned RFC text with 108 manually validated states and 297 transitions covering 14 widely deployed protocols spanning the data-link, transport, session, and application layers. Built on this corpus, we propose PsmBench, a benchmark that (i) feeds chunked RFC text to an LLM, (ii) prompts the model to emit a machine-readable PSM, and (iii) scores the output with structure-aware, semantic fuzzy-matching metrics that reward partially correct graphs. A comprehensive baseline study of nine state-of-the-art open and commercial LLMs reveals a persistent state–transition gap: models identify many individual states (up to $0.82$ F1) but struggle to assemble coherent transition graphs ($\leq 0.38$ F1), highlighting challenges in long-context reasoning, alias resolution, and action/event disambiguation. We release the dataset, evaluation code, and all model outputs as open source, providing a fully reproducible starting point for future work on reasoning over technical prose and generating executable graph structures. RFC2PSM and PsmBench aim to catalyze cross-disciplinary progress toward LLMs that can interpret and verify the protocols that keep the Internet safe.
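A toy version of the fuzzy, structure-aware scoring described above can be sketched as follows. The similarity threshold, tuple layout, and TCP-like example states are illustrative assumptions, not PsmBench's actual metrics; it only shows why transition F1 trails state F1 (a transition must match on source, target, and trigger simultaneously).

```python
from difflib import SequenceMatcher

def fuzzy_eq(a, b, thresh=0.8):
    # Character-level fuzzy match, standing in for semantic matching.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= thresh

def f1(pred, gold, match):
    """F1 where a predicted item scores if it matches any gold item."""
    if not pred or not gold:
        return 0.0
    prec = sum(any(match(p, g) for g in gold) for p in pred) / len(pred)
    rec = sum(any(match(g, p) for p in pred) for g in gold) / len(gold)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold_states = ["CLOSED", "LISTEN", "SYN_SENT", "ESTABLISHED"]
pred_states = ["closed", "listen", "syn-sent"]       # aliases still match
state_f1 = f1(pred_states, gold_states, fuzzy_eq)

def trans_match(p, g):
    # A transition counts only if source, target, AND trigger all match.
    return all(fuzzy_eq(x, y) for x, y in zip(p, g))

gold_trans = [("CLOSED", "LISTEN", "passive open"),
              ("LISTEN", "SYN_SENT", "send SYN")]
pred_trans = [("closed", "listen", "passive open")]
trans_f1 = f1(pred_trans, gold_trans, trans_match)
# state_f1 (about 0.86) exceeds trans_f1 (about 0.67)
```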

NeurIPS 2025 Conference Paper

SPACE: SPike-Aware Consistency Enhancement for Test-Time Adaptation in Spiking Neural Networks

  • Xinyu Luo
  • Kecheng Chen
  • Pao-Sheng Sun
  • Chris Xing TIAN
  • Arindam Basu
  • Haoliang Li

Spiking Neural Networks (SNNs), as a biologically plausible alternative to Artificial Neural Networks (ANNs), have demonstrated advantages in terms of energy efficiency, temporal processing, and biological plausibility. However, SNNs are highly sensitive to distribution shifts, which can significantly degrade their performance in real-world scenarios. Traditional test-time adaptation (TTA) methods designed for ANNs often fail to address the unique computational dynamics of SNNs, such as sparsity and temporal spiking behavior. To address these challenges, we propose SPike-Aware Consistency Enhancement (SPACE), the first source-free and single-instance TTA method specifically designed for SNNs. SPACE leverages the inherent spike dynamics of SNNs to maximize the consistency of spike-behavior-based local feature maps across augmented versions of a single test sample, enabling robust adaptation without requiring source data. We evaluate SPACE on multiple datasets. Furthermore, SPACE exhibits robust generalization across diverse network architectures, consistently enhancing the performance of SNNs on CNN, Transformer, and ConvLSTM architectures. Experimental results show that SPACE outperforms state-of-the-art ANN methods while maintaining lower computational cost, highlighting its effectiveness and robustness for SNNs in real-world settings. The code will be available at https://github.com/ethanxyluo/SPACE.
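The core idea — encouraging consistent spike behavior across augmented views of one test sample — can be sketched with a simple pairwise objective over firing-rate maps. The tensor shapes and the mean-squared-deviation form are assumptions for illustration; the paper's actual consistency objective may differ.

```python
import numpy as np

def spike_rate_maps(spikes):
    """spikes: (A, T, C, H, W) binary spike trains for A augmented views
    over T time steps. Returns per-view firing-rate maps of shape (A, C, H, W)."""
    return spikes.mean(axis=1)

def consistency_loss(spikes):
    """Mean squared deviation of each view's rate map from the consensus map.
    Minimizing this pushes augmented views toward consistent spike behavior."""
    rates = spike_rate_maps(spikes)
    mean_map = rates.mean(axis=0, keepdims=True)
    return ((rates - mean_map) ** 2).mean()

rng = np.random.default_rng(0)
# 4 augmented views, 8 time steps, 2 channels, 5x5 spatial map.
views = (rng.random((4, 8, 2, 5, 5)) < 0.3).astype(float)
identical = np.repeat(views[:1], 4, axis=0)  # perfectly consistent views
```

The loss is zero when all views spike identically and grows with disagreement, which is the signal a single-instance, source-free adaptation step can descend on.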

ICML 2025 Conference Paper

Stacey: Promoting Stochastic Steepest Descent via Accelerated ℓp-Smooth Nonconvex Optimization

  • Xinyu Luo
  • Site Bai
  • Bolian Li
  • Petros Drineas
  • Ruqi Zhang
  • Brian Bullins

While popular optimization methods such as SGD, AdamW, and Lion depend on steepest descent updates in either $\ell_2$ or $\ell_\infty$ norms, there remains a critical gap in handling the non-Euclidean structure observed in modern deep network training. In this work, we address this need by introducing a new accelerated $\ell_p$ steepest descent algorithm, called Stacey, which uses interpolated primal-dual iterate sequences to effectively navigate non-Euclidean smooth optimization tasks. In addition to providing novel theoretical guarantees for the foundations of our algorithm, we empirically compare our approach against these popular methods on tasks including image classification and large language model (LLM) pretraining, demonstrating both faster convergence and higher final accuracy. We further evaluate different values of $p$ across various models and datasets, underscoring the importance and efficiency of non-Euclidean approaches over standard Euclidean methods. Code can be found at https://github.com/xinyuluo8561/Stacey.
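For context, the basic primitive here — the steepest-descent direction under an $\ell_p$ norm — has a closed form via the dual exponent $q = p/(p-1)$: $d_i = -\mathrm{sign}(g_i)\,|g_i|^{q-1} / \|g\|_q^{q-1}$, which recovers normalized gradient descent at $p=2$ and sign descent (as in Lion) as $p \to \infty$. The sketch below shows only this primitive, not Stacey's accelerated primal-dual scheme.

```python
import numpy as np

def lp_steepest_direction(g, p):
    """Direction d with ||d||_p = 1 minimizing <g, d>:
    d_i = -sign(g_i) |g_i|^(q-1) / ||g||_q^(q-1), where 1/p + 1/q = 1."""
    q = 1.0 if np.isinf(p) else p / (p - 1.0)
    mag = np.abs(g) ** (q - 1.0)
    scale = (np.abs(g) ** q).sum() ** ((q - 1.0) / q) if q > 1.0 else 1.0
    return -np.sign(g) * mag / scale

g = np.array([3.0, -4.0])
d2 = lp_steepest_direction(g, 2.0)      # -g / ||g||_2 = [-0.6, 0.8]
dinf = lp_steepest_direction(g, np.inf) # -sign(g) = [-1.0, 1.0]
```

Intermediate values of $p$ interpolate between these two regimes, which is the design space the paper's experiments explore.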

ICML 2023 Conference Paper

Dimensionality Reduction for General KDE Mode Finding

  • Xinyu Luo
  • Christopher Musco
  • Cas Widdershoven

Finding the mode of a high dimensional probability distribution $\mathcal{D}$ is a fundamental algorithmic problem in statistics and data analysis. There has been particular interest in efficient methods for solving the problem when $\mathcal{D}$ is represented as a mixture model or kernel density estimate, although few algorithmic results with worst-case approximation and runtime guarantees are known. In this work, we significantly generalize a result of (LeeLiMusco: 2021) on mode approximation for Gaussian mixture models. We develop randomized dimensionality reduction methods for mixtures involving a broader class of kernels, including the popular logistic, sigmoid, and generalized Gaussian kernels. As in Lee et al.'s work, our dimensionality reduction results yield quasi-polynomial algorithms for mode finding with multiplicative accuracy $(1-\epsilon)$ for any $\epsilon > 0$. Moreover, when combined with gradient descent, they yield efficient practical heuristics for the problem. In addition to our positive results, we prove a hardness result for box kernels, showing that there is no polynomial time algorithm for finding the mode of a kernel density estimate, unless $\mathit{P} = \mathit{NP}$. Obtaining similar hardness results for kernels used in practice (like Gaussian or logistic kernels) is an interesting future direction.
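The gradient-based heuristic mentioned in the abstract can be sketched for a plain Gaussian KDE: run gradient ascent on the density from several starting points and keep the best. The bandwidth, step size, and restart-from-every-point strategy are assumptions for illustration; this shows neither the paper's dimensionality-reduction method nor its guarantees.

```python
import numpy as np

def gaussian_kde(x, points, h):
    """Unnormalized Gaussian KDE value at x (normalization doesn't move the mode)."""
    return np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * h * h)).sum()

def kde_mode_ascent(points, h=0.5, steps=200, lr=0.1):
    """Gradient ascent on the KDE restarted from each data point; keep the best."""
    best_x, best_val = None, -np.inf
    for x0 in points:
        x = x0.astype(float).copy()
        for _ in range(steps):
            diff = points - x                               # (n, d)
            w = np.exp(-np.sum(diff ** 2, axis=1) / (2 * h * h))
            # grad of the KDE is proportional to sum_i w_i (p_i - x);
            # lr absorbs the constant 1/h^2 factor.
            x = x + lr * (w[:, None] * diff).sum(axis=0) / len(points)
        val = gaussian_kde(x, points, h)
        if val > best_val:
            best_x, best_val = x, val
    return best_x

rng = np.random.default_rng(1)
# Dense cluster near the origin plus a few outliers near (5, 5):
pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (5, 2))])
mode = kde_mode_ascent(pts)  # lands near the dense cluster, not the outliers
```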