Arrow Research search

Author name cluster

Sha Lu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers
2 author rows

Possible papers

7

JBHI Journal 2025 Journal Article

Leveraging Channel Coherence in Long-Term iEEG Data for Seizure Prediction

  • Sha Lu
  • Lin Liu
  • Jiuyong Li
  • Jordan Chambers
  • Mark J. Cook
  • David B. Grayden

Epilepsy affects millions worldwide, posing significant challenges due to the erratic and unexpected nature of seizures. Despite advancements, existing seizure prediction techniques remain limited in their ability to forecast seizures with high accuracy, impacting the quality of life for those with epilepsy. This research introduces the Coherence-based Seizure Prediction (CoSP) method, which integrates coherence analysis with deep learning to enhance seizure prediction efficacy. In CoSP, electroencephalography (EEG) recordings are divided into 10-second segments to extract channel pairwise coherence. This coherence data is then used to train a four-layer convolutional neural network to predict the probability of being in a preictal state. The predicted probabilities are then processed to issue seizure warnings. CoSP was evaluated in a pseudo-prospective setting using long-term iEEG data from ten patients in the NeuroVista seizure advisory system. CoSP demonstrated promising predictive performance across a range of preictal intervals (4 to 180 minutes). CoSP achieved a median Seizure Sensitivity (SS) of 0.79, a median false alarm rate of 0.15 per hour, and a median Time in Warning (TiW) of 27%, highlighting its potential for accurate and reliable seizure prediction. Statistical analysis confirmed that CoSP significantly outperformed chance (p = 0.001) and other baseline methods (p < 0.05) under similar evaluation configurations.
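As a rough illustration of the coherence features CoSP extracts (a sketch only; the paper's exact channel set, windowing parameters, and network are not reproduced here), magnitude-squared coherence between one channel pair over a 10-second segment can be estimated with Welch averaging:

```python
import numpy as np

def coherence(x, y, fs, nperseg=256):
    """Magnitude-squared coherence via Welch averaging with a Hann window."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    pxx = pyy = 0.0
    pxy = 0.0 + 0.0j
    for s in range(0, len(x) - nperseg + 1, step):
        fx = np.fft.rfft(win * x[s:s + nperseg])
        fy = np.fft.rfft(win * y[s:s + nperseg])
        pxx = pxx + (fx * fx.conj()).real   # averaged auto-spectrum of x
        pyy = pyy + (fy * fy.conj()).real   # averaged auto-spectrum of y
        pxy = pxy + fx * fy.conj()          # averaged cross-spectrum
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, np.abs(pxy) ** 2 / (pxx * pyy)

# Two synthetic channels sharing a 10 Hz rhythm in independent noise,
# over a 10-second segment (the segment length used by CoSP).
rng = np.random.default_rng(0)
fs = 400
t = np.arange(0, 10, 1 / fs)
common = np.sin(2 * np.pi * 10 * t)
ch_a = common + rng.standard_normal(t.size)
ch_b = common + rng.standard_normal(t.size)
freqs, coh = coherence(ch_a, ch_b, fs)
```

The coherence peaks near the shared 10 Hz component and stays low at frequencies where the channels carry only independent noise; CoSP feeds such pairwise coherence values (over all channel pairs) into its CNN.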

IROS Conference 2024 Conference Paper

BEV-ODOM: Reducing Scale Drift in Monocular Visual Odometry with BEV Representation

  • Yufei Wei
  • Sha Lu
  • Fuzhang Han
  • Rong Xiong
  • Yue Wang 0020

Monocular visual odometry (MVO) is vital in autonomous navigation and robotics, providing a cost-effective and flexible motion tracking solution, but the inherent scale ambiguity in monocular setups often leads to cumulative errors over time. In this paper, we present BEV-ODOM, a novel MVO framework leveraging a Bird's Eye View (BEV) representation to address scale drift. Unlike existing approaches, BEV-ODOM integrates a depth-based perspective-view (PV)-to-BEV encoder, a correlation feature extraction neck, and a CNN-MLP-based decoder, enabling it to estimate motion across three degrees of freedom without the need for depth supervision or complex optimization techniques. Our framework reduces scale drift in long-term sequences and achieves accurate motion estimation across various datasets, including NCLT, Oxford, and KITTI. The results indicate that BEV-ODOM outperforms current MVO methods, demonstrating reduced scale drift and higher accuracy.

ICRA Conference 2024 Conference Paper

RGBD-based Image Goal Navigation with Pose Drift: A Topo-metric Graph based Approach

  • Shuhao Ye
  • Yuxiang Cui
  • Hao Sha 0002
  • Sha Lu
  • Yu Zhang 0018
  • Rong Xiong
  • Yue Wang 0020

Image-goal navigation in unknown environments with sensor error is of considerable difficulty for autonomous robots. In this paper, we propose a drift-resisting topo-metric graph to map the environment and localize the robot using only relative poses. The error-sharing mechanism under this representation effectively reduces the impact of the accumulated drift commonly encountered in navigation tasks. A reinforcement-learning-based policy is proposed for sub-goal selection on this topo-metric graph, which improves navigation efficiency by handling task-driven features that take both image correlation and topological layout into account. We adopt a modular system design with this map representation and graph policy, leaving the low-level motion planning problems to classical controllers for better stability and generalizability. Experimental results demonstrate that our method achieves robust navigation performance in a variety of unknown environments, with up to a 50% higher success rate than existing methods in complex environments with odometry drift.

ICRA Conference 2023 Conference Paper

DeepRING: Learning Roto-translation Invariant Representation for LiDAR based Place Recognition

  • Sha Lu
  • Xuecheng Xu
  • Li Tang 0006
  • Rong Xiong
  • Yue Wang 0020

LiDAR-based place recognition is popular for loop closure detection and re-localization. In recent years, deep learning has brought improvements to place recognition through learnable feature extraction. However, these methods degenerate when the robot re-visits previous places with a large perspective difference. To address this challenge, we propose DeepRING to learn a roto-translation invariant representation from a LiDAR scan, so that a robot visiting the same place from a different perspective obtains similar representations. There are two keys in DeepRING: the feature is extracted from the sinogram, and the feature is aggregated by the magnitude spectrum. These two steps keep the final representation both discriminative and roto-translation invariant. Moreover, we formulate place recognition as a one-shot learning problem with each place being a class, leveraging relation learning to build representation similarity. Substantial experiments are carried out on public datasets, validating the effectiveness of each proposed component and showing that DeepRING outperforms the comparative methods, especially in dataset-level generalization.
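The rotation invariance of the magnitude-spectrum aggregation can be checked on a toy 1D example (a sketch of the underlying DFT property only, not of DeepRING itself): a rotation of the scan circularly shifts its angular signature, and the magnitude of the DFT is unchanged by circular shifts.

```python
import numpy as np

rng = np.random.default_rng(1)
signature = rng.standard_normal(360)   # toy angular signature, 1 bin per degree
rotated = np.roll(signature, 42)       # the same place seen 42 degrees rotated

mag = np.abs(np.fft.fft(signature))
mag_rot = np.abs(np.fft.fft(rotated))
# A circular shift multiplies each DFT coefficient by a unit-magnitude
# phase factor, so the two magnitude spectra are identical; a descriptor
# built from them is therefore invariant to the rotation between visits.
```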

IROS Conference 2022 Conference Paper

One RING to Rule Them All: Radon Sinogram for Place Recognition, Orientation and Translation Estimation

  • Sha Lu
  • Xuecheng Xu
  • Huan Yin
  • Zexi Chen
  • Rong Xiong
  • Yue Wang 0020

LiDAR-based global localization is a fundamental problem for mobile robots. It consists of two stages, place recognition and pose estimation, which together yield the current orientation and translation using only the current scan as a query and a database of map scans. Inspired by the definition of a recognized place, we argue that a good global localization solution should maintain pose estimation accuracy even at a lower place density. Following this idea, we propose a novel framework for sparse place-based global localization, which utilizes a unified, learning-free representation, the Radon sinogram (RING), for all sub-tasks. Based on a theoretical derivation, a translation-invariant descriptor and an orientation-invariant metric are proposed for place recognition, achieving certifiable robustness against arbitrary orientation and large translation between query and map scans. In addition, we utilize the properties of RING to derive a globally convergent solver for both orientation and translation estimation, arriving at global localization. Evaluation of the proposed RING-based framework validates its feasibility and demonstrates superior performance even at a lower place density.

ICRA Conference 2022 Conference Paper

Translation Invariant Global Estimation of Heading Angle Using Sinogram of LiDAR Point Cloud

  • Xiaqing Ding
  • Xuecheng Xu
  • Sha Lu
  • Yanmei Jiao
  • Mengwen Tan
  • Rong Xiong
  • Huanjun Deng
  • Mingyang Li 0001

Global point cloud registration is an essential module for localization, whose main difficulty lies in estimating the rotation globally without an initial value. With the aid of gravity alignment, the degrees of freedom in point cloud registration can be reduced to four (4DoF), in which only the heading angle is required for rotation estimation. In this paper, we propose a fast and accurate global heading angle estimation method for gravity-aligned point clouds. Our key idea is to generate a translation-invariant representation based on the Radon transform, allowing us to solve the decoupled heading angle globally with circular cross-correlation. Moreover, for heading angle estimation between point clouds with different distributions, we implement this heading angle estimator as a differentiable module to train a feature extraction network end-to-end. The experimental results validate the effectiveness of the proposed method in heading angle estimation and show better performance compared with other methods.
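The decoupled heading search amounts to a circular cross-correlation over all candidate angles, which the FFT evaluates in O(N log N). A minimal sketch on a toy 1D angular signature (the paper operates on Radon-transform representations of full point clouds, which this does not reproduce):

```python
import numpy as np

def heading_shift(a, b):
    """Return the circular shift k (in bins) such that b ~= np.roll(a, k),
    found as the argmax of the FFT-based circular cross-correlation."""
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    # corr[n] = sum_p a[p] * b[(p - n) % N], peaking at n = -k mod N
    return (-np.argmax(corr)) % len(a)

rng = np.random.default_rng(2)
sig = rng.standard_normal(360)   # toy signature, 1 bin per degree
query = np.roll(sig, 37)         # the same scene rotated by 37 degrees
k = heading_shift(sig, query)    # recovers the 37-degree heading offset
```

On real data, the abstract notes the authors additionally make this estimator differentiable so a feature extraction network can be trained end-to-end through it.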

AAAI Conference 2018 Conference Paper

Energy-Efficient Automatic Train Driving by Learning Driving Patterns

  • Jin Huang
  • Yue Gao
  • Sha Lu
  • Xibin Zhao
  • Yangdong Deng
  • Ming Gu

Railway is regarded as the most sustainable means of modern transportation. With the fast growth of fleet sizes and railway mileage, the energy consumption of trains is becoming a serious concern globally. The nature of railways offers a unique opportunity to optimize the energy efficiency of locomotives by taking advantage of the undulating terrain along a route. The derivation of an energy-optimal train driving solution, however, proves to be a significant challenge due to the high dimension, nonlinearity, complex constraints, and time-varying characteristics of the problem. An optimized solution can only be attained by considering both the complex environmental conditions of a given route and the inherent characteristics of a locomotive. To tackle the problem, this paper employs a high-order correlation learning method for online generation of energy-optimized train driving solutions. Based on the driving data of experienced human drivers, a hypergraph model is used to learn the optimal embedding from the specified features for the decision of a driving operation. First, we design a feature set capturing the driving status. Next, all the training data are formulated as a hypergraph and an inductive learning process is conducted to obtain the embedding matrix. The hypergraph model can then be used for real-time generation of driving operations. We also propose a reinforcement updating scheme, which enables sustained enhancement of the hypergraph model in industrial applications. The learned model can determine an optimized driving operation in real time, as tested on a hardware-in-the-loop platform. Validation experiments showed that the energy consumption of the proposed solution is around 10% lower than that of average human drivers.