
Author name cluster

Wei Yin

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
2 author rows

Possible papers (6)

EAAI · 2025 · Journal Article

A novel approach of causality matrix embedded into the Graph Neural Network for forecasting the price of Bitcoin

  • Xinxin Luo
  • Wei Yin
  • Bo Xiao
  • Jia Cao

Accurately forecasting Bitcoin prices presents significant challenges due to its high volatility and the complex interactions among macroeconomic and crypto-specific variables. Traditional forecasting models often rely on correlations, which fail to capture the intrinsic causal relationships that drive price fluctuations. In this paper, we propose a novel method that integrates a Cause & Effect (C&E) Matrix within a Graph Neural Network (GNN) to explicitly model these causal dependencies. Unlike correlations, causal relationships remain relatively stable even under changing market conditions, making them more reliable for robust and interpretable forecasting. Our approach begins with causal analysis to identify the key variables influencing Bitcoin’s price, after which these causal links are translated into directed graph structures. These structures allow for the extraction of spatio-temporal features via GNN, capturing the underlying dynamics of Bitcoin’s price movements. Experimental results demonstrate that our C&E embedded GNN significantly improves short-term Bitcoin price forecasts compared to baseline models, highlighting the critical role of causality in enhancing prediction accuracy and model interpretability in volatile markets.
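
A minimal sketch of the idea described above, under stated assumptions: pairwise causal-test p-values are binarized into a directed adjacency matrix, which then drives one step of graph message passing. The function and variable names (causal_adjacency, SimpleGraphLayer, pvalues) are illustrative and not taken from the paper's code.

```python
# Illustrative only: turn causal-test p-values into a directed, row-normalized
# adjacency matrix and run one graph-convolution step over it.
import torch

def causal_adjacency(pvalues: torch.Tensor, alpha: float = 0.05) -> torch.Tensor:
    """pvalues[i, j]: p-value of the test 'variable j causes variable i'.
    Edges with p < alpha are kept, so row i lists the causal parents of node i."""
    adj = (pvalues < alpha).float()
    adj.fill_diagonal_(1.0)                            # keep self-loops
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return adj / deg                                   # row-normalize for message passing

class SimpleGraphLayer(torch.nn.Module):
    """One message-passing step: aggregate each node's causal parents, then transform."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.lin(adj @ node_feats))

# Toy usage: 5 market variables, each with an 8-dim feature (e.g. lagged values).
pvals = torch.rand(5, 5)
adj = causal_adjacency(pvals)
out = SimpleGraphLayer(8, 16)(torch.randn(5, 8), adj)
```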

EAAI · 2025 · Journal Article

Dynamic correlation graph convolution network with embedded temporal correlation extraction for stock price forecasting

  • Fang He
  • Wei Yin
  • Yilun Jin
  • Zhengyang Chen

Stock price forecasting has become a significant and complex research area within financial technology. The dynamic correlations among stocks and the inherent noise in price volatility present considerable challenges in accurately forecasting stock prices and enhancing investment returns. This paper introduces a novel Dynamic Correlation Graph Convolution Network (DyCGCN) with embedded temporal correlation extraction. First, we propose a dual-scale dynamic graph generation method to capture the topological relationships among stocks. Second, we develop a dynamic correlation-temporal convolution module that extracts high-level temporal correlations. Third, we introduce a prospect theory-guided multi-strategy loss function that accommodates the diverse risk preferences of investors. Furthermore, we present a joint regression-classification learning method to extract and leverage stock trend information. Experiments conducted on four real-world datasets demonstrate the superiority of DyCGCN, achieving an average 24.7% reduction in prediction error and a 10.5% improvement in predictive accuracy over baseline models, underscoring its strong potential for practical stock price forecasting.
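
As a hedged illustration of the dynamic graph idea, the sketch below builds a per-day stock graph from a rolling-window correlation matrix; the window length, threshold, and function name are assumptions for the example, not the paper's actual settings.

```python
# Illustrative only: per-day adjacency from a rolling correlation window.
import numpy as np

def dynamic_correlation_graph(returns: np.ndarray, t: int,
                              window: int = 20, threshold: float = 0.5) -> np.ndarray:
    """returns: (T, N) daily returns; build an N x N adjacency at time t."""
    win = returns[t - window:t]                      # most recent `window` days
    corr = np.corrcoef(win, rowvar=False)            # N x N correlation matrix
    adj = (np.abs(corr) >= threshold).astype(float)  # keep strong co-movements
    np.fill_diagonal(adj, 1.0)
    return adj / adj.sum(axis=1, keepdims=True)      # row-normalize for a GCN

# Toy usage: 250 trading days, 30 stocks.
rets = np.random.randn(250, 30) * 0.01
adj_t = dynamic_correlation_graph(rets, t=200)
```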

NeurIPS · 2024 · Conference Paper

DC-Gaussian: Improving 3D Gaussian Splatting for Reflective Dash Cam Videos

  • Linhan Wang
  • Kai Cheng
  • Shuo Lei
  • Shengkun Wang
  • Wei Yin
  • Chenyang Lei
  • Xiaoxiao Long
  • Chang-Tien Lu

We present DC-Gaussian, a new method for generating novel views from in-vehicle dash cam videos. While neural rendering techniques have made significant strides in driving scenarios, existing methods are primarily designed for videos collected by autonomous vehicles. However, these videos are limited in both quantity and diversity compared to dash cam videos, which are more widely used across various types of vehicles and capture a broader range of scenarios. Dash cam videos often suffer from severe obstructions such as reflections and occlusions on the windshields, which significantly impede the application of neural rendering techniques. To address this challenge, we develop DC-Gaussian based on the recent real-time neural rendering technique 3D Gaussian Splatting (3DGS). Our approach includes an adaptive image decomposition module to model reflections and occlusions in a unified manner. Additionally, we introduce illumination-aware obstruction modeling to manage reflections and occlusions under varying lighting conditions. Lastly, we employ a geometry-guided Gaussian enhancement strategy to improve rendering details by incorporating additional geometry priors. Experiments on self-captured and public dash cam videos show that our method not only achieves state-of-the-art performance in novel view synthesis, but also accurately reconstructs the captured scenes while removing obstructions.
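
Purely as an illustration of the image-decomposition idea (not DC-Gaussian's actual formulation), the sketch below blends a rendered scene with a learned, per-camera obstruction layer via a per-pixel opacity; all names and the blending rule are assumptions.

```python
# Illustrative only: composite a rendered scene with a learned obstruction layer.
import torch

class ObstructionLayer(torch.nn.Module):
    """Learn one obstruction image and a per-pixel opacity for a fixed camera."""
    def __init__(self, height: int, width: int):
        super().__init__()
        self.obstruction = torch.nn.Parameter(torch.zeros(3, height, width))
        self.opacity_logit = torch.nn.Parameter(torch.zeros(1, height, width))

    def forward(self, rendered_scene: torch.Tensor) -> torch.Tensor:
        alpha = torch.sigmoid(self.opacity_logit)           # per-pixel blend weight
        obstruction = torch.sigmoid(self.obstruction)        # keep colors in [0, 1]
        return (1 - alpha) * rendered_scene + alpha * obstruction

# Toy usage: composite a (here random) rendering with the obstruction layer.
layer = ObstructionLayer(64, 96)
composited = layer(torch.rand(3, 64, 96))
```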

IROS · 2024 · Conference Paper

SGDE: Stereo Guided Depth Estimation for 360° Camera Sets

  • Jialei Xu
  • Wei Yin
  • Dong Gong
  • Junjun Jiang
  • Xianming Liu 0005

Depth estimation is a critical technology in autonomous driving, and multi-camera systems are often used to achieve a 360° perception. These 360° camera sets often have limited or low-quality overlap regions, making multi-view stereo methods infeasible for the entire image. Alternatively, monocular methods may not produce consistent cross-view predictions. To address these issues, we propose the Stereo Guided Depth Estimation (SGDE) method, which enhances depth estimation of the full image by explicitly utilizing multi-view stereo results on the overlap. We suggest building virtual pinhole cameras to resolve the distortion problem of fisheye cameras and unify the processing for the two types of 360° cameras. For handling the varying noise on camera poses caused by unstable movement, the approach employs a self-calibration method to obtain highly accurate relative poses of the adjacent cameras with minor overlap. These enable the use of robust stereo methods to obtain a high-quality depth prior in the overlap region. This prior serves not only as an additional input but also as pseudo-labels that enhance the accuracy of depth estimation methods and improve cross-view prediction consistency. The effectiveness of SGDE is evaluated on one fisheye camera dataset, Synthetic Urban, and two pinhole camera datasets, DDAD and nuScenes. Our experiments demonstrate that SGDE is effective for both supervised and self-supervised depth estimation, and highlight the potential of our method for advancing autonomous driving technology. Our project page is at https://github.com/JialeiXu/SGDE.
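
A small sketch of how a stereo depth prior in the overlap region could supervise a full-image monocular prediction, assuming a masked L1 pseudo-label loss; the loss form and names are illustrative and may differ from what SGDE actually uses.

```python
# Illustrative only: masked pseudo-label loss from a stereo prior in the overlap.
import torch

def overlap_pseudo_label_loss(mono_depth: torch.Tensor,
                              stereo_depth: torch.Tensor,
                              overlap_mask: torch.Tensor) -> torch.Tensor:
    """mono_depth, stereo_depth: (H, W); overlap_mask: (H, W) bool, True where stereo is valid."""
    diff = (mono_depth - stereo_depth).abs()
    return (diff * overlap_mask).sum() / overlap_mask.sum().clamp(min=1)

# Toy usage: a stereo prior that only covers a vertical overlap strip.
mono = torch.rand(128, 256) * 80
stereo = torch.rand(128, 256) * 80
mask = torch.zeros(128, 256, dtype=torch.bool)
mask[:, :64] = True
loss = overlap_pseudo_label_loss(mono, stereo, mask)
```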

NeurIPS · 2022 · Conference Paper

Hierarchical Normalization for Robust Monocular Depth Estimation

  • Chi Zhang
  • Wei Yin
  • Billzb Wang
  • Gang Yu
  • Bin Fu
  • Chunhua Shen

In this paper, we address monocular depth estimation with deep neural networks. To enable training of deep monocular estimation models with various sources of datasets, state-of-the-art methods adopt image-level normalization strategies to generate affine-invariant depth representations. However, learning with image-level normalization mainly emphasizes the relations of pixel representations with the global statistics of the image, such as the structure of the scene, while fine-grained depth differences may be overlooked. In this paper, we propose a novel multi-scale depth normalization method that hierarchically normalizes the depth representations based on spatial information and depth distributions. Compared with previous normalization strategies applied only at the holistic image level, the proposed hierarchical normalization can effectively preserve fine-grained details and improve accuracy. We present two strategies that define the hierarchical normalization contexts in the depth domain and the spatial domain, respectively. Our extensive experiments show that the proposed normalization strategy remarkably outperforms previous normalization methods, and we set a new state of the art on five zero-shot transfer benchmark datasets.
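
The sketch below illustrates one plausible reading of hierarchical normalization: depth is normalized with local statistics computed at several window sizes, giving one normalized map per scale. The window sizes and mean/std statistics are assumptions for the example; the paper defines its own normalization contexts.

```python
# Illustrative only: multi-scale local normalization of a depth map.
import torch
import torch.nn.functional as F

def windowed_normalize(depth: torch.Tensor, window: int, eps: float = 1e-6) -> torch.Tensor:
    """depth: (1, 1, H, W). Normalize each pixel by the mean/std of its local window."""
    pad = window // 2
    mean = F.avg_pool2d(depth, window, stride=1, padding=pad, count_include_pad=False)
    sq_mean = F.avg_pool2d(depth ** 2, window, stride=1, padding=pad, count_include_pad=False)
    std = (sq_mean - mean ** 2).clamp(min=0).sqrt()
    return (depth - mean) / (std + eps)

def hierarchical_normalize(depth: torch.Tensor, windows=(7, 31, 127)):
    """Return one normalized map per spatial scale; a loss can be applied to each."""
    return [windowed_normalize(depth, w) for w in windows]

# Toy usage on a 1 x 1 x 96 x 128 depth map.
maps = hierarchical_normalize(torch.rand(1, 1, 96, 128))
```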

AAAI · 2020 · Conference Paper

Task-Aware Monocular Depth Estimation for 3D Object Detection

  • Xinlong Wang
  • Wei Yin
  • Tao Kong
  • Yuning Jiang
  • Lei Li
  • Chunhua Shen

Monocular depth estimation enables 3D perception from a single 2D image, thus attracting much research attention for years. Almost all methods treat foreground and background regions (“things and stuff”) in an image equally. However, not all pixels are equal. The depth of foreground objects plays a crucial role in 3D object recognition and localization. To date, how to boost the depth prediction accuracy of foreground objects is rarely discussed. In this paper, we first analyze the data distributions and interaction of foreground and background, then propose the foreground-background separated monocular depth estimation (ForeSeE) method to estimate the foreground and background depth using separate optimization objectives and decoders. Our method significantly improves the depth estimation performance on foreground objects. Applying ForeSeE to 3D object detection, we achieve 7.5 AP gains and set new state-of-the-art results among other monocular methods. Code will be available at: https://github.com/WXinlong/ForeSeE.
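
As a hedged sketch of the foreground/background separation idea, the example below attaches two small decoder heads to shared features and supervises each only on its own region; the tiny conv heads and L1 losses are illustrative, not ForeSeE's actual architecture or objective.

```python
# Illustrative only: separate decoders and masked losses for foreground/background depth.
import torch
import torch.nn as nn

class ForeBackDepthHead(nn.Module):
    """Two decoder heads over shared features: one for objects, one for background."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.fg_decoder = nn.Conv2d(feat_dim, 1, kernel_size=3, padding=1)
        self.bg_decoder = nn.Conv2d(feat_dim, 1, kernel_size=3, padding=1)

    def forward(self, feats: torch.Tensor):
        return self.fg_decoder(feats), self.bg_decoder(feats)

def separated_loss(fg_pred, bg_pred, gt_depth, fg_mask):
    """Supervise the foreground decoder on object pixels and the background decoder elsewhere."""
    fg = ((fg_pred - gt_depth).abs() * fg_mask).sum() / fg_mask.sum().clamp(min=1)
    bg = ((bg_pred - gt_depth).abs() * (1 - fg_mask)).sum() / (1 - fg_mask).sum().clamp(min=1)
    return fg + bg

# Toy usage with random features, depth, and a foreground mask.
head = ForeBackDepthHead()
fg_pred, bg_pred = head(torch.randn(1, 64, 48, 64))
gt = torch.rand(1, 1, 48, 64) * 80
mask = (torch.rand(1, 1, 48, 64) > 0.7).float()
loss = separated_loss(fg_pred, bg_pred, gt, mask)
```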