Arrow Research search

Author name cluster

Yanli Liu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

9 papers · 1 author row

Possible papers (9)

IJCAI 2024 · Conference Paper

Structure-Aware Spatial-Temporal Interaction Network for Video Shadow Detection

  • Housheng Wei
  • Guanyu Xing
  • Jingwei Liao
  • Yanci Zhang
  • Yanli Liu

Video shadow detection faces significant challenges due to ambiguous semantics and variable shapes. Existing video shadow detection algorithms typically overlook fine shadow details, resulting in inconsistent detection across consecutive frames in complex real-world video scenarios. To address this issue, we propose a spatial-temporal feature interaction strategy that refines and enhances global shadow semantics with local prior features when modeling shadow relations between frames. Moreover, we propose a structure-aware shadow prediction module that focuses on modeling the distance relation between local shadow edges and regions. Quantitative experimental results demonstrate that our approach significantly outperforms state-of-the-art methods, providing stable and consistent shadow detection in complex video shadow scenarios.
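
A minimal sketch of the kind of spatial-temporal feature interaction the abstract describes, assuming PyTorch; the module name, tensor shapes, and the residual-plus-norm design are illustrative assumptions, not the paper's implementation:

```python
# Sketch (not the authors' code): global per-frame shadow semantics
# cross-attend to local prior features pooled across frames.
import torch
import torch.nn as nn

class SpatialTemporalInteraction(nn.Module):
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        # Cross-attention: global semantics query the local prior features.
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, global_feat: torch.Tensor, local_prior: torch.Tensor):
        # Both inputs: (batch, tokens, channels); tokens are assumed to be
        # pooled over all frames so attention also mixes temporal context.
        refined, _ = self.attn(global_feat, local_prior, local_prior)
        return self.norm(global_feat + refined)  # residual refinement

if __name__ == "__main__":
    m = SpatialTemporalInteraction(64)
    g = torch.randn(2, 196, 64)   # global semantics for 2 clips
    l = torch.randn(2, 196, 64)   # local prior features
    print(m(g, l).shape)          # torch.Size([2, 196, 64])
```

The residual connection keeps the original global semantics intact while the attention injects local, cross-frame detail, which matches the "refine and enhance" phrasing above.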

EAAI 2024 · Journal Article

Visual Tracking Based on Deformable Transformer and Spatiotemporal Information

  • Ruixu Wu
  • Xianbin Wen
  • Liming Yuan
  • Haixia Xu
  • Yanli Liu

Transformer-based Siamese visual tracking methods have shown strong performance, achieving remarkable results on a variety of benchmarks; attention-based Transformer structures are widely used, especially when trained on large-scale datasets. However, many trackers neglect the fusion and enhancement of local and global features and lack the extraction of spatiotemporal information. At the same time, the original Transformer's features are redundant and are affected by irrelevant content outside the region of interest. To address these issues, we propose a new method (DTS) based on a deformable Transformer and spatiotemporal information. It is a Siamese structure containing multiple modules. The template branch obtains local-to-global features through a 2D CNN and self-attention. The search branch obtains spatiotemporal information of interest through a 3D CNN, an SFM, and a DAM, then uses cross-attention to establish the correlation between the two branches, and finally predicts the target bounding box via corner points. To verify the effectiveness of our method, we conduct experiments on the LaSOT, GOT-10K, TrackingNet, VOT2018, OTB100, and UAV123 benchmarks, where our results are 2%–3% higher than the baseline method. The model structure is simplified without sacrificing performance, and the tracker runs at 50 FPS. The results show that our tracker is highly competitive with other state-of-the-art methods.
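
A rough sketch of the core Siamese step implied by the abstract, where search-region tokens cross-attend to template tokens before a corner-based box head; all shapes, dimensions, and module names are assumptions for illustration, not the DTS implementation:

```python
# Sketch (assumed design, not the DTS code): template/search fusion via
# cross-attention, followed by per-token corner scores.
import torch
import torch.nn as nn

class TemplateSearchFusion(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 8):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Heads scoring each search token as top-left / bottom-right corner.
        self.tl_head = nn.Linear(dim, 1)
        self.br_head = nn.Linear(dim, 1)

    def forward(self, search_tokens, template_tokens):
        fused, _ = self.cross(search_tokens, template_tokens, template_tokens)
        # Softmax over tokens gives a distribution for each corner location.
        tl = self.tl_head(fused).squeeze(-1).softmax(dim=-1)
        br = self.br_head(fused).squeeze(-1).softmax(dim=-1)
        return tl, br

if __name__ == "__main__":
    f = TemplateSearchFusion()
    tl, br = f(torch.randn(1, 400, 128), torch.randn(1, 64, 128))
    print(tl.shape, br.shape)  # torch.Size([1, 400]) twice
```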

AAAI 2023 · Conference Paper

Hybrid Learning with New Value Function for the Maximum Common Induced Subgraph Problem

  • Yanli Liu
  • Jiming Zhao
  • Chu-Min Li
  • Hua Jiang
  • Kun He

Maximum Common Induced Subgraph (MCIS) is an important NP-hard problem with wide real-world applications. An efficient class of MCIS algorithms uses Branch-and-Bound (BnB), which successively selects vertices to match and prunes a branch once it is clear that it cannot contain a solution better than the best found so far. The method of selecting the vertices to match is essential for the performance of BnB. In this paper, we propose a new value function and a hybrid selection strategy drawn from reinforcement learning to define a new vertex selection method, and propose a new BnB algorithm, called McSplitDAL, for MCIS. Extensive experiments show that McSplitDAL significantly improves on the current best BnB algorithms, McSplit+LL and McSplit+RL. An empirical analysis is also performed to illustrate why the new value function and the hybrid selection strategy are effective.
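
An illustrative sketch of the generic pattern behind RL-guided branching in a BnB solver; the epsilon-greedy mix, learning rate, and reward definition below are assumptions for illustration, not the paper's exact value function or hybrid rule:

```python
# Generic RL-guided branching sketch: keep a running value per vertex,
# update it with a reward after each branch, and mix greedy selection
# with exploration when choosing where to branch next.
import random
from collections import defaultdict

value = defaultdict(float)   # learned value per candidate vertex

def select_branching_vertex(candidates, epsilon=0.1):
    """Hybrid selection: mostly exploit the value function, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(candidates)            # exploration arm
    return max(candidates, key=lambda v: value[v])  # greedy arm

def update_value(vertex, reward, lr=0.5):
    """Move the vertex's value toward the observed branching reward."""
    value[vertex] += lr * (reward - value[vertex])

# Toy usage: reward = how much the bound shrank after branching on v.
v = select_branching_vertex([1, 2, 3])
update_value(v, reward=4.0)
print(v, value[v])
```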

IJCAI 2022 · Conference Paper

A Strengthened Branch and Bound Algorithm for the Maximum Common (Connected) Subgraph Problem

  • Jianrong Zhou
  • Kun He
  • Jiongzhi Zheng
  • Chu-Min Li
  • Yanli Liu

We propose a new and strengthened Branch-and-Bound (BnB) algorithm for the maximum common (connected) induced subgraph problem based on two new operators: Long-Short Memory (LSM) and Leaf vertex Union Match (LUM). Given two graphs in which we search for the maximum common (connected) induced subgraph, the first operator, LSM, maintains a score for the branching node using the short-term reward of each vertex of the first graph and the long-term reward of each vertex pair of the two graphs. In this way, the BnB process learns to reduce the search tree size significantly and boost the algorithm's performance. The second operator, LUM, further improves performance by simultaneously matching the leaf vertices connected to the currently matched vertices, allowing the algorithm to match multiple vertex pairs without affecting the optimality of the solution. We incorporate the two operators into the state-of-the-art BnB algorithm McSplit and denote the resulting algorithm McSplit+LL. Experiments show that McSplit+LL outperforms McSplit+RL, a more recent variant of McSplit that uses reinforcement learning and is superior to McSplit.
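
A sketch of the LSM scoring idea as the abstract describes it, with assumed data structures and an assumed equal weighting of the two memories; this is not the McSplit+LL source:

```python
# Sketch: score a branching choice by combining a short-term reward on a
# vertex of the first graph with a long-term reward on a vertex pair
# across the two graphs (relative weighting is a guess for illustration).
from collections import defaultdict

short_term = defaultdict(float)   # reward for v in graph 1
long_term = defaultdict(float)    # reward for the pair (v, w)

def lsm_score(v, w):
    return short_term[v] + long_term[(v, w)]

def best_pair(vs, ws):
    return max(((v, w) for v in vs for w in ws), key=lambda p: lsm_score(*p))

short_term[0] = 2.0
long_term[(1, 5)] = 3.5
print(best_pair([0, 1], [4, 5]))  # (1, 5)
```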

AAAI 2020 · Conference Paper

A Learning Based Branch and Bound for Maximum Common Subgraph Related Problems

  • Yanli Liu
  • Chu-Min Li
  • Hua Jiang
  • Kun He

The performance of a branch-and-bound (BnB) algorithm for the maximum common subgraph (MCS) problem and its related problems, such as maximum common connected subgraph (MCCS) and induced subgraph isomorphism (SI), crucially depends on the branching heuristic. We propose a branching heuristic inspired by reinforcement learning, with the goal of reaching a tree leaf as early as possible to greatly reduce the search tree size. Experimental results show that the proposed heuristic consistently and significantly improves the current best BnB algorithm for the MCS, MCCS, and SI problems. An analysis is carried out to give insight into why and how reinforcement learning is useful in the new branching heuristic.
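
One common way to phrase such a learning-based branching rule, given here as an assumption rather than the paper's exact formula: credit a vertex with the bound reduction its branching causes, accumulate the credit, and branch on the highest-credit vertex.

```latex
% Assumed notation: UB is the solver's upper bound, R(v) the accumulated
% reward of vertex v; the symbols are not taken from the paper.
\[
  r(v) = \mathrm{UB}_{\text{before}} - \mathrm{UB}_{\text{after}},
  \qquad
  R(v) \leftarrow R(v) + r(v),
  \qquad
  v^{*} = \arg\max_{v \in \text{candidates}} R(v).
\]
```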

NeurIPS 2020 · Conference Paper

An Improved Analysis of (Variance-Reduced) Policy Gradient and Natural Policy Gradient Methods

  • Yanli Liu
  • Kaiqing Zhang
  • Tamer Basar
  • Wotao Yin

In this paper, we revisit and improve the convergence of policy gradient (PG) methods, natural PG (NPG) methods, and their variance-reduced variants under general smooth policy parametrizations. More specifically, with the Fisher information matrix of the policy being positive definite: i) we show that a state-of-the-art variance-reduced PG method, which had only been shown to converge to stationary points, converges to the globally optimal value up to some inherent function approximation error due to policy parametrization; ii) we show that NPG enjoys a lower sample complexity; iii) we propose SRVR-NPG, which incorporates variance reduction into the NPG update. Our improvements follow from the observation that the convergence of (variance-reduced) PG and NPG methods can improve each other: the stationary convergence analysis of PG applies to NPG as well, and the global convergence analysis of NPG helps establish the global convergence of (variance-reduced) PG methods. Our analysis carefully integrates the advantages of these two lines of work. Thanks to this improvement, we also make variance reduction for NPG possible for the first time, with both global convergence and an efficient finite-sample complexity.
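
For reference, the two updates being compared, in one standard form (the notation $J(\theta)$ for expected return, $F(\theta)$ for the Fisher information matrix, and stepsize $\eta$ is assumed here, not quoted from the paper):

```latex
\[
  \text{PG:}\quad \theta_{k+1} = \theta_k + \eta\,\nabla J(\theta_k),
  \qquad
  \text{NPG:}\quad \theta_{k+1} = \theta_k + \eta\,F(\theta_k)^{-1}\nabla J(\theta_k).
\]
```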

NeurIPS 2020 · Conference Paper

An Improved Analysis of Stochastic Gradient Descent with Momentum

  • Yanli Liu
  • Yuan Gao
  • Wotao Yin

SGD with momentum (SGDM) has been widely applied in many machine learning tasks, and it is often run with dynamic stepsizes and momentum weights tuned in a stagewise manner. Despite its empirical advantage over SGD, the role of momentum is still unclear in general, since previous analyses of SGDM either provide worse convergence bounds than those of SGD, or assume Lipschitz or quadratic objectives, which fail to hold in practice. Furthermore, the role of dynamic parameters has not been addressed. In this work, we show that SGDM converges as fast as SGD for smooth objectives in both the strongly convex and nonconvex settings. We also prove that a multistage strategy is beneficial for SGDM compared to using fixed parameters. Finally, we verify these theoretical claims with numerical experiments.
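
The SGDM recursion under discussion, in one common form (notation assumed here): a stagewise schedule keeps the stepsize $\alpha_t$ and momentum weight $\beta_t$ constant within each stage and shrinks $\alpha_t$ between stages.

```latex
% g_t is the stochastic gradient at iterate x_t with sample \xi_t.
\[
  g_t = \nabla f(x_t;\xi_t), \qquad
  v_{t+1} = \beta_t\, v_t + g_t, \qquad
  x_{t+1} = x_t - \alpha_t\, v_{t+1}.
\]
```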

AAAI 2018 · Conference Paper

A Two-Stage MaxSAT Reasoning Approach for the Maximum Weight Clique Problem

  • Hua Jiang
  • Chu-Min Li
  • Yanli Liu
  • Felip Manyà

MaxSAT reasoning is an effective technique used in modern branch-and-bound (BnB) algorithms for the Maximum Weight Clique problem (MWC) to reduce the search space. However, the current MaxSAT reasoning approach for MWC is carried out in a blind manner and is not guided by any relevant strategy. In this paper, we describe a new BnB algorithm for MWC that incorporates a novel two-stage MaxSAT reasoning approach; in each stage, the MaxSAT reasoning is specialised for and guided by a different task. Experiments on an extensive set of graphs show that the new algorithm significantly outperforms relevant exact and heuristic MWC algorithms on both small/medium and massive real-world graphs.
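
For context, BnB algorithms for MWC typically bound a branch by partitioning the candidate vertices into independent sets and summing each set's maximum weight; MaxSAT reasoning then tries to tighten this bound. A minimal sketch of that baseline bound, with invented names and a toy graph (this is the textbook bound, not the paper's two-stage method):

```python
# Greedy independent-set partition bound for Maximum Weight Clique:
# vertices in one independent set are pairwise non-adjacent, so a clique
# contains at most one of them; summing each set's max weight bounds MWC.
def greedy_upper_bound(candidates, weight, adjacent):
    indep_sets = []
    for v in sorted(candidates, key=weight.get, reverse=True):
        for s in indep_sets:
            if all(u not in adjacent[v] for u in s):
                s.append(v)   # v is non-adjacent to everything in s
                break
        else:
            indep_sets.append([v])
    return sum(max(weight[u] for u in s) for s in indep_sets)

# Toy graph: triangle 0-1-2 plus pendant vertex 3 attached to 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
w = {0: 5, 1: 4, 2: 3, 3: 2}
print(greedy_upper_bound([0, 1, 2, 3], w, adj))  # 5 + 4 + 3 = 12
```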

NeurIPS 2018 · Conference Paper

Breaking the Span Assumption Yields Fast Finite-Sum Minimization

  • Robert Hannah
  • Yanli Liu
  • Daniel O'Connor
  • Wotao Yin

In this paper, we show that SVRG and SARAH can be modified to be fundamentally faster than all of the other standard algorithms that minimize the sum of $n$ smooth functions, such as SAGA, SAG, SDCA, and SDCA without duality. Most finite-sum algorithms follow what we call the ``span assumption'': their updates are in the span of a sequence of component gradients chosen in a random IID fashion. In the big data regime, where the condition number $\kappa=O(n)$, the span assumption prevents algorithms from converging to an approximate solution of accuracy $\epsilon$ in less than $n\ln(1/\epsilon)$ iterations. SVRG and SARAH do not follow the span assumption, since they are updated with a hybrid of full-gradient and component-gradient information. We show that, because of this, they can be up to $\Omega(1+(\ln(n/\kappa))_+)$ times faster. In particular, to obtain an accuracy $\epsilon = 1/n^\alpha$ for $\kappa=n^\beta$ and $\alpha, \beta\in(0, 1)$, modified SVRG requires $O(n)$ iterations, whereas algorithms that follow the span assumption require $O(n\ln(n))$ iterations. Moreover, we present lower bound results showing that this speedup is optimal, and provide analysis to help explain why it exists. With the understanding that the span assumption is a point of weakness of finite-sum algorithms, future work may purposefully exploit this to yield faster algorithms in the big data regime.
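
To see why SVRG escapes the span assumption: each inner step mixes a stored full gradient with a fresh component gradient, so the update is not in the span of IID component gradients alone. A minimal NumPy sketch of the standard SVRG recursion on a least-squares finite sum (all names and constants here are assumptions for illustration, not the modified algorithm of the paper):

```python
# Standard SVRG on f(x) = (1/n) * sum_i (1/2) (a_i^T x - b_i)^2.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
A, b = rng.normal(size=(n, d)), rng.normal(size=n)

def grad_i(x, i):                       # gradient of the i-th component
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    return A.T @ (A @ x - b) / n

x, eta = np.zeros(d), 0.01
for epoch in range(30):
    snapshot, mu = x.copy(), full_grad(x)   # anchor: full gradient at snapshot
    for _ in range(n):
        i = rng.integers(n)
        # Variance-reduced step: component gradient corrected by the anchor.
        x = x - eta * (grad_i(x, i) - grad_i(snapshot, i) + mu)

print(np.linalg.norm(full_grad(x)))  # should be small after a few epochs
```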