Arrow Research search

Author name cluster

Qing Yang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

22 papers
2 author rows

Possible papers

22

EAAI Journal 2026 Journal Article

A lightweight framework with adaptive feature enhancement for accurate pavement distress evaluation

  • Yi Liang
  • Jueqiang Tao
  • Qing Yang
  • Xin Qiu
  • Tingfeng Zhang
  • Yafang Liu
  • Heng Zhou

Timely pavement condition surveys ensure optimal pavement performance and extend service life. However, existing lightweight object detection models for pavement distress identification often struggle with a trade-off between computational efficiency and fine-grained feature extraction, fail to adapt to the irregular, elongated morphologies of cracks using fixed-grid convolutions, and are hindered by class imbalance and complex backgrounds that lead to misclassifications. To address these gaps, this study proposes the Lightweight Pavement Distress Network (LPD-Net), a crack-feature-enhanced framework based on You Only Look Once version 11 (YOLOv11) for accurate pavement distress detection. First, a large-scale dataset comprising depth images was constructed using a three-dimensional (3D) laser imaging sensor. Second, Dynamic Snake Convolution (DySConv) was integrated into the Cross Stage Partial with kernel size 2 (C3k2) module to adaptively adjust kernel sampling for better capturing crack contours and edges. Third, a Bi-level Routing Attention (BRA) module was embedded to dynamically filter background noise and focus on sparse distress features, alleviating class imbalance. Lastly, a Lightweight Asymmetric Detection Head (LADH) incorporating Depthwise Separable Convolution (DSConv) was designed to reduce computational overhead while maintaining localization precision. Experimental results demonstrate that LPD-Net achieves a superior balance, reducing computational cost by 15.9% to 5.3 Giga Floating Point Operations (GFLOPs) compared to the baseline while increasing mean Average Precision at 50% intersection over union (mAP@50) by 6.5% to 0.506. Measurement-oriented evaluation via the Pavement Condition Index (PCI) further confirms its reliability, with 40.72% agreement within ±5 PCI, aligning well with metrological standards.

IJCAI Conference 2025 Conference Paper

Beyond Fixed Length: Bucket Pre-training is All You Need

  • Qing Yang
  • Qiyao Peng
  • Hongtao Liu
  • Kai Liu
  • Bing Qin
  • Ting Liu

Large Language Models (LLMs) have demonstrated exceptional performance across various tasks, with the pre-training stage serving as the cornerstone of their capabilities. However, the conventional fixed-length data composition strategy for pre-training presents several practical challenges. When using shorter sequences, documents are often truncated, potentially leading to information loss and affecting the model's ability to capture long-range dependencies. Conversely, longer sequences require concatenation of multiple documents, which can introduce noise, disrupt natural document boundaries and semantic coherence, and incur substantial computational overhead. To address these challenges, we first establish three quantitative metrics for evaluating data composition quality: padding ratio, truncation ratio, and concatenation ratio. Building upon these metrics, we propose a novel multi-bucket data composition method that transcends the fixed-length paradigm. Our approach adaptively organizes training data to achieve optimal composition quality as measured by the proposed metrics, offering a more flexible and efficient approach for pre-training. We conduct extensive experiments, and the results demonstrate that our proposed method significantly enhances both the efficiency and effectiveness of LLM pre-training. Our proposed method has been adopted in the Du Xiaoman–XuanYuan series of financial large language models at https://github.com/Duxiaoman-DI/XuanYuan.
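The three composition metrics named in the abstract lend themselves to a short sketch. The definitions below are our plausible reading of the abstract, not the paper's exact formulas: padding and truncation ratios for a one-document-per-sequence strategy, and padding plus concatenation ratios for a concatenate-then-chunk strategy.

```python
# Illustrative sketch (assumed definitions, not the paper's code).
# Lengths are token counts; L is the fixed context length.

def pad_truncate_metrics(doc_lengths, L):
    """One document per sequence: shorter docs are padded, longer ones truncated."""
    padded = sum(max(L - n, 0) for n in doc_lengths)      # wasted pad tokens
    truncated = sum(max(n - L, 0) for n in doc_lengths)   # tokens cut off
    return padded / (len(doc_lengths) * L), truncated / sum(doc_lengths)

def concat_chunk_metrics(doc_lengths, L):
    """All documents concatenated, then split into L-token chunks."""
    total = sum(doc_lengths)
    n_chunks = -(-total // L)                  # ceiling division
    padded = n_chunks * L - total              # only the final chunk is padded
    # a document boundary at token offset b lands inside a chunk unless b % L == 0
    offsets, off = [], 0
    for n in doc_lengths[:-1]:
        off += n
        offsets.append(off)
    mixed = {b // L for b in offsets if b % L != 0}
    return padded / (n_chunks * L), len(mixed) / n_chunks
```

A multi-bucket scheme would then pick, per document group, the bucket length that drives all three ratios toward zero simultaneously.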

ICML Conference 2025 Conference Paper

Eigen Analysis of Conjugate Kernel and Neural Tangent Kernel

  • Xiangchao Li
  • Xiao Han
  • Qing Yang

In this paper, we investigate deep feedforward neural networks with random weights. The input data matrix $\boldsymbol{X}$ is drawn from a Gaussian mixture model. We demonstrate that certain eigenvalues of the conjugate kernel and neural tangent kernel may lie outside the support of their limiting spectral measures in the high-dimensional regime. The existence and asymptotic positions of such isolated eigenvalues are rigorously analyzed. Furthermore, we provide a precise characterization of the entrywise limit of the projection matrix onto the eigenspace associated with these isolated eigenvalues. Our findings reveal that the eigenspace captures inherent group features present in $\boldsymbol{X}$. This study offers a quantitative analysis of how group features from the input data evolve through hidden layers in randomly weighted neural networks.
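The spectral phenomenon described above can be reproduced numerically. The following is a minimal toy setup of our own (dimensions, mean separation, and thresholds are assumptions, not the authors' code): a one-hidden-layer ReLU network with random Gaussian weights on a two-class Gaussian mixture, whose conjugate kernel exhibits spike eigenvalues detached from the bulk, with eigenvectors aligned to the group structure.

```python
import numpy as np

# Toy conjugate-kernel experiment (illustrative assumptions throughout).
rng = np.random.default_rng(0)
n, d, width = 400, 300, 500
mu = 2.0 * np.ones(d) / np.sqrt(d)                 # class mean direction, norm 2
labels = rng.integers(0, 2, size=n)
signs = 2.0 * labels - 1.0
X = np.outer(signs, mu) + rng.standard_normal((n, d)) / np.sqrt(d)  # mixture rows
W = rng.standard_normal((width, d))                # random first-layer weights
F = np.maximum(W @ X.T, 0.0) / np.sqrt(width)      # post-activation features
CK = F.T @ F                                       # n x n conjugate kernel
eigvals, eigvecs = np.linalg.eigh(CK)              # ascending eigenvalue order
bulk_edge, spikes = eigvals[-3], eigvals[-2:]      # two spikes leave the bulk
# the class-indicator vector lies almost entirely in the top-2 eigenspace
s = signs / np.sqrt(n)
alignment = np.linalg.norm(eigvecs[:, -2:].T @ s)
```

Here `alignment` close to 1 is the finite-size analogue of the abstract's claim that the isolated eigenspace captures the group features in the input.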

AAAI Conference 2025 Conference Paper

LRM-LLaVA: Overcoming the Modality Gap of Multilingual Large Language-Vision Model for Low-Resource Languages

  • Junchen Li
  • Qing Yang
  • Bojian Jiang
  • Shaolin Zhu
  • Qingxuan Sun

Multilingual large language-vision models (LVLMs), which understand and generate both text and images across multiple languages, have achieved remarkable performance on English-centric multimodal generation tasks. However, their performance on non-English tasks has been underwhelming. One major challenge with multilingual LVLMs is the modality gap between visual inputs and multilingual textual inputs/outputs due to the lack of high-quality multilingual training data. In this paper, we propose LRM-LLaVA, a multilingual large language-vision model designed for low-resource languages to overcome the modality gap. It is composed of four components: a visual encoder, a multilingual large language model, a vision-text representation projector, and a cross-modal regularizer. Both the projector and the regularizer aim to reduce the modality gap and improve multilingual performance. To train LRM-LLaVA, we employ a two-stage training strategy comprising pre-training and instruction fine-tuning. Meanwhile, we construct a multilingual visual question answering dataset based on English open-source datasets and adopt multiple task instructions. To evaluate the performance of LVLMs across various languages, we construct four multilingual benchmarks for 10 languages, based on English open-source benchmarks. Experimental results show that LRM-LLaVA achieves competitive performance compared to other multilingual LVLMs of similar parameter scale.

NeurIPS Conference 2025 Conference Paper

Reinforcement Learning for Reasoning in Large Language Models with One Training Example

  • Yiping Wang
  • Qing Yang
  • Zhiyuan Zeng
  • Liliang Ren
  • Liyuan Liu
  • Baolin Peng
  • Hao Cheng
  • Xuehai He

We show that reinforcement learning with verifiable reward using one training example (1-shot RLVR) is effective in incentivizing the math reasoning capabilities of large language models (LLMs). Applying RLVR to the base model Qwen2.5-Math-1.5B, we identify a single example that elevates model performance on MATH500 from 36.0% to 73.6% (8.6% improvement beyond format correction), and improves the average performance across six common mathematical reasoning benchmarks from 17.6% to 35.7% (7.0% non-format gain). This result matches the performance obtained using the 1.2k DeepScaleR subset (MATH500: 73.6%, average: 35.9%), which contains the aforementioned example. Furthermore, RLVR with only two examples even slightly exceeds these results (MATH500: 74.8%, average: 36.6%). Similar substantial improvements are observed across various models (Qwen2.5-Math-7B, Llama3.2-3B-Instruct, DeepSeek-R1-Distill-Qwen-1.5B), RL algorithms (GRPO and PPO), and different math examples. In addition, we identify some interesting phenomena during 1-shot RLVR, including cross-category generalization, increased frequency of self-reflection, and sustained test performance improvement even after the training accuracy has saturated, a phenomenon we term "post-saturation generalization". Moreover, we verify that the effectiveness of 1-shot RLVR primarily arises from the policy gradient loss, distinguishing it from the "grokking" phenomenon. We also show the critical role of promoting exploration (e.g., by incorporating entropy loss with an appropriate coefficient) in 1-shot RLVR training. We further discuss related observations about format correction, label robustness, and prompt modification. These findings can inspire future work on RLVR efficiency and encourage a re-examination of recent progress and the underlying mechanisms in RLVR. Our code, models, and data are open source at https://github.com/ypwang61/One-Shot-RLVR.

JBHI Journal 2025 Journal Article

scSwinTNet: A Cell Type Annotation Method for Large-Scale Single-Cell RNA-Seq Data Based on Shifted Window Attention

  • Huanhuan Dai
  • Xiangyu Meng
  • Zhiyi Pan
  • Qing Yang
  • Haonan Song
  • Yuan Gao
  • Xun Wang

The annotation of cell types based on single-cell RNA sequencing (scRNA-seq) data is a critical downstream task in single-cell analysis, with significant implications for a deeper understanding of biological processes. Most analytical methods cluster cells by unsupervised clustering, which requires manual annotation for cell type determination. This procedure is time-consuming and non-repeatable. To accommodate the exponential growth of sequenced cells, reduce the impact of data bias, and integrate large-scale datasets to further improve type annotation accuracy, we proposed scSwinTNet. It is a pre-trained tool for annotating cell types in scRNA-seq data, which uses self-attention based on shifted windows and enables intelligent information extraction from gene data. We demonstrated the effectiveness and robustness of scSwinTNet using 399,760 cells from human and mouse tissues. To the best of our knowledge, scSwinTNet is the first model to annotate cell types in scRNA-seq data using a pre-trained shifted window attention-based model. It does not require a priori knowledge and accurately annotates cell types without manual annotation.

NeurIPS Conference 2024 Conference Paper

How does Architecture Influence the Base Capabilities of Pre-trained Language Models? A Case Study Based on FFN-Wider and MoE Transformers

  • Xin Lu
  • Yanyan Zhao
  • Bing Qin
  • Liangyu Huo
  • Qing Yang
  • Dongliang Xu

Pre-trained language models have been proven to possess strong base capabilities, which not only excel at in-distribution language modeling but also show powerful abilities in out-of-distribution language modeling, transfer learning, and few-shot learning. Unlike existing work focusing on the influence of scale on base capabilities, our work examines the influence of architecture on these capabilities. Specifically, our concern is: how does architecture influence the base capabilities of pre-trained language models? In this work, we attempt to explain and reverse the decline in base capabilities caused by the architecture of FFN-Wider Transformers, seeking to provide some insights. Through analysis, we found that the contribution ratio of Multi-Head Attention (a combination function) to pre-trained language modeling is a key factor affecting base capabilities. FFN-Wider Transformers reduce the contribution ratio of this combination function, leading to a decline in base capabilities. We confirmed this with experiments and proposed the Combination Enhanced Architecture (CEA) to address the decline in base capabilities of such models. Notably, we extended our explanation and CEA to Mixture of Experts (MoE) Transformers. We successfully achieved significant improvements in base capabilities on a 14B-parameter MoE model, demonstrating the practical application value of our work. This also indicates that our analysis offers guidance for architecture analysis, architecture improvement, and architecture design.

JMLR Journal 2024 Journal Article

Individual-centered Partial Information in Social Networks

  • Xiao Han
  • Y. X. Rachel Wang
  • Qing Yang
  • Xin Tong

In statistical network analysis, we often assume either the full network is available or multiple subgraphs can be sampled to estimate various global properties of the network. However, in a real social network, people frequently make decisions based on their local view of the network alone. Here, we consider a partial information framework that characterizes the local network centered at a given individual by path length $L$ and gives rise to a partial adjacency matrix. Under $L=2$, we focus on the problem of (global) community detection using the popular stochastic block model (SBM) and its degree-corrected variant (DCSBM). We derive theoretical properties of the eigenvalues and eigenvectors from the signal term of the partial adjacency matrix and propose new spectral-based community detection algorithms that achieve consistency under appropriate conditions. Our analysis also allows us to propose a new centrality measure that assesses the importance of an individual's partial information in determining global community structure. Using simulated and real networks, we demonstrate the performance of our algorithms and compare our centrality measure with other popular alternatives to show it captures unique nodal information. Our results illustrate that the partial information framework enables us to compare the viewpoints of different individuals regarding the global structure.
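The partial adjacency matrix under $L=2$ admits a compact sketch. The construction below is our reading of the abstract (the center observes every edge lying on a path of length at most 2 from itself, i.e., all edges incident to itself or a neighbor), not the authors' code.

```python
import numpy as np

# Hypothetical sketch of the L = 2 partial-information view: the center node
# observes its own edges plus every edge incident to one of its neighbors.
def partial_adjacency(A, center):
    """Return the partial adjacency matrix seen by `center` under L = 2."""
    visible = set(np.flatnonzero(A[center])) | {center}
    P = np.zeros_like(A)
    for j in visible:
        P[j, :] = A[j, :]   # every edge incident to a visible node is observed
        P[:, j] = A[:, j]   # mirror the row to keep the matrix symmetric
    return P
```

Spectral community detection would then run on `P` instead of the full `A`, which is what makes the centrality comparison between individuals possible.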

NeurIPS Conference 2024 Conference Paper

Meaningful Learning: Enhancing Abstract Reasoning in Large Language Models via Generic Fact Guidance

  • Kai Xiong
  • Xiao Ding
  • Ting Liu
  • Bing Qin
  • Dongliang Xu
  • Qing Yang
  • Hongtao Liu
  • Yixin Cao

Large language models (LLMs) have developed impressive performance and strong explainability across various reasoning scenarios, marking a significant stride towards mimicking human-like intelligence. Despite this, when tasked with several simple questions supported by a generic fact, LLMs often struggle to abstract and apply the generic fact to provide consistent and precise answers, revealing a deficiency in abstract reasoning abilities. This has sparked a vigorous debate about whether LLMs are genuinely reasoning or merely memorizing. In light of this, we design a preliminary study to quantify and delve into the abstract reasoning abilities of existing LLMs. Our findings reveal a substantial discrepancy between their general reasoning and abstract reasoning performances. To relieve this problem, we tailor an abstract reasoning dataset (AbsR) together with a meaningful learning paradigm to teach LLMs how to leverage generic facts for reasoning purposes. The results show that our approach not only boosts the general reasoning performance of LLMs but also makes considerable strides towards their capacity for abstract reasoning, moving beyond simple memorization or imitation to a more nuanced understanding and application of generic facts. The code is available at https://github.com/Waste-Wood/MeanLearn.

NeurIPS Conference 2024 Conference Paper

MoGU: A Framework for Enhancing Safety of LLMs While Preserving Their Usability

  • Yanrui Du
  • Sendong Zhao
  • Danyang Zhao
  • Ming Ma
  • Yuhan Chen
  • Liangyu Huo
  • Qing Yang
  • Dongliang Xu

Large Language Models (LLMs) are increasingly deployed in various applications. As their usage grows, concerns regarding their safety are rising, especially about maintaining harmless responses when faced with malicious instructions. Many defense strategies have been developed to enhance the safety of LLMs. However, our research finds that existing defense strategies lead LLMs to predominantly adopt a rejection-oriented stance, thereby diminishing the usability of their responses to benign instructions. To solve this problem, we introduce the MoGU framework, designed to enhance LLMs' safety while preserving their usability. Our MoGU framework transforms the base LLM into two variants: the usable LLM and the safe LLM, and further employs dynamic routing to balance their contributions. When encountering malicious instructions, the router assigns a higher weight to the safe LLM to ensure that responses are harmless. Conversely, for benign instructions, the router prioritizes the usable LLM, facilitating usable and helpful responses. On various open-source LLMs, we compare multiple defense strategies to verify the superiority of our MoGU framework. In addition, our analysis provides key insights into the effectiveness of MoGU and verifies that our designed routing mechanism can effectively balance the contribution of each variant by assigning weights. We release safer versions of Llama2, Vicuna, Falcon, Dolphin, and Baichuan2.

EAAI Journal 2024 Journal Article

Research on dynamic multi-level warning method for thermal runaway charging of electric vehicles

  • Dexin Gao
  • Yurong Du
  • Yuanming Cheng
  • Qing Yang

The high-power direct current (DC) charging method for electric vehicles (EVs) can easily lead to overheating during the charging process, potentially resulting in thermal runaway accidents. Addressing this safety issue, accurately and reliably predicting and providing multi-level warnings for thermal accidents during EV charging becomes an urgent problem to be solved. Therefore, this paper proposes a composite prediction model called QCNB (Q-learning-CNN-BiNLSTM-BiGRU), incorporating Convolutional Neural Networks (CNN), Bidirectional Nested Long Short-Term Memory networks (BiNLSTM), Bidirectional Gated Recurrent Units (BiGRU), and the Q-learning algorithm. First, taking into account the influence of ambient temperature, three sets of historical charging data from spring/autumn, summer, and winter are selected for model training. Second, the sliding window analysis method is employed to establish multi-level warning thresholds and rules from the historical charging data, enabling multi-level safety warnings. Lastly, the effectiveness of the QCNB model is validated using actual operating data from EVs. The experimental results demonstrate that QCNB outperforms other models in prediction accuracy, and that abnormal temperature and voltage characteristics can be used to identify charging thermal runaway accidents in advance and trigger multi-level warnings for effective protection. This achievement emphasizes the importance of combining predictive models to address the complex characteristics of charging data and of implementing multi-level warnings for protection. It effectively mitigates the occurrence of thermal accidents, offering promising possibilities for practical applications.
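The sliding-window, multi-level warning rule described above can be sketched in a few lines. Everything here is invented for illustration (the window size, thresholds, and the rule of thresholding a windowed mean temperature are assumptions, not the paper's calibrated values):

```python
# Toy sketch of a sliding-window, multi-level warning rule (all values assumed).
def warning_level(temps, window=5, thresholds=(45.0, 55.0, 65.0)):
    """Return 0-3: the highest level whose threshold the windowed mean exceeds."""
    if len(temps) < window:
        return 0                      # not enough history to judge yet
    mean_t = sum(temps[-window:]) / window
    level = 0
    for i, th in enumerate(thresholds, start=1):
        if mean_t >= th:
            level = i                 # escalate through each exceeded threshold
    return level
```

In the paper's setup the monitored signal would be the model's *predicted* temperature and voltage trajectory rather than raw readings, which is what allows warnings to fire in advance.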

AAAI Conference 2024 Conference Paper

Self-Supervised Disentangled Representation Learning for Robust Target Speech Extraction

  • Zhaoxi Mu
  • Xinyu Yang
  • Sining Sun
  • Qing Yang

Speech signals are inherently complex as they encompass both global acoustic characteristics and local semantic information. However, in the task of target speech extraction, certain elements of global and local semantic information in the reference speech, which are irrelevant to speaker identity, can lead to speaker confusion within the speech extraction network. To overcome this challenge, we propose a self-supervised disentangled representation learning method. Our approach tackles this issue through a two-phase process, utilizing a reference speech encoding network and a global information disentanglement network to gradually disentangle the speaker identity information from other irrelevant factors. We exclusively employ the disentangled speaker identity information to guide the speech extraction network. Moreover, we introduce the adaptive modulation Transformer to ensure that the acoustic representation of the mixed signal remains undisturbed by the speaker embeddings. This component incorporates speaker embeddings as conditional information, facilitating natural and efficient guidance for the speech extraction network. Experimental results substantiate the effectiveness of our meticulously crafted approach, showcasing a substantial reduction in the likelihood of speaker confusion.

JBHI Journal 2024 Journal Article

TBCA: Prediction of Transcription Factor Binding Sites Using a Deep Neural Network With Lightweight Attention Mechanism

  • Xun Wang
  • Qiao Lian
  • Peng Qu
  • Qing Yang

The identification of transcription factor binding sites (TFBSs) is crucial for understanding the regulatory mechanisms of gene expression, which contributes to unraveling cellular functions and disease development. Currently, the most common approach involves the use of deep learning techniques to predict TFBSs by combining sequence and shape features. Although significant progress has been made with these methods, the integration of local features extracted from DNA sequences and shapes with global features has not yet reached a sufficient level, and there is still significant room for improvement in the accuracy of prediction results. In this paper, we propose a novel framework based on convolution and attention mechanisms, referred to as TBCA, which combines DNA sequence information and shape information for predicting transcription factor binding sites. In this work, we employ a two-layer convolutional neural network (CNN) and a self-attention mechanism to extract complex sequence features from DNA. Moreover, we utilize Fourier-transform-enhanced multi-head attention along with channel attention to extract high-order shape features of DNA. Finally, these high-order sequence and shape features are integrated along the channel dimension to achieve accurate TFBS prediction. Our research results demonstrate that TBCA exhibits superior predictive performance on 165 validated ChIP-seq datasets. Furthermore, the employed attention mechanisms can automatically learn important features at different positions and scales, enhancing the accuracy and robustness of feature representation. We also conduct an in-depth analysis of the contributions of five different shapes to site prediction, revealing that shape features can enhance the prediction of transcription factor DNA binding.

YNIMG Journal 2023 Journal Article

Homotopic local-global parcellation of the human cerebral cortex from resting-state functional connectivity

  • Xiaoxuan Yan
  • Ru Kong
  • Aihuiping Xue
  • Qing Yang
  • Csaba Orban
  • Lijun An
  • Avram J. Holmes
  • Xing Qian

Resting-state fMRI is commonly used to derive brain parcellations, which are widely used for dimensionality reduction and interpreting human neuroscience studies. We previously developed a model that integrates local and global approaches for estimating areal-level cortical parcellations. The resulting local-global parcellations are often referred to as the Schaefer parcellations. However, the lack of homotopic correspondence between left and right Schaefer parcels has limited their use for brain lateralization studies. Here, we extend our previous model to derive homotopic areal-level parcellations. Using resting-fMRI and task-fMRI across diverse scanners, acquisition protocols, preprocessing and demographics, we show that the resulting homotopic parcellations are as homogeneous as the Schaefer parcellations, while being more homogeneous than five publicly available parcellations. Furthermore, weaker correlations between homotopic parcels are associated with greater lateralization in resting network organization, as well as lateralization in language and motor task activation. Finally, the homotopic parcellations agree with the boundaries of a number of cortical areas estimated from histology and visuotopic fMRI, while capturing sub-areal (e.g., somatotopic and visuotopic) features. Overall, these results suggest that the homotopic local-global parcellations represent neurobiologically meaningful subdivisions of the human cerebral cortex and will be a useful resource for future studies. Multi-resolution parcellations estimated from 1479 participants are publicly available (https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/brain_parcellation/Yan2023_homotopic).

JBHI Journal 2023 Journal Article

Individualized Assessment of Brain Aβ Deposition With fMRI Using Deep Learning

  • Chaolin Li
  • Mianxin Liu
  • Jing Xia
  • Lang Mei
  • Qing Yang
  • Feng Shi
  • Han Zhang
  • Dinggang Shen

PET-based Alzheimer's disease (AD) assessment has many limitations in large-scale screening. Non-invasive techniques such as resting-state functional magnetic resonance imaging (rs-fMRI) have been proven valuable in early AD diagnosis. This study investigated the feasibility of using rs-fMRI, especially functional connectivity (FC), for individualized assessment of brain amyloid-β deposition derived from PET. We designed an integrated framework based on graph convolutional networks (GCNs) and random forests (RFs) to predict amyloid-β PET patterns from rs-fMRI-derived multi-level FC networks, using the OASIS-3 (N = 258) and ADNI-2 (N = 291) datasets. Our method achieved satisfactory accuracy not only in Aβ-PET grade classification (for negative, intermediate, and positive grades, with three-class classification accuracies of 62.8% and 64.3% on the two datasets, respectively), but also in prediction of whole-brain region-level Aβ-PET standard uptake value ratios (SUVRs) (with mean square errors of 0.039 and 0.074 for the two datasets, respectively). Model interpretability examination also revealed the contributive role of the limbic network. This study demonstrated the high feasibility and reproducibility of using low-cost, more accessible magnetic resonance imaging (MRI) to approximate PET-based diagnosis.

JBHI Journal 2023 Journal Article

Outcome Prediction of Unconscious Patients Based on Weighted Sparse Brain Network Construction

  • Renping Yu
  • Han Zhang
  • Xuehai Wu
  • Xuan Fei
  • Qing Yang
  • Zhiwei Ma
  • Zengxin Qi
  • Di Zang

It is quite challenging to establish a prompt and reliable prognosis assessment for acquired brain injury (ABI) patients with persistent severe disorders of consciousness (DOC), such as unconscious coma and unresponsive wakefulness syndrome (a.k.a. vegetative state). Recent advances in brain functional imaging and functional network analysis have demonstrated their potential in determining the consciousness level and prognostic outcome for ABI patients with DOC. However, the diagnostic and prognostic usefulness of the whole-brain functional connectome based on advanced machine learning techniques has not been fully evaluated. The first aim of this study is to predict the outcome of individual unconscious ABI patients during a three-month follow-up. The second aim is to conduct precise individualized differentiation among different consciousness levels for exploring the neurobiological mechanisms underlying DOC. Based on resting-state fMRI, we construct large-scale functional networks by using a weighted sparse model, which ensures sparsity and interpretability by preserving strong functional connections. The functional connection strengths are exploited as features for outcome prediction and consciousness level differentiation. We achieve significantly improved consciousness level classification (accuracy: 84.78%) and recovery outcome prediction (accuracy: 89.74%) compared to other network construction methods. More importantly, we reveal the contributive connections across the entire brain in both tasks. These connections could serve as potential biomarkers for a better understanding of consciousness and further provide new insight into the development of diagnostic, prognostic, and effective therapeutic guidelines for ABI patients with DOC.

NeurIPS Conference 2022 Conference Paper

An Empirical Study on Disentanglement of Negative-free Contrastive Learning

  • Jinkun Cao
  • Ruiqian Nai
  • Qing Yang
  • Jialei Huang
  • Yang Gao

Negative-free contrastive learning methods have attracted a lot of attention for their simplicity and impressive performance in large-scale pretraining. However, their disentanglement properties remain unexplored. In this paper, we examine negative-free contrastive learning methods to study the disentanglement property empirically. We find that existing disentanglement metrics fail to make meaningful measurements for high-dimensional representation models, so we propose a new disentanglement metric based on mutual information between latent representations and data factors. With this proposed metric, we benchmark the disentanglement property of negative-free contrastive learning on both popular synthetic datasets and the real-world dataset CelebA. Our study shows that the investigated methods can learn a well-disentangled subset of representation. As far as we know, we are the first to extend the study of disentangled representation learning to high-dimensional representation space and to introduce negative-free contrastive learning methods into this area. The source code of this paper is available at https://github.com/noahcao/disentanglement_lib_med.

JMLR Journal 2022 Journal Article

Power Iteration for Tensor PCA

  • Jiaoyang Huang
  • Daniel Z. Huang
  • Qing Yang
  • Guang Cheng

In this paper, we study the power iteration algorithm for the asymmetric spiked tensor model introduced in Richard and Montanari (2014). We give necessary and sufficient conditions for the convergence of the power iteration algorithm. When the power iteration algorithm converges, we show that for the rank-one spiked tensor model the estimators of the spike strength and of linear functionals of the signal are asymptotically Gaussian; for the multi-rank spiked tensor model, the estimators are asymptotically mixtures of Gaussians. This new phenomenon differs from the spiked matrix model. Using these asymptotic results for our estimators, we construct valid and efficient confidence intervals for spike strengths and linear functionals of the signals.
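The power iteration studied above is short enough to sketch on a toy rank-one spiked tensor T = β·v⊗v⊗v + Z/√n. The setup below (dimension, spike strength β, noise scale, and the spike-correlated warm start) is our illustrative choice, not the paper's experiment; the warm start reflects the fact that convergence requires an initialization correlated with the signal.

```python
import numpy as np

# Toy rank-one spiked tensor model (illustrative parameters throughout).
def tensor_power_iteration(T, u0, iters=100):
    """Contract the 3-tensor along two modes and renormalize, repeatedly."""
    u = u0 / np.linalg.norm(u0)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, u, u)   # u <- T(., u, u)
        u /= np.linalg.norm(u)
    return u

rng = np.random.default_rng(1)
n, beta = 60, 6.0
v = rng.standard_normal(n)
v /= np.linalg.norm(v)                          # unit-norm signal direction
T = beta * np.einsum('i,j,k->ijk', v, v, v) \
    + rng.standard_normal((n, n, n)) / np.sqrt(n)
# warm start correlated with the spike, as the convergence theory requires
u = tensor_power_iteration(T, v + 0.5 * rng.standard_normal(n))
# spike-strength estimator: the Rayleigh-type quotient T(u, u, u)
beta_hat = np.einsum('ijk,i,j,k->', T, u, u, u)
```

The paper's asymptotic normality results are about exactly such estimators: `u` for linear functionals of the signal and `beta_hat` for the spike strength.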

IROS Conference 2010 Conference Paper

Micro manipulation based on adhesion control with compound vibration

  • Tao Chen 0010
  • Liguo Chen
  • Lining Sun
  • Weibin Rong
  • Qing Yang

Due to scale effects, the release of micro objects has been a long-standing challenge in micromanipulation applications. In this paper, a micromanipulation system based on adhesion control with compound vibration is presented. This adhesion control technique employs inertia force to overcome adhesion force, achieving 100% repeatability with a releasing accuracy of 4±0.5 μm, which was experimentally quantified through the manipulation of 20–100 μm polystyrene spheres under an optical microscope. The micromanipulation system consists of a microgripper and a piezoelectric ceramics module. The compound vibration comes from the electrostatic actuator and the piezoelectrically driven actuator. Surface and bulk micromachining technology is employed to fabricate the microgripper used in the system from a single-crystal silicon wafer. Experimental results confirmed that this adhesion control technique is independent of the substrate. Theoretical analyses were conducted to understand the releasing mechanism. Based on this preliminary study, the micromanipulation system proves to be an effective solution for active release in micromanipulation.

IS Journal 2009 Journal Article

A Collaborative Multiagent System for Mining Transcriptional Regulatory Elements

  • Yun Xiong
  • Guangyong Zheng
  • Qing Yang
  • Yangyong Zhu

Identification of transcriptional regulatory elements offers a key means of insight into regulation mechanisms. However, the number of known regulatory elements is inadequate and state-of-the-art identification methods are inaccurate. Moreover, it is difficult for a biologist to select interdependent tools, and existing systems ignore overall performance issues. Agent technology can provide solutions through its information integration and coordination capabilities. TREMAgent is the first multiagent-based system for mining transcriptional regulatory elements. It uses novel algorithms combined with biological domain knowledge (for example, protein functional site information) to achieve superior accuracy and collaborate with existing tools using agent technology. The autonomous problem-solving capability of agents enables the system to provide the appropriate workflow rather than having users select interdependent tools. Experiments on the real data sets show that TREMAgent can provide superior accuracy and flexible services, promising excellent potential for bioinformatics.

YNIMG Journal 2007 Journal Article

MR diffusion changes correlate with ultra-structurally defined axonal degeneration in murine optic nerve

  • Qizhu Wu
  • Helmut Butzkueven
  • Melissa Gresle
  • Frank Kirchhoff
  • Anna Friedhuber
  • Qing Yang
  • Hong Wang
  • Ke Fang

Diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI) are widely used to investigate central nervous system (CNS) white matter structure and pathology. Changes in principal diffusivities parallel and perpendicular to nerve fibers or axonal tracts have been associated with axonal pathology and de/dysmyelination, respectively. However, the ultra-structural properties and the pathological alterations of white matter responsible for diffusivity changes have not been fully elucidated. We examined the relationship between the directional diffusivities and ultra-structural properties in mouse optic nerve using healthy animals, and mice with optic neuritis (ON) that exhibited marked inflammatory changes and moderately severe axonal pathology. Progressive axonal degeneration in ON resulted in a 23% reduction of parallel diffusivity as detected by diffusion MRI (P < 10⁻⁵), but no change in perpendicular diffusivity. Parallel diffusion changes were highly correlated with the total axolemmal cross-sectional area in the pre-chiasmal portion of the optic nerve (r = 0.86, P < 0.001). This study provides quantitative evidence that reduced parallel diffusivity in the optic nerve correlates significantly with axolemmal cross-sectional area reductions. MRI-based assessment of axonal degeneration in murine ON is feasible and potentially useful for monitoring of neuro-protective therapies in preclinical trials in animals.
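The parallel and perpendicular diffusivities discussed above follow from standard DTI definitions: they are read off the eigenvalues of the per-voxel 3×3 diffusion tensor. The sketch below uses those standard definitions (it is not code from this study):

```python
import numpy as np

# Standard DTI convention: axial (parallel) diffusivity is the largest
# eigenvalue of the diffusion tensor; radial (perpendicular) diffusivity is
# the mean of the two smaller eigenvalues.
def axial_radial(D):
    """Return (parallel, perpendicular) diffusivity from a 3x3 diffusion tensor."""
    lam = np.sort(np.linalg.eigvalsh(D))[::-1]   # lambda1 >= lambda2 >= lambda3
    parallel = lam[0]                            # diffusivity along the fiber
    perpendicular = (lam[1] + lam[2]) / 2.0      # mean diffusivity across it
    return parallel, perpendicular
```

A 23% drop in the parallel component with an unchanged perpendicular component, as reported above, therefore corresponds to a shrinking largest eigenvalue with the two smaller eigenvalues held fixed.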