Arrow Research

Author name cluster

Mark Gerstein

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
2 author rows

Possible papers (6)

JBHI · 2026 · Journal Article

D-Flow: Multi-modality Flow Matching for D-peptide Design

  • Fang Wu
  • Shuting Jin
  • Xiangru Tang
  • Junlin Xu
  • Mark Gerstein
  • Li Erran Li
  • James Zou

Proteins are crucial to biological processes, and therapeutic peptides are emerging as promising pharmaceutical agents. Among these, D-peptides are resistant to proteolysis, exhibit greater in vivo stability, and are easier to synthesize. Despite advances in deep learning for peptide discovery, the scarcity of natural D-protein data limits the transfer of existing generative models to the D-peptide chemical space. We propose D-Flow, a full-atom flow-based framework for de novo D-peptide design. Conditioned on receptor binding, D-Flow uses structural representations incorporating backbone frames, side-chain angles, and discrete amino acid types. A mirror-image algorithm is implemented to address the lack of training data for D-proteins by converting the chirality of L-receptors. Furthermore, we enhance D-Flow's capacity by integrating protein language models (PLMs) with structural awareness through a lightweight structural adapter that injects structural representations into PLM embeddings. This enables D-Flow to learn conformational priors in the D-peptide chemical space and to accommodate the chiral selectivity of binding sites, thereby mitigating the scarcity of D-peptide data. A two-stage training pipeline and a control toolkit enable D-Flow to transition from general protein design to targeted binder design while preserving pre-training knowledge. Results on the PepMerge benchmark show D-Flow's effectiveness. D-peptides generated by D-Flow align more closely with native sequences and structures, with sequence identity improving by 10.2% over the best baseline and the top affinity score reaching 24.31%. Overall, D-Flow shows potential for D-peptide design, facilitating the development of bioorthogonal and stable molecular tools and diagnostics. Code is available at https://github.com/smiles724/PeptideDesign.
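
The geometric core of such a mirror-image conversion is simple enough to sketch. Below is a minimal illustration, assuming NumPy: reflecting every atomic coordinate through a plane is an improper transform that turns an L-protein structure into its D-enantiomer (in a real pipeline, residue chirality labels and torsion-angle sign conventions must be flipped as well). The function name is illustrative, not D-Flow's actual API.

    import numpy as np

    def mirror_structure(coords: np.ndarray) -> np.ndarray:
        """Reflect an (N, 3) array of atomic coordinates through the yz-plane.

        Chirality is a property of 3D geometry, so a single reflection
        (determinant -1) converts each L-residue into its D counterpart.
        """
        mirrored = coords.copy()
        mirrored[:, 0] *= -1.0  # negate x: an improper transform
        return mirrored

    # Mirroring twice recovers the original structure.
    coords = np.random.rand(128, 3)
    assert np.allclose(mirror_structure(mirror_structure(coords)), coords)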

ICLR · 2025 · Conference Paper

ChemAgent: Self-updating Memories in Large Language Models Improves Chemical Reasoning

  • Xiangru Tang
  • Tianyu Hu
  • Muyang Ye
  • Yanjun Shao
  • Xunjian Yin
  • Siru Ouyang
  • Wangchunshu Zhou
  • Pan Lu

Chemical reasoning usually involves complex, multi-step processes that demand precise calculations, where even minor errors can lead to cascading failures. Furthermore, large language models (LLMs) encounter difficulties handling domain-specific formulas, executing reasoning steps accurately, and integrating code effectively when tackling chemical reasoning tasks. To address these challenges, we present ChemAgent, a novel framework designed to improve the performance of LLMs through a dynamic, self-updating library. This library is developed by decomposing chemical tasks into sub-tasks and compiling these sub-tasks into a structured collection that can be referenced for future queries. Then, when presented with a new problem, ChemAgent retrieves and refines pertinent information from the library, which we call memory, facilitating effective task decomposition and the generation of solutions. Our method designs three types of memory and a library-enhanced reasoning component, enabling LLMs to improve over time through experience. Experimental results on four chemical reasoning datasets from SciBench demonstrate that ChemAgent achieves performance gains of up to 46% (GPT-4), significantly outperforming existing methods. Our findings suggest substantial potential for future applications, including tasks such as drug discovery and materials science. Our code can be found at https://github.com/gersteinlab/ChemAgent.
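
As a rough sketch of the retrieve-and-update loop the abstract describes (not ChemAgent's actual code; the class and similarity metric here are placeholder assumptions), a self-updating library can be as simple as a store of solved sub-tasks that is queried by similarity and grown after each solve:

    from difflib import SequenceMatcher

    class TaskMemory:
        """Toy self-updating library: store solved sub-tasks, retrieve
        the most similar ones for a new query, append new solutions."""

        def __init__(self):
            self.entries = []  # (sub_task, solution) pairs

        def add(self, sub_task: str, solution: str) -> None:
            self.entries.append((sub_task, solution))

        def retrieve(self, query: str, k: int = 3):
            scored = [(SequenceMatcher(None, query, task).ratio(), task, sol)
                      for task, sol in self.entries]
            return sorted(scored, reverse=True)[:k]

    memory = TaskMemory()
    memory.add("ideal gas: solve PV = nRT for n", "n = P*V / (R*T)")
    memory.add("convert Celsius to Kelvin", "K = C + 273.15")
    print(memory.retrieve("solve PV = nRT for T", k=1))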

NeurIPS · 2025 · Conference Paper

E2Former: An Efficient and Equivariant Transformer with Linear-Scaling Tensor Products

  • Yunyang Li
  • Lin Huang
  • Zhihao Ding
  • Xinran Wei
  • Chu Wang
  • Han Yang
  • Zun Wang
  • Chang Liu

Equivariant Graph Neural Networks (EGNNs) have demonstrated significant success in modeling microscale systems, including those in chemistry, biology, and materials science. However, EGNNs face substantial computational challenges due to the high cost of constructing edge features via spherical tensor products, making them almost impractical for large-scale systems. To address this limitation, we introduce E2Former, an equivariant and efficient transformer architecture that incorporates a Wigner $6j$ convolution (Wigner $6j$ Conv). By shifting the computational burden from edges to nodes, Wigner $6j$ Conv reduces the complexity from $O(|\mathcal{E}|)$ to $O(|\mathcal{V}|)$ while preserving both the model's expressive power and rotational equivariance. We show that this approach achieves a 7x–30x speedup compared to conventional $\mathrm{SO}(3)$ convolutions. Furthermore, our empirical results demonstrate that the derived E2Former mitigates the computational challenges of existing approaches without compromising the ability to capture detailed geometric information. This development could suggest a promising direction for scalable molecular modeling.
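
To see why moving the expensive operation from edges to nodes changes the complexity class, consider any message that factors through a per-node transform: the costly step then runs once per node instead of once per edge. The toy below (plain NumPy, not the Wigner $6j$ machinery itself) makes that accounting concrete for a sum-decomposable message:

    import numpy as np

    rng = np.random.default_rng(0)
    V, E, d = 1000, 20000, 16
    x = rng.random((V, d))                   # node features
    edges = rng.integers(0, V, size=(E, 2))  # (src, dst) pairs
    W = rng.random((d, d))                   # stand-in for a costly transform

    # Edge-centric: the transform is applied once per edge -> O(|E|).
    out_edge = np.zeros((V, d))
    for s, t in edges:
        out_edge[t] += x[s] @ W

    # Node-centric: transform each node once -> O(|V|), then aggregate
    # with cheap indexed adds.
    xw = x @ W
    out_node = np.zeros((V, d))
    np.add.at(out_node, edges[:, 1], xw[edges[:, 0]])

    assert np.allclose(out_edge, out_node)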

ICLR · 2025 · Conference Paper

Enhancing the Scalability and Applicability of Kohn-Sham Hamiltonians for Molecular Systems

  • Yunyang Li
  • Zaishuo Xia
  • Lin Huang
  • Xinran Wei
  • Samuel Harshe
  • Han Yang
  • Erpai Luo
  • Zun Wang

Density Functional Theory (DFT) is a pivotal method within quantum chemistry and materials science, with its core involving the construction and solution of the Kohn-Sham Hamiltonian. Despite its importance, the application of DFT is frequently limited by the substantial computational resources required to construct the Kohn-Sham Hamiltonian. In response to these limitations, current research has employed deep-learning models to efficiently predict molecular and solid Hamiltonians, with roto-translational symmetries encoded in their neural networks. However, the scalability of prior models may be problematic when applied to large molecules, resulting in non-physical predictions of ground-state properties. In this study, we generate a substantially larger training set (PubChemQH) than used previously and use it to create a scalable model for DFT calculations with physical accuracy. For our model, we introduce a loss function derived from physical principles, which we call Wavefunction Alignment Loss (WALoss). WALoss involves performing a basis change on the predicted Hamiltonian to align it with the observed one; thus, the resulting differences can serve as a surrogate for orbital energy differences, allowing models to make better predictions for molecular orbitals and total energies than previously possible. WALoss also substantially accelerates self-consistent-field (SCF) DFT calculations. Here, we show it achieves a reduction in total energy prediction error by a factor of 1347 and an 18% speed-up in SCF calculations. These substantial improvements set new benchmarks for achieving accurate and applicable predictions in larger molecular systems.
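
For context, the Kohn-Sham equations in a finite basis form the generalized eigenproblem $HC = SC\varepsilon$, so orbital energies come from diagonalizing the Hamiltonian against the overlap matrix $S$. The sketch below (SciPy) shows that underlying linear algebra and a crude orbital-energy comparison; it is a loose illustration of the idea, not the paper's WALoss implementation, and all names are assumptions.

    import numpy as np
    from scipy.linalg import eigh

    def orbital_energies(H: np.ndarray, S: np.ndarray) -> np.ndarray:
        """Eigenvalues of the generalized problem H C = S C diag(eps)."""
        eps, _ = eigh(H, S)  # S must be symmetric positive definite
        return eps

    def orbital_energy_loss(H_pred, H_ref, S):
        """Mean-squared orbital-energy gap between predicted and
        reference Hamiltonians, diagonalized in the same basis."""
        return np.mean((orbital_energies(H_pred, S)
                        - orbital_energies(H_ref, S)) ** 2)

    n = 8
    A = np.random.rand(n, n)
    S = A @ A.T + n * np.eye(n)  # SPD stand-in for an overlap matrix
    H_ref = np.random.rand(n, n); H_ref = (H_ref + H_ref.T) / 2
    H_pred = H_ref + 0.01 * np.random.rand(n, n)
    H_pred = (H_pred + H_pred.T) / 2
    print(orbital_energy_loss(H_pred, H_ref, S))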

ICLR · 2024 · Conference Paper

ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs

  • Yujia Qin
  • Shihao Liang
  • Yining Ye
  • Kunlun Zhu
  • Lan Yan
  • Yaxi Lu
  • Yankai Lin
  • Xin Cong

Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.
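
The depth-first decision-tree idea reduces to a backtracking search over chains of API calls, with a scored expansion step deciding which call to try next. A schematic version follows; the candidate generator and scorer here are stand-ins (ToolLLM uses ChatGPT for both), and the API names in the toy run are invented.

    def dfs_solve(state, candidates, is_solution, score, depth=0, max_depth=4):
        """Explore API-call chains depth-first, best-scored branch first."""
        if is_solution(state):
            return state
        if depth == max_depth:
            return None
        # Expand the most promising next API call first.
        for call in sorted(candidates(state), key=score, reverse=True):
            result = dfs_solve(state + [call], candidates, is_solution,
                               score, depth + 1, max_depth)
            if result is not None:
                return result
        return None  # backtrack: this branch is a dead end

    # Toy usage: find the call chain that satisfies the goal check.
    target = ["geocode", "weather"]
    path = dfs_solve([], lambda s: ["geocode", "weather", "news"],
                     lambda s: s == target, lambda c: 1.0)
    print(path)  # ['geocode', 'weather']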

NeurIPS · 2023 · Conference Paper

Disentangled Wasserstein Autoencoder for T-Cell Receptor Engineering

  • Tianxiao Li
  • Hongyu Guo
  • Filippo Grazioli
  • Mark Gerstein
  • Martin Renqiang Min

In protein biophysics, the separation between the functionally important residues (forming the active site or binding surface) and those that create the overall structure (the fold) is a well-established and fundamental concept. Identifying and modifying those functional sites is critical for protein engineering but computationally non-trivial, and requires significant domain knowledge. To automate this process from a data-driven perspective, we propose a disentangled Wasserstein autoencoder with an auxiliary classifier, which isolates the function-related patterns from the rest with theoretical guarantees. This enables one-pass protein sequence editing and improves the understanding of the resulting sequences and editing actions involved. To demonstrate its effectiveness, we apply it to T-cell receptors (TCRs), a well-studied structure-function case. We show that our method can be used to alter the function of TCRs without changing the structural backbone, outperforming several competing methods in generation quality and efficiency, and requiring only 10% of the running time needed by baseline models. To our knowledge, this is the first approach that utilizes disentangled representations for TCR engineering.
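
A minimal sketch of the disentangling setup, assuming PyTorch: one encoder with two latent heads (a small function code and a larger structure code) plus an auxiliary classifier that sees only the function half, which pushes function-related signal into that subspace. This omits the Wasserstein regularizer and the paper's actual dimensions and layers; all names are illustrative.

    import torch
    import torch.nn as nn

    class DisentangledAE(nn.Module):
        def __init__(self, vocab=21, seq_len=20, z_func=8, z_struct=24):
            super().__init__()
            d_in = vocab * seq_len
            self.encoder = nn.Sequential(
                nn.Flatten(), nn.Linear(d_in, 128),
                nn.ReLU(), nn.Linear(128, z_func + z_struct))
            self.decoder = nn.Sequential(
                nn.Linear(z_func + z_struct, 128),
                nn.ReLU(), nn.Linear(128, d_in))
            self.aux_clf = nn.Linear(z_func, 2)  # e.g. binder / non-binder
            self.z_func = z_func

        def forward(self, x_onehot):
            z = self.encoder(x_onehot)
            # Split the latent code: z_f carries function, z_s the rest.
            z_f, z_s = z[:, :self.z_func], z[:, self.z_func:]
            recon = self.decoder(torch.cat([z_f, z_s], dim=-1))
            return recon, self.aux_clf(z_f)

One-pass editing then amounts to keeping z_s (the fold) while substituting the z_f of a sequence with the desired function, and decoding.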