Arrow Research search

Author name cluster

Wing-Kin Ma

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
2 author rows

Possible papers (3)

AAAI Conference 2026 · Conference Paper

A Scalable and Exact Relaxation for Densest k-Subgraph via Error Bounds

  • Ya Liu
  • Junbin Liu
  • Wing-Kin Ma
  • Aritra Konar

Given an undirected graph and a size parameter k, the Densest k-Subgraph (DkS) problem extracts the subgraph on k vertices with the largest number of induced edges. While DkS is NP-hard and difficult to approximate, penalty-based continuous relaxations of the problem have recently enjoyed practical success for real-world instances of DkS. In this work, we propose a scalable and exact continuous penalization approach for DkS using the error bound principle, which enables the design of suitable penalty functions. Notably, we develop new theoretical guarantees ensuring that both the global and local optima of the penalized problem match those of the original problem. The proposed penalized reformulation enables the use of first-order continuous optimization methods. In particular, we develop a non-convex proximal gradient algorithm, where the non-convex proximal operator can be computed in closed form, resulting in low per-iteration complexity. We also provide a convergence analysis of the algorithm. Experiments on large-scale instances of the DkS problem and one of its variants, the Densest (k1, k2)-Bipartite Subgraph (Dk1k2BS) problem, demonstrate that our method achieves a favorable balance between computational cost and solution quality.
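As a rough illustration of the penalty-plus-first-order-method recipe described in the abstract (not the paper's exact error-bound construction or its closed-form proximal operator), the sketch below runs projected gradient ascent on a common DkS relaxation, maximizing 0.5*x'Ax + gamma*||x||^2 over the capped simplex {x in [0,1]^n : sum(x) = k}; the penalty weight gamma, step size, and final rounding rule are illustrative assumptions.

    # Hedged sketch: penalty-based continuous relaxation of Densest k-Subgraph,
    # solved by projected gradient ascent (illustrative only, not the authors' method).
    import numpy as np

    def project_capped_simplex(v, k, iters=50):
        """Euclidean projection onto {x in [0,1]^n : sum(x) = k} via bisection on the shift tau."""
        lo, hi = v.min() - 1.0, v.max()
        for _ in range(iters):
            tau = 0.5 * (lo + hi)
            x = np.clip(v - tau, 0.0, 1.0)
            if x.sum() > k:
                lo = tau          # shift too small, sum still above k
            else:
                hi = tau
        return np.clip(v - 0.5 * (lo + hi), 0.0, 1.0)

    def dks_penalized_pg(A, k, gamma=0.5, step=0.1, n_iter=500, seed=0):
        """Projected gradient ascent on 0.5*x'Ax + gamma*||x||^2 over the capped simplex."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        x = project_capped_simplex(rng.random(n), k)
        for _ in range(n_iter):
            grad = A @ x + 2.0 * gamma * x      # gradient of the penalized objective
            x = project_capped_simplex(x + step * grad, k)
        return np.argsort(-x)[:k]               # simple rounding: keep the k largest entries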

ICML Conference 2025 · Conference Paper

Multilayer Matrix Factorization via Dimension-Reducing Diffusion Variational Inference

  • Junbin Liu
  • Farzan Farnia
  • Wing-Kin Ma

Multilayer matrix factorization (MMF) has recently emerged as a generalized model of, and potentially a more expressive approach than, classic matrix factorization. This paper considers MMF under a probabilistic formulation, and our focus is on variational inference methods. The challenge in this context lies in determining a variational process that leads to a computationally efficient and accurate approximation of maximum likelihood inference. One well-known example is the variational autoencoder (VAE), which uses neural networks for the variational process. In this work, we draw insight from variational diffusion models in the context of generative models to develop variational inference for MMF. We propose a dimension-reducing diffusion process that results in a new way to interact with the layered structure of the MMF model. Experimental results demonstrate that the proposed diffusion variational inference method leads to improved performance scores compared to several existing methods, including the VAE.
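To make the MMF model concrete, here is a minimal sketch that fits a two-layer linear factorization Y ≈ W1 W2 H by plain gradient descent on the squared Frobenius loss. It only illustrates the layered model itself; the paper's probabilistic formulation and dimension-reducing diffusion variational inference are not reproduced, and the layer widths and step size are assumptions.

    # Minimal sketch of a two-layer (deterministic) multilayer matrix factorization,
    # fitted by gradient descent; illustrative only, not the paper's inference method.
    import numpy as np

    def fit_mmf_two_layer(Y, d1=20, d2=5, step=1e-3, n_iter=2000, seed=0):
        rng = np.random.default_rng(seed)
        m, n = Y.shape
        W1 = rng.standard_normal((m, d1)) * 0.1   # outer factor
        W2 = rng.standard_normal((d1, d2)) * 0.1  # inner factor (reduces dimension)
        H  = rng.standard_normal((d2, n)) * 0.1   # latent codes
        for _ in range(n_iter):
            R = W1 @ W2 @ H - Y                   # residual of the current fit
            gW1 = R @ (W2 @ H).T                  # gradient of 0.5*||R||_F^2 w.r.t. W1
            gW2 = W1.T @ R @ H.T                  # ... w.r.t. W2
            gH  = (W1 @ W2).T @ R                 # ... w.r.t. H
            W1 -= step * gW1
            W2 -= step * gW2
            H  -= step * gH
        return W1, W2, H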

AAAI Conference 2021 · Conference Paper

Federated Block Coordinate Descent Scheme for Learning Global and Personalized Models

  • Ruiyuan Wu
  • Anna Scaglione
  • Hoi-To Wai
  • Nurullah Karakoc
  • Kari Hreinsson
  • Wing-Kin Ma

In federated learning, models are learned from users’ data that are held private on their edge devices, by aggregating locally trained models in the service provider’s “cloud” to obtain a global model. Such a global model is of great commercial value in, e.g., improving the customers’ experience. In this paper we focus on two possible areas of improvement of the state of the art. First, we take the differences between user habits into account and propose a quadratic penalty-based formulation for efficient learning of a global model that allows local models to be personalized. Second, we address the latency issue associated with heterogeneous training times on edge devices by exploiting a hierarchical structure that models communication not only between the cloud and edge devices, but also within the cloud. Specifically, we devise a tailored block coordinate descent-based computation scheme, accompanied by communication protocols for both the synchronous and asynchronous cloud settings. We characterize the theoretical convergence rate of the algorithm and provide a variant that performs empirically better. We also prove that the asynchronous protocol, inspired by multi-agent consensus techniques, has the potential for large gains in latency compared to a synchronous setting when the edge-device updates are intermittent. Finally, experimental results are provided that not only corroborate the theory but also show that the system leads to faster convergence for personalized models on the edge devices, compared to the state of the art.
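For intuition on the quadratic-penalty formulation and the block coordinate descent structure, the sketch below alternates (i) local updates of personalized models theta_i on each device and (ii) a cloud update of the global model w, for the generic objective sum_i [ f_i(theta_i) + (mu/2)*||theta_i - w||^2 ]. The toy least-squares losses, the synchronous single-tier cloud, and the hyperparameters are assumptions; this is not the paper's tailored scheme or its asynchronous protocols.

    # Hedged sketch of quadratic-penalty personalization solved by block coordinate descent.
    import numpy as np

    def local_update(theta, w, X, y, mu, step=0.05, local_steps=10):
        """Device block: a few gradient steps on f_i(theta) + (mu/2)*||theta - w||^2,
        where f_i is a toy least-squares loss on the device's private data (an assumption)."""
        for _ in range(local_steps):
            grad = X.T @ (X @ theta - y) / len(y) + mu * (theta - w)
            theta = theta - step * grad
        return theta

    def federated_bcd(datasets, dim, mu=1.0, rounds=50, seed=0):
        """datasets: list of (X_i, y_i) pairs, one per edge device."""
        rng = np.random.default_rng(seed)
        w = np.zeros(dim)                                             # global model (cloud block)
        thetas = [rng.standard_normal(dim) * 0.01 for _ in datasets]  # personalized models
        for _ in range(rounds):
            # Block 1: each device updates its personalized model given the current w.
            thetas = [local_update(th, w, X, y, mu) for th, (X, y) in zip(thetas, datasets)]
            # Block 2: the cloud minimizes over w; with the quadratic penalty this is
            # simply the average of the personalized models.
            w = np.mean(thetas, axis=0)
        return w, thetas

The averaging step in block 2 is the exact minimizer of the penalty term with the theta_i fixed, which is what makes the scheme a clean block coordinate descent on the joint objective.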