Arrow Research search

Author name cluster

Malcolm Egan

Papers in Arrow whose author lists contain a case-insensitive exact match for this name. This page groups exact name matches only and is not a full identity-disambiguation profile.

3 papers
2 author rows

Possible papers (3)

NeurIPS 2025 · Conference Paper

Streaming Federated Learning with Markovian Data

  • Khiem Huynh
  • Malcolm Egan
  • Giovanni Neglia
  • Jean-Marie Gorce

Federated learning (FL) is now recognized as a key framework for communication-efficient collaborative learning. Most theoretical and empirical studies, however, rely on the assumption that clients have access to pre-collected data sets, with limited investigation into scenarios where clients continuously collect data. In many real-world applications, particularly when data is generated by physical or biological processes, client data streams are often modeled by non-stationary Markov processes. Unlike standard i.i.d. sampling, the performance of FL with Markovian data streams remains poorly understood due to the statistical dependencies between client samples over time. In this paper, we investigate whether FL can still support collaborative learning with Markovian data streams. Specifically, we analyze the performance of Minibatch SGD, Local SGD, and a variant of Local SGD with momentum. We answer affirmatively under standard assumptions and smooth non-convex client objectives: the sample complexity is proportional to the inverse of the number of clients with a communication complexity comparable to the i.i.d. scenario. However, the sample complexity for Markovian data streams remains higher than for i.i.d. sampling. Our analysis is validated via experiments with real pollution monitoring time series data.
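The Local SGD setting the abstract describes can be sketched in a few lines: each client draws correlated samples from its own Markov chain (rather than i.i.d.), takes several local gradient steps, and the server averages the resulting models. This is a toy illustration under assumed names and a made-up two-state chain, not the paper's code or its actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def markov_stream(state, P, n_steps):
    """Yield states from a finite Markov chain with transition matrix P."""
    for _ in range(n_steps):
        state = rng.choice(len(P), p=P[state])
        yield state

def local_sgd_round(w, grads, P, n_clients=4, local_steps=10, lr=0.1):
    """One Local SGD round: each client runs local steps on its own
    Markovian data stream, then the server averages the client models."""
    client_models = []
    for c in range(n_clients):
        wc = w.copy()
        for s in markov_stream(c % len(P), P, local_steps):
            wc = wc - lr * grads[s](wc)  # gradient depends on the sampled state
        client_models.append(wc)
    return np.mean(client_models, axis=0)

# Toy problem (illustrative): two chain states, each inducing a quadratic
# loss with a different minimiser, so consecutive samples are dependent.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
grads = {0: lambda w: w - 1.0, 1: lambda w: w + 1.0}

w = np.zeros(1)
for _ in range(50):
    w = local_sgd_round(w, grads, P)
```

Because samples within a stream are correlated, each client's local iterates drift toward the minimiser of whichever state the chain currently occupies; averaging across clients and rounds is what recovers progress on the stationary-expectation objective.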

AAAI 2021 · Conference Paper

Asynchronous Optimization Methods for Efficient Training of Deep Neural Networks with Guarantees

  • Vyacheslav Kungurtsev
  • Malcolm Egan
  • Bapi Chatterjee
  • Dan Alistarh

Asynchronous distributed algorithms are a popular way to reduce synchronization costs in large-scale optimization, and in particular for neural network training. However, for nonsmooth and nonconvex objectives, few convergence guarantees exist beyond cases where closed-form proximal operator solutions are available. As training most popular deep neural networks corresponds to optimizing nonsmooth and nonconvex objectives, there is a pressing need for such convergence guarantees. In this paper, we analyze for the first time the convergence of stochastic asynchronous optimization for this general class of objectives. In particular, we focus on stochastic subgradient methods allowing for block variable partitioning, where the shared model is asynchronously updated by concurrent processes. To this end, we use a probabilistic model which captures key features of real asynchronous scheduling between concurrent processes. Under this model, we establish convergence with probability one to an invariant set for stochastic subgradient methods with momentum. From a practical perspective, one issue with the family of algorithms that we consider is that they are not efficiently supported by machine learning frameworks, which mostly focus on distributed data-parallel strategies. To address this, we propose a new implementation strategy for shared-memory-based training of deep neural networks for a partitioned but shared model in single- and multi-GPU settings. Based on this implementation, we achieve on average about 1.2x speedup in comparison to state-of-the-art training methods for popular image classification tasks, without compromising accuracy.
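The block-partitioned asynchronous scheme the abstract describes can be mimicked in a single-threaded toy simulation: at each step one "process" updates one block of the shared model using a possibly stale snapshot, which is the key feature real asynchronous scheduling introduces. All names, the delay model, and the nonsmooth toy objective below are assumptions for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def async_block_subgradient(subgrad, w, n_blocks=4, steps=200,
                            lr=0.05, max_delay=3):
    """Toy simulation of asynchronous block-partitioned subgradient descent:
    each step, one concurrent process writes back a subgradient step on one
    block, computed from a stale snapshot of the shared model."""
    history = [w.copy()]  # past snapshots, used to model staleness
    blocks = np.array_split(np.arange(len(w)), n_blocks)
    for _ in range(steps):
        delay = int(rng.integers(0, max_delay + 1))           # random staleness
        stale = history[max(0, len(history) - 1 - delay)]     # stale read
        b = blocks[int(rng.integers(n_blocks))]               # pick one block
        w[b] -= lr * subgrad(stale)[b]                        # write one block
        history.append(w.copy())
    return w

# Nonsmooth convex toy objective f(w) = ||w - 1||_1, subgradient sign(w - 1);
# it is nondifferentiable at the solution, so plain gradient analysis fails.
subgrad = lambda w: np.sign(w - 1.0)
w = async_block_subgradient(subgrad, np.zeros(8))
```

The simulation shows the qualitative behaviour the paper studies: despite stale reads and per-block writes, the iterates settle into a small neighbourhood of the minimiser, with the neighbourhood size growing with the step size and the maximum delay.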

ECAI 2014 · Conference Paper

A Profit-Aware Negotiation Mechanism for On-Demand Transport Services

  • Malcolm Egan
  • Michal Jakob

As new markets for transportation arise, on-demand transport services are set to grow as more passengers seek affordable personalized journeys. To reduce passenger prices and increase provider revenue, these journeys will often be shared with other passengers. As such, new negotiation mechanisms between passengers and the service provider are required to plan and price journeys. In this paper, we propose a novel profit-aware negotiation mechanism: a multiagent approach that accounts for both passenger and service provider preferences. Our negotiation mechanism prices each passenger's journey, in addition to providing vehicle routing and scheduling. We prove a stability property of our negotiation mechanism using a connection to hedonic games. This connection yields new insights into the link between vehicle routing and passenger pricing. We also show via simulations the dependence of the service provider profit and passenger prices on the number of passengers as well as passenger demographics. In particular, our key observation is that increasing the number of passengers has the effect of increasing passenger diversity, which in turn increases the service provider's profit.