Arrow Research

Author name cluster

Tiancheng Yuan

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact-name matches and is not a full identity-disambiguation profile.

2 papers
1 author row

Possible papers (2)

AAAI Conference 2024 · Short Paper

Digital Twin-Driven Teat Localization and Shape Identification for Dairy Cow (Student Abstract)

  • Aarushi Gupta
  • Yuexing Hao
  • Yuting Yang
  • Tiancheng Yuan
  • Matthias Wieland
  • Parminder S. Basran
  • Ken Birman

Dairy owners invest heavily to keep their animals healthy. There is good reason to hope that technologies such as computer vision and artificial intelligence (AI) could reduce costs, yet obstacles arise when adapting these advanced tools to farming environments. In this work, we applied AI tools to dairy cow teat localization and teat shape classification, obtaining a model that achieves a mean average precision of 0.783. This digital twin-driven approach is intended as a first step towards automating and accelerating the detection and treatment of hyperkeratosis, mastitis, and other medical conditions that significantly burden the dairy industry.
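The headline number in this abstract is a mean average precision (mAP) of 0.783. As a reminder of what that metric measures, below is a minimal Python sketch of detection-style AP: rank predictions by confidence, match them to ground-truth boxes at an IoU threshold, and integrate the precision-recall curve. The box format, the 0.5 IoU threshold, and all names here are illustrative assumptions rather than the paper's evaluation code, and the AP is the uninterpolated area under the PR curve.

```python
# Illustrative sketch of per-class average precision (AP); mAP is the mean
# over classes. Not the authors' pipeline: box format, threshold, and names
# are assumptions.
from collections import defaultdict

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(preds, gts, iou_thresh=0.5):
    """AP for one class. preds: [(image_id, score, box)]; gts: {image_id: [box, ...]}."""
    preds = sorted(preds, key=lambda p: -p[1])    # rank detections by confidence
    claimed = defaultdict(set)                    # ground-truth boxes already matched
    n_gt = sum(len(v) for v in gts.values())
    ap, tp, fp, prev_recall = 0.0, 0, 0, 0.0
    for img, _, box in preds:
        best, best_j = 0.0, -1
        for j, gt in enumerate(gts.get(img, [])):
            if j in claimed[img]:
                continue
            o = iou(box, gt)
            if o > best:
                best, best_j = o, j
        if best >= iou_thresh:                    # true positive: claim that gt box
            claimed[img].add(best_j)
            tp += 1
        else:                                     # false positive: no unclaimed match
            fp += 1
        recall = tp / max(n_gt, 1)
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # accumulate area under the PR curve
        prev_recall = recall
    return ap

# mAP = mean of average_precision over classes, e.g. one AP per teat-shape class.
```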

NeurIPS Conference 2023 · Conference Paper

Coordinating Distributed Example Orders for Provably Accelerated Training

  • A. Feder Cooper
  • Wentao Guo
  • Duc Khiem Pham
  • Tiancheng Yuan
  • Charlie Ruan
  • Yucheng Lu
  • Christopher M. De Sa

Recent research on online Gradient Balancing (GraB) has revealed that there exist permutation-based example orderings for SGD that are guaranteed to outperform random reshuffling (RR). Whereas RR arbitrarily permutes training examples, GraB leverages stale gradients from prior epochs to order examples, achieving a provably faster convergence rate than RR. However, GraB is limited by design: while it demonstrates an impressive ability to scale up training on centralized data, it does not naturally extend to modern distributed ML workloads. We therefore propose Coordinated Distributed GraB (CD-GraB), which uses insights from prior work on kernel thinning to translate the benefits of provably faster permutation-based example ordering to distributed settings. With negligible overhead, CD-GraB exhibits a linear speedup in convergence rate over centralized GraB and outperforms distributed RR on a variety of benchmark tasks.
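The idea the abstract compresses is easiest to see in the single-node GraB step: instead of randomly reshuffling, use the stale per-example gradients cached from the previous epoch to pick an ordering whose prefix sums stay balanced. The sketch below is a simplified rendition of that greedy sign-balancing (herding) step; the function names and NumPy implementation are illustrative assumptions, not the authors' code, and CD-GraB's actual contribution (coordinating such orderings across distributed workers via kernel-thinning ideas) is not shown.

```python
# Minimal single-node sketch of GraB-style ordering, under the assumptions
# stated above. Not CD-GraB and not the authors' implementation.
import numpy as np

def grab_order(stale_grads: np.ndarray) -> list[int]:
    """stale_grads: (n, d) array, one stale gradient per training example."""
    n, d = stale_grads.shape
    centered = stale_grads - stale_grads.mean(axis=0)  # balance around the mean
    run = np.zeros(d)
    front, back = [], []
    for i in range(n):
        g = centered[i]
        # Greedy herding: pick the sign that keeps the running sum small,
        # so gradient "debt" stays bounded along every prefix of the order.
        if np.linalg.norm(run + g) <= np.linalg.norm(run - g):
            run += g
            front.append(i)   # +1 examples go to the front, in order
        else:
            run -= g
            back.append(i)    # -1 examples go to the back, reversed
    return front + back[::-1]

# Usage: after epoch t, cache per-example gradients, then run epoch t+1
# in the order grab_order(cached_grads) rather than a random permutation.
rng = np.random.default_rng(0)
order = grab_order(rng.normal(size=(8, 4)))
print(order)  # a permutation of range(8)
```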