Arrow Research search

Author name cluster

Danny Vainstein

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact-name matches; it is not a full identity-disambiguation profile.

3 papers
2 author rows

Possible papers (3)

AAAI 2023 · Conference Paper

Tree Learning: Optimal Sample Complexity and Algorithms

  • Dmitrii Avdiukhin
  • Grigory Yaroslavtsev
  • Danny Vainstein
  • Orr Fischer
  • Sauman Das
  • Faraz Mirza

We study the problem of learning a hierarchical tree representation of data from labeled samples, taken from an arbitrary (and possibly adversarial) distribution. Consider a collection of data tuples labeled according to their hierarchical structure. The smallest number of such tuples required to accurately label subsequent tuples is of interest for data collection in machine learning. We present optimal sample complexity bounds for this problem in several learning settings, including (agnostic) PAC learning and online learning. Our results are based on tight bounds of the Natarajan and Littlestone dimensions of the associated problem. The corresponding tree classifiers can be constructed efficiently in near-linear time.
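To make the "labeled tuples" setup concrete, here is a minimal, hedged sketch (not the paper's algorithm): assume each label is a triplet (a, b, c) asserting that a and b are more closely related than either is to c, and build a binary tree by greedily merging the pair of clusters most often labeled as the close pair.

```python
from collections import defaultdict

def tree_from_triplets(items, triplets):
    """Illustrative greedy agglomeration from triplet labels.

    A triplet (a, b, c) means a and b are closer to each other than
    either is to c. Repeatedly merge the two clusters with the highest
    total cross-pair triplet count. Returns a nested-tuple binary tree.
    """
    # Pairwise score: number of triplets declaring the pair "close".
    sim = defaultdict(int)
    for a, b, _c in triplets:
        sim[frozenset((a, b))] += 1

    # Each cluster tracks its member set and its subtree so far.
    clusters = [(frozenset([x]), x) for x in items]
    while len(clusters) > 1:
        i, j = max(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: sum(
                sim[frozenset((u, v))]
                for u in clusters[ij[0]][0]
                for v in clusters[ij[1]][0]
            ),
        )
        (si, ti), (sj, tj) = clusters[i], clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append((si | sj, (ti, tj)))
    return clusters[0][1]
```

This is only a sketch of the data model; it runs in quadratic time per merge and makes no sample-complexity claims, whereas the paper's constructions are near-linear.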

ICML 2021 · Conference Paper

Hierarchical Clustering of Data Streams: Scalable Algorithms and Approximation Guarantees

  • Anand Rajagopalan
  • Fabio Vitale
  • Danny Vainstein
  • Gui Citovsky
  • Cecilia Magdalena Procopiuc
  • Claudio Gentile

We investigate the problem of hierarchically clustering data streams containing metric data in R^d. We introduce a desirable invariance property for such algorithms, describe a general family of hyperplane-based methods enjoying this property, and analyze two scalable instances of this general family against recently popularized similarity/dissimilarity-based metrics for hierarchical clustering. We prove a number of new results related to the approximation ratios of these algorithms, improving in various ways over the literature on this subject. Finally, since our algorithms are principled but also very practical, we carry out an experimental comparison on both synthetic and real-world datasets showing competitive results against known baselines.