Arrow Research · Search

Author name cluster

Maya Fuchs

Papers whose author list contains this exact name in Arrow. This page groups case-insensitive exact-name matches; it is not a full identity-disambiguation profile.

2 papers
1 author row

Possible papers (2)

NeurIPS 2024 · Conference Paper

Assemblage: Automatic Binary Dataset Construction for Machine Learning

  • Chang Liu
  • Rebecca Saul
  • Yihao Sun
  • Edward Raff
  • Maya Fuchs
  • Townsend Southard Pantano
  • James Holt
  • Kristopher Micinski

Binary code is pervasive, and binary analysis is a key task in reverse engineering, malware classification, and vulnerability discovery. Unfortunately, while there exist large corpuses of malicious binaries, obtaining high-quality corpuses of benign binaries for modern systems has proven challenging (e.g., due to licensing issues). Consequently, machine learning based pipelines for binary analysis utilize either costly commercial corpuses (e.g., VirusTotal) or open-source binaries (e.g., coreutils) available in limited quantities. To address these issues, we present Assemblage: an extensible cloud-based distributed system that crawls, configures, and builds Windows PE binaries to obtain high-quality binary corpuses suitable for training state-of-the-art models in binary analysis. We have run Assemblage on AWS over the past year, producing 890k Windows PE and 428k Linux ELF binaries across 29 configurations. Assemblage is designed to be both reproducible and extensible, enabling users to publish "recipes" for their datasets, and facilitating the extraction of a wide array of features. We evaluated Assemblage by using its data to train modern learning-based pipelines for compiler provenance and binary function similarity. Our results illustrate the practical need for robust corpuses of high-quality Windows PE binaries in training modern learning-based binary analyses.
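The abstract mentions extracting "a wide array of features" from built binaries for learning-based pipelines. As a hedged illustration (not Assemblage's actual API), two of the most common static features over raw binary bytes are the 256-bin byte histogram and Shannon entropy, sketched below:

```python
import math
from collections import Counter

def byte_histogram(data: bytes) -> list[int]:
    """256-bin histogram of byte values, a common static feature
    vector for learning-based binary analysis."""
    counts = Counter(data)
    return [counts.get(b, 0) for b in range(256)]

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; high entropy often flags
    packed or encrypted sections in a PE/ELF binary."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# Illustrative inputs (real pipelines would read section bytes
# from a built PE or ELF file):
print(shannon_entropy(bytes(range(256))))  # 8.0 bits/byte: uniform bytes
print(shannon_entropy(b"\x00" * 64))       # 0.0: constant data
```

Features like these feed directly into the kinds of compiler-provenance and function-similarity models the abstract evaluates.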

NeurIPS 2022 · Conference Paper

A General Framework for Auditing Differentially Private Machine Learning

  • Fred Lu
  • Joseph Munoz
  • Maya Fuchs
  • Tyler LeBlond
  • Elliott Zaresky-Williams
  • Edward Raff
  • Francis Ferraro
  • Brian Testa

We present a framework to statistically audit the privacy guarantee conferred by a differentially private machine learner in practice. While previous works have taken steps toward evaluating privacy loss through poisoning attacks or membership inference, they have been tailored to specific models or have demonstrated low statistical power. Our work develops a general methodology to empirically evaluate the privacy of differentially private machine learning implementations, combining improved privacy search and verification methods with a toolkit of influence-based poisoning attacks. We demonstrate significantly improved auditing power over previous approaches on a variety of models including logistic regression, Naive Bayes, and random forest. Our method can be used to detect privacy violations due to implementation errors or misuse. When violations are not present, it can aid in understanding the amount of information that can be leaked from a given dataset, algorithm, and privacy specification.