Arrow Research search

Author name cluster

Hyungrok Do

Papers in Arrow that may be associated with this exact author name. This page groups case-insensitive exact-name matches; it is not a full identity-disambiguation profile.

4 papers
2 author rows

Possible papers (4)

NeurIPS Conference 2025 Conference Paper

Adaptive Time Encoding for Irregular Multivariate Time-Series Classification

  • Sangho Lee
  • Kyeongseo Min
  • Youngdoo Son
  • Hyungrok Do

Time series are often irregularly sampled with uneven time intervals. In multivariate cases, such irregularities may lead to misaligned observations across variables and varying observation counts, making it difficult to extract intrinsic patterns and degrading the classification performance of deep learning models. In this study, we propose an adaptive time encoding approach to address the challenge of irregular sampling in multivariate time-series classification. Our approach generates latent representations at learnable reference points that capture missingness patterns in irregular sequences, enhancing classification performance. We also introduce consistency regularization techniques to incorporate intricate temporal and intervariable information into the learned representations. Extensive experiments demonstrate that our method achieves state-of-the-art performance with high computational efficiency in irregular multivariate time-series classification tasks.
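The core idea in the abstract, aggregating irregularly sampled observations into latent representations at a set of reference points, can be illustrated with a toy kernel-attention sketch. Everything here is a simplified assumption: the reference grid is fixed rather than learnable, the "encoder" is a softmax over temporal distances, and the function name is invented for illustration; the paper's actual model is a deep network.

```python
import numpy as np

def encode_at_reference_points(t_obs, x_obs, t_ref, bandwidth=1.0):
    """Aggregate irregular observations onto reference time points.

    Toy version of the idea: each reference point attends to all
    observations, weighting them by temporal proximity.
    t_obs: (n,) observation times; x_obs: (n, d) observed values;
    t_ref: (r,) reference times (learnable in the actual model).
    """
    # squared time distances between references and observations: (r, n)
    d2 = (t_ref[:, None] - t_obs[None, :]) ** 2
    logits = -d2 / bandwidth
    # softmax over observations for each reference point
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ x_obs  # (r, d): one latent vector per reference point

t_obs = np.array([0.1, 0.7, 2.3, 2.4])           # uneven sampling intervals
x_obs = np.array([[1.0], [2.0], [0.5], [0.7]])
t_ref = np.linspace(0.0, 2.5, 4)                 # fixed grid for the sketch
z = encode_at_reference_points(t_obs, x_obs, t_ref)
print(z.shape)  # (4, 1): a regular-grid representation of an irregular series
```

The output lives on a regular grid regardless of how unevenly the input was sampled, which is what lets a downstream classifier ignore the irregularity.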

ICLR Conference 2023 Conference Paper

Domain Generalization via Heckman-type Selection Models

  • Hyungu Kahng
  • Hyungrok Do
  • Judy Zhong

The domain generalization (DG) setup considers the problem where models are trained on data sampled from multiple domains and evaluated on test domains unseen during training. In this paper, we formulate DG as a sample selection problem where each domain is sampled from a common underlying population through non-random sampling probabilities that correlate with both the features and the outcome. Under this setting, the fundamental i.i.d. assumption of empirical risk minimization (ERM) is violated, so ERM often performs worse on test domains whose non-random sampling probabilities differ from those of the domains in the training dataset. We propose a Selection-Guided DG (SGDG) framework to learn the selection probability of each domain and the joint distribution of the outcome and domain selection variables. The proposed SGDG is domain generalizable as it aims to minimize the risk under the population distribution. We theoretically prove that, under certain regularity conditions, SGDG can achieve smaller risk than ERM. Furthermore, we present a class of parametric SGDG (HeckmanDG) estimators applicable to continuous, binary, and multinomial outcomes. We also demonstrate its efficacy empirically through simulations and experiments on a set of benchmark datasets, comparing it with other well-known DG methods.
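The Heckman-style correction underlying this framework can be illustrated on a linear toy problem: when samples are selected non-randomly through an index correlated with the outcome's error, naive least squares is biased, and adding the inverse Mills ratio of the selection index as a regressor restores the population coefficients. This sketch takes the true selection index as known for brevity (a real estimator fits a probit first) and is not the paper's HeckmanDG estimator.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                      # feature driving selection
x = rng.normal(size=n)                      # feature driving the outcome
# correlated errors: corr(u0, u1) = 0.8 couples selection and outcome
u = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=n)
selected = (0.5 * z + u[:, 0]) > 0          # non-random sample selection
y = 1.0 + 2.0 * x + u[:, 1]                 # true population outcome model

# Step 1: selection index. A real estimator fits a probit here; for the
# sketch we take the true index 0.5*z as given.
idx = 0.5 * z
pdf = np.exp(-idx ** 2 / 2) / np.sqrt(2 * np.pi)
cdf = 0.5 * (1.0 + np.vectorize(erf)(idx / np.sqrt(2)))
mills = pdf / cdf                           # inverse Mills ratio

# Step 2: outcome regression on the selected sample, with the Mills
# ratio as an extra regressor to absorb the selection bias.
X = np.column_stack([np.ones(n), x, mills])[selected]
beta, *_ = np.linalg.lstsq(X, y[selected], rcond=None)
print(beta)  # ≈ [1, 2, 0.8]: bias-corrected intercept, slope, error correlation
```

Dropping the Mills-ratio column here reproduces the ERM failure mode described in the abstract: the fit is optimal for the selected sample but biased for the population.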

ICML Conference 2022 Conference Paper

Fair Generalized Linear Models with a Convex Penalty

  • Hyungrok Do
  • Preston Putzel
  • Axel S. Martin
  • Padhraic Smyth
  • Judy Zhong

Despite recent advances in algorithmic fairness, and although GLMs are widely used in practice, methodologies for achieving fairness with generalized linear models (GLMs) have yet to be explored in general. In this paper we introduce two fairness criteria for GLMs, based on equalizing expected outcomes or log-likelihoods. We prove that for GLMs both criteria can be achieved via a convex penalty term based solely on the linear components of the GLM, thus permitting efficient optimization. We also derive theoretical properties of the resulting fair GLM estimator. To empirically demonstrate the efficacy of the proposed fair GLM, we compare it with other well-known fair prediction methods on an extensive set of benchmark datasets for binary classification and regression. In addition, we demonstrate that the fair GLM can generate fair predictions for a range of response variables beyond binary and continuous outcomes.
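The flavor of a convex penalty on the linear components can be sketched with logistic regression: penalizing the squared gap between group-wise mean linear predictors is convex in the weights (it is a quadratic in w), so the penalized objective remains easy to minimize by gradient descent. This is an illustrative criterion loosely inspired by the abstract, not the paper's exact penalty; all names and constants are assumptions of the sketch.

```python
import numpy as np

def fit_fair_logreg(X, y, g, lam=5.0, lr=0.05, iters=3000):
    """Logistic regression plus a convex fairness penalty.

    Penalty: lam * (mean linear predictor of group 0
                    - mean linear predictor of group 1)**2,
    a convex quadratic in the weights w.
    """
    n, d = X.shape
    w = np.zeros(d)
    # fixed direction: difference of group-wise feature means
    d_mean = X[g == 0].mean(axis=0) - X[g == 1].mean(axis=0)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad_nll = X.T @ (p - y) / n           # logistic loss gradient
        gap = d_mean @ w                       # group gap in linear scores
        grad_pen = 2.0 * lam * gap * d_mean    # gradient of the penalty
        w -= lr * (grad_nll + grad_pen)
    return w

rng = np.random.default_rng(1)
n = 2000
g = rng.integers(0, 2, size=n)                 # binary group indicator
x = rng.normal(loc=g, scale=1.0, size=n)       # feature correlated with group
X = np.column_stack([np.ones(n), x])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(x - 0.5)))).astype(float)

w_fair = fit_fair_logreg(X, y, g)
gap = (X[g == 0] @ w_fair).mean() - (X[g == 1] @ w_fair).mean()
print(abs(gap))  # small: group mean linear predictors are near-equalized
```

Because the penalty acts only on the linear scores, not on the GLM's nonlinear link, the same construction carries over to other exponential-family outcomes, which is the structural point the abstract exploits.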