Arrow Research search

Author name cluster

Ye Seul Sim

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
2 author rows

Possible papers (4)

AAAI Conference 2026 System Paper

AEGIS: Toward Expert-in-the-loop Industrial Anomaly Detection

  • Dongmin Kim
  • Ye Seul Sim
  • Suhee Yoon
  • Sanghyu Yoon
  • Seungdong Yoa
  • Soonyoung Lee
  • Woohyung Lim

Anomaly detection platforms in real-world environments require continuous interaction between automated systems and domain experts, as anomalies evolve dynamically and their definitions vary across contexts. Therefore, an effective platform must collaborate with experts and incorporate their feedback to update the system. This paper introduces AEGIS, an anomaly detection platform that aims to support interaction between domain experts and data-driven agents through three core capabilities: (1) data-driven insights through real-time monitoring, explanations, and distribution shift detection, which invoke customized tools to generate appropriate responses, (2) an expert feedback interface for labeling and direct updates via chat-based interaction, and (3) autonomous model construction that leverages expert-labeled data with LLM-driven hyperparameter optimization. Through this design, AEGIS fosters continuous interaction in which the platform provides insights while experts guide model improvement, ensuring user intent is reflected and robustness is maintained under evolving data distributions.

AAAI Conference 2025 Conference Paper

Diffusion-based Semantic Outlier Generation via Nuisance Awareness for Out-of-Distribution Detection

  • Suhee Yoon
  • Sanghyu Yoon
  • Ye Seul Sim
  • Sungik Choi
  • Kyungeun Lee
  • Hye-Seung Cho
  • Hankook Lee
  • Woohyung Lim

Out-of-distribution (OOD) detection, i.e., determining whether a given sample belongs to the in-distribution (ID) or not, has recently been explored through generative outlier synthesis, especially with diffusion models. Nonetheless, existing diffusion-based approaches often produce outliers that are considerably distant from the ID in pixel space, showing limited efficacy in capturing subtle distinctions between ID and OOD. To address these issues, we propose a novel framework, Semantic Outlier generation via Nuisance Awareness (SONA), which directly utilizes informative pixel-space ID images in diffusion models. As a result, the generated outliers achieve two crucial properties: (i) they closely resemble the ID mainly in nuisances, while (ii) representing discriminative semantic information. To control the effects on semantics and nuisances separately, we introduce SONA guidance, which provides region-specific guidance. Extensive experiments demonstrate the effectiveness of our framework, achieving an impressive AUROC of 87% on near-OOD datasets, surpassing baseline methods by a significant margin of approximately 6%.
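The core idea of the abstract, keeping an ID image's low-level nuisances while shifting its semantics, can be illustrated with a toy sketch. This is not the authors' method: the function name, the scalar timestep `t`, the `sem_mask` region, and the `guide_scale` knob are all hypothetical stand-ins for a real diffusion model with region-specific guidance.

```python
import numpy as np

def toy_semantic_outlier(x0, t=0.5, sem_mask=None, guide_scale=2.0, rng=None):
    """Toy illustration (not SONA itself): partially diffuse an ID
    image and perturb a hypothetical 'semantic' region more strongly,
    so nuisances stay close to the ID while semantics shift."""
    rng = np.random.default_rng() if rng is None else rng
    if sem_mask is None:
        sem_mask = np.ones_like(x0)
    eps = rng.normal(size=x0.shape)
    # Forward diffusion to an intermediate timestep t in (0, 1):
    # most low-level (nuisance) content of x0 is preserved.
    x_t = np.sqrt(1.0 - t) * x0 + np.sqrt(t) * eps
    # Region-specific perturbation mimics guidance that acts on the
    # semantic region only, pushing it away from the ID.
    return x_t + guide_scale * t * sem_mask * rng.normal(size=x0.shape)

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8))            # stand-in for an ID image
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0                    # hypothetical semantic region
outlier = toy_semantic_outlier(x0, t=0.3, sem_mask=mask, rng=rng)
```

Outside the masked region the output is only mildly noised, which is the sense in which such an outlier "closely resembles the ID mainly in nuisances".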

AAAI Conference 2025 Conference Paper

Representation Space Augmentation for Effective Self-Supervised Learning on Tabular Data

  • Moonjung Eo
  • Kyungeun Lee
  • Hye-Seung Cho
  • Dongmin Kim
  • Ye Seul Sim
  • Woohyung Lim

Tabular data, widely used across industries, remains underexplored in deep learning. Self-supervised learning (SSL) shows promise for pre-training deep neural networks (DNNs) on tabular data, but its potential is hindered by challenges in designing suitable augmentations. Unlike image and text data, where SSL leverages inherent spatial or semantic structures, tabular data lacks such explicit structure. This makes traditional input-level augmentations, like modifying or removing features, less effective due to difficulties in balancing critical information preservation with variability. To address these challenges, we propose RaTab, a novel method that shifts augmentation from input-level to representation-level using matrix factorization, specifically truncated SVD. This approach preserves essential data structures while generating diverse representations by applying dropout at various stages of the representation, thereby significantly enhancing SSL performance for tabular data.
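The representation-level augmentation the abstract describes can be sketched with a truncated SVD followed by dropout. This is a minimal simplification, not RaTab's actual implementation: the function name, the `rank` cutoff, and applying dropout only once (rather than "at various stages of the representation") are assumptions for illustration.

```python
import numpy as np

def ratab_like_views(H, rank=8, drop_p=0.3, rng=None):
    """Two augmented views of a representation matrix H (rows are
    sample representations) via truncated SVD plus dropout.
    A simplified sketch of the RaTab idea, not the authors' code."""
    rng = np.random.default_rng() if rng is None else rng
    # Truncated SVD keeps only the top-`rank` components, preserving
    # the essential structure of the representations.
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]

    def drop(M):
        # Representation-level dropout yields a diverse view without
        # modifying or removing input features.
        mask = rng.random(M.shape) > drop_p
        return M * mask / (1.0 - drop_p)

    return drop(H_low), drop(H_low)

rng = np.random.default_rng(0)
H = rng.normal(size=(32, 16))           # e.g. a batch of 32 representations
view1, view2 = ratab_like_views(H, rank=4, rng=rng)
```

The two views share the same low-rank structure but differ in their dropout masks, which is the kind of pair a contrastive or consistency-based SSL objective would then pull together.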

ICML Conference 2024 Conference Paper

Binning as a Pretext Task: Improving Self-Supervised Learning in Tabular Domains

  • Kyungeun Lee
  • Ye Seul Sim
  • Hye-Seung Cho
  • Moonjung Eo
  • Suhee Yoon
  • Sanghyu Yoon
  • Woohyung Lim

The ability of deep networks to learn superior representations hinges on leveraging the proper inductive biases, considering the inherent properties of datasets. In tabular domains, it is critical to effectively handle heterogeneous features (both categorical and numerical) in a unified manner and to grasp irregular functions like piecewise constant functions. To address the challenges in the self-supervised learning framework, we propose a novel pretext task based on the classical binning method. The idea is straightforward: reconstructing the bin indices (either orders or classes) rather than the original values. This pretext task provides the encoder with an inductive bias to capture the irregular dependencies, mapping from continuous inputs to discretized bins, and mitigates the feature heterogeneity by setting all features to have category-type targets. Our empirical investigations ascertain several advantages of binning: capturing the irregular function, compatibility with encoder architecture and additional modifications, standardizing all features into equal sets, grouping similar values within a feature, and providing ordering information. Comprehensive evaluations across diverse tabular datasets corroborate that our method consistently improves tabular representation learning performance for a wide range of downstream tasks. The code is available at https://github.com/kyungeun-lee/tabularbinning.
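The pretext targets the abstract describes, bin indices in place of raw values, can be sketched directly. This is an illustrative simplification under assumed choices (quantile-based edges, `n_bins=10`); the paper's method trains an encoder to reconstruct targets like these, which is not shown here.

```python
import numpy as np

def bin_targets(X, n_bins=10):
    """Quantile-bin each numerical feature of X and return the bin
    indices, the reconstruction targets of the binning pretext task.
    (Illustrative sketch; binning choices may differ in the paper.)"""
    targets = np.empty(X.shape, dtype=np.int64)
    for j in range(X.shape[1]):
        # Interior quantile edges turn each feature into equal-sized,
        # ordered categories, standardizing heterogeneous features.
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        targets[:, j] = np.searchsorted(edges, X[:, j], side="right")
    return targets

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # toy table of numerical features
Y = bin_targets(X, n_bins=10)           # category-type targets in {0, ..., 9}
```

Because every feature now has the same set of ordered, category-type targets, a single classification-style reconstruction head suffices regardless of each feature's original scale or distribution.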