Arrow Research search

Author name cluster

Ohyun Jo

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

8 papers
1 author row

Possible papers


AAAI Conference 2026 Short Paper

C2R-KD: Complex to Real Knowledge Distillation (Student Abstract)

  • Byunghyuk Youn
  • Ohyun Jo

This work proposes C2R-KD, which applies a Complex-to-Real projection to map complex-domain features into the real domain. C2R-KD mitigates the complex-real domain mismatch to strengthen the representational capacity of the student model, and further improves distillation performance through hybrid distillation of features and logits simultaneously. Experimental results demonstrate higher accuracy than conventional KD across all test environments.
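The abstract's two ingredients, a Complex-to-Real projection and a hybrid feature-plus-logit loss, can be sketched as below. This is a minimal reconstruction, not the paper's implementation: the choice of projection (stacking real and imaginary parts) and the loss weighting `alpha` are assumptions for illustration.

```python
import numpy as np

def c2r_project(z):
    """One plausible Complex-to-Real projection: stack the real and
    imaginary parts of complex-domain features along the last axis."""
    return np.concatenate([z.real, z.imag], axis=-1)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def hybrid_kd_loss(t_feat, s_feat, t_logits, s_logits, alpha=0.5):
    """Hybrid distillation: weighted sum of a feature-matching term
    (MSE) and a logit-matching term (KL divergence teacher -> student)."""
    feat_loss = np.mean((t_feat - s_feat) ** 2)
    p, q = softmax(t_logits), softmax(s_logits)
    logit_loss = np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1))
    return alpha * feat_loss + (1 - alpha) * logit_loss

z = np.array([[1 + 2j, 3 - 1j]])
feat = c2r_project(z)   # real parts [1, 3] followed by imaginary parts [2, -1]
```

When the student matches the teacher exactly, both terms vanish, so the loss is zero by construction.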

AAAI Conference 2026 Short Paper

CompRestacking: Capturing Channel Dependency in Highly Correlated Multivariate Time Series Data (Student Abstract)

  • Min Kim
  • Ohyun Jo

The consideration of channel correlation is crucial for improving the performance of multivariate time series forecasting. However, existing models fail to capture it in homogeneous and highly correlated channels. In this work, we introduce CompRestacking (Compression Restacking), a strikingly intuitive and effective method to address this problem. The approach consists of three main components: (1) PCC-Restacking for correlation-aware channel ordering, (2) Temporal embedding for time encoding, and (3) Aggregation compression for compact token generation. In experiments, CompRestacking consistently outperforms existing models, demonstrating that it leverages strong channel correlations for improved performance.
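The PCC-Restacking step (correlation-aware channel ordering) can be sketched as a greedy reordering by Pearson correlation coefficient. This is a hypothetical reconstruction from the abstract, not the paper's algorithm; the greedy nearest-correlation heuristic is an assumption.

```python
import numpy as np

def pcc_restack(x):
    """Sketch of correlation-aware channel ordering: start from channel 0
    and repeatedly append the remaining channel with the highest Pearson
    correlation to the last placed one, so strongly correlated channels
    end up adjacent in the restacked array."""
    corr = np.corrcoef(x)                 # (channels x channels) PCC matrix
    order = [0]
    remaining = set(range(1, x.shape[0]))
    while remaining:
        nxt = max(remaining, key=lambda c: corr[order[-1], c])
        order.append(nxt)
        remaining.remove(nxt)
    return x[order], order

t = np.arange(8.0)
x = np.stack([t, -t, 2.0 * t + 1.0])      # ch2 correlates with ch0; ch1 anti-correlates
restacked, order = pcc_restack(x)         # order == [0, 2, 1]
```

Here channel 2 (perfectly correlated with channel 0) is placed next to it, while the anti-correlated channel 1 is pushed to the end.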

AAAI Conference 2025 Short Paper

Augmented Lagrangian Risk-constrained Reinforcement Learning for Portfolio Optimization (Student Abstract)

  • Bayaraa Enkhsaikhan
  • Ohyun Jo

We applied Risk-averse Reinforcement Learning (RL) to optimize investment portfolios while incorporating risk constraints. Given that portfolios must adhere to risk constraints set by investors and regulators, enforcing hard constraints is essential for practical portfolio optimization. Traditional techniques often lack the flexibility to model the complexities of dynamic financial markets. To address this, we used the Augmented Lagrangian Multiplier (ALM) to impose constraints on the agent, reducing risk during decision-making. Our risk-constrained RL algorithm demonstrated no constraint violations during testing and outperformed other Risk-averse RL methods, indicating its potential for optimizing portfolios for risk-averse investors.
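The Augmented Lagrangian mechanism described above can be sketched as a penalized objective plus a dual-ascent multiplier update. The penalty form and the penalty coefficient `rho` are standard ALM choices assumed here for illustration, not taken from the paper.

```python
def augmented_lagrangian_objective(reward, risk, limit, lam, rho=10.0):
    """Objective the agent maximizes: reward minus an augmented
    Lagrangian penalty that activates when risk exceeds the limit
    (g = risk - limit; the constraint is satisfied when g <= 0)."""
    g = risk - limit
    penalty = lam * g + 0.5 * rho * max(g, 0.0) ** 2
    return reward - penalty

def update_multiplier(lam, risk, limit, rho=10.0):
    """Dual ascent: the multiplier grows while the constraint is
    violated and decays (down to 0) once it is satisfied."""
    return max(0.0, lam + rho * (risk - limit))
```

As training proceeds, a persistently violated risk constraint inflates the multiplier, which in turn makes risky actions increasingly unattractive, which is how ALM enforces hard constraints in practice.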

AAAI Conference 2025 Short Paper

Imitation Learning Backoff: Reinforcement Learning-based Channel Access for Guaranteeing Fairness (Student Abstract)

  • Taegyeom Lee
  • Ohyun Jo

This paper addresses contention window optimization for multi-access scenarios. Our investigation into state-of-the-art models revealed that a limited number of nodes dominate the communication channels. Such monopolization issues are critical in networks as they can lead to significant disruptions. To mitigate this monopolization problem, we propose an imitation learning-based backoff mechanism. The proposed model is a reinforcement learning-based contention window optimization method. It imitates the expert's policy to ensure fair policy convergence for the agent and includes opportunities for weight adjustment to boost performance. The proposed model shows a fairness improvement of approximately 20% to 41% across various scenarios.
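The abstract reports a 20% to 41% fairness improvement without naming its metric; a standard way to quantify channel-access fairness is Jain's index, sketched below. Its use here is an assumption for illustration.

```python
def jains_index(throughputs):
    """Jain's fairness index over per-node throughputs:
    1.0 means a perfectly fair share; it approaches 1/n when a
    single node monopolizes the channel."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))
```

For four nodes, an equal split scores 1.0, while one node taking everything scores 0.25, which is exactly the kind of monopolization gap the proposed backoff mechanism targets.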

AAAI Conference 2024 Short Paper

IncepSeqNet: Advancing Signal Classification with Multi-Shape Augmentation (Student Abstract)

  • Jongseok Kim
  • Ohyun Jo

This work proposes and analyzes IncepSeqNet, a new model combining the Inception Module with an innovative Multi-Shape Augmentation technique. IncepSeqNet excels at feature extraction from sequence signal data consisting of complex numbers, achieving superior classification accuracy across various SNR (Signal-to-Noise Ratio) environments. Experimental results demonstrate that IncepSeqNet outperforms existing models, particularly at low SNR levels. Furthermore, we have confirmed its applicability to practical 5G systems by using real-world signal data.
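One plausible reading of "Multi-Shape Augmentation" is reshaping the same 1-D signal sequence into multiple 2-D grids so that convolutional filters see the samples under different spatial layouts. This interpretation is an assumption from the abstract alone, sketched below.

```python
import numpy as np

def multi_shape_views(seq):
    """Hypothetical multi-shape augmentation: reshape a 1-D complex
    sequence into every full 2-D grid whose dimensions divide its
    length, yielding several layouts of the same samples."""
    n = len(seq)
    shapes = [(h, n // h) for h in range(1, n + 1) if n % h == 0]
    return [seq.reshape(s) for s in shapes]

iq = np.arange(12) + 1j * np.arange(12)   # toy I/Q sequence of 12 complex samples
views = multi_shape_views(iq)             # grids: 1x12, 2x6, 3x4, 4x3, 6x2, 12x1
```

Every view contains the same complex samples in the same order; only the spatial arrangement presented to the convolution changes.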

AAAI Conference 2024 Short Paper

Multivariate Time-Series Imagification with Time Embedding in Constrained Environments (Student Abstract)

  • Seung Woo Kang
  • Ohyun Jo

We present an imagification approach for multivariate time-series data tailored to constrained training environments for NN-based forecasting models. Our imagification process consists of two key steps: Re-stacking and time embedding. In the Re-stacking stage, time-series data are arranged based on high correlation, forming the first image channel using a sliding window technique. The time embedding stage adds two additional image channels by incorporating real-time information. We evaluate our method by comparing it with three benchmark imagification techniques using a simple CNN-based model. Additionally, we conduct a comparison with LSTM, a conventional time-series forecasting model. Experimental results demonstrate that our proposed approach terminates model training three times faster while maintaining forecasting accuracy.
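The two-step pipeline (sliding-window restack into channel 0, time embedding into channels 1 and 2) can be sketched as follows. For brevity this operates on a univariate slice and uses a sin/cos positional encoding as the "real-time" channels; both simplifications are assumptions, since the paper restacks correlated multivariate channels.

```python
import numpy as np

def imagify(series, window):
    """Build a three-channel 'image' from a series: channel 0 restacks
    the series with a sliding window; channels 1-2 embed within-window
    time position as sin/cos (one plausible real-time encoding)."""
    n = len(series) - window + 1
    ch0 = np.stack([series[i:i + window] for i in range(n)])
    phase = 2 * np.pi * np.arange(window) / window
    ch1 = np.tile(np.sin(phase), (n, 1))
    ch2 = np.tile(np.cos(phase), (n, 1))
    return np.stack([ch0, ch1, ch2])      # shape: (3, n, window)

img = imagify(np.arange(10.0), window=4)  # -> a 3 x 7 x 4 "image"
```

The resulting tensor has the layout a small CNN expects, which is what lets a cheap image model replace a heavier sequence model in constrained settings.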

AAAI Conference 2020 Short Paper

Iterative Learning for Reliable Underwater Link Adaptation (Student Abstract)

  • Junghun Byun
  • Yong-Ho Cho
  • Tae-Ho Im
  • Hak-Lim Ko
  • Kyung-Seop Shin
  • Ohyun Jo

This paper describes an iterative learning framework consisting of multi-layer prediction processes for underwater link adaptation. To obtain a dataset in real underwater environments, we implemented OFDM (Orthogonal Frequency Division Multiplexing)-based acoustic communications testbeds for the first time. Actual underwater data measured in the Yellow Sea, South Korea, were used to train the iterative learning model. Remarkably, the iterative learning model achieves up to 25% performance improvement over the conventional benchmark model.
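Link adaptation with iterative refinement can be sketched as below: a threshold-based modulation-and-coding selection plus a toy stand-in for the multi-layer prediction process, where each "layer" shrinks the residual of the previous estimate. The SNR thresholds and the halving rule are illustrative assumptions, not the paper's model.

```python
def select_mcs(snr_db, thresholds=(5.0, 10.0, 15.0)):
    """Link adaptation: pick the highest-rate modulation-and-coding
    scheme whose SNR threshold is met (thresholds are hypothetical)."""
    mcs = 0
    for i, th in enumerate(thresholds, start=1):
        if snr_db >= th:
            mcs = i
    return mcs

def iterative_estimate(measurements, rounds=3):
    """Toy multi-layer predictor: each 'layer' halves the residual
    between the current SNR estimate and the running mean of the
    measured underwater channel."""
    est = measurements[0]
    target = sum(measurements) / len(measurements)
    for _ in range(rounds):
        est += 0.5 * (target - est)
    return est
```

Stacking more layers drives the estimate toward the measured channel statistics, and a better SNR estimate in turn selects a more reliable scheme, which is the intuition behind iterative link adaptation.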