Arrow Research search

Author name cluster

Kamil Faber

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
2 author rows

Possible papers (2)

AAAI 2025 · Conference Paper

TinySubNets: An Efficient and Low Capacity Continual Learning Strategy

  • Marcin Pietron
  • Kamil Faber
  • Dominik Żurek
  • Roberto Corizzo

Continual Learning (CL) is a highly relevant setting gaining traction in recent machine learning research. Among CL works, architectural and hybrid strategies are particularly effective due to their potential to adapt the model architecture as new tasks are presented. However, many existing solutions do not efficiently exploit model sparsity and are prone to capacity saturation due to their inefficient use of available weights, which limits the number of learnable tasks. In this paper, we propose TinySubNets (TSN), a novel architectural CL strategy that addresses these issues through a unique combination of pruning with different sparsity levels, adaptive quantization, and weight sharing. Pruning identifies a subset of weights that preserves model performance, making less relevant weights available for future tasks. Adaptive quantization allows a single weight to be separated into multiple parts which can be assigned to different tasks. Weight sharing between tasks boosts the exploitation of capacity and task similarity, allowing for the identification of a better trade-off between model accuracy and capacity. These features allow TSN to efficiently leverage the available capacity, enhance knowledge transfer, and reduce computational resource consumption. Experimental results involving common benchmark CL datasets and scenarios show that our proposed strategy achieves better results in terms of accuracy than existing state-of-the-art CL strategies. Moreover, our strategy is shown to provide significantly improved exploitation of model capacity.
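To make the "one weight serving multiple tasks" idea more concrete, below is a minimal, hypothetical sketch (not the authors' code, and no names from the paper): each task's weights are quantized to 4 bits and both codes are packed into a single 8-bit slot, so one stored integer per position can be decoded for either task at reduced precision. All function and variable names are illustrative assumptions.

```python
import numpy as np

def quantize(w, bits, w_min, w_max):
    """Uniformly map values in [w_min, w_max] to unsigned integer codes."""
    levels = 2 ** bits - 1
    code = np.round((w - w_min) / (w_max - w_min) * levels)
    return code.astype(np.uint8)

def dequantize(code, bits, w_min, w_max):
    """Map integer codes back to floats in [w_min, w_max]."""
    levels = 2 ** bits - 1
    return w_min + code.astype(np.float32) / levels * (w_max - w_min)

rng = np.random.default_rng(0)
w_task1 = rng.uniform(-1, 1, size=5).astype(np.float32)
w_task2 = rng.uniform(-1, 1, size=5).astype(np.float32)

# Quantize each task's weights to 4 bits and pack both codes into one
# 8-bit slot: task 1 in the low nibble, task 2 in the high nibble.
q1 = quantize(w_task1, 4, -1.0, 1.0)
q2 = quantize(w_task2, 4, -1.0, 1.0)
packed = (q2 << 4) | q1

# At inference time, only the nibble of the active task is unpacked.
w1_hat = dequantize(packed & 0x0F, 4, -1.0, 1.0)
w2_hat = dequantize(packed >> 4, 4, -1.0, 1.0)
print("task 1 max reconstruction error:", np.abs(w_task1 - w1_hat).max())
print("task 2 max reconstruction error:", np.abs(w_task2 - w2_hat).max())
```

This only illustrates the packing arithmetic; how TSN chooses bit-widths, sparsity levels, and which weights are shared between tasks is described in the paper itself.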

ECAI 2023 · Conference Paper

Ada-QPacknet - Multi-Task Forget-Free Continual Learning with Quantization Driven Adaptive Pruning

  • Marcin Pietron
  • Dominik Zurek
  • Kamil Faber
  • Roberto Corizzo

Continual learning (CL) is a challenging machine learning setting that is attracting the interest of an increasing number of researchers. Among recent CL works, architectural strategies appear particularly promising due to their potential to expand and adapt the model architecture as new tasks are presented. However, existing solutions do not efficiently exploit model sparsity due to the adoption of constant pruning ratios. Moreover, current approaches exhibit a tendency to quickly saturate model capacity since the number of weights is limited and each weight is restricted to a single value. In this paper, we propose Ada-QPacknet, a novel architectural CL method that resorts to adaptive pruning and quantization. These two features allow our model to overcome the two crucial issues of effective exploitation of model sparsity and efficient use of model capacity. Specifically, adaptive pruning restores model capacity by reducing the number of weights assigned to each task to a smaller subset of weights that preserves the performance of the full set, allowing other weights to be used for future tasks. Adaptive quantization separates each weight into multiple components with adaptively reduced bit-width, allowing a single weight to solve more than one task without significant performance drops, leading to improved exploitation of model capacity. Experimental results on benchmark CL scenarios show that our proposed method achieves better results in terms of accuracy than existing rehearsal, regularization, and architectural CL strategies. Moreover, our method significantly outperforms forget-free competitors in terms of efficient exploitation of model capacity.
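As a rough illustration of the adaptive-pruning idea described in the abstract, the sketch below (hypothetical, not the paper's implementation) raises a magnitude-pruning ratio step by step and keeps the sparsest tensor whose evaluation score stays within a tolerance of the unpruned baseline. The `evaluate` callable and the toy norm-based score are stand-ins for a real validation loop; all names are assumptions.

```python
import numpy as np

def magnitude_prune(weights, ratio):
    """Zero out the fraction `ratio` of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(ratio * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def adaptive_prune(weights, evaluate, tolerance=0.01, step=0.1):
    """Raise the pruning ratio until `evaluate` drops more than `tolerance`
    below the unpruned baseline; return the sparsest acceptable tensor."""
    baseline = evaluate(weights)
    best = weights.copy()
    ratio = step
    while ratio < 1.0:
        candidate = magnitude_prune(weights, ratio)
        if baseline - evaluate(candidate) > tolerance:
            break
        best = candidate
        ratio += step
    return best

# Toy usage: the "score" is the fraction of the original L2 norm that survives,
# standing in for a real validation-accuracy measurement.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
score = lambda m: np.linalg.norm(m) / np.linalg.norm(w)
pruned = adaptive_prune(w, score, tolerance=0.05)
print("selected sparsity:", (pruned == 0).mean())
```

The per-task ratio found this way frees the zeroed weights for later tasks; Ada-QPacknet additionally reduces bit-widths so that a single retained weight can be reused across tasks, as the abstract describes.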