Arrow Research

Author name cluster

Naeemullah Khan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
1 author row

Possible papers (2)

AAAI Conference 2022 · Conference Paper

DeformRS: Certifying Input Deformations with Randomized Smoothing

  • Motasem Alfarra
  • Adel Bibi
  • Naeemullah Khan
  • Philip H.S. Torr
  • Bernard Ghanem

Deep neural networks are vulnerable to input deformations in the form of vector fields of pixel displacements and to other parameterized geometric deformations, e.g. translations, rotations, etc. Current input deformation certification methods either (i) do not scale to deep networks on large input datasets, or (ii) can only certify a specific class of deformations, e.g. only rotations. We reformulate certification in the randomized smoothing setting for both general vector field and parameterized deformations and propose DeformRS-VF and DeformRS-Par, respectively. Our new formulation scales to large networks on large input datasets. For instance, DeformRS-Par certifies rich deformations, covering translations, rotations, scaling, affine deformations, and other visually aligned deformations such as those parameterized by the Discrete Cosine Transform basis. Extensive experiments on MNIST, CIFAR10, and ImageNet show competitive performance of DeformRS-Par, achieving a certified accuracy of 39% against perturbed rotations in the set [−10°, 10°] on ImageNet.
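
The abstract describes smoothing a classifier over parameterized deformations rather than over additive pixel noise. A minimal sketch of that idea for a single rotation parameter follows; it is not the authors' code, and the names (smoothed_predict, base_classifier, sigma, n_samples) and the choice of PyTorch/torchvision are assumptions for illustration.

```python
# Illustrative sketch only: majority-vote smoothing over a rotation parameter,
# in the spirit of DeformRS-Par. Not the authors' implementation; names are hypothetical.
import torch
import torchvision.transforms.functional as TF

def smoothed_predict(base_classifier, x, sigma=10.0, n_samples=100, num_classes=10):
    """Smoothed prediction: vote of base_classifier over Gaussian-sampled rotation angles."""
    counts = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(n_samples):
        angle = float(torch.randn(()) * sigma)      # rotation angle (degrees) ~ N(0, sigma^2)
        x_def = TF.rotate(x, angle)                 # apply the parameterized deformation to x (C, H, W)
        pred = base_classifier(x_def.unsqueeze(0)).argmax(dim=1)
        counts[pred] += 1
    return int(counts.argmax())                     # most frequent class under random rotations
```

A certified radius would then be derived from the vote statistics, as in standard randomized smoothing; this sketch only shows the smoothed prediction step.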

NeurIPS Conference 2020 · Conference Paper

Continual Learning in Low-rank Orthogonal Subspaces

  • Arslan Chaudhry
  • Naeemullah Khan
  • Puneet Dokania
  • Philip Torr

In continual learning (CL), a learner is faced with a sequence of tasks, arriving one after the other, and the goal is to remember all the tasks once the continual learning experience is finished. The prior art in CL uses episodic memory, parameter regularization or extensible network structures to reduce interference among tasks, but in the end, all the approaches learn different tasks in a joint vector space. We believe this invariably leads to interference among different tasks. We propose to learn tasks in different (low-rank) vector subspaces that are kept orthogonal to each other in order to minimize interference. Further, to keep the gradients of different tasks coming from these subspaces orthogonal to each other, we learn isometric mappings by posing network training as an optimization problem over the Stiefel manifold. To the best of our understanding, we report, for the first time, strong results over the experience-replay baseline with and without memory on standard classification benchmarks in continual learning.
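
A toy sketch of the core idea of task-specific orthogonal low-rank subspaces follows; it is not the paper's implementation (which additionally learns isometric mappings via Stiefel-manifold optimization), and the names (make_task_projectors, feature_dim, rank) are assumptions for illustration.

```python
# Illustrative sketch only: partition the columns of one orthonormal matrix across tasks,
# giving each task a low-rank subspace orthogonal to every other task's subspace.
# Not the authors' implementation; names are hypothetical.
import torch

def make_task_projectors(feature_dim: int, num_tasks: int, rank: int):
    """Return one projector per task; their column spaces are mutually orthogonal."""
    assert num_tasks * rank <= feature_dim, "need enough dimensions for all task subspaces"
    Q, _ = torch.linalg.qr(torch.randn(feature_dim, num_tasks * rank))  # orthonormal columns
    projectors = []
    for t in range(num_tasks):
        B = Q[:, t * rank:(t + 1) * rank]   # basis of task t's rank-`rank` subspace
        projectors.append(B @ B.T)          # orthogonal projector onto that subspace
    return projectors

# Representations projected for different tasks do not interfere (their product is ~0):
P = make_task_projectors(feature_dim=128, num_tasks=4, rank=16)
f = torch.randn(128)
print(torch.allclose(P[0] @ (P[1] @ f), torch.zeros(128), atol=1e-5))  # True
```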