Group Orthogonalization Regularization

Department of Electrical Engineering
Tel Aviv University
BMVC 2023 (Oral)

Group Orthogonalization Regularization (GOR) is motivated by the observation that orthonormal filters are more diverse, expressive, and less redundant than correlated ones. Instead of orthogonalizing all of a layer's filters at once, we enforce orthonormality only within groups of filters, which keeps the regularization computationally efficient. A minimal sketch of the idea follows.
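As a rough illustration, here is a minimal PyTorch sketch of a within-group orthonormality penalty. The function name, the grouping scheme (splitting output filters evenly and dropping any remainder), and the plain Frobenius-norm penalty are illustrative assumptions, not the exact implementation from the paper.

import torch

def group_orthogonality_reg(weight: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Sketch of a group-orthonormality penalty (illustrative, not the paper's code).

    Splits the output filters of a layer into `num_groups` groups and penalizes
    the deviation of each group's Gram matrix from the identity.
    """
    out_channels = weight.shape[0]
    # Flatten each filter into a row vector: (out_channels, fan_in)
    w = weight.reshape(out_channels, -1)
    group_size = out_channels // num_groups
    # Keep an even split; any leftover filters are ignored in this sketch.
    w = w[: num_groups * group_size].reshape(num_groups, group_size, -1)
    # Gram matrix of each group: (num_groups, group_size, group_size)
    gram = torch.bmm(w, w.transpose(1, 2))
    eye = torch.eye(group_size, device=weight.device, dtype=weight.dtype).expand_as(gram)
    # Squared Frobenius distance from the identity, summed over groups
    return ((gram - eye) ** 2).sum()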

Abstract

As neural networks become deeper, the redundancy within their parameters increases. This phenomenon has led to several methods that attempt to reduce the correlation between convolutional filters. We propose a computationally efficient regularization technique that encourages orthonormality between groups of filters within the same layer. Our experiments show that when incorporated into recent adaptation methods for diffusion models and vision transformers (ViTs), this regularization improves performance on downstream tasks. We further show improved robustness when group orthogonality is enforced during adversarial training.

Diffusion Model LoRA Fine-tuning + GOR

We use our regularization method to enforce orthogonality on LoRA layers while fine-tuning a diffusion model on downstream datasets. For each of the grids below, the top row shows the LoRA baseline and the bottom row shows LoRA with GOR. A sketch of how the penalty plugs into the fine-tuning loss follows.
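A hedged sketch of adding the penalty to a LoRA fine-tuning objective, reusing the group_orthogonality_reg sketch above. Selecting LoRA parameters by substring match on their names and the regularization weight are assumptions for illustration, not the paper's exact recipe.

import torch

def gor_loss_over_lora(model: torch.nn.Module,
                       num_groups: int = 4,
                       reg_weight: float = 1e-2) -> torch.Tensor:
    """Sum the group-orthonormality penalty over LoRA weight matrices.

    Parameters are identified by the substring "lora" in their names,
    which is an assumption about the LoRA wrapper being used.
    """
    device = next(model.parameters()).device
    penalty = torch.zeros((), device=device)
    for name, param in model.named_parameters():
        if "lora" in name and param.ndim >= 2:
            penalty = penalty + group_orthogonality_reg(param, num_groups)
    return reg_weight * penalty

# During fine-tuning (sketch): add the penalty to the usual diffusion loss.
#   loss = diffusion_loss + gor_loss_over_lora(unet)
#   loss.backward()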

BibTeX

@article{kurtz2023group,
  title={Group Orthogonalization Regularization For Vision Models Adaptation and Robustness},
  author={Kurtz, Yoav and Bar, Noga and Giryes, Raja},
  journal={arXiv preprint arXiv:2306.10001},
  year={2023}
}