As neural networks become deeper, the redundancy within their parameters increases. This phenomenon has led to several methods that attempt to reduce the correlation between convolutional filters. We propose a computationally efficient regularization technique that encourages orthonormality between groups of filters within the same layer. Our experiments show that when incorporated into recent adaptation methods for diffusion models and vision transformers (ViTs), this regularization improves performance on downstream tasks. We further show improved robustness when group orthogonality is enforced during adversarial training.
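As a rough illustration of the idea, the sketch below shows a group-orthogonality penalty in PyTorch: the layer's filters are split into groups of a hypothetical `group_size`, and each group's Gram matrix is pushed toward the identity. This is a minimal sketch for intuition, not the authors' released implementation.

```python
import torch

def group_orthogonality_penalty(weight: torch.Tensor, group_size: int = 32) -> torch.Tensor:
    """Soft orthonormality penalty applied per group of filters.

    weight: layer weight, e.g. (out_channels, in_channels, kH, kW) for a conv
            layer or (out_features, in_features) for a linear/LoRA layer.
    """
    # Flatten each filter to a row vector: (num_filters, filter_dim).
    w = weight.reshape(weight.shape[0], -1)
    penalty = w.new_zeros(())
    for start in range(0, w.shape[0], group_size):
        g = w[start:start + group_size]                   # one group of filters
        gram = g @ g.t()                                  # (|g|, |g|) Gram matrix
        eye = torch.eye(gram.shape[0], device=w.device, dtype=w.dtype)
        penalty = penalty + (gram - eye).pow(2).sum()     # ||G G^T - I||_F^2 per group
    return penalty
```

Restricting the orthonormality constraint to small groups keeps the Gram matrices of size `group_size × group_size`, which is what makes the penalty cheap compared to enforcing orthogonality across all filters at once.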
We use our regularization method to enforce orthogonality on LoRA layers while fine-tuning a diffusion model on downstream datasets. In each of the grids below, the top row shows the LoRA baseline and the bottom row shows LoRA with GOR (group orthogonalization regularization).
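The snippet below is a hedged sketch of how such a penalty can be added to a LoRA fine-tuning objective, using the `group_orthogonality_penalty` function sketched above. The names `lora_updates`, `reg_weight`, and `group_size` are illustrative, and regularizing the merged update `B @ A` of each LoRA layer is one possible choice; the paper's actual training code may differ.

```python
import torch

def regularized_loss(task_loss: torch.Tensor,
                     lora_updates: list,      # e.g. [layer.lora_B @ layer.lora_A, ...]
                     reg_weight: float = 1e-4,
                     group_size: int = 32) -> torch.Tensor:
    """Add the group-orthogonality penalty over each LoRA low-rank update."""
    reg = sum(group_orthogonality_penalty(w, group_size) for w in lora_updates)
    return task_loss + reg_weight * reg
```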
@article{kurtz2023group,
  title={Group Orthogonalization Regularization For Vision Models Adaptation and Robustness},
  author={Kurtz, Yoav and Bar, Noga and Giryes, Raja},
  journal={arXiv preprint arXiv:2306.10001},
  year={2023}
}