Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA


James Seale Smith1,2, Yen-Chang Hsu1, Lingyu Zhang1, Ting Hua1

Zsolt Kira2, Yilin Shen1, Hongxia Jin1


1Samsung Research America, 2Georgia Institute of Technology

Transactions on Machine Learning Research (TMLR) 2024



A use case of our work: a mobile app sequentially learns new customized concepts. At a later time, the user can generate photos of previously learned concepts, and should also be able to generate photos containing multiple concepts together; this rules out methods such as per-concept adapters or single-image-conditioned diffusion. Furthermore, the concepts are fine-grained, so simply learning new tokens or words is not effective.

Abstract


Recent works demonstrate a remarkable ability to customize text-to-image diffusion models from only a few example images. What happens if you try to customize such models using multiple, fine-grained concepts in a sequential (i.e., continual) manner? In our work, we show that recent state-of-the-art customization of text-to-image models suffers from catastrophic forgetting when new concepts arrive sequentially. Specifically, when adding a new concept, the ability to generate high-quality images of past, similar concepts degrades. To circumvent this forgetting, we propose a new method, C-LoRA, composed of a continually self-regularized low-rank adaptation in the cross-attention layers of the popular Stable Diffusion model. Furthermore, we use customization prompts which do not include the word of the customized object (i.e., "person" for a human face dataset), with custom tokens initialized as completely random embeddings. Importantly, our method induces only marginal additional parameter costs and requires no storage of user data for replay. We show that C-LoRA not only outperforms several baselines in our proposed setting of text-to-image continual customization, which we refer to as Continual Diffusion, but also achieves a new state of the art in the well-established rehearsal-free continual learning setting for image classification. The strong performance of C-LoRA in two separate domains positions it as a compelling solution for a wide range of applications, and we believe it has significant potential for practical impact.
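
As a minimal sketch of the prompt detail above, the snippet below registers a placeholder token and initializes its embedding to random noise rather than copying an existing word such as "person". It assumes the Hugging Face transformers CLIP text encoder used by Stable Diffusion v1.x; the token name <concept-1> and the initialization scale are illustrative assumptions, not values from our released code.

    import torch
    from transformers import CLIPTextModel, CLIPTokenizer

    # Load the tokenizer and text encoder used by Stable Diffusion v1.x.
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

    # Register a new placeholder token for the concept (name is illustrative).
    placeholder = "<concept-1>"
    tokenizer.add_tokens(placeholder)
    text_encoder.resize_token_embeddings(len(tokenizer))

    # Initialize the new token as a completely random embedding, rather than
    # copying the embedding of an existing word such as "person".
    embeddings = text_encoder.get_input_embeddings().weight
    token_id = tokenizer.convert_tokens_to_ids(placeholder)
    with torch.no_grad():
        embeddings[token_id] = torch.randn_like(embeddings[token_id]) * embeddings.std()

    # The customization prompt omits the object word entirely:
    prompt = f"a photo of {placeholder}"  # not "a photo of <concept-1> person"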


Method


Our method, C-LoRA, updates the key and value (K, V) projections in the U-Net cross-attention modules of Stable Diffusion using a continual, self-regularized low-rank weight adaptation. The low-rank weight deltas learned for past concepts regularize the new concept's weight deltas, steering updates toward parameters that previous concepts left largely untouched. Unlike prior work, we initialize custom tokens as random features and remove the concept name (e.g., "person") from the prompt.
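
Below is a minimal PyTorch sketch of this self-regularization for a single frozen projection layer, following our reading of the mechanism described above: the summed low-rank deltas of past concepts penalize the current concept's delta elementwise, so new updates prefer parameters that past concepts have not used. The class and method names (CLoRALinear, finish_task), the rank, and the initialization scale are illustrative assumptions, not the official implementation.

    import torch
    import torch.nn as nn

    class CLoRALinear(nn.Module):
        """A frozen linear layer (e.g., a cross-attention K or V projection)
        with per-concept low-rank deltas in the style of C-LoRA (sketch)."""

        def __init__(self, base: nn.Linear, rank: int = 16):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)  # the pretrained weight stays frozen
            out_f, in_f = base.out_features, base.in_features
            # Trainable low-rank factors for the *current* concept; the zero
            # init of B makes the initial delta zero, as in standard LoRA.
            self.A = nn.Parameter(torch.randn(out_f, rank) * 0.01)
            self.B = nn.Parameter(torch.zeros(rank, in_f))
            # Frozen sum of the low-rank deltas from all past concepts.
            self.register_buffer("past_sum", torch.zeros(out_f, in_f))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            delta = self.A @ self.B + self.past_sum  # all concepts' edits
            return self.base(x) + x @ delta.t()

        def self_regularization(self) -> torch.Tensor:
            # Penalize the current delta where past concepts already wrote
            # large deltas, guiding updates toward untouched parameters.
            return (self.past_sum * (self.A @ self.B)).pow(2).sum()

        def finish_task(self) -> None:
            # Fold the converged delta into the frozen running sum, then
            # reset the factors for the next concept.
            with torch.no_grad():
                self.past_sum += (self.A @ self.B).detach()
                self.A.normal_(std=0.01)
                self.B.zero_()

In training, the usual diffusion loss would be augmented with a weighted sum of self_regularization() over all adapted K and V projections (the weight is a hyperparameter we leave unspecified here); once a concept converges, finish_task() freezes its delta before the next concept arrives.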

Results: Faces


Qualitative results of continual customization on the CelebA-HQ dataset. Results are shown for three concepts from the learning sequence, sampled after training on all ten concepts sequentially.


Multi-concept results after training on ten sequential tasks using CelebA-HQ. Using standard quadrant numbering (I is upper right, II is upper left, III is lower left, IV is lower right), we indicate which target data belongs to which generated image by annotating the target data images directly.

Results: Landmarks


Qualitative results of continual customization on waterfalls from the Google Landmarks dataset. Results are shown for three concepts from the learning sequence, sampled after training on all ten concepts sequentially.

BibTeX

    @article{smith2024continualdiffusion,
      title={Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA},
      author={Smith, James Seale and Hsu, Yen-Chang and Zhang, Lingyu and Hua, Ting and Kira, Zsolt and Shen, Yilin and Jin, Hongxia},
      journal={Transactions on Machine Learning Research},
      issn={2835-8856},
      year={2024}
    }