this post was submitted on 16 Sep 2024
8 points (100.0% liked)

Stable Diffusion


Discuss matters related to our favourite AI Art generation technology

[–] Even_Adder@lemmy.dbzer0.com 1 points 1 week ago (2 children)

Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn't really work.

[–] clb92@feddit.dk 2 points 1 week ago

Oh well, in practice I'll just continue to enjoy this (possibly forgetful and not-fully-finetunable) model then, that still gives me amazing results 😊

[–] erenkoylu@lemmy.ml 0 points 1 week ago* (last edited 1 week ago)

Quite the opposite. LoRAs are very effective against catastrophic forgetting, and full fine-tuning is very dangerous (but also much more powerful).
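The intuition behind that last point can be sketched in code. A LoRA freezes the pretrained weights entirely and trains only a small low-rank update on the side, so the base model literally cannot be overwritten; full fine-tuning rewrites the original weights, which is where catastrophic forgetting comes from. The class and names below are illustrative assumptions, not any specific library's API:

```python
import numpy as np

class LoRALinear:
    """Toy LoRA layer: frozen base weight plus a trainable low-rank delta.

    Only A and B are trained; W is never touched, so dropping the adapter
    restores the pretrained behaviour exactly. Illustrative sketch only.
    """
    def __init__(self, W, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                         # frozen pretrained weight, shape (out, in)
        self.A = rng.normal(0, 0.01, (rank, W.shape[1]))   # trainable down-projection
        self.B = np.zeros((W.shape[0], rank))              # trainable up-projection, zero-initialized
        self.scale = alpha / rank

    def forward(self, x, use_adapter=True):
        y = x @ self.W.T
        if use_adapter:
            # low-rank update: scale * x A^T B^T, added on top of the frozen path
            y = y + self.scale * (x @ self.A.T @ self.B.T)
        return y

rng = np.random.default_rng(1)
W = rng.normal(size=(6, 5))        # stand-in for a pretrained weight matrix
layer = LoRALinear(W)
x = rng.normal(size=(3, 5))

# Zero-initialized B makes the adapter a no-op at the start of training.
assert np.allclose(layer.forward(x), x @ W.T)

# Simulate training by perturbing only the adapter weights.
layer.B += 0.1
# Disabling the adapter still recovers the original output exactly:
# the frozen base weights cannot "forget".
assert np.allclose(layer.forward(x, use_adapter=False), x @ W.T)
```

In full fine-tuning, by contrast, every element of `W` moves with each gradient step, so pretrained capabilities can be destroyed unless the learning rate and data are handled carefully.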