
Commit d501865

Lincoln Stein (lstein) authored:
add a new FAQ for converting safetensors (invoke-ai#6736)
Co-authored-by: Lincoln Stein <[email protected]>
1 parent d62310b

File tree

1 file changed: +16 −0 lines changed


docs/help/FAQ.md

Lines changed: 16 additions & 0 deletions
@@ -196,6 +196,22 @@ tips to reduce the problem:
 === "12GB VRAM GPU"

     This should be sufficient to generate larger images up to about 1280x1280.

+## Checkpoint Models Load Slowly or Use Too Much RAM
+
+The difference between diffusers models (a folder containing multiple
+subfolders) and checkpoint models (a file ending with .safetensors or
+.ckpt) is that InvokeAI is able to load diffusers models into memory
+incrementally, while checkpoint models must be loaded all at
+once. With very large models, or on systems with limited RAM, you may
+experience slowdowns and other memory-related issues when loading
+checkpoint models.
+
+To solve this, go to the Model Manager tab (the cube), select the
+checkpoint model that's giving you trouble, and press the "Convert"
+button in the upper right of your browser window. This will convert the
+checkpoint into a diffusers model, after which loading should be
+faster and less memory-intensive.

 ## Memory Leak (Linux)