1 file changed: +16 −0

@@ -196,6 +196,22 @@ tips to reduce the problem:
=== "12GB VRAM GPU"
This should be sufficient to generate larger images up to about 1280x1280.
+
+ ## Checkpoint Models Load Slowly or Use Too Much RAM
+
+ The difference between diffusers models (a folder containing multiple
+ subfolders) and checkpoint models (a single file ending in .safetensors or
+ .ckpt) is that InvokeAI can load diffusers models into memory
+ incrementally, while checkpoint models must be loaded all at
+ once. With very large models, or on systems with limited RAM, you may
+ experience slowdowns and other memory-related issues when loading
+ checkpoint models.
+
+ To solve this, go to the Model Manager tab (the cube icon), select the
+ checkpoint model that's giving you trouble, and press the "Convert"
+ button in the upper right of your browser window. This will convert the
+ checkpoint into a diffusers model, after which loading should be
+ faster and less memory-intensive.
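For readers who want to see roughly what that Convert step does under the hood, here is a minimal sketch using the Hugging Face `diffusers` library directly rather than InvokeAI's built-in converter. It assumes a Stable Diffusion checkpoint, and the file paths are placeholders; the Convert button performs the equivalent conversion for you from the web UI.

```python
# Sketch of the checkpoint-to-diffusers conversion using the Hugging Face
# `diffusers` library. Paths are placeholders and a Stable Diffusion
# checkpoint is assumed; InvokeAI's Convert button does the equivalent
# step for you from the web UI.
from diffusers import StableDiffusionPipeline

# Read the single-file checkpoint (.safetensors or .ckpt) into a pipeline.
pipe = StableDiffusionPipeline.from_single_file("/models/my-model.safetensors")

# Write it back out in diffusers format: a folder with subfolders such as
# unet/, vae/, text_encoder/, tokenizer/, and scheduler/, which can then
# be loaded into memory incrementally.
pipe.save_pretrained("/models/my-model-diffusers")
```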
## Memory Leak (Linux)