README.md: 16 additions & 14 deletions
@@ -1,9 +1,9 @@
 Wrapper for the official [Stable Diffusion](https://github.com/Stability-AI/stablediffusion) repository, to allow installing via `pip`. Please see the installation section below for more details.
 This repository contains [Stable Diffusion](https://github.com/CompVis/stable-diffusion) models trained from scratch and will be continuously updated with
 new checkpoints. The following list provides an overview of all currently available models. More coming soon.
@@ -14,6 +14,8 @@ new checkpoints. The following list provides an overview of all currently availa
 Step 1 is not necessary on Mac.
 
+This will install the `ldm` package, which contains the stable diffusion code.
+
 ## News
 
 **December 7, 2022**
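Once the package is installed, the `ldm` module referenced in the hunk above is importable directly. As a quick sanity check, here is a minimal sketch of building a model the way the repository's own scripts do; both paths are placeholders, so substitute whichever SD 2.x config/checkpoint pair you have locally:

```python
# Minimal sketch: instantiate a Stable Diffusion model from the `ldm` package.
# Both file paths are placeholders; use any matching v2 config/checkpoint pair.
import torch
from omegaconf import OmegaConf
from ldm.util import instantiate_from_config

config = OmegaConf.load("configs/stable-diffusion/v2-inference.yaml")  # placeholder path
model = instantiate_from_config(config.model)

# Official checkpoints store weights under the "state_dict" key.
state_dict = torch.load("v2-1_512-ema-pruned.ckpt", map_location="cpu")["state_dict"]  # placeholder path
model.load_state_dict(state_dict, strict=False)
model.eval()
```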
@@ -32,7 +34,7 @@ Per default, the attention operation of the model is evaluated at full precision
 - Added a [x4 upscaling latent text-guided diffusion model](#image-upscaling-with-stable-diffusion).
 - New [depth-guided stable diffusion model](#depth-conditional-stable-diffusion), finetuned from _SD 2.0-base_. The model is conditioned on monocular depth estimates inferred via [MiDaS](https://github.com/isl-org/MiDaS) and can be used for structure-preserving img2img and shape-conditional synthesis.
 To augment the well-established [img2img](https://github.com/CompVis/stable-diffusion#image-modification-with-stable-diffusion) functionality of Stable Diffusion, we provide a _shape-preserving_ stable diffusion model.
@@ -166,19 +168,19 @@ streamlit run scripts/streamlit/depth2img.py configs/stable-diffusion/v2-midas-i
 ```
 
 This method can be used on the samples of the base model itself.
-For example, take [this sample](assets/stable-samples/depth2img/old_man.png) generated by an anonymous discord user.
+For example, take [this sample](https://github.com/Stability-AI/stablediffusion/raw/main/assets/stable-samples/depth2img/old_man.png) generated by an anonymous discord user.
 Using the [gradio](https://gradio.app) or [streamlit](https://streamlit.io/) script `depth2img.py`, the MiDaS model first infers a monocular depth estimate given this input,
 and the diffusion model is then conditioned on the (relative) depth output.
-This model is particularly useful for a photorealistic style; see the [examples](assets/stable-samples/depth2img).
+This model is particularly useful for a photorealistic style; see the [examples](https://github.com/Stability-AI/stablediffusion/raw/main/assets/stable-samples/depth2img).
 For a maximum strength of 1.0, the model removes all pixel-based information and only relies on the text prompt and the inferred monocular depth estimate.
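The depth conditioning described in this hunk can also be reproduced outside the gradio/streamlit scripts. Below is a rough sketch, assuming `torch`, `timm`, and OpenCV are installed and an input image exists on disk, of inferring the monocular depth map with MiDaS via `torch.hub`, following the MiDaS README; the input filename is a placeholder:

```python
# Sketch: compute the relative depth estimate that depth2img conditions on,
# using MiDaS loaded via torch.hub (per the MiDaS README).
import cv2
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Hybrid").to(device).eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("old_man.png"), cv2.COLOR_BGR2RGB)  # placeholder input path
batch = midas_transforms.dpt_transform(img).to(device)  # dpt_transform matches the DPT models

with torch.no_grad():
    depth = midas(batch)  # relative inverse depth map, batch dimension first
```

The resulting (relative) depth map, together with the text prompt, is what the depth-conditional model is conditioned on; at strength 1.0 it is the only image-derived signal the sampler sees.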