
Commit 0439b51

lstein and mauwii authored
Simple Installer for Unified Directory Structure, Initial Implementation (invoke-ai#1819)
* partially working simple installer
* works on linux
* fix linux requirements files
* read root environment variable in right place
* fix cat invokeai.init in test workflows
* fix classical cp error in test-invoke-pip.yml
* respect --root argument now
* untested bat installers added
* windows install.bat now working; fix logic to find frontend files
* rename simple_install to "installer"
  1. simple_install => 'installer'
  2. source and binary install directories are removed
* enable update scripts to update requirements
  - Also pin requirements to known working commits.
  - This may be a breaking change; exercise with caution
  - No functional testing performed yet!
* update docs and installation requirements
  NOTE: This may be a breaking commit! Due to the way the installer works, I have to push to a public branch in order to do full end-to-end testing.
  - Updated installation docs, removing binary and source installers and substituting the "simple" unified installer.
  - Pin requirements for the "http:" downloads to known working commits.
  - Removed as much as possible the invoke-ai forks of others' repos.
* fix directory path for installer
* correct requirement/environment errors
* exclude zip files in .gitignore
* possible fix for dockerbuild
* ready for torture testing
  - final Windows bat file tweaks
  - copy environments-and-requirements to the runtime directory so that the `update.sh` script can run. This is not ideal, since we lose control over the requirements. Better for the update script to pull the proper updated requirements script from the repository.
* allow update.sh/update.bat to install arbitrary InvokeAI versions
  - Can pass the zip file path to any InvokeAI release, branch, commit or tag, and the installer will try to install it.
  - Updated documentation
  - Added Linux Python install hints.
* use binary installer's :err_exit function
* use diffusers 0.10.0
* added logic for CPPFLAGS on mac
* improve windows install documentation
  - added information on a couple of gotchas I experienced during windows installation, including DLL loading errors experienced when Visual Studio C++ Redistributable was not present.
* tagged to pull from 2.2.4-rc1
  - also fix error of shell window closing immediately if suitable python not found

Co-authored-by: mauwii <[email protected]>
1 parent ef6870c commit 0439b51

37 files changed: +1069 −559 lines

.github/workflows/test-invoke-conda.yml

Lines changed: 2 additions & 2 deletions
```diff
@@ -114,9 +114,9 @@ jobs:
         run: |
           python scripts/configure_invokeai.py --no-interactive --yes
-      - name: cat ~/.invokeai
+      - name: cat invokeai.init
         id: cat-invokeai
-        run: cat ~/.invokeai
+        run: cat ${{ env.INVOKEAI_ROOT }}/invokeai.init

       - name: Run the tests
         id: run-tests
```

.gitignore

Lines changed: 5 additions & 6 deletions
```diff
@@ -222,12 +222,11 @@ environment.yml
 requirements.txt

 # source installer files
-source_installer/*zip
-source_installer/invokeAI
-install.bat
-install.sh
-update.bat
-update.sh
+installer/*zip
+installer/install.bat
+installer/install.sh
+installer/update.bat
+installer/update.sh

 # this may be present if the user created a venv
 invokeai
```

backend/invoke_ai_web_server.py

Lines changed: 5 additions & 3 deletions
```diff
@@ -246,14 +246,16 @@ def upload():

     def find_frontend(self):
         my_dir = os.path.dirname(__file__)
-        for candidate in (os.path.join(my_dir,'..','frontend','dist'), # pip install -e .
-                          os.path.join(my_dir,'../../../../frontend','dist') # pip install .
+        # LS: setup.py seems to put the frontend in different places on different systems, so
+        # this is fragile and needs to be replaced with a better way of finding the front end.
+        for candidate in (os.path.join(my_dir,'..','frontend','dist'), # pip install -e .
+                          os.path.join(my_dir,'../../../../frontend','dist'), # pip install . (Linux, Mac)
+                          os.path.join(my_dir,'../../../frontend','dist'), # pip install . (Windows)
                           ):
             if os.path.exists(candidate):
                 return candidate
         assert "Frontend files cannot be found. Cannot continue"

-
     def setup_app(self):
         self.result_url = "outputs/"
         self.init_image_url = "outputs/init-images/"
```
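The comment added in this hunk calls the hard-coded candidate list fragile. A minimal standalone sketch of a less layout-dependent lookup (the helper name and search depth are assumptions, not part of the commit) could walk upward from the module directory instead of enumerating one relative path per install layout:

```python
import os

def find_frontend_dist(start_dir, max_levels=5):
    """Hypothetical helper: walk up from start_dir looking for a
    frontend/dist directory, rather than hard-coding one relative
    path per install layout (editable vs. regular install, per-OS)."""
    current = os.path.abspath(start_dir)
    for _ in range(max_levels):
        candidate = os.path.join(current, "frontend", "dist")
        if os.path.isdir(candidate):
            return candidate
        parent = os.path.dirname(current)
        if parent == current:  # reached the filesystem root
            break
        current = parent
    return None
```

Note also that the hunk's `assert "Frontend files cannot be found. Cannot continue"` is a no-op, since a non-empty string is always truthy; raising an exception there would actually stop execution.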

binary_installer/create_installers.sh

Lines changed: 0 additions & 32 deletions
This file was deleted.

binary_installer/invoke.sh.in

Lines changed: 0 additions & 4 deletions
```diff
@@ -2,10 +2,6 @@

 set -eu

-# ensure we're in the correct folder in case user's CWD is somewhere else
-scriptdir=$(dirname "$0")
-cd "$scriptdir"
-
 . .venv/bin/activate

 # set required env var for torch on mac MPS
```

docker-build/Dockerfile.cloud

Lines changed: 1 addition & 1 deletion
```diff
@@ -36,7 +36,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \

 COPY . .
 RUN --mount=type=cache,target=/root/.cache/pip \
-    cp binary_installer/py3.10-linux-x86_64-cuda-reqs.txt requirements.txt && \
+    cp environments-and-requirements/requirements-lin-cuda.txt requirements.txt && \
     pip install -r requirements.txt &&\
     pip install -e .
```

docs/index.md

Lines changed: 17 additions & 13 deletions
```diff
@@ -82,13 +82,18 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB or RAM.

 This fork is supported across Linux, Windows and Macintosh. Linux
 users can use either an Nvidia-based card (with CUDA support) or an
-AMD card (using the ROCm driver). For full installation and upgrade
-instructions, please see:
-[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
+AMD card (using the ROCm driver).
+
+First time users, please see [Automated
+Installer](installation/INSTALL_AUTOMATED.md) for a walkthrough of
+getting InvokeAI up and running on your system. For alternative
+installation and upgrade instructions, please see: [InvokeAI
+Installation Overview](installation/)

 Linux users who wish to make use of the PyPatchMatch inpainting
 functions will need to perform a bit of extra work to enable this
-module. Instructions can be found at [Installing PyPatchMatch](installation/INSTALL_PATCHMATCH.md).
+module. Instructions can be found at [Installing
+PyPatchMatch](installation/INSTALL_PATCHMATCH.md).

 ## :fontawesome-solid-computer: Hardware Requirements

@@ -100,26 +105,25 @@ You wil need one of the following:
 - :simple-amd: An AMD-based graphics card with 4 GB or more VRAM memory (Linux only)
 - :fontawesome-brands-apple: An Apple computer with an M1 chip.

-We do not recommend the GTX 1650 or 1660 series video cards. They are
-unable to run in half-precision mode and do not come with sufficient VRAM
-to render 512x512 images.
+We do **not recommend** the following video cards due to issues with
+their running in half-precision mode and having insufficient VRAM to
+render 512x512 images in full-precision mode:
+
+- NVIDIA 10xx series cards such as the 1080ti
+- GTX 1650 series cards
+- GTX 1660 series cards

 ### :fontawesome-solid-memory: Memory

 - At least 12 GB Main Memory RAM.

 ### :fontawesome-regular-hard-drive: Disk

-- At least 12 GB of free disk space for the machine learning model, Python, and
+- At least 18 GB of free disk space for the machine learning model, Python, and
 all its dependencies.

 !!! info

-    If you are have a Nvidia 10xx series card (e.g. the 1080ti), please run the invoke script in
-    full-precision mode as shown below.
-
-    Similarly, specify full-precision mode on Apple M1 hardware.
-
     Precision is auto configured based on the device. If however you encounter errors like
     `expected type Float but found Half` or `not implemented for Half` you can try starting
     `invoke.py` with the `--precision=float32` flag:
```
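The precision behaviour the docs hunk describes can be illustrated with a small sketch. The function name and arguments here are illustrative assumptions, not InvokeAI's actual code:

```python
def choose_precision(device_type: str, force_full: bool = False) -> str:
    """Illustrative sketch: auto-select half precision on CUDA devices,
    falling back to full precision elsewhere or when the user forces it
    (the effect of passing --precision=float32)."""
    if force_full or device_type != "cuda":
        return "float32"
    return "float16"
```

The documented workaround for `expected type Float but found Half` errors corresponds to the forced branch: `choose_precision("cuda", force_full=True)` returns `"float32"`.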
