Commit 4889372

Add missing tiktoken package to support GLM models
1 parent 82cb614 commit 4889372

File tree

2 files changed: +5 −1 lines changed

requirements_override.txt (+4)
@@ -3,5 +3,9 @@
 --extra-index-url https://download.pytorch.org/whl/cu121
 torch==2.2.1+cu121
 
+# We are installing the missing "tiktoken" package
+# to enable support for the GLM model architecture.
+tiktoken==0.7.0
+
 # We are importing the llama.cpp project dependencies.
 --requirement ./vendor/llama.cpp/requirements.txt

vendor/llama.cpp (+1 −1)
