build fails to use shell configured python #1811

Open · thoraxe opened this issue Mar 27, 2025 · 5 comments
Labels: bug (Something isn't working)

Comments

thoraxe commented Mar 27, 2025

System Info

θ82° [thoraxe:~/.llama/distributions/remote-vllm] [ols-llamastack] 130 $ llama stack build --image-type venv
> Enter a name for your Llama Stack (e.g. my-local-stack): stack
> Enter the image type you want your Llama Stack to be built as (container or conda or venv): venv

Llama Stack is composed of several APIs working together. Let's select
the provider types (implementations) you want to use for these APIs.

Tip: use <TAB> to see options for the providers.

> Enter provider for API inference: remote::openai
> Enter provider for API safety: inline::llama-guard
> Enter provider for API agents: inline::meta-reference
> Enter provider for API vector_io: inline::meta-reference
> Enter provider for API datasetio: inline::localfs
> Enter provider for API scoring: inline::basic
> Enter provider for API eval: inline::meta-reference
> Enter provider for API post_training: inline::torchtune
> Enter provider for API tool_runtime: remote::model-context-protocol
> Enter provider for API telemetry: inline::meta-reference
 
 > (Optional) Enter a short description for your Llama Stack:
Using virtual environment llamastack-stack
Using CPython 3.9.21 interpreter at: /usr/bin/python3.9
Creating virtual environment at: llamastack-stack
Activate with: source llamastack-stack/bin/activate
Using Python 3.9.21 environment at: llamastack-stack
  × No solution found when resolving dependencies:
  ╰─▶ Because the current Python version (3.9.21) does not satisfy Python>=3.10 and all versions of llama-stack depend on Python>=3.10, we can conclude that all versions of llama-stack cannot be used.
      And because you require llama-stack, we can conclude that your requirements are unsatisfiable.
ERROR    2025-03-27 10:40:52,379 llama_stack.distribution.build:128 uncategorized: Failed to build target               
         llamastack-stack with return code 1                                                                            
Error building stack: Failed to build image llamastack-stack
θ87° [thoraxe:~/.llama/distributions/remote-vllm] [ols-llamastack] 1m13s 1 $ python --version
Python 3.11.5
θ78° [thoraxe:~/.llama/distributions/remote-vllm] [ols-llamastack] $ which python
~/.pyenv/shims/python

Information

  • The official example scripts
  • My own modified scripts

🐛 Describe the bug

.

Error logs

.

Expected behavior

.

thoraxe added the bug label Mar 27, 2025
terrytangyuan (Collaborator)

It seems the build did not use the Python 3.11.5 from your shell. I'd recommend using uv when running the llama build command.

thoraxe commented Mar 28, 2025

Just using uv results in the same error:

uv run llama stack build
> Enter a name for your Llama Stack (e.g. my-local-stack): stack
> Enter the image type you want your Llama Stack to be built as (container or conda or venv): venv

Llama Stack is composed of several APIs working together. Let's select
the provider types (implementations) you want to use for these APIs.

Tip: use <TAB> to see options for the providers.

> Enter provider for API inference: remote::anthropic
> Enter provider for API safety: inline::llama-guard
> Enter provider for API agents: inline::meta-reference
> Enter provider for API vector_io: inline::faiss
> Enter provider for API datasetio: inline::localfs
> Enter provider for API scoring: inline::basic
> Enter provider for API eval: inline::meta-reference
> Enter provider for API post_training: inline::torchtune
> Enter provider for API tool_runtime: inline::code-interpreter
> Enter provider for API telemetry: inline::meta-reference
 
 > (Optional) Enter a short description for your Llama Stack:
Environment 'llamastack-stack' already exists, re-using it.
Using virtual environment llamastack-stack
Using CPython 3.9.21 interpreter at: /usr/bin/python3.9
Creating virtual environment at: llamastack-stack
Activate with: source llamastack-stack/bin/activate
Using Python 3.9.21 environment at: llamastack-stack
  × No solution found when resolving dependencies:
  ╰─▶ Because the current Python version (3.9.21) does not satisfy Python>=3.10 and all versions of llama-stack depend on Python>=3.10, we can conclude that all versions of llama-stack cannot be used.
      And because you require llama-stack, we can conclude that your requirements are unsatisfiable.
ERROR    2025-03-28 14:19:00,686 llama_stack.distribution.build:128 uncategorized: Failed to build target               
         llamastack-stack with return code 1                                                                            
Error building stack: Failed to build image llamastack-stack


thoraxe commented Mar 28, 2025

Even forcibly running the CLI as a Python module doesn't fix the problem:

python -m llama_stack.cli.llama stack build
> Enter a name for your Llama Stack (e.g. my-local-stack): stack
> Enter the image type you want your Llama Stack to be built as (container or conda or venv): venv

Llama Stack is composed of several APIs working together. Let's select
the provider types (implementations) you want to use for these APIs.

Tip: use <TAB> to see options for the providers.

> Enter provider for API inference: remote::anthropic
> Enter provider for API safety: inline::prompt-guard
> Enter provider for API agents: inline::meta-reference
> Enter provider for API vector_io: inline::meta-reference
> Enter provider for API datasetio: inline::localfs
> Enter provider for API scoring: inline::basic
> Enter provider for API eval: inline::meta-reference
> Enter provider for API post_training: inline::torchtune
> Enter provider for API tool_runtime: inline::rag-runtime
> Enter provider for API telemetry: inline::meta-reference
 
 > (Optional) Enter a short description for your Llama Stack:
Environment 'llamastack-stack' already exists, re-using it.
Using virtual environment llamastack-stack
Using CPython 3.9.21 interpreter at: /usr/bin/python3.9
Creating virtual environment at: llamastack-stack
Activate with: source llamastack-stack/bin/activate
Using Python 3.9.21 environment at: llamastack-stack
  × No solution found when resolving dependencies:
  ╰─▶ Because the current Python version (3.9.21) does not satisfy Python>=3.10 and all versions of llama-stack depend on Python>=3.10, we can conclude that all versions of llama-stack cannot be used.
      And because you require llama-stack, we can conclude that your requirements are unsatisfiable.
ERROR    2025-03-28 14:32:05,971 llama_stack.distribution.build:128 uncategorized: Failed to build target               
         llamastack-stack with return code 1                                                                            
Error building stack: Failed to build image llamastack-stack


thoraxe commented Mar 28, 2025

#1170 (comment) appears to be a workaround.

Thanks to @bbrowning for the find.
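
(The linked comment isn't quoted here, but judging from the UV_PYTHON discussion below, it presumably amounts to pointing uv at the desired interpreter explicitly. A hypothetical invocation — UV_PYTHON is uv's documented interpreter override, and the command below is a guess at the shape of the workaround, not a quote from #1170:)

# Tell uv which interpreter to use before it creates the venv, instead of
# letting it fall back to /usr/bin/python3.9.
UV_PYTHON="$(command -v python)" llama stack build --image-type venv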

bbrowning (Contributor)

I wonder if we should set the UV_PYTHON environment variable ourselves to the currently running Python installation before calling uv venv, if that env variable is not already set? That would still allow the user to explicitly set UV_PYTHON to choose a different interpreter, but would allow this to just work in the case reported here without the user having to explicitly set UV_PYTHON.
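
(A shell sketch of that default, for illustration — the actual proposal is to set the variable from inside the build code using the interpreter that is running the build, but the "only if not already set" semantics look like this:)

# Default UV_PYTHON to the python on PATH only when the user hasn't already
# set it, so an explicit choice still wins; then build as usual.
: "${UV_PYTHON:=$(command -v python)}"
export UV_PYTHON
llama stack build --image-type venv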
