
docs: Simplify getting started doc #1839

Open · wants to merge 3 commits into main
Conversation

raghotham
Contributor

Focus on just venv
Add examples for agents and ragagent

@facebook-github-bot added the CLA Signed label on Mar 29, 2025
@leseb (Collaborator) left a comment

Just nits, thanks!


The API is **identical** for both clients.
Install uv
@franciscojavierarceo (Collaborator) commented Mar 31, 2025

I'd suggest this below.

For macOS and Linux:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

For Windows, use `irm` to download the script and execute it with `iex`:

```powershell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
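For completeness, once uv is installed, the venv-only flow this PR focuses on would look roughly like the following. This is a sketch based on uv's standard commands; the venv path is an assumption, not taken from this thread.

```bash
# Create and activate a virtual environment, then install llama-stack.
uv venv .venv
source .venv/bin/activate
uv pip install llama_stack
```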

Copy link
Collaborator

@franciscojavierarceo franciscojavierarceo left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This is great @raghotham! I was actually planning on handling this today, along with some broader doc enhancements I'm doing.

A few small suggestions to just add some links and remove some things.

Also, it looks like the linter needs to be rerun.

```bash
pip install llama_stack
llama stack build --template ollama --image-type <conda|venv>
python inference.py
python lstest.py
```
raghotham (Contributor, Author):

Added file names as comments so that the scripts can be run with those names. Maybe not needed?

Collaborator:

Actually, not a bad idea! Users who try this and find issues will probably refer to the script this way.


Then you can start the server using the container tool of your choice. For example, if you are running Docker you can use the following command:
Build llama stack for ollama
Contributor:

For a new user, we need to add some explanation of what this is and why we need to "build in python".

Docker containers run in their own isolated network namespaces on Linux. To allow the container to communicate with services running on the host via `localhost`, you need `--network=host`. This makes the container use the host’s network directly so it can connect to Ollama running on `localhost:11434`.
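A sketch of what such a command might look like (the image name, port, and environment variables are assumptions based on the Ollama setup discussed in this thread, not confirmed by it):

```bash
# Illustrative only: run the server container on the host's network so it
# can reach Ollama at localhost:11434. Image name and env vars are assumed.
docker run -it \
  --network=host \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-ollama \
  --port 8321 \
  --env INFERENCE_MODEL=llama3.2:3b \
  --env OLLAMA_URL=http://localhost:11434
```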

Linux users having issues running the above command should instead try the following:
Run llama stack
Contributor:

Same here: what does "run llama stack" mean? Maybe add 1-2 lines explaining that we need to run a server locally, while this could be a hosted endpoint later.

Let's use the `llama-stack-client` CLI to check the connectivity to the server.

```bash
$ llama-stack-client configure --endpoint http://localhost:$LLAMA_STACK_PORT
> Enter the API key (leave empty if no key is needed):
llama-stack-client configure --endpoint http://localhost:$LLAMA_STACK_PORT --api-key none
```
Contributor:

LLAMA_STACK_PORT is not mentioned anywhere; let's just hardcode 8321 here for ease.
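Concretely, the hardcoded form of the non-interactive command would read (same flags as the snippet above, with 8321 substituted for the variable):

```bash
llama-stack-client configure --endpoint http://localhost:8321 --api-key none
```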

```python
from llama_stack_client import LlamaStackClient

agent = Agent(
    client,
    model=model_id,
    instructions="You are a helpful assistant that can answer questions about the Torchtune project.",
)
```
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

that can answer questions about the Torchtune project.

Drop this, probably?

```python
    stream=True,
)
for event in stream:
    print(event)
```
Contributor:

Maybe use pprint instead of print; that way each object is shown properly, instead of as a hard-to-read Python object.

```python
from rich.pretty import pprint

# Streaming example
print("Streaming ...")
stream = agent.create_turn(
    messages=[{
```
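To illustrate the readability difference, here is a minimal sketch with a made-up event payload. The thread suggests `rich.pretty.pprint`; the stdlib `pprint` used below gives the same indentation benefit, just without the color.

```python
from pprint import pformat

# Made-up stand-in for an agent turn-stream event; illustrative only.
event = {
    "event_type": "turn_complete",
    "payload": {
        "messages": [{"role": "assistant", "content": "Hello"}],
        "metadata": {"model": "llama3.2:3b", "tokens": 42},
    },
}

# print(event) would emit one long line; pformat/pprint indent the nesting.
print(pformat(event, width=50))
```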
Contributor:

Make sure to run pre-commit so the formatting is correct.

```python
)
for log in AgentEventLogger().log(response):
    log.print()
for chunk in stream:
```
Contributor:

Why are we doing this type of print? Let's just use the AgentEventLogger.

@franciscojavierarceo (Collaborator) commented Apr 4, 2025

@hardikjshah @raghotham I started restructuring some of this to make the layout a little easier for new folks. I'm not done yet, but I figured I'd make you aware of it, as I think this may help link to related items.

PR here: #1873

I'll adjust more tonight but, again, wanted to share what I'm doing here.

I'll also note that I think for the quickstart we should probably just start with the stack as a library to reduce the number of steps. That said, we could repurpose this for the full comprehensive example.

Labels: CLA Signed (managed by the Meta Open Source bot)
5 participants