Workflowai integration docs #1
☑️ reviewed.
- **Cloud-based or self-hosted** thanks to our [open-source](https://github.com/WorkflowAI/WorkflowAI/blob/main/LICENSE) licensing model
- **We value your privacy** and we are SOC-2 Type 1 certified. We do not train models on your data, nor do the LLM providers we use.

Learn more about all WorkflowAI's features in our [docs](https://docs.workflowai.com/).
not sure we want to send developers to the documentation.
the goal of the funnel is to get developers to go to workflowai.com/developers/python/instructor
all other links are basically killing our tracking.
WorkflowAI is an LLM router, observability and collaboration platform that provides developers with an extensive toolkit for structured generation.

## Why use WorkflowAI with Instructor?
I think this section is too long, with too many links to other places.
The main link that we want developers to go to is: workflowai.com/developers/python/instructor
description: "Complete guide to using Instructor with WorkflowAI. Learn how to generate structured, type-safe outputs and leverage WorkflowAI's model-switching, observability & reliability features"
---

WorkflowAI is an LLM router, observability and collaboration platform that provides developers with an extensive toolkit for structured generation.
I'll work on that part....
```python
)

return client.chat.completions.create(
    model="user-info-extraction-agent/claude-3-7-sonnet-latest",  # Agent now runs Claude 3.7 Sonnet
```
It would be a lot better to show more code examples with different models instead.
The same feedback applies to the PR on our own documentation.
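For illustration, a model-switching sketch along those lines, reusing the doc's own agent and schema. The client setup follows the base-URL approach discussed further down in this thread; the non-Claude model identifiers are assumptions and should be checked against WorkflowAI's model list.

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel


class UserInfo(BaseModel):
    name: str
    age: int


# Instructor-wrapped client pointing at WorkflowAI's OpenAI-compatible endpoint
client = instructor.from_openai(
    OpenAI(base_url="https://run.workflowai.com/v1", api_key="<your-workflowai-api-key>")
)


def extract_user_info(user_message: str, model: str) -> UserInfo:
    # Same agent prefix; only the model suffix changes between examples.
    return client.chat.completions.create(
        model=f"user-info-extraction-agent/{model}",
        response_model=UserInfo,
        messages=[{"role": "user", "content": user_message}],
    )


# The model identifiers below are assumptions; verify them against WorkflowAI's model list.
extract_user_info("John Doe is 32 years old.", "claude-3-7-sonnet-latest")
extract_user_info("John Doe is 32 years old.", "gpt-4o-mini-latest")
extract_user_info("John Doe is 32 years old.", "gemini-1.5-flash-latest")
```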
WorkflowAI allows you to view all the runs that were made for your agent:

![](docs/images/instructor/runs.png)
can we upload assets?
We can add them to the PR. It looks like they have a bunch of images in docs/img
I really doubt that they will let us commit images to their repo (I would definitely not). Can we just upload to our storage and pass a URL?
It looks like the integration docs don't use images.
> Can we just upload to our storage and pass a URL?

Yup, probably the way to go.
```python
        messages=[{"role": "user", "content": user_message}],
    )

if __name__ == "__main__":
```
No need for that part.
Focus on the relevant part that you're highlighting in this section.
(Adjust this in a PR for our own documentation as well.)
## Templating with Input Variables

Introducing input variables separates static instructions from dynamic content, making your agents easier to observe, since WorkflowAI logs these input variables separately. Using input variables also lets you use [benchmarks](https://docs.workflowai.com/features/benchmarks) and [deployments](https://docs.workflowai.com/features/deployments).
re-write this sentence assuming that people don't know what "benchmarks" and "deployments" are, and won't click to know more.
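Building on the classification example the doc already uses, here is a rough sketch of how that sentence could be backed up concretely. It reuses the instructor-wrapped `client` from the sketch above; the `extra_body={"input": ...}` field and the `{{ }}` placeholder syntax are assumptions to verify against WorkflowAI's templating docs, and `email-classifier-agent` is a made-up agent name.

```python
from pydantic import BaseModel


class EmailClassification(BaseModel):
    kind: str  # e.g. 'work' or 'personal'


def classify_email(email_body: str) -> EmailClassification:
    return client.chat.completions.create(
        model="email-classifier-agent/claude-3-7-sonnet-latest",
        response_model=EmailClassification,
        # Static instructions with a placeholder instead of the raw email text.
        # The {{ }} template syntax is an assumption.
        messages=[{"role": "user", "content": "Classify this email: {{ email_body }}"}],
        # Dynamic content passed separately so WorkflowAI can log it as an input variable.
        # The "input" field name in extra_body is an assumption.
        extra_body={"input": {"email_body": email_body}},
    )


result = classify_email("Hi team, please review the Q3 report before Friday.")
print(f"Classification: {result.kind}")  # 'work'
```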
print(f"Classification: {result.kind}") # 'work' | ||
``` | ||
|
||
## Using Deployments for Server-Managed Instructions |
Can you send this section to an engineer friend of yours, @yannbu, to check if they understand this part? Thanks.
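To help with that readability check, here is a bare-bones sketch of what the section is trying to convey: the client stops naming a concrete model and instead points at a deployment, so instructions and model choice can be changed from the WorkflowAI dashboard without shipping new code. The `#1/production` deployment reference is an assumed format; the exact model-string syntax should come from WorkflowAI's deployments docs.

```python
# Reuses `client` and `UserInfo` from the sketch above.
def extract_user_info_deployed(user_message: str) -> UserInfo:
    return client.chat.completions.create(
        # Assumed deployment reference format: agent / schema / environment.
        # No concrete model or instructions are hard-coded client-side.
        model="user-info-extraction-agent/#1/production",
        response_model=UserInfo,
        messages=[{"role": "user", "content": user_message}],
    )
```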
## Streaming

We are currently implementing streaming on our OpenAI-compatible chat completion endpoint. We'll update this documentation shortly.
@yannbu could you please confirm you already have the tests set up in autopilot for streaming + Instructor? Thanks.
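Once streaming lands on the endpoint, the Instructor side would presumably be its standard partial-streaming API, roughly as sketched below. This shows Instructor's `create_partial`; whether WorkflowAI's endpoint supports it is exactly what the section above says is still in progress. Reuses `client` and `UserInfo` from the earlier sketch.

```python
# Instructor's partial streaming yields progressively more complete UserInfo objects.
partial_stream = client.chat.completions.create_partial(
    model="user-info-extraction-agent/claude-3-7-sonnet-latest",
    response_model=UserInfo,
    messages=[{"role": "user", "content": "John Doe is 32 years old."}],
)

for partial_user in partial_stream:
    # Fields fill in as tokens arrive, e.g. name first, then age.
    print(partial_user)
```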
Include a RECAP section with the main points we want to hit, and a link/CTA on how to get started. I'll review this section, then we will use it in all documentation everywhere.
```python
user_info = extract_user_info("John Doe is 32 years old.")
print("Basic example result:", user_info)  # UserInfo(name='John Doe', age=32)
```

### Supporte Instructor Modes
Typo: "Supporte" (supported?)
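For the modes section itself, a minimal sketch of how a mode is selected on the Instructor side. Which modes WorkflowAI's endpoint actually supports is not stated here, so the choices shown are illustrative, not a supported-modes list.

```python
import instructor
from openai import OpenAI

# Mode selection happens when wrapping the client; TOOLS and JSON are standard
# Instructor modes. Confirm which ones the WorkflowAI endpoint supports.
client = instructor.from_openai(
    OpenAI(base_url="https://run.workflowai.com/v1", api_key="<your-workflowai-api-key>"),
    mode=instructor.Mode.TOOLS,  # or instructor.Mode.JSON
)
```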
Then either export your credentials:

```bash
export WORKFLOWAI_API_KEY=<your-workflowai-api-key>
```
That seems unnecessary as well, since we are talking to devs? Also, I don't think the API URL should be configured via an env var, since it's a constant? We could just say a single sentence like: "Set up the base URL of the OpenAI SDK to https://run.workflowai.com/v1 and use a WorkflowAI API key in place of the OpenAI API key"?
Maybe we could have something simpler, similar to the OpenAI doc? https://github.com/567-labs/instructor/blob/main/docs/integrations/openai.md
I agree with @guillaq.
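For reference, the setup being suggested boils down to something like this sketch; compared to a vanilla OpenAI + Instructor setup, only the base URL and the source of the API key change.

```python
import os

import instructor
from openai import OpenAI

# Point the OpenAI SDK at WorkflowAI and use a WorkflowAI API key instead of an OpenAI one.
client = instructor.from_openai(
    OpenAI(
        base_url="https://run.workflowai.com/v1",
        api_key=os.environ["WORKFLOWAI_API_KEY"],
    )
)
```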
@pierrevalade @guillaq closes WOR-4550: Prepare MR for Instructor's docs
WDYT?
Updates compared to our own docs are minimal, and are mostly located at the beginning of the doc.
My only question: are our docs too marketing-oriented, if you compare to other docs that are very "dry" and do not try to "sell" the product?