Update client SDK snippets #3207
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Pull Request Overview
This PR updates client SDK snippets in the documentation to reflect new API usage and improved consistency. Key changes include:
- Updating the JavaScript snippets in streaming.md to use the new `InferenceEndpoint` class and improve formatting (see the sketch after this list).
- Revising the Python examples in visual_language_models.md to instantiate `InferenceClient` with the `base_url` parameter.
- Adjusting README.md copy to refer to Inference Endpoints in the plural form.
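For reference, a minimal sketch of what the updated streaming usage looks like, assuming `InferenceEndpoint` is exported from `@huggingface/inference` exactly as the revised snippets use it and that only the class name changes from the previous `HfInferenceEndpoint` call; the endpoint URL and token below are placeholders:

```ts
import { InferenceEndpoint } from "@huggingface/inference";

// Placeholder endpoint URL and access token; substitute your own values.
const endpoint = new InferenceEndpoint(
  "https://YOUR_ENDPOINT.endpoints.huggingface.cloud",
  "hf_YOUR_TOKEN"
);

// Stream generated tokens from the endpoint, as in the streaming.md examples.
const stream = endpoint.textGenerationStream({
  inputs: "What is Deep Learning?",
  parameters: { max_new_tokens: 20 },
});

for await (const response of stream) {
  process.stdout.write(response.token.text);
}
```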
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| docs/source/conceptual/streaming.md | Updated bash snippet formatting and switched JavaScript usage from `HfInferenceEndpoint` to `InferenceEndpoint`. |
| docs/source/basic_tutorials/visual_language_models.md | Revised Python code instantiation and updated the JavaScript snippet, but the import statement still references `HfInferenceEndpoint`. |
| README.md | Minor text update to refer to Inference Endpoints. |
Comments suppressed due to low confidence (2)
docs/source/basic_tutorials/visual_language_models.md:145
- The import statement still uses `HfInferenceEndpoint` while the instantiation later uses `InferenceEndpoint`. Please update the import to `InferenceEndpoint` to keep the two consistent (a consistent version is sketched after the quoted line).
`import { HfInferenceEndpoint } from "@huggingface/inference";`
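For clarity, a short sketch of the consistent form this comment asks for, assuming the snippet instantiates `InferenceEndpoint` as described above; the URL and token remain placeholders:

```ts
import { InferenceEndpoint } from "@huggingface/inference";

// The import now names the same class that is instantiated below.
const endpoint = new InferenceEndpoint(
  "https://YOUR_ENDPOINT.endpoints.huggingface.cloud",
  "hf_YOUR_TOKEN"
);
```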
docs/source/basic_tutorials/visual_language_models.md:142
- There is duplicate phrasing in this sentence. Please remove one of the duplicated parts to improve clarity.
We can create a `HfInferenceEndpoint` providing our endpoint URL and We can create a `HfInferenceEndpoint` providing our endpoint URL and [Hugging Face access token](https://huggingface.co/settings/tokens).
lgtm 👌
LGTM