
fix: reverted the URL of llama.cpp back to 'completion'. #5726


Open
wants to merge 1 commit into main

Conversation


@ipaddicting ipaddicting commented May 19, 2025

Description

Closes: #5530
Reverted the URL of llama.cpp back to 'completion', since the upstream endpoint has not changed.

The latest documentation from llama.cpp:
https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md#post-completion-given-a-prompt-it-returns-the-predicted-completion

/v1/completions is only for OpenAI-compatible clients, but LlamaCpp.ts does not prepend /v1 to the base URL, and I don't think it should.
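For context, a minimal TypeScript sketch of the behavior this PR restores (illustrative only, not the actual LlamaCpp.ts code; the function names are made up):

```typescript
// Sketch: llama.cpp's native server expects POST /completion,
// while /v1/completions is reserved for OpenAI-compatible clients.
function completionUrl(apiBase: string): string {
  // e.g. apiBase = "http://localhost:8080/"
  return new URL("completion", apiBase).toString();
}

async function complete(apiBase: string, prompt: string): Promise<string> {
  const res = await fetch(completionUrl(apiBase), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, n_predict: 128 }),
  });
  const data = await res.json();
  // The /completion endpoint returns the generated text in `content`.
  return data.content;
}
```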

Tests

Local tests with the extension built.

@ipaddicting ipaddicting requested a review from a team as a code owner May 19, 2025 06:45
@ipaddicting ipaddicting requested review from tomasz-stefaniak and removed request for a team May 19, 2025 06:45
@dosubot dosubot bot added the size:M This PR changes 30-99 lines, ignoring generated files. label May 19, 2025

netlify bot commented May 19, 2025

Deploy Preview for continuedev canceled.

🔨 Latest commit: e8523a2
🔍 Latest deploy log: https://app.netlify.com/projects/continuedev/deploys/682af21731c9e800089d265a


github-actions bot commented May 19, 2025

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@ipaddicting ipaddicting changed the title fix: separated llamafile implementation from llama.cpp. fix: separated llamafile implementation from llama.cpp. Closes: #5530 May 19, 2025
@ipaddicting ipaddicting changed the title fix: separated llamafile implementation from llama.cpp. Closes: #5530 fix-5530: separated llamafile implementation from llama.cpp. May 19, 2025
@ipaddicting ipaddicting changed the title fix-5530: separated llamafile implementation from llama.cpp. fix: separated llamafile implementation from llama.cpp. May 19, 2025
@ipaddicting
Author

I have read the CLA Document and I hereby sign the CLA

github-actions bot added a commit that referenced this pull request May 19, 2025
@dosubot dosubot bot added size:XS This PR changes 0-9 lines, ignoring generated files. and removed size:M This PR changes 30-99 lines, ignoring generated files. labels May 19, 2025
@ipaddicting ipaddicting changed the title fix: separated llamafile implementation from llama.cpp. fix: reverted the url of llama.cpp back to 'completion'. May 19, 2025
@ipaddicting ipaddicting changed the title fix: reverted the url of llama.cpp back to 'completion'. fix: reverted the URL of llama.cpp back to 'completion'. May 19, 2025
Labels
size:XS This PR changes 0-9 lines, ignoring generated files.
Projects
Status: Todo
Development

Successfully merging this pull request may close these issues.

VSCode llamafile Support Is Broken
1 participant