Name and Version
version b5124
Operating systems
Windows
Which llama.cpp modules do you know to be affected?
llama-server
Command line
"llama-server.exe" ^ -m test.guff ^ --host 127.0.0.1 --port 8090 ^ --keep 0
Problem description & steps to reproduce
llama-server does not read the "--keep" parameter that the user passes on the command line.
I have read the code. The problem is here: https://github.com/ggml-org/llama.cpp/blob/bc091a4dc585af25c438c8473285a8cfec5c7695/examples/server/server.cpp#L242A
params.n_keep = json_value(data, "n_keep", defaults.n_keep);
should be
params.n_keep = json_value(data, "n_keep", params_base.n_keep);
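To illustrate why the fallback argument matters (this is a minimal, self-contained sketch with stand-in types and a simplified json_value helper, not the actual server code): json_value(data, key, default) returns the default whenever the request JSON omits the key, so if that default comes from a freshly constructed defaults object instead of params_base (which holds the values parsed from the CLI), any --keep value given on the command line is silently dropped. The value 5 below is only for illustration.

// Minimal illustration (not the real server.cpp): shows how the fallback
// argument of a json_value-style helper decides whether CLI settings survive
// when the request body omits "n_keep".
#include <iostream>
#include <map>
#include <string>

// Stand-in for the JSON request body: only the keys the client actually sent.
using Json = std::map<std::string, int>;

// Simplified json_value: return the value if the key is present,
// otherwise fall back to the supplied default.
int json_value(const Json & data, const std::string & key, int default_value) {
    auto it = data.find(key);
    return it != data.end() ? it->second : default_value;
}

struct Params { int n_keep = 0; };  // simplified stand-in for the server params

int main() {
    Params params_base;      // filled from the CLI
    params_base.n_keep = 5;  // hypothetical value the user passed via --keep

    Params defaults;         // freshly constructed: n_keep is back to 0
    Json   request;          // client request that does not set "n_keep"

    // Current behaviour: the CLI value is ignored because the fallback
    // comes from the fresh defaults object.
    int broken = json_value(request, "n_keep", defaults.n_keep);

    // Proposed behaviour: the CLI value is used as the fallback.
    int fixed  = json_value(request, "n_keep", params_base.n_keep);

    std::cout << "with defaults.n_keep:    " << broken << "\n";  // prints 0
    std::cout << "with params_base.n_keep: " << fixed  << "\n";  // prints 5
}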
First Bad Commit
No response
Relevant log output