Name and Version

LM Studio 0.3.14
latest Vulkan llama.cpp, v1.26.0
CPU: Ryzen 7 7800X3D with on-die GPU only

Engine manifest:

{
  "name": "llama.cpp-win-x86_64-vulkan-avx2",
  "version": "1.26.0",
  "domains": [
    "llm",
    "embedding"
  ],
  "engine": "llama.cpp",
  "target_libraries": [
    {
      "name": "llm_engine_vulkan.node",
      "type": "llm_engine",
      "version": "0.1.2"
    },
    {
      "name": "liblmstudio_bindings_vulkan.node",
      "type": "liblmstudio",
      "version": "0.2.26"
    }
  ],
  "platform": "win",
  "cpu": {
    "architecture": "x86_64",
    "instruction_set_extensions": [
      "AVX2"
    ]
  },
  "gpu": {
    "framework": "Vulkan"
  },
  "supported_model_formats": [
    "gguf"
  ],
  "vendor_lib_package_names": [
    "win-llama-vulkan-vendor-v1"
  ],
  "manifest_version": "4",
  "extension_type": "engine"
}

Hardware survey result:

[
  {
    "modelCompatibilityType": "gguf",
    "runtime": {
      "hardwareSurveyResult": {
        "compatibility": {
          "status": "Compatible"
        },
        "cpuSurveyResult": {
          "result": {
            "code": "Success",
            "message": ""
          },
          "cpuInfo": {
            "name": "",
            "architecture": "x86_64",
            "supportedInstructionSetExtensions": [
              "AVX",
              "AVX2"
            ]
          }
        },
        "memoryInfo": {
          "ramCapacity": 67874881536,
          "vramCapacity": 0,
          "totalMemory": 67874881536
        },
        "gpuSurveyResult": {
          "result": {
            "code": "Success",
            "message": ""
          },
          "gpuInfo": [
            {
              "name": "AMD Radeon(TM) Graphics",
              "deviceId": 0,
              "totalMemoryCapacityBytes": 34474295296,
              "dedicatedMemoryCapacityBytes": 0,
              "integrationType": "Integrated",
              "detectionPlatform": "Vulkan",
              "detectionPlatformVersion": "1.3.283",
              "otherInfo": {
                "deviceLUIDValid": "true",
                "deviceLUID": "b2f4560000000000",
                "deviceUUID": "00000000100000000000000000000000",
                "driverID": "1",
                "driverName": "AMD proprietary driver",
                "driverInfo": "25.3.2 (AMD proprietary shader compiler)",
                "vendorID": "4098"
              }
            }
          ]
        }
      }
    }
  }
]

Should more devices/cores be detected?
Should both compute cores be used?
Can the 3D part be used?

Operating systems

Windows

Which llama.cpp modules do you know to be affected?

No response

Command line

No response

Problem description & steps to reproduce

Running maziyarpanahi/gemma-3-1b-it works; it just looks like it's limping along on one compute core.

First Bad Commit

No response

Relevant log output

No response
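To double-check what the Vulkan driver actually exposes on this machine, a short standalone program is the most direct answer to the "should more devices/cores be detected?" question. The sketch below uses only the core Vulkan API (it is independent of LM Studio and llama.cpp; build it against the Vulkan SDK) and prints every physical device along with its queue families and queue counts:

/* Minimal sketch: list Vulkan physical devices and their queue families.
 * Build against the Vulkan SDK, e.g.: cc list_queues.c -lvulkan */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .apiVersion = VK_API_VERSION_1_1,
    };
    VkInstanceCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };
    VkInstance instance;
    if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    uint32_t deviceCount = 0;
    vkEnumeratePhysicalDevices(instance, &deviceCount, NULL);
    VkPhysicalDevice devices[16];
    if (deviceCount > 16) deviceCount = 16;
    vkEnumeratePhysicalDevices(instance, &deviceCount, devices);

    for (uint32_t i = 0; i < deviceCount; i++) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        printf("device %u: %s\n", i, props.deviceName);

        uint32_t familyCount = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(devices[i], &familyCount, NULL);
        VkQueueFamilyProperties families[32];
        if (familyCount > 32) familyCount = 32;
        vkGetPhysicalDeviceQueueFamilyProperties(devices[i], &familyCount, families);

        for (uint32_t f = 0; f < familyCount; f++) {
            printf("  family %u: %u queue(s)%s%s%s\n", f, families[f].queueCount,
                   (families[f].queueFlags & VK_QUEUE_GRAPHICS_BIT) ? " graphics" : "",
                   (families[f].queueFlags & VK_QUEUE_COMPUTE_BIT)  ? " compute"  : "",
                   (families[f].queueFlags & VK_QUEUE_TRANSFER_BIT) ? " transfer" : "");
        }
    }
    vkDestroyInstance(instance, NULL);
    return 0;
}

On a system with only the integrated GPU, this should print a single AMD device, matching the survey above. AMD drivers typically expose one graphics+compute family plus separate compute-only and transfer families; the queue counts show what an engine could use, not what it does use.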
I think Compute_0 and Compute_1 are not separate compute units, just separate performance metrics, so your iGPU is actually fully utilized.
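For context, Compute_0 and Compute_1 are per-engine utilization counters from Windows' "GPU Engine" performance counter set, which is the same data Task Manager charts. A hedged sketch of reading them directly (standard PDH API on Windows; link against pdh.lib):

/* Sketch: read the per-engine "GPU Engine" utilization counters that
 * Task Manager presents as Compute_0, Compute_1, etc. */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>
#include <stdlib.h>

#pragma comment(lib, "pdh.lib")

int main(void) {
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    if (PdhOpenQueryW(NULL, 0, &query) != ERROR_SUCCESS) return 1;

    /* Wildcard path: one instance per process/engine pair of type Compute. */
    if (PdhAddEnglishCounterW(query,
            L"\\GPU Engine(*engtype_Compute)\\Utilization Percentage",
            0, &counter) != ERROR_SUCCESS) return 1;

    PdhCollectQueryData(query);          /* rate counters need two samples */
    Sleep(1000);
    PdhCollectQueryData(query);

    DWORD bytes = 0, count = 0;
    PdhGetFormattedCounterArrayW(counter, PDH_FMT_DOUBLE, &bytes, &count, NULL);
    PDH_FMT_COUNTERVALUE_ITEM_W *items = malloc(bytes);
    if (items && PdhGetFormattedCounterArrayW(counter, PDH_FMT_DOUBLE,
                                              &bytes, &count, items) == ERROR_SUCCESS) {
        for (DWORD i = 0; i < count; i++)
            wprintf(L"%ls  %.1f%%\n", items[i].szName, items[i].FmtValue.doubleValue);
    }
    free(items);
    PdhCloseQuery(query);
    return 0;
}

One engine showing load while the others sit idle is the expected picture here: work submitted through a single Vulkan queue is accounted to a single engine.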
Okay, thank you. There are a number of other options as well.