Misc. bug: Only using 1 compute core on AMD #12978

Open
vRobM opened this issue Apr 16, 2025 · 2 comments

vRobM commented Apr 16, 2025

Name and Version

LM Studio 0.3.14
latest Vulkan llama.cpp runtime v1.26.0
CPU: Ryzen 7 7800X3D, with on-die (integrated) GPU only
{
  "name": "llama.cpp-win-x86_64-vulkan-avx2",
  "version": "1.26.0",
  "domains": [
    "llm",
    "embedding"
  ],
  "engine": "llama.cpp",
  "target_libraries": [
    {
      "name": "llm_engine_vulkan.node",
      "type": "llm_engine",
      "version": "0.1.2"
    },
    {
      "name": "liblmstudio_bindings_vulkan.node",
      "type": "liblmstudio",
      "version": "0.2.26"
    }
  ],
  "platform": "win",
  "cpu": {
    "architecture": "x86_64",
    "instruction_set_extensions": [
      "AVX2"
    ]
  },
  "gpu": {
    "framework": "Vulkan"
  },
  "supported_model_formats": [
    "gguf"
  ],
  "vendor_lib_package_names": [
    "win-llama-vulkan-vendor-v1"
  ],
  "manifest_version": "4",
  "extension_type": "engine"
}

[Image attached]

[
  {
    "modelCompatibilityType": "gguf",
    "runtime": {
      "hardwareSurveyResult": {
        "compatibility": {
          "status": "Compatible"
        },
        "cpuSurveyResult": {
          "result": {
            "code": "Success",
            "message": ""
          },
          "cpuInfo": {
            "name": "",
            "architecture": "x86_64",
            "supportedInstructionSetExtensions": [
              "AVX",
              "AVX2"
            ]
          }
        },
        "memoryInfo": {
          "ramCapacity": 67874881536,
          "vramCapacity": 0,
          "totalMemory": 67874881536
        },
        "gpuSurveyResult": {
          "result": {
            "code": "Success",
            "message": ""
          },
          "gpuInfo": [
            {
              "name": "AMD Radeon(TM) Graphics",
              "deviceId": 0,
              "totalMemoryCapacityBytes": 34474295296,
              "dedicatedMemoryCapacityBytes": 0,
              "integrationType": "Integrated",
              "detectionPlatform": "Vulkan",
              "detectionPlatformVersion": "1.3.283",
              "otherInfo": {
                "deviceLUIDValid": "true",
                "deviceLUID": "b2f4560000000000",
                "deviceUUID": "00000000100000000000000000000000",
                "driverID": "1",
                "driverName": "AMD proprietary driver",
                "driverInfo": "25.3.2 (AMD proprietary shader compiler)",
                "vendorID": "4098"
              }
            }
          ]
        }
      }
    }
  }
]

Should more devices/cores be detected?
Should both compute cores be used?
Can the 3D part be used?
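
On the first question: the llama.cpp Vulkan backend can only pick from the physical devices the Vulkan loader reports, so a quick sanity check is to enumerate them directly. A minimal sketch using the plain Vulkan C API (not code from llama.cpp itself; on Windows, link against vulkan-1.lib):

// Lists the physical devices the Vulkan loader exposes; the
// llama.cpp Vulkan backend can only select from these.
#include <vulkan/vulkan.h>
#include <stdio.h>

int main(void) {
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .apiVersion = VK_API_VERSION_1_3,
    };
    VkInstanceCreateInfo ci = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };
    VkInstance inst;
    if (vkCreateInstance(&ci, NULL, &inst) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(inst, &count, NULL);
    printf("Vulkan physical devices: %u\n", count);

    VkPhysicalDevice devs[8];
    if (count > 8) count = 8;
    vkEnumeratePhysicalDevices(inst, &count, devs);

    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceProperties p;
        vkGetPhysicalDeviceProperties(devs[i], &p);
        printf("  [%u] %s (API %u.%u, type %d)\n", i, p.deviceName,
               VK_API_VERSION_MAJOR(p.apiVersion),
               VK_API_VERSION_MINOR(p.apiVersion),
               (int)p.deviceType);
    }
    vkDestroyInstance(inst, NULL);
    return 0;
}

On a 7800X3D with no discrete card, this should list a single "AMD Radeon(TM) Graphics" device, matching the survey output above, so one detected device is the expected result.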

Operating systems

Windows

Which llama.cpp modules do you know to be affected?

No response

Command line

Problem description & steps to reproduce

Running maziyarpanahi/gemma-3-1b-it

It works, but it looks like it's limping along on one compute core.

First Bad Commit

No response

Relevant log output


pl752 commented Apr 17, 2025

I think Compute_0 and Compute_1 are not separate compute units, just separate performance metrics, so your iGPU is actually fully utilized.
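
For background: the Compute_0 / Compute_1 / 3D graphs in Task Manager are views over Windows' per-engine "GPU Engine" performance counters, which can be read directly through the PDH API. A hedged sketch, assuming Windows 10+ with a WDDM 2.x GPU driver (link with pdh.lib):

// Prints per-engine GPU utilization; instance names end in
// engtype_3D, engtype_Compute, engtype_Copy, etc., which are what
// Task Manager aggregates into its GPU graphs.
#include <windows.h>
#include <pdh.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    if (PdhOpenQueryA(NULL, 0, &query) != ERROR_SUCCESS) return 1;
    // Wildcard instance: one counter per process/engine pair.
    if (PdhAddEnglishCounterA(query,
            "\\GPU Engine(*)\\Utilization Percentage",
            0, &counter) != ERROR_SUCCESS) return 1;

    PdhCollectQueryData(query);   // rate counters need two samples
    Sleep(1000);
    PdhCollectQueryData(query);

    DWORD bytes = 0, items = 0;
    PdhGetFormattedCounterArrayA(counter, PDH_FMT_DOUBLE,
                                 &bytes, &items, NULL);  // sizing call
    PDH_FMT_COUNTERVALUE_ITEM_A *buf = malloc(bytes);
    if (buf && PdhGetFormattedCounterArrayA(counter, PDH_FMT_DOUBLE,
                                            &bytes, &items, buf)
                   == ERROR_SUCCESS) {
        for (DWORD i = 0; i < items; i++)
            if (buf[i].FmtValue.doubleValue > 0.5)   // skip idle engines
                printf("%-70s %6.2f%%\n",
                       buf[i].szName, buf[i].FmtValue.doubleValue);
    }
    free(buf);
    PdhCloseQuery(query);
    return 0;
}

During inference, the load typically shows up under a single engtype_Compute instance for the llama.cpp process, consistent with one busy compute queue rather than an underused GPU.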


vRobM commented Apr 17, 2025

Okay, thank you.
There are a number of other options as well.

[Image attached]
