Many thanks, Antonio.
For anyone interested in AI: llama.cpp with its Vulkan backend can run on many "normal" (consumer) GPUs.
Building llama.cpp with Vulkan support is as easy as building
Harbour. The main steps are:
- install Vulkan SDK
- git clone the llama.cpp repository
- cmake -B build -DGGML_VULKAN=1 -DLLAMA_CURL=ON
- cmake --build build --config Release
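Put together, the steps above could look like the following sketch. The repository URL and the Vulkan SDK installation method are assumptions here, not part of the original message; SDK installation in particular varies by platform.

```shell
# Sketch of the build steps above.
# 1. Install the Vulkan SDK first (platform-specific; assumed to be
#    available from your package manager or vulkan.lunarg.com).

# 2. Fetch the sources (repository URL is an assumption)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

# 3. Configure with the Vulkan backend and CURL support enabled
cmake -B build -DGGML_VULKAN=1 -DLLAMA_CURL=ON

# 4. Build in Release mode
cmake --build build --config Release
```

If the build succeeds, the resulting binaries (such as llama-cli and llama-server) should end up under build/bin.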
regards
--
You received this message because you are subscribed to the Google Groups "Harbour Users" group.
Unsubscribe: harbour-user...@googlegroups.com
Web: https://groups.google.com/group/harbour-users