This release primarily bumps llama-cpp-python from a 0.2.z release to version 0.3.2, which means we now support Granite 3.0 GGUF models along with the other improvements that come with the llama-cpp-python 0.3.z series.
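For reference, here is a minimal sketch of loading a Granite 3.0 GGUF model directly with llama-cpp-python 0.3.x, outside of `ilab`. The model filename and parameters below are placeholders, not part of this release; point `model_path` at whichever Granite GGUF file you have downloaded.

```python
from llama_cpp import Llama

# Hypothetical local GGUF file; substitute your own downloaded Granite model.
llm = Llama(
    model_path="granite-3.0-8b-instruct.Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to your hardware
)

# llama-cpp-python exposes an OpenAI-style chat completion interface.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF is in one sentence."}],
)
print(output["choices"][0]["message"]["content"])
```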
We have also added a new config class, so please run `ilab config init` to ensure proper functionality!
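For example, after upgrading:

```shell
ilab config init
```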