Hello,
It seems you haven't followed the procedure properly.
I just tested with llama3.2 (I haven't installed gemini) in the Dutch chatBot and it works as expected.
1- I see you haven't made the first contact with the chatBot, which is required,
because the LLM variable is written into the chatBot predicates for the current user.
Make sure to give the robot a name, and also give your own name.
2- Then you need to "enable" the LLM
(if you later change the user name, you will need to enable the LLM again).
3- In the LLM interface, make sure the URL is set as below if MRL is running on the same PC as Ollama:
http://localhost:11434/api/generate
If Ollama is running on a different PC than MyRobotLab, the URL should look like this:
http://192.168.1.45:11434/api/generate (replace with the IP address of your own Ollama PC)
4- Set your model as shown below.
Notice also that because you "enabled" the LLM, the model is now forced to respond in Dutch.
5- Now save your changes:
6- You should now be ready to use the LLM via the chatBot.
7- To make sure all of this is kept for next time, save your main config.