Hi everyone,
I'm trying to configure UniMRCP to work with Azure Speech-to-Text on-premise containers, and I’m having trouble getting language identification to function as expected.
Here’s what I’ve done so far:
Set language-identification="true"
Defined candidate-languages="he-IL,en-US"
Added multiple <service-endpoint> blocks, each pointing to a local container for a different language.
Example:
<service-endpoints load-balancing="round-robin" fail-over="true">
  <service-endpoint enable="true" service-uri="http://localhost:5001/..." language="en-US"/>
  <service-endpoint enable="true" service-uri="http://localhost:5002/..." language="he-IL"/>
  <service-endpoint enable="true" service-uri="http://localhost:5003/..." language="auto"/>
</service-endpoints>
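To sanity-check the containers independently of UniMRCP, the test I had in mind is to run at-start language identification directly against one container with the Azure Speech SDK for Python. This is only a sketch on my side: the port (5003), the sample.wav file name, and whether a single container endpoint accepts language-identification requests at all are assumptions I haven't verified:

# Sketch: at-start language identification directly against one container,
# bypassing UniMRCP. Port 5003 and sample.wav (16 kHz mono PCM) are assumptions.
import azure.cognitiveservices.speech as speechsdk

# On-premise containers are addressed via host (no subscription key needed).
speech_config = speechsdk.SpeechConfig(host="ws://localhost:5003")
auto_detect = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["he-IL", "en-US"])
audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect,
    audio_config=audio_config)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    detected = speechsdk.AutoDetectSourceLanguageResult(result)
    print("Detected language:", detected.language)
    print("Text:", result.text)
else:
    print("Recognition failed:", result.reason)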
However, in practice, UniMRCP always uses the PrimaryLanguage container. It doesn’t appear to perform any real-time language detection or switch between containers based on the spoken input. I checked the logs and saw no evidence of it attempting to reach the other containers.
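Before digging further into the plugin, is there a recommended way to verify each container on its own? What I had in mind is a direct probe like the sketch below, assuming the containers expose the standard short-audio REST path; the sample_en.wav and sample_he.wav file names are just placeholders:

# Sketch: probe each language container over the short-audio REST API,
# bypassing UniMRCP, to confirm it is reachable and transcribes its language.
# Ports, REST path and sample file names are assumptions on my part.
import requests

CONTAINERS = {
    "en-US": ("http://localhost:5001", "sample_en.wav"),
    "he-IL": ("http://localhost:5002", "sample_he.wav"),
}

for language, (host, wav_path) in CONTAINERS.items():
    url = f"{host}/speech/recognition/conversation/cognitiveservices/v1"
    with open(wav_path, "rb") as f:
        response = requests.post(
            url,
            params={"language": language, "format": "detailed"},
            headers={"Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000"},
            data=f)
    print(language, response.status_code, response.text[:200])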
I’d greatly appreciate it if anyone could:
Confirm whether multi-language detection is supported by the Azure SR plugin when using on-premise containers.
Share working configurations or steps that made it work.
Suggest anything I might be missing or misunderstanding.
Thanks in advance for any help or insights!