Hello,
I am using UniMRCP server 1.7.0 with the Azure SR plugin. I have installed it and run the diagnostic test using the bsr1 scenario from umc-addons.
I have created a copy of the bsr1 scenario and changed it to use a custom audio file, and this is where my problem begins.
When I set the audio file to a small recording, say a single sentence, I get its transcript as expected. When I set the audio file to something larger, in my case a 20-second recording, the server returns the following reply:
MRCP/2.0 243 RECOGNITION-COMPLETE 1 COMPLETE
Channel-Identifier: f56e798f646c4b7f@speechrecog
Completion-Cause: 001 no-match
While on the umc side I get the following output:
[WARN] No NLSML data available
I have modified the umsazuresr.xml file to disable single-utterance mode, as can be seen here:
<ws-streaming-recognition
  language="de-DE"
  max-alternatives="1"
  alternatives-below-threshold="false"
  sort-alternatives="false"
  confidence-format="auto"
  confidence-precision="2"
  results-indent="2"
  results-format="standard"
  tag-format="default"
  input-format="default"
  start-of-input="service-originated"
  skip-unsupported-grammars="true"
  transcription-grammar="transcribe"
  grammar-param-separator=";"
  auth-validation-period="480"
  auth-request-timeout="30"
  inter-result-timeout="0"
  input-token="Lexical"
  instance-token="ITN"
  connect-timeout="0"
  auto-reconnect="false"
  max-audio-data-chunks="0"
  max-connection-duration="0"
  graceful-ws-close="false"
  single-utterance="false"
/>
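For context, part of the end-of-input behavior is governed not only by the plugin config above but also by headers the client sends on the MRCP RECOGNIZE request itself (RFC 6787), such as Recognition-Timeout and Speech-Complete-Timeout. A request allowing longer input might look roughly like this (the message length, request-id, grammar URI, and timeout values here are purely illustrative, not taken from my actual traffic):

```
MRCP/2.0 336 RECOGNIZE 1
Channel-Identifier: f56e798f646c4b7f@speechrecog
Content-Type: text/uri-list
Recognition-Timeout: 60000
No-Input-Timeout: 10000
Speech-Complete-Timeout: 1000
Content-Length: 25

builtin:speech/transcribe
```

I am not sure which of these timeouts the umc scenario actually sends by default, so it may be worth checking whether the client side is cutting the 20-second recording short.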
If anyone could point me in the right direction on how to do speech recognition in continuous mode, that would be awesome. I am also not well versed in the UniMRCP server logic, so it could be that there is some default I have missed.
Kind Regards
Fidel Gil