The help document for the GDF plugin describes passing an event name that maps to an intent's event in Dialogflow. Thus, when using
<action application="play_and_detect_speech" data="say: Welcome detect:unimrcp:google-mrcp-v2 {start-input-timers=false,no-input-timeout=5000,recognition-timeout=50000}builtin:event/welcome?project_id=abcrde-1234;caller_id=+188888888"/>
the welcome intent is triggered in Dialogflow, and the parameter passed is caller_id, as in the example above.
The Dialogflow log shows the request as:
{"session":"901005f9c5be478e","query_input":"{\n \"event\": {\n \"name\": \"welcome\"\n }\n}","timezone":"Asia/Colombo"}
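For reference, this is roughly the detectIntent request body I would expect the plugin to build in this case, assuming it ultimately calls the Dialogflow v2 API (the helper name and dict layout below are my own illustration, not the plugin's code; queryInput.event.parameters is a real Dialogflow v2 field):

```python
import json

# Illustrative only: sketch of a Dialogflow v2 detectIntent request body
# that triggers an event and attaches caller_id as an event parameter.
def build_event_request(event_name, caller_id, language_code="en-US"):
    return {
        "queryInput": {
            "event": {
                "name": event_name,
                "languageCode": language_code,
                # Event parameters are exposed to the matched intent and
                # forwarded to the fulfillment webhook.
                "parameters": {"caller_id": caller_id},
            }
        }
    }

print(json.dumps(build_event_request("welcome", "+188888888"), indent=2))
```

Note that the logged request above contains only the event name, with no parameters block at all.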
However, no TTS for the say: Welcome prompt is heard, nor is the spoken audio captured as part of the RECOGNIZE and sent to Dialogflow.
That part works only with builtin:speech, where at most a context parameter can be passed, for example:
<action application="play_and_detect_speech" data="say: Welcome
detect:unimrcp:google-mrcp-v2
{start-input-timers=false,no-input-timeout=5000,recognition-timeout=50000}builtin:speech/transcribe?projectid=abcrde-1234;context=input.something"/>
The Dialogflow log shows this request as:
{"session":"35e7b6c9c85e4de7","query_input":"{\n \"text\": {\n \"textInputs\": [{\n \"text\": \"hi \"\n }]\n }\n}","timezone":"Asia/Colombo"}
My questions are:
1. Am I using the builtin:event grammar incorrectly?
2. Is there any other way to pass parameters like caller_id using builtin:speech grammar?
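For question 2, what I am hoping can be produced on the Dialogflow side is something like the following (queryParams.payload is a real Dialogflow v2 field for passing custom data to the webhook, but I do not know whether the MRCP plugin exposes it; the helper below is purely my own illustration):

```python
import json

# Illustrative only: sketch of a Dialogflow v2 detectIntent request body
# for a text query, with caller_id carried in queryParams.payload rather
# than in the query input itself.
def build_text_request(text, caller_id, language_code="en-US"):
    return {
        "queryInput": {
            "text": {"text": text, "languageCode": language_code}
        },
        "queryParams": {
            # payload is free-form custom data forwarded to the fulfillment
            # webhook alongside the recognized text.
            "payload": {"caller_id": caller_id}
        },
    }

print(json.dumps(build_text_request("hi", "+188888888"), indent=2))
```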