I recently acquired a Google Home smart speaker and have spent a couple of days understanding the Google Assistant architecture and how to integrate it with Node Red.
Google Assistant comprises a number of layers:
1. Android or Home smart devices to perform speech capture and playback
2. Actions to perform speech recognition and initial application routing
3. DialogFlow to add contextual understanding to the user's request
4. (Optional) IFTTT to add logic to processing requests
5. Third party servers like Belkin Wemo or Sonoff eWeLink to control the actual hardware
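The hand-offs between these layers can be sketched as a simple chain. None of the function names below are real APIs; they just mark where one layer ends and the next begins:

```javascript
// Illustrative sketch of the layering above. These functions are
// placeholders, not real APIs.

// Layer 1: the device captures audio (speech-to-text result faked here).
function captureSpeech() {
  return 'turn on the lamp';
}

// Layer 2: Actions routes the recognised text to the right app.
function routeToApp(text) {
  return { app: 'google-action', query: text };
}

// Layers 3-5: the app decides what to do and, via a third-party server,
// drives the actual hardware; the reply is spoken back by layer 1.
function handleRequest(request) {
  if (/lamp/.test(request.query)) {
    return 'OK, the lamp is on.';
  }
  return "Sorry, I didn't understand.";
}

console.log(handleRequest(routeToApp(captureSpeech())));
```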
This node is a wrapper around Google's
actions-on-google-nodejs client library and connects to layer 2 above. It receives the raw text from the speech recognition of the user's request and sends back a response to be spoken to the user. Each request also includes some state information to allow a conversation to be maintained over multiple requests.
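In essence, each turn of the conversation is a function from (recognised text, saved state) to (spoken reply, updated state). The real Actions payload is richer than this; the field names below are illustrative only, a simplified sketch of the idea:

```javascript
// Simplified sketch of one conversational turn: take the recognised text
// plus any state carried over from the previous request, and produce the
// reply to be spoken along with the state for the next turn. The actual
// Actions JSON is more involved; these field names are illustrative.
function handleTurn(request) {
  const turns = (request.state.turns || 0) + 1;
  return {
    speech: `You said: ${request.text}`, // spoken back to the user
    state: { turns },                    // sent back on the next request
    expectUserResponse: true             // keep the conversation open
  };
}

// Two turns of a conversation, threading the state through:
const first = handleTurn({ text: 'hello', state: {} });
const second = handleTurn({ text: 'goodbye', state: first.state });
console.log(second.state.turns); // 2
```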
The node runs its own Express web server, rather than using the Node Red one, so that it can be opened to the Internet without exposing all of Node Red. Keep in mind that there is no security implemented at the moment.
Also keep in mind that Google Assistant is not intended to host private apps. However, if you keep your app in a perpetual test state, it works like a private app, as it is only accessible from devices linked to your account.
There is a sample flow included which demonstrates a simple app that responds to queries containing the word 'number' or 'fancy', such as 'tell me a number', 'what is your number', 'what number is your favourite', 'say something fancy', 'talk fancy to me', etc.
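The actual logic lives in the flow's nodes, but the kind of keyword check it performs might look like this sketch (the reply strings here are invented, not the flow's real output):

```javascript
// Sketch of the keyword matching the sample flow performs on the query
// text. The reply strings are illustrative, not the flow's actual output.
function respond(query) {
  const q = query.toLowerCase();
  if (q.includes('number')) {
    return `My favourite number is ${Math.floor(Math.random() * 100)}`;
  }
  if (q.includes('fancy')) {
    return 'Ooh la la, how very sophisticated!';
  }
  return 'I only know about numbers and fancy talk.';
}

console.log(respond('tell me a number'));
console.log(respond('say something fancy'));
```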
Enjoy
Dean