Dialogflow CX provides the same functionality as Dialogflow ES for checking agent quality. The agent validation feature provides a list of classified validation messages that you can correct to improve the quality and performance of your agent.
The Analytics feature shows various statistics about agent requests and responses. This data helps you assess how your agent is being used in production and can guide improvements.
You also need to make sure you create at least 10-20 training phrases per intent (depending on the intent's complexity), so your agent can recognize a variety of end-user expressions.
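As a quick sanity check, you can audit your intents for thin training data before worrying about match quality. This is just a hypothetical helper working on a plain dict of counts (the intent names and numbers below are made up); in practice you would pull the counts from your agent's intent list:

```python
# Hypothetical helper: flag intents with fewer training phrases than a
# minimum. The intent names and counts here are made up for illustration.

def underfilled_intents(phrase_counts, minimum=10):
    """Return intents whose training-phrase count is below `minimum`."""
    return sorted(name for name, n in phrase_counts.items() if n < minimum)

counts = {"order.pizza": 14, "order.status": 6, "smalltalk.greet": 25}
print(underfilled_intents(counts))  # ['order.status']
```

Intents flagged this way are the first candidates for adding more varied end-user expressions.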
Dialogflow CX uses confidence scores to rank potential intent matches; the score ranges from 0.0 (completely uncertain) to 1.0 (completely certain). There are two possible outcomes: 1) if the highest-scoring intent has a confidence score greater than or equal to the classification threshold setting, it is returned as a match, or 2) if no intent meets the threshold, a no-match event is invoked. You can check the confidence scores in the Original response (JSON) of the simulator by clicking the clipboard icon.
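The matching logic described above can be sketched in a few lines. This is a simplified illustration, not Dialogflow's actual implementation; the intent names and scores are made up, and 0.3 is the default classification threshold you can change in the agent's ML settings:

```python
# Simplified sketch of threshold-based intent matching. In a real agent
# the confidence comes back in the detect-intent response (queryResult);
# the scores below are invented for illustration.

CLASSIFICATION_THRESHOLD = 0.3  # default in the agent's ML settings

def match_intent(scores, threshold=CLASSIFICATION_THRESHOLD):
    """Return the highest-scoring intent if it clears the threshold,
    otherwise None (which would trigger a no-match event)."""
    if not scores:
        return None
    intent, confidence = max(scores.items(), key=lambda kv: kv[1])
    return intent if confidence >= threshold else None

scores = {"book_flight": 0.82, "cancel_flight": 0.11}
print(match_intent(scores))            # book_flight (0.82 >= 0.3)
print(match_intent({"greet": 0.12}))   # None -> no-match event fires
```

Raising the threshold makes the agent stricter (more no-match events); lowering it makes matching more permissive.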
You can also use the built-in test feature to uncover bugs and prevent regressions in your agent. You create test cases (golden test cases) and use the simulator to run them against your agent. The test cases validate whether the agent's responses have changed for the end-user inputs defined in the test case.
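Conceptually, a golden test case is just a recorded conversation replayed against the current agent. The sketch below illustrates that idea with a toy stand-in for the agent (the `respond` callable and the canned replies are invented); a real run would go through the CX simulator or the test cases API:

```python
# Hypothetical sketch of what a golden test case checks: for each
# end-user input recorded in the test case, the agent's current
# response must still equal the golden (expected) response.

def run_test_case(golden, respond):
    """golden: list of (user_input, expected_response) turns.
    respond: callable simulating the agent (stand-in for the real
    detect-intent call). Returns a list of (user_input, passed) pairs."""
    results = []
    for user_input, expected in golden:
        actual = respond(user_input)
        results.append((user_input, actual == expected))
    return results

# Toy agent stand-in with canned replies.
canned = {"hi": "Hello! How can I help?", "bye": "Goodbye!"}
golden = [("hi", "Hello! How can I help?"), ("bye", "Goodbye!")]
print(run_test_case(golden, canned.get))
```

If a later agent change alters any response, the corresponding turn fails, which is exactly the regression signal the built-in test feature gives you.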
Hope this helps!