Clearly one key to quality recognition is a good, representative set of utterance files. However, it can be hard to anticipate what natural language a user will actually use to interact with your application. A developer may be too caught up in their own linguistic experience to consider regionalisms or different modes of expression. The temptation is to write only grammatically perfect utterances, which is not how people speak. From the user's side, I'm sure it is as frustrating as playing some of those old text adventure games, where half the gameplay was working out how to express what you wanted in a way the program understood!

What would help in improving apps once deployed is some sort of feedback mechanism. Our apps can easily log what works, but we don't know what didn't work. If once a day/week/month we could get a report of all speech recognized for our app at the raw-text level, along with which utterance matched it (or that no utterance matched at all), developers could review these reports. If consistent patterns were noticed, either the in-app help could be improved, or the utterance file updated to mold it better to users' actual usage.
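In the meantime, the "log what works" half is easy to do ourselves. Here is a minimal sketch of what I mean, in plain Python for a Lambda-style handler; the helper name log_intent is my own, and the event shape is just the standard Alexa request envelope:

```python
import json
import logging

logger = logging.getLogger("skill")
logging.basicConfig(level=logging.INFO)

def log_intent(event):
    """Log the matched intent and slot values from an Alexa request envelope.

    Returns the intent name (or the request type for non-intent requests)
    so the caller can route on it after logging.
    """
    request = event.get("request", {})
    if request.get("type") == "IntentRequest":
        intent = request.get("intent", {})
        name = intent.get("name", "UnknownIntent")
        # Collect slot values so the log shows what the user actually said
        slots = {k: v.get("value") for k, v in intent.get("slots", {}).items()}
        logger.info("matched intent=%s slots=%s", name, json.dumps(slots))
        return name
    # LaunchRequest, SessionEndedRequest, etc.
    logger.info("non-intent request type=%s", request.get("type"))
    return request.get("type")
```

This only captures successful matches, of course; the raw recognized text for failed matches is exactly the piece we can't see today, which is why a platform-side report would be needed.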
Thank you for your feedback. We appreciate your participation and interest in Amazon's Alexa Skills Kit developer program. We are always looking for new ways to improve the Echo and the Alexa Skills Kit. Your suggestions will be relayed to the development team, and, as I am sure you can appreciate, we are not able to comment on speculative information.