In the field of machine translation, being able to pick a subject domain greatly improves translation quality. My understanding is that voice recognition works similarly. I'm sure you've picked the best general-purpose data set to tune the Echo with, but I'm fairly sure recognition could be improved further if domains were supported. An application could declare, somewhere in its metadata, the domain it operates in (financial, music, biology, etc.), and Echo could use this to tune its recognition. Alternatively, the text in the utterance file could be used as a fallback to auto-determine the domain.
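To make the idea concrete, here is a minimal sketch of what I mean. The `domain` metadata field, the keyword lists, and the `resolve_domain` helper are all invented for illustration; nothing like this exists in the Skills Kit today:

```python
# Hypothetical sketch: a skill declares a recognition domain in its
# metadata; if it doesn't, we fall back to guessing the domain from
# keyword overlap with the skill's sample utterances.
# The "domain" field name and the keyword sets are made up.

DOMAIN_KEYWORDS = {
    "financial": {"stock", "price", "portfolio", "dividend"},
    "music": {"play", "song", "album", "artist"},
    "biology": {"species", "gene", "protein", "cell"},
}

def resolve_domain(metadata, sample_utterances):
    """Prefer an explicitly declared domain; otherwise score each
    known domain by keyword overlap with the utterance text."""
    declared = metadata.get("domain")
    if declared in DOMAIN_KEYWORDS:
        return declared
    words = {w.lower() for line in sample_utterances for w in line.split()}
    scores = {d: len(kws & words) for d, kws in DOMAIN_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

if __name__ == "__main__":
    meta = {"name": "StockTracker"}  # no explicit domain declared
    utterances = ["get the stock price for Amazon", "show my portfolio"]
    print(resolve_domain(meta, utterances))  # falls back to "financial"
```

A real implementation would presumably bias the recognizer's language model toward the resolved domain rather than just returning a label, but the selection logic would look something like this.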
Thank you for your feedback. We appreciate your participation and interest in Amazon's Alexa Appkit developer program. We are always looking for new ways to improve the Echo and the Alexa Skills Kit. Your suggestions have been relayed to the development team, and, as I am sure you can appreciate, we are not able to comment on any speculative information.