If I understand the toolkit correctly, all defined intents are always "active": when Alexa parses the user's speech, it attempts to match against every defined intent. Many applications, however, require a more fine-grained approach. Say, for example, you had a VoiceXML definition for an Interactive Voice Response (IVR) system and you wanted to use the same data set for an Echo system. You would really want to be able to implement a state machine and alter which intents are valid for the current state of the session. It's not clear to me how this can be achieved with the current API.

The simplest extension I can think of to support this type of application would be to allow an "id" to be assigned to each intent when it is defined in the schema. The response syntax could then be extended to let you return a list of intent IDs to either enable or disable. With this in place, the application would have much more control over the recognition process, and a higher-fidelity response would result.

A more powerful approach would be to allow intents/utterances to be dynamically defined in the response on a per-session basis. I'm looking at an application where I do not know ahead of time what all the possible answers will be, but for each state I will. So being able to define, for the current state, which intents I wish to accept would be ideal. A rough sketch of the simpler extension follows.
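To make the proposal concrete, here is a minimal sketch in TypeScript. The `id` field on each intent and the `enabledIntents` field in the response are assumptions of this proposal, not part of the current toolkit; the intent names and state machine are invented for illustration.

```typescript
// Hypothetical extension -- none of these fields exist in the current
// toolkit; this only illustrates the proposal above.

// 1. Each intent in the schema gains an "id" it can be referenced by.
const intentSchema = {
  intents: [
    { id: "yesNo",   intent: "YesNoIntent" },
    { id: "account", intent: "AccountIntent",
      slots: [{ name: "AccountNumber", type: "NUMBER" }] },
  ],
};

// 2. A session state machine: each state knows which intent ids
//    should be valid while the session is in that state.
interface StateDefinition {
  prompt: string;
  enabledIntents: string[]; // intent ids active in this state
}

const states: Record<string, StateDefinition> = {
  mainMenu: {
    prompt: "Say 'account' to hear your balance.",
    enabledIntents: ["account"],
  },
  confirmHangUp: {
    prompt: "Are you sure you want to quit?",
    enabledIntents: ["yesNo"],
  },
};

// 3. Proposed response extension: alongside the usual output speech,
//    tell the recognizer which intents to match on the next turn.
function buildResponse(stateName: string) {
  const state = states[stateName];
  return {
    outputSpeech: { type: "PlainText", text: state.prompt },
    enabledIntents: state.enabledIntents, // proposed field
    shouldEndSession: false,
  };
}
```

The dynamic variant would work the same way, except the response would carry full intent/utterance definitions for the next turn rather than just a list of pre-declared ids.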