
Matt Cashatt asked

Wild card utterances / default response?

Is there a way to configure utterances for wildcard questions? Take this silly scenario, for example: say I have coded a "Yes Man!" app for Echo that simply responds to any question asked with answers like "Go for it!", "You bet", "Couldn't agree more!", you get the idea. How would I set up that utterance file without knowing in advance what the user might ask?

Thanks!
Matt
alexa skills kit, voice-user interface


Matt Cashatt answered
I found a way to make this work, though it is admittedly a hack: in your index.js, just re-purpose the "HelpIntent" property of your intentHandlers object and treat it as you would your primary intent.

So, instead of this:

    HelpIntent: function (intent, session, response) {
        response.ask("Some generic help message");
    }

you do this (extending the "Yes Man" example):

    HelpIntent: function (intent, session, response) {
        var possibleResponses = [
            "You bet!",
            "Absolutely!",
            "That's a great idea!",
            "Couldn't have said it better!"
        ];
        // Pick one of the responses at random.
        var index = Math.floor(Math.random() * possibleResponses.length);
        var yesManResponse = possibleResponses[index];
        response.tellWithCard(yesManResponse, "Yes Man!", yesManResponse);
    }

Since the help intent is the default for misunderstood utterances, it acts as a universal default-response handler for all utterances. This probably doesn't have much legitimate use outside of novelty apps, but at least it lets us have some fun for now.

Now I just need some help/suggestions for more cheesy Yes Man! responses. Any ideas?

MC

By the way, here is the related official Amazon position on this topic: https://forums.developer.amazon.com/forums/thread.jspa?threadID=5081&tstart=0
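For context, here is a minimal sketch of how that handler might sit inside a complete index.js, in the style of the Node.js sample skills; the AlexaSkill helper module, the YesMan name, and the empty APP_ID are assumptions for illustration, not part of the answer above.

    // Hypothetical skeleton only; AlexaSkill.js is the helper that ships with the sample skills.
    var AlexaSkill = require('./AlexaSkill');

    var APP_ID = undefined; // replace with your skill's application id

    var YesMan = function () {
        AlexaSkill.call(this, APP_ID);
    };
    YesMan.prototype = Object.create(AlexaSkill.prototype);
    YesMan.prototype.constructor = YesMan;

    YesMan.prototype.intentHandlers = {
        // Re-purposed as described above: misunderstood utterances land here.
        HelpIntent: function (intent, session, response) {
            var possibleResponses = ["You bet!", "Absolutely!", "That's a great idea!", "Couldn't have said it better!"];
            var index = Math.floor(Math.random() * possibleResponses.length);
            response.tellWithCard(possibleResponses[index], "Yes Man!", possibleResponses[index]);
        }
    };

    exports.handler = function (event, context) {
        new YesMan().execute(event, context);
    };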



bartm59 answered
The Echo does not seem to translate all words into a sentence that is then pushed through pattern recognition on the words (which would allow you to do a true wild card). It seems the Echo needs all permutations of the utterances in order to pre-build a voice representation for each of them. When listening, the Echo software does a "best match" on the voice patterns and picks the phrase with the best score. If the spoken string has more words, it can still be a good match.

With this in mind you can create the following utterance:

    OneshotIntent to {search for | Mytext}

Suppose the app was called "Peter"; this will match:

    "Alexa, ask Peter to search for" --> returns "search for" in the Mytext slot
    "Alexa, ask Peter to search for something to eat" --> returns "search for something to eat" in the Mytext slot
    "Alexa, ask Peter to search the web for the meaning of life" --> returns "search the web for the meaning of life" in the Mytext slot

So we have created a wild card that looks like "search for *", with "*" potentially containing multiple words.

The bad news is that the Echo may not hear the exact words spoken correctly (as there is no template that needs to be matched), so you typically need to be sitting close to the Echo for this to work reliably.

Good luck,
BartM
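As a minimal sketch of the matching handler, assuming the same (intent, session, response) handler style used elsewhere in this thread: the OneshotIntent name and Mytext slot come from the utterance above, while the prompts and card title are made up for illustration.

    var intentHandlers = {
        OneshotIntent: function (intent, session, response) {
            // The literal slot carries everything the user said after "ask Peter to".
            var slot = intent.slots && intent.slots.Mytext;
            var spokenText = (slot && slot.value) ? slot.value : "";

            if (!spokenText) {
                // Nothing was captured after the intent phrase, so re-prompt.
                response.ask("What would you like me to search for?");
                return;
            }

            // e.g. spokenText === "search the web for the meaning of life"
            response.tellWithCard("You said: " + spokenText, "Peter", spokenText);
        }
    };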


Matt Cashatt answered
This is great guidance! Thank you bartm59!


AslansServant answered

Bartm59

I am not able to get your solution to work. What would the IntentSchema.json look like for this?

This article explains how the new custom slot types actually work. One of the interesting points is:

When you create a custom slot type, a key concept to understand is that this is training data for Alexa’s NLP (natural language processing). The values you provide are NOT a strict enum or array that limit what the user can say. This has two implications 1) words and phrases not in your slot values will be passed to you, 2) your code needs to perform any validation you require if what’s said is unknown.

This means that if the user gives a response that does not match an item listed in the custom slot type, the user's word(s) are still packed into the slot variable and delivered to your intent implementation.
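As a sketch of the validation the quoted article describes, something along these lines could live in the intent handler; the TopicIntent and Topic names and the KNOWN_TOPICS list are hypothetical, not from the thread.

    // Hypothetical handler: the custom slot value arrives even when it is not in the
    // slot's value list, so the code checks it against what the skill actually supports.
    var KNOWN_TOPICS = ["appendectomy", "gastric bypass"];

    var intentHandlers = {
        TopicIntent: function (intent, session, response) {
            var heard = intent.slots.Topic && intent.slots.Topic.value;

            if (heard && KNOWN_TOPICS.indexOf(heard.toLowerCase()) !== -1) {
                response.tell("Here is what we have on " + heard + ".");
            } else {
                // Unmatched values still arrive here, so log them to learn what users are asking for.
                console.log("Unrecognized topic requested: " + heard);
                response.tell("Sorry, we do not have information about " + (heard || "that") + " yet.");
            }
        }
    };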

This seems like a much-needed feature for organizations that want to continually improve their offering. I have two use cases:

  1. As a business we want to allow the user to query for any information we have.
    1. Example: Alexa, ask Health Site about appendectomy or gastric bypass.
  2. As a business we want to know what information users are requesting that we do not have.
    1. Example: Alexa, ask Health Site about Paleo Diet. We do not have this information now, but if enough people are asking for it, then we may want to start offering it.
    2. Example: Alexa, tell Health Site to pay my premium. We may want to add that ability to our skill.

In my opinion this is a HUGE opportunity for Alexa to add value to its business developers.

