Yet another skill from me... This time I've hooked up the Watson Q&A engine on BlueMix to the Echo. It only covers two information domains (healthcare and travel), but its answers are a bit more focused than the default Echo lookup. It also gives confidence ratings, which is nice.
https://youtu.be/ufrGo_JUeEg

Lessons learned:

* Did my first "settings" UI with this skill. Some of that dialog is at the end of the video. Not sure this approach would work for lots of settings, but for a small number it came out quite smoothly.
* Before I put in the option to not hear answers below a certain confidence rating, I just had a single intent, and that worked fine. When I added more intents and more utterances, it bombed: it would hear "Tell me about skin cancer" as "settings", even though I only had one utterance for settings. That's pretty poor recognition. I had to hard-code it for the demo.
* I wanted to do two skills for this, but only one servlet, so I decided to pass a request parameter to differentiate the endpoints. The problem is that the BaseServlet class doesn't make the request parameters available to the Speechlet class. I ended up doing an interesting workaround whereby the servlet punts the parameter into a map indexed by the current thread, and the speechlet retrieves it.
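The thread-indexed map workaround from the last bullet can be sketched roughly like this (class and method names here are mine, not from the actual skill; the real code lives between the servlet's request handling and the speechlet):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the workaround: the servlet stashes the request
// parameter in a map keyed by the handling thread, and the speechlet
// (which runs on that same thread) reads it back.
public class RequestParamBridge {

    // One entry per in-flight request, keyed by the thread serving it.
    private static final Map<Thread, String> PARAMS = new ConcurrentHashMap<>();

    // Called from the servlet's doPost() before delegating to the speechlet.
    public static void put(String value) {
        PARAMS.put(Thread.currentThread(), value);
    }

    // Called from the speechlet to recover the parameter for this request.
    public static String get() {
        return PARAMS.get(Thread.currentThread());
    }

    // Called after the request completes so the map doesn't leak entries.
    public static void clear() {
        PARAMS.remove(Thread.currentThread());
    }
}
```

A plain `java.lang.ThreadLocal<String>` would do the same job with less bookkeeping, since it handles the per-thread keying for you; either way the trick only works because the servlet and speechlet share a thread for the duration of one request.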
Very nice. I came across the Watson API yesterday when looking for a way to get answers from Google. Watson was way too limited with its two categories for what I wanted, but I do like your implementation. Google is so much better at answering random questions than Alexa or Siri.
Watson is designed for very specific uses. Even this narrow data hasn't been trained, so it's pretty hit or miss. I looked into Google a while back too. They'll tell you they're a search company. They aren't; they're an ad company. All of their significant revenue comes from ads, and providing an API that lets you get just the search results, without giving them a chance to slap ads on them, would undermine their business model. See if DuckDuckGo has an API. They aren't as good, but they aren't ad-revenue driven, so they might provide one.
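For what it's worth, DuckDuckGo does publish an Instant Answer API at api.duckduckgo.com; a minimal sketch of building a query URL for it might look like this (the `no_html` parameter and JSON format are from their public docs as I remember them, so verify before relying on this):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Sketch: build a query URL for DuckDuckGo's Instant Answer API.
// The endpoint and parameters are assumptions based on their public
// zero-click info API; check the current documentation first.
public class DuckDuckGoQuery {

    public static String buildUrl(String query) {
        try {
            return "https://api.duckduckgo.com/?q="
                    + URLEncoder.encode(query, "UTF-8")
                    + "&format=json&no_html=1";
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is guaranteed to be supported on every JVM.
            throw new IllegalStateException(e);
        }
    }
}
```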
Just saw the video for this linked through your reddit post -- this is awesome! Hearing confidence levels in the answers adds a sense of leniency to the process, and then it's a pleasure when it actually answers the question properly :) Great work!
Thanks! I'm stumbling towards best practices for UX in audio apps. The bit where you can set it to give low-confidence answers always/ask/never was my first use of the metaphor.

One of the nifty things Watson will do is give you an answer not only with a confidence rating, but also with citations -- i.e. not just [i]what[/i] the answer is, but how it came to that conclusion. I wanted further settings in the app to use the always/ask/never construct with citations as well. Unfortunately, the public data sets for their beta haven't been trained, so the references are all to internal documents that can't be reached. (Lots of http://10.X.X.X URLs.) So there was no value in surfacing them. It would have been a pretty powerful demo, showing the difference between deep and broad as far as searches go.

I'll have to see whether they offer more groomed sources once the service goes out of beta. In the meantime, I'll see what other BlueMix services might work with the Echo.
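The always/ask/never gating on a confidence score could be sketched like this (a minimal illustration, not the skill's actual code -- the 0.5 threshold, names, and response phrasing are all made up):

```java
// Hypothetical sketch of an always/ask/never setting applied to a
// Watson answer's confidence score (assumed to be in the range 0.0-1.0).
public class ConfidencePolicy {

    public enum Mode { ALWAYS, ASK, NEVER }

    // Illustrative cutoff below which an answer counts as low-confidence.
    private static final double THRESHOLD = 0.5;

    public static String respond(Mode mode, double confidence, String answer) {
        boolean lowConfidence = confidence < THRESHOLD;
        if (!lowConfidence || mode == Mode.ALWAYS) {
            // Speak the answer along with its confidence rating.
            return String.format("I am %.0f%% confident: %s",
                    confidence * 100, answer);
        }
        if (mode == Mode.ASK) {
            // Defer to the user before reading out a shaky answer.
            return "I have a low-confidence answer. Do you want to hear it?";
        }
        // Mode.NEVER: suppress low-confidence answers entirely.
        return "I don't have a confident answer for that.";
    }
}
```

The same three-way construct would extend naturally to citations once the trained data sets surface usable references.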