I saw the FAQ regarding testing via curl, but what I'm looking for is a way to type input to Alexa as if it were post-speech-recognition text. Otherwise, to test we have to keep yammering at Alexa all day! It would also be useful for running test suites. This matters more than testing the web device alone at this stage, because we want to see what the given input actually turns into and whether we have the configurations tuned.
I came here looking for the same sort of thing: a way to test my Alexa service without an Echo. I had already brainstormed a bit on it, and I don't think it would be that difficult to write one. A small app that you can start and point at your intent schema, utterance file, and endpoint. You can then type text into an input box that represents post-recognized speech. The app would do a very simple emulation of the Echo's intent-matching logic, call your endpoint, get the response, and display it. It's not going to be as smart as the Echo at recognizing imperfect entries, but I think such a system would be suitable for testing and debugging intent schemas to make sure the general logic flow works. So, who would find this useful? Anyone interested in contributing?
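To make the idea concrete, here's a minimal sketch of that emulation in Python. It assumes the sample-utterance file format from the ASK docs (each line is `IntentName the spoken phrase`); the function names and the exact-match strategy are this sketch's own, and a real emulator would also need slot handling and fuzzier matching:

```python
import json
import re

def normalize(phrase):
    """Lowercase and strip punctuation so typed input matches loosely."""
    return re.sub(r"[^a-z0-9 ]", "", phrase.lower()).strip()

def parse_utterances(text):
    """Parse a sample-utterances file: one 'IntentName phrase' per line
    (format assumed from the ASK documentation)."""
    table = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        intent, _, phrase = line.partition(" ")
        table[normalize(phrase)] = intent
    return table

def match_intent(table, typed):
    """Exact match after normalization; returns None for no match."""
    return table.get(normalize(typed))

def build_request(intent_name):
    """A minimal IntentRequest-shaped payload to POST to your endpoint."""
    return json.dumps({
        "version": "1.0",
        "request": {"type": "IntentRequest",
                    "intent": {"name": intent_name, "slots": {}}},
    })
```

The input box would feed `match_intent`, and the resulting payload from `build_request` would be POSTed to the configured endpoint.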
I expected this kind of tool to be here somewhere. I imagine the Echo devs themselves use this kind of setup for regression testing, so I expected some kind of Echo emulator console. Otherwise I'm not sure what people do. Even with the Echo sitting right next to me, it would be quite awkward to chat the same things at it all day long!
From the testing of my app under the certification process, it does appear that my backend is receiving automated JSON requests from Alexa, and I suspect the Echo 'device' at the other end is virtualised. I'm working on automated testing via spoken word (TTS) and voice recognition to at least provide some basic regression testing. I shall share it out once I've got something that might work with more than my own specific setup :)
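For anyone who hasn't dug into those payloads yet, the requests the backend receives look roughly like the sketch below. The overall shape (`version` / `session` / `request`) follows the ASK request format, but every ID, timestamp, and slot value here is an invented placeholder:

```python
import json

# Hand-built example of the kind of IntentRequest JSON a skill endpoint
# receives. All IDs, the timestamp, and the slot value are placeholders.
example_request = {
    "version": "1.0",
    "session": {
        "new": True,
        "sessionId": "SessionId.example-0000",                      # placeholder
        "application": {"applicationId": "amzn1.echo-sdk-ams.app.example"},
        "user": {"userId": "amzn1.account.EXAMPLE"},
    },
    "request": {
        "type": "IntentRequest",
        "requestId": "EdwRequestId.example-0000",                   # placeholder
        "timestamp": "2015-07-01T12:00:00Z",
        "intent": {
            "name": "GetHoroscope",
            "slots": {"Sign": {"name": "Sign", "value": "virgo"}},
        },
    },
}

print(json.dumps(example_request, indent=2))
```

A test harness only needs to fabricate payloads of this shape and POST them, which is presumably what the certification process is doing.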
I started on the basic harness last night. It's coming along pretty quickly. I had some other ideas along your lines while trying to fall asleep. :-)

1) Script support. The harness could run a test script. At the simplest level this would be a series of utterances, each followed by the expected response. This could be used for automated testing.

2) Automated script generation. If it knows your whole schema, it can write a test covering your whole schema: randomly, in random order, or exhaustively.

3) Automatically generated manual scripts. Ultimately you are going to need to test your app via the Echo. Although it might be interesting to use some STT and TTS to automate that (an Echo app to test Echo apps?), a simpler solution is to generate a similar randomized or exhaustive script for a human to use to test via the Echo.

Anyway, got to get the basics going first. But I know something like this would beat the stuffing out of curl...
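Ideas 1 and 2 above could be sketched as a couple of small functions. This is not the harness itself, just a rough shape for it; the function names and the `(utterance, expected)` script format are assumptions of the sketch, and `call_endpoint` stands in for whatever actually sends the utterance to the service:

```python
import random

def run_script(script, call_endpoint):
    """Idea 1: run a test script of (utterance, expected_response) pairs.
    call_endpoint(utterance) is whatever hits the service and returns its
    spoken-text reply. Returns a list of (utterance, expected, actual)
    failures; empty means the script passed."""
    failures = []
    for utterance, expected in script:
        actual = call_endpoint(utterance)
        if actual != expected:
            failures.append((utterance, expected, actual))
    return failures

def generate_script(utterances_by_intent, count, seed=0):
    """Idea 2: randomly sample (intent, utterance) pairs from the schema
    to build a test script; exhaustive mode would just iterate over the
    whole pool instead of sampling."""
    rng = random.Random(seed)  # seeded so a failing script can be replayed
    pool = [(intent, u)
            for intent, us in utterances_by_intent.items()
            for u in us]
    return [rng.choice(pool) for _ in range(count)]
```

Idea 3 falls out of `generate_script` almost for free: print the sampled utterances as a checklist for a human to read at the Echo.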
This sounds great, where do I sign up? :) I'm attacking the problem a little differently, running everything through the Echo, but at least saving my own voice by having my computer do the speaking and listening whilst I get on and code :) Here's what I'm doing: working on a basic utterance (text) and intents (JSON) parser that takes both files in and generates a bunch of test output to be spoken to the Echo. You can seed the tool with what you expect to hear back from the Echo per phrase, or have it report back on what it heard the Echo say for each spoken phrase, for you to review later. For the TTS component, I'm using -
http://responsivevoice.org/ ...and for listening to Echo and working out what was said -
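The generation step described above, expanding sample utterances into concrete phrases to speak at the Echo, could be sketched like this. It assumes the `{SlotName}` placeholder style for utterances; the `slot_values` seed-data shape is this sketch's own invention:

```python
import itertools
import re

def expand_phrases(utterance, slot_values):
    """Expand {SlotName} placeholders in a sample utterance with every
    combination of seed values, yielding concrete phrases for TTS to
    speak at the Echo. slot_values maps slot name -> list of seed
    strings (a shape assumed for this sketch)."""
    slots = re.findall(r"{(\w+)}", utterance)
    if not slots:
        return [utterance]  # no slots: the utterance is already concrete
    phrases = []
    for combo in itertools.product(*(slot_values[s] for s in slots)):
        phrase = utterance
        for name, value in zip(slots, combo):
            phrase = phrase.replace("{" + name + "}", value)
        phrases.append(phrase)
    return phrases
```

Each expanded phrase would then be handed to the TTS engine, with the expected (or heard) response logged alongside it for review.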
It's cool you're doing this but I hope one of the team members comes in and comments. It seems silly to have to do that work when there is likely an api internally that accepts the processed speech. All we need is access to that api, and frosting would be a web page control panel that lets you push things into it.
> Here's what I'm doing: working on a basic utterance (text) and intents (JSON) parser that takes both files in and generates a bunch of test output to be spoken to the Echo.

Doh. I just did that. Argh for duplication of effort. OK, I've pushed mine up to GitHub. You can find it here:
https://github.com/jjaquinta/EchoSim I've done a "release" of a basic runnable jar file with what works so far. Use my code in your own project, or contribute to this one; whatever works for you. I've only tested it on the 'horoscope' example in the docs, so it would be great to see if it falls over on real data.