So, we got our first round of feedback on the first skill we submitted. There were a few minor issues that will be easy to fix, as well as the expected complaint about one-shot requests not working (which, as we've mentioned in another thread, we believe is due to a weakness in Alexa's intent matching). The one that surprised us was being told our skill needs to follow "Section 4.1 - Intent Response (Session Management)" more closely. Specifically, they objected to the fact that we don't ask a question, yet don't close our session after finishing speaking.

This is absolutely, 100% intentional, and I believe Section 4.1 is too rigid here and needs to be applied on a case-by-case basis. Our skill is one jjaquinta would call a "fortune cookie" skill, in that, generally speaking, it hands out canned statements. We've tried to add some flourishes beyond what other similar skills have been doing (in fact, we've been slowly working on this since before LME's Angry Bard dropped). One thing we noticed in our user testing is that, with a "fortune cookie" skill, people absolutely hate having to say "Alexa, tell mySkill to do myThing" over and over if they want to reuse the skill. To combat this, we started maintaining an open session, which lets the user just say "do myThing" - much less cumbersome. We then took it a step further and implemented an intent called "ANOTHER": the user can just say "another" or "again" to trigger their last request again and get another "fortune". This was [i]extremely[/i] well received. I can't stress enough how much of a difference it made to the usability of our skill. Beyond that, maintaining an active session let us do neat things, like making sure we never repeat "fortunes" (or even things thematically close to each other) within the same session.
For us, killing the session is simply not an option. That said, we understood that voice interfaces are new and that users may need coaching, which is why we have taken every opportunity to explain these special intents once a session is active. Our help intent goes out of its way to explain how to take advantage of the "ANOTHER" intent once you've activated the skill. The documentation we link to from our help intent notes the same thing, and the example phrases in our skill's description note the preference for an active session. We wanted to build something that felt pleasant for users, and all of our feedback has told us this is the approach we should take. How do we convince the Alexa team that something that goes against their guidelines may be the right thing for our users?
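For what it's worth, the pattern described above can be sketched roughly like this. This is a minimal illustration, not the poster's actual code: the intent names ("AnotherIntent", "GetFortuneIntent"), the attribute keys, and the fortune list are all assumptions, but the `shouldEndSession` flag and `sessionAttributes` envelope are the standard Alexa response fields that make an open session possible.

```python
import random

# Illustrative fortunes; a real skill would have a much larger pool.
FORTUNES = [
    "A pleasant surprise is waiting for you.",
    "Your hard work is about to pay off.",
    "An old friend will cross your path.",
]

def build_response(text, session_attributes, end_session=False):
    """Build an Alexa-style response envelope. shouldEndSession=False
    is what keeps the session open after the skill finishes speaking."""
    return {
        "version": "1.0",
        "sessionAttributes": session_attributes,
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def handle_intent(intent_name, session_attributes):
    # "AnotherIntent" (hypothetical name) re-runs whatever the user
    # last asked for, so they can just say "another" or "again".
    if intent_name == "AnotherIntent":
        intent_name = session_attributes.get("lastIntent", "GetFortuneIntent")
    session_attributes["lastIntent"] = intent_name

    if intent_name == "GetFortuneIntent":
        # Track served fortunes in session attributes so nothing
        # repeats within the same session.
        served = session_attributes.setdefault("served", [])
        remaining = [f for f in FORTUNES if f not in served]
        if not remaining:
            return build_response("You've heard them all! Come back later.",
                                  session_attributes, end_session=True)
        fortune = random.choice(remaining)
        served.append(fortune)
        return build_response(fortune, session_attributes)

    return build_response("Sorry, I didn't get that.", session_attributes)
```

The key point is that the session attributes travel with every response, so the "no repeats" bookkeeping and the "last intent" memory cost nothing extra server-side.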
The certification process has gotten out of control. This is one example of the... I'm struggling to keep to professional language... [i]questionable judgement[/i]... shown by the certification team. I've had two skills come back with a similar objection. I want to ask them: have they even used Alexa regularly? It gets [i]REALLY[/i] tedious to have it endlessly repeat things. It's like, "Yes, I know what I can do next. Please let me get on with it."

The first time, I sent it back with an explanation of how this was by design. But, to cater to their concern, I used my dynamic adaptive feature so that people who were new to the skill heard an extra prompt, while after you had used the skill for a while, the prompt was removed. This was all pasted into the comments section that they added. [b]IT CAME BACK TODAY WITH THE SAME OBJECTION. VERBATIM.[/b]

And that wasn't the only example. There were at least six dumb/pedantic objections, [i]many of which I had specifically commented on in my feedback[/i]. It's like they didn't even read the feedback. [b]What's the point of having a comments section if the certification team is not going to read it?[/b] That leads to several other questions, like... [b]How can we possibly be innovative in our skills if the arbitrary rules are going to be so narrowly interpreted?[/b] [b]If no meaningful dialog can be held with the certification team, how can we ever get clarification on their interpretation?[/b] [b]What's the point of spending serious time on a skill only to run into the brick wall that is the random capriciousness of certification?[/b]

I'm sorry, but as far as I'm concerned, the certification process is broken. I'm on vacation tomorrow, and I was going to polish off two of my skills and submit them. But with the completely idiotic reply I got from the certification team today, I wonder: why bother? My time would probably be better spent just playing Minecraft. I'll have to see how I feel in the morning.
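For anyone curious, the "dynamic adaptive" idea above can be sketched in a few lines. This is a hedged illustration, not the actual feature: the threshold, function names, and the dict standing in for a persistent store (DynamoDB or similar) are all assumptions. The shape of the idea is just "count sessions per user, and drop the coaching prompt once the count passes a threshold".

```python
# Assumed threshold: sessions before the coaching hint is dropped.
COACHING_THRESHOLD = 3

def on_session_start(user_id, store):
    """Increment a per-user session counter in a persistent store
    (a plain dict here; a real skill would use a database)."""
    store[user_id] = store.get(user_id, 0) + 1
    return store[user_id]

def build_speech(fortune, usage_count):
    """Append a coaching prompt only for users still new to the skill."""
    speech = fortune
    if usage_count < COACHING_THRESHOLD:
        speech += " You can say 'another' to hear a new fortune."
    return speech
```

New users get the extra prompt; once they've used the skill a few times, it quietly disappears, which is exactly the behavior the certification objection ignored.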
Man, you absolutely nailed my thoughts on each of those points. Also, I was feeling rather clever, having just come up with the same process I think you're describing as your "dynamic adaptive feature" as a workaround - I guess it's good to know in advance that implementing it won't actually help anything.
"Dynamic Adaptive Features". Yeah. My CEO. She has an MBA. She's good at marketing-speak. I can't remember if we called it that in the book or not. But, no, you should do it. It will help things. [i]The users[/i]. I'm pissed that the current process has the effect of stifling innovation. I should know better. I did a blog entry on "Explaining Amazon's Indifference":
http://ocean-of-storms.com/tsatsatzu/explaining-amazons-indifference/ Their whole business model is built around being a service provider, not a business partner. If you look at all of AWS, it's just people filling out forms, availing themselves of utilities, and getting charged. There is no dialog. There is no interaction. That's their corporate culture. >_<