
Stephen Edmonds asked

How to troubleshoot an app in production?

So my application, Admirer, has been approved. But now some people are telling me it does not work, while others say it works fine. When I test it, and when Amazon reviewed it, it works fine. Any suggestions on how to get feedback on what is happening when the skill fails to respond? The rating system seems one-sided and I don't know how to find out what is going wrong. Any help would be appreciated. This is my first app, so I have little to no experience troubleshooting something like this. Thanks in advance.
alexa skills kit, submission testing, certification

Steve A answered
I'm with you! There needs to be some sort of feedback mechanism where devs can offer support without rating their own apps. I've often been tempted (when, for example, it's clear my skill doesn't work for someone because the user is saying the wrong trigger phrase) to chime in with help, but that forces me to rate my own app, which seems like bad practice. More generally, there seems to be a missing piece between certification requirements and end-user experience, given how many one-star reviews there are with comments like "doesn't work", "broken", "tried five times and wouldn't open", etc. My guess is that the problems are due to user error in the majority of cases. I've tried some of the skills that have lots and lots of "it's broken" comments, and they all worked fine for me. I don't have an answer, but there does seem to be some problem here. I can totally understand ratings based on a skill's usefulness, or difficulty of use, etc. But the number of people who apparently can't open the skills is discouraging. (Of course, it could all be a self-selection problem.)

jjaquinta answered
All reviewing systems have this problem. It's usually evened out by quantity and by marking things "helpful", which means the cogent reviews float to the top. But here the number of users is reasonably small, so scale isn't working. For the non-user errors... I rely on copious logging. Pretty much every intent received gets logged, mostly to memory, and I can then ping the servlet on a different URI and get the in-memory logs. I can sift through those and, basically, watch each person's history and get a feel for how they use the skill. For a more complicated skill, like Starlanes, I keep persistent logs in DynamoDB. I can then dump reports on them and do more detailed analysis. This is kind of hard to do for a pure Lambda skill: there is no way to track anything in memory between invocations, and if you choose to persist, you add the time it takes to persist to the response time, since you can't do anything in the background. A rough sketch of the in-memory approach is below.
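
To make the in-memory idea concrete, here is a minimal sketch of how a servlet-hosted skill could buffer intent activity and dump it from a separate path. This is not jjaquinta's actual code; the class name, log format, and the /logs path are made up for the example, and it assumes the standard javax.servlet API. Your intent handler would call logIntent() for each request it receives.

    import java.io.IOException;
    import java.time.Instant;
    import java.util.ArrayDeque;
    import java.util.Deque;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class LoggingSkillServlet extends HttpServlet {
        // Bounded in-memory buffer; it only lives as long as the servlet
        // instance, which is exactly the limitation noted above for Lambda.
        private static final int MAX_ENTRIES = 1000;
        private static final Deque<String> LOG = new ArrayDeque<>();

        // Call this from the skill's request handler for every intent received.
        static void logIntent(String userId, String intentName) {
            synchronized (LOG) {
                if (LOG.size() >= MAX_ENTRIES) {
                    LOG.removeFirst(); // drop the oldest entry
                }
                LOG.addLast(Instant.now() + " " + userId + " " + intentName);
            }
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Pinging the servlet on this separate path dumps the buffered
            // log so you can sift through each user's history. In real use
            // this endpoint should be access-controlled.
            if ("/logs".equals(req.getPathInfo())) {
                resp.setContentType("text/plain");
                synchronized (LOG) {
                    for (String entry : LOG) {
                        resp.getWriter().println(entry);
                    }
                }
            } else {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            }
        }
    }

Because logging is just an append to an in-memory deque, it adds effectively nothing to the skill's response time, which is the trade-off against persisting each event to a store like DynamoDB inline with the request.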