How can I implement AVS in my iOS app? (Already using AWS for auth)
a. To use AVS from an iOS app, I've read that I have to implement the Login with Amazon (LWA) SDK. I have an existing app with Cognito already implemented for authentication; the token I get from Cognito basically gives me access to DynamoDB. Can I use that token to grant my end users access to my Amazon account? Is this possible, or will I still have to implement the Login with Amazon SDK?

b. I need some clarity on the AVS and ASK use cases. Can I use ASK only if I have a device that already works with the Alexa Voice Service? Should I first make sure that I am using AVS before I can create new skills? In which scenarios must (or can) I use each of AVS and ASK?

c. My team and I are developing an IoT-capable device. I am using BLE (Bluetooth Low Energy) to connect the device to the iOS app. The device doesn't have a microphone, so I want the app to recognize voice commands from the user and, based on the response to each command, perform actions on the device over BLE.

> Who is doing the talking? What device is doing the listening?
The end users must be able to speak into the iPhone mic, and the app should send this to Alexa. The app should receive the response.

> What device wants to be notified?
I want the response received from Alexa to trigger an event in the app, which should then send a Bluetooth command to my device. The Bluetooth command will trigger an action on the device.

> Is this a once-off or something you want to publish for multiple people to use?
Yes, I want multiple people to be able to use the device and the app once development has come to a close.
a. Unfortunately, (as I understand it) Cognito is for allowing users to access your AWS data. For AVS, we need the user's permission to access THEIR account data. That means you'll need to get an access token via the LWA SDK.

b. AVS = access Alexa through your device. Think of an Echo: (almost) everything an Echo can do, AVS can do. ASK = give Alexa more capabilities. For instance, Garageio introduced a Skill that allows Alexa to control your garage door. If you want to control your device through Alexa, you'll want ASK. If you want something that acts like an Echo, you want AVS. You don't need to use AVS to create a Skill, and you don't need to create a Skill to use AVS. The two work well together, though: AVS has access to every Skill that Echo does.

c. Here are the assumptions I'm making. If any are wrong, please let me know:
* You want an Echo-like experience for your IoT device
* You ALSO want to control your device via Echo
* You have a persistent internet connection to the IoT device - it does NOT use the phone

It sounds like you would like the phone to be the primary thing that talks to AVS, forwarding commands to the IoT device. However, it's probably easier to keep the majority of the AVS-related code on your IoT device and have the phone push your voice commands to it via Bluetooth. BLE is not the best for voice, but (I'm assuming) you can renegotiate the Bluetooth connection to a different protocol when you want to speak.

If you want to control your device with custom commands, you'll need a way to push those commands to your device. The way Skills work (in broad terms) is that when a user asks Alexa to do something using keywords from your skill (for instance, "Alexa, ask My Fancy Device to set Foo to 3"), the Alexa service will trigger a function that you provide via AWS Lambda. You can then use that function to trigger something on your IoT device.
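To make that Skill-to-Lambda flow concrete, here's a minimal Python sketch of such a Lambda handler. The intent name `SetFooIntent`, the slot name `Value`, and the `notify_device` helper are illustrative assumptions, not part of any real project - you'd substitute whatever your skill's interaction model defines.

```python
# Minimal sketch of an Alexa custom-skill handler for AWS Lambda.
# "SetFooIntent", the "Value" slot, and notify_device() are hypothetical.

def notify_device(value):
    """Placeholder: forward the command to the backend that reaches your device."""
    print(f"Would set Foo to {value} on the device")

def build_response(text, end_session=True):
    """Wrap plain text in the response envelope the Alexa Skills Kit expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if (request["type"] == "IntentRequest"
            and request["intent"]["name"] == "SetFooIntent"):
        value = request["intent"]["slots"]["Value"]["value"]
        notify_device(value)
        return build_response(f"Okay, setting Foo to {value}.")
    return build_response("Sorry, I didn't understand that.")
```

An utterance like "Alexa, ask My Fancy Device to set Foo to 3" would arrive as an `IntentRequest` with the slot `Value` filled in as "3", and the handler's job is just to relay it and speak a confirmation.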
Usually that process involves sending a request to your own cloud, which manages the connection and controls your IoT device. I hope that answered your questions! If you have any more, or want clarification, please feel free to respond. This URL may also be helpful:
Thanks for the answer, it cleared up a lot of lingering doubts about the data flow. I don't really need all the capabilities of the Echo. I was wondering if it was possible to have just ASK working without AVS, as a standalone service? Basically, I want only a few commands and responses that are relevant to the product.

Okay, say I were to implement the LWA SDK. What if I want to provide shared access to my own account to all my users? Is this possible? Would the data load be too heavy? The reason I want to follow this approach is that I am already authenticating my users at login, so maybe I can log in to my account as well once the user is authenticated.

To your deductions:
> You want an Echo-like experience for your IoT device
This is true, but only to a point. I don't want all the functionality of an Echo, only the parts that are specific to the app and device and can complement the overall user experience.
> You also want to control your device via Echo (I'm assuming you mean AVS?); you have a persistent internet connection to the IoT device
The product we're trying to make doesn't really have a mic in its current version, so I was hoping to control my device via the app that I create. The product we're developing does not have a WiFi module; instead it is connected to the app via BLE. I am currently able to control the device from the app: whenever I want it to start a treatment, for example, I just call a function that writes a Bluetooth command to the device to trigger the treatment.

Also, I was wondering if it's possible to change the wake word from "Alexa" or "Amazon" to something else? Cheers
Unfortunately, ASK is not a standalone service. If you want to be able to talk to your device, you'll need to use AVS. And in order to certify your device as Alexa-capable, you would need to include all of Alexa, not just the pieces you care about.

Additionally, "hard-coding" your account and having all of your users interact with AVS through your account wouldn't work either. It breaks the following item from the terms and conditions:

(j) you will not facilitate or provide access to the Alexa Service through any means other than through the authentication methods we specify, and you will not disable, circumvent or avoid any security device, mechanism or protocol of the Alexa Service;

See the authentication diagram here:
https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/getting-started-with-the-alexa-voice-service#authorizing-a-device-to-access-the-alexa-voice-service

Actually, I did mean the Echo. If you implement a skill, you can access that skill from across the entire Alexa ecosystem, including the Echo. But from this forum post, it seems that you care more about voice-enabling your device from the device itself, rather than from the entire Alexa ecosystem.

Anyway, if you don't have a persistent internet connection to your device, you're going to run into a lot of trouble getting AVS to work correctly. It wasn't designed to be used over Bluetooth LE in the manner you've described.

Finally, on the subject of the wake word: if you have a push-button approach (using the phone microphone), you do not need a wake word. If you want far-field technology with a wake word, that's an entirely different discussion that requires explicit Amazon approval. See the below item from the Terms and Conditions:

(m) you will not implement far-field voice recognition or use of a spoken word to trigger the activation of the Alexa Service in Your Products without Amazon's prior written approval and any such implementation may be subject to additional terms and conditions;

It doesn't sound like AVS is the best fit for your product right now. Sorry!
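For completeness, the LWA step behind that diagram is a standard OAuth 2.0 authorization-code exchange against Amazon's token endpoint. Here is a rough Python sketch of building that request - the client ID, secret, and redirect URI shown are placeholder values, and the sketch only constructs the request rather than sending it:

```python
from urllib.parse import urlencode

# Login with Amazon token endpoint for exchanging an authorization
# code (obtained via the LWA SDK) for access/refresh tokens.
LWA_TOKEN_URL = "https://api.amazon.com/auth/o2/token"

def build_token_request(auth_code, client_id, client_secret, redirect_uri):
    """Return (url, form_body) for the OAuth 2.0 authorization-code grant.

    POST the body with Content-Type application/x-www-form-urlencoded;
    the JSON reply contains access_token, refresh_token, and expires_in.
    """
    body = urlencode({
        "grant_type": "authorization_code",
        "code": auth_code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    })
    return LWA_TOKEN_URL, body
```

The access token returned here is what accompanies the user's requests to AVS - which is exactly why each user has to go through LWA themselves rather than piggybacking on a single hard-coded account.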