VIDEO: Echo controlling anything in my home including KODI and Z-Wave stuff
Link to videos:
https://www.youtube.com/channel/UC6fLTEmRoGam7B5VoKazC9A

Details from the KODI video (there's also one on other stuff, including IR control of the TV, a Z-Wave thermostat, lights, a fan controller, etc.): This is one of three videos of what I'm doing with the Amazon Echo, KODI and home automation. If I get enough subscribers, I plan to make more organized videos in the future, along with details of how to set all this up in your home. I'll also post the KODI module for you to play with, in conjunction with the how-to videos.

In this video, I'm demonstrating true voice-enabled two-way feedback between KODI, my home automation system and the Amazon Echo! Here's what I'm using to do this:

1. Amazon's Echo (aka Alexa)
2. KODI running on an nVidia Shield Android TV device
3. A free home automation program called Motorola Premise Home Control (http://cocoontech.com/forums/page/home-a...premise-r3)
4. A KODI module I've written for Premise allowing full two-way IP-based functionality (including library importing) and IR too (so you can use the native Netflix 4K app that comes on the nVidia Shield without picking up another remote).
5. A very versatile SpeechParser module I've written for Premise that takes a generic command phrase, performs some action and forms a natural-language response.
6. A new Amazon Echo skill I'm calling "Premise" that is in testing under my developer account. It uses an intent called "Premise" to pass whatever is said after "Alexa, ask Premise to" to my home automation server.
7. A free-tier Amazon Web Services (AWS) account to send Alexa commands to my home automation server over HTTPS. The same AWS Lambda function also reads back an HTTP response, sent from my home automation server (via the SpeechParser module), describing what actions took place.

Some additional, even more geeky details: Everything you see is done in a very generic fashion. No individual phrases were programmed for what you see in the video; I'm too lazy for that! I've written code (the Premise SpeechParser module) for my home automation system that actually interprets the sentence, using nested regular expressions to find the property state, property value, device type and room location you are trying to control based on the command you say. In this manner, the command phrases are NOT order dependent (unlike most other options out there, including Amazon's), and they leverage the object-based structure of Premise to recursively find a match within my home for whatever command is issued.

To elaborate: once extracted from the command phrase, the device type and room location are used to examine all devices under a particular location (e.g. a room) that match a particular device type (e.g. light). Once a match is found (e.g. the table lamp in the living room), the properties under that object are compared using recursion to find the best match for the command sentence, and the new value is set. The queries in the Part 2 video work in a similar manner, but instead of setting a property value, they grab the value and return a response to the query.
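To make the matching idea concrete, here's a minimal Python sketch of order-independent phrase matching. This is NOT the actual Premise module (which is VBScript, and far more general) — the rooms, device types, synonyms and the toy "home" tree below are all made up for illustration:

```python
import re

# Illustrative vocabularies -- the real module derives these from the
# Premise object tree rather than hard-coding them.
ROOMS = ["living room", "kitchen", "bedroom"]
DEVICE_TYPES = {"light": ["light", "lamp"], "fan": ["fan"]}
COMMANDS = {"PowerState": {True: ["turn on", "power on"],
                           False: ["turn off", "power off"]}}

# A toy stand-in for the home's object tree: location -> device -> properties.
HOME = {
    "living room": {"table lamp": {"type": "light", "PowerState": False}},
    "kitchen": {"ceiling fan": {"type": "fan", "PowerState": False}},
}

def parse(phrase):
    """Find room, device type and target property value anywhere in the
    phrase -- each fragment is matched independently, so the spoken
    order doesn't matter."""
    phrase = phrase.lower()
    room = next((r for r in ROOMS if re.search(r, phrase)), None)
    dtype = next((t for t, words in DEVICE_TYPES.items()
                  if any(re.search(rf"\b{w}\b", phrase) for w in words)), None)
    prop = value = None
    for p, values in COMMANDS.items():
        for v, words in values.items():
            if any(re.search(w, phrase) for w in words):
                prop, value = p, v
    return room, dtype, prop, value

def execute(phrase):
    """Walk the devices under the matched location, find one of the
    matched type with the matched property, and set the new value."""
    room, dtype, prop, value = parse(phrase)
    for name, dev in HOME.get(room, {}).items():
        if dev["type"] == dtype and prop in dev:
            dev[prop] = value
            return f"OK, the {name} in the {room} is now {'on' if value else 'off'}."
    return "Sorry, I couldn't find that device."
```

Because each fragment is located independently, `execute("living room table lamp, power on")` lands on the same device as `execute("turn on the table lamp in the living room")` — same idea as the recursive property matching described above, just drastically simplified.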
Really interesting work! I read up on the Premise project in the past but went with OpenHAB for various reasons. I wish someone would bring this level of functionality to that end (just a matter of time). Currently using the echobridge script to control various items from Alexa --> OpenHAB as items, e.g. turn on "Security", which arms the alarm... open the "garage door", etc.
I'm all for open source software, but with the kind of talent and money it would take for OpenHAB to become a serious Premise contender, it sadly just isn't going to happen. For one, I'm sure the OpenHAB folks have day jobs, and Premise is a very powerful and already free software solution. The IDE for Premise is 100% stable and very easy to use and learn. Millions of dollars were spent on Premise (ask Motorola and Lantronix), and it was developed by a team of very well known ex-Microsoft programmers...

I'm sure you realize this from my video, but the commands I'm giving Premise can be spoken in any order, and can even include synonyms (e.g. "turn on" vs. "power on", etc.). I developed this SpeechParser module from within the Premise IDE, and it's all open-source VBScript, which means you don't have to be a programmer to add your own custom phrases. This isn't a "hack" that requires coding each command in the Amazon cloud or emulating a Philips Hue bridge with a limited command set. 100% of the sentence is processed on the HA server side, where it makes the most sense. In other words, the actual interpretation of the sentence takes place on the HA server, by examining the objects (aka devices) in my home and using regular expressions to compare property names, object names, room locations, etc. to what was said (otherwise I'd have to code every possible way of saying every sentence).
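For what it's worth, when all the interpretation lives on the HA server, the cloud side of a passthrough skill like the "Premise" one described above can be extremely thin. A rough Python sketch — the endpoint URL and the catch-all slot name "Command" are placeholders I made up, not the author's actual setup:

```python
import urllib.request

# Placeholder endpoint -- in a real setup this would be the home
# automation server's HTTPS URL (this one is made up).
HA_SERVER = "https://example.com/premise/speech"

def build_response(text):
    # Wrap plain text in the Alexa custom-skill JSON response shape so
    # the Echo speaks back whatever the HA server returned.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

def lambda_handler(event, context):
    # Grab the raw phrase spoken after "Alexa, ask Premise to ..."
    # ("Command" is a hypothetical catch-all slot name).
    phrase = event["request"]["intent"]["slots"]["Command"]["value"]
    # Forward it to the HA server; the reply body is the natural-language
    # answer built server-side (by something like the SpeechParser module).
    req = urllib.request.Request(HA_SERVER, data=phrase.encode("utf-8"))
    with urllib.request.urlopen(req) as resp:
        return build_response(resp.read().decode("utf-8"))
```

The point of the sketch is the division of labor: the Lambda never looks at the words at all, so new phrases and devices require zero cloud-side changes.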