I know the corner cameras are heavily optimized for face detection, but if you could provide access to the depth map generated by MVS for the entire image, that would be an extremely differentiating feature for me. I assume you are constraining the problem by only looking at the parallax between facial features, rather than working on a pixel-by-pixel basis across the whole image. If you did expose a full depth map, though, it would allow all sorts of new UX models for gestures and would let developers leverage their own IP for facial recognition and the like. Are there plans to provide SDK access to this type of depth data for the entire image? Cheers! Donnie
Hi Donnie,

The short answer is no, we have no plans to expose this at this time. Your ability to interface with the four corner cameras is limited to the HeadTracking API. The HeadTracking API provides the data as an event containing the following pieces of information:

• The timestamp of the event.
• Whether the software currently detects the user's face and is actively tracking the user's head.
• The position of the head relative to the screen orientation of the phone, as X, Y, and Z coordinates.
• The angle at which the head is inclined right or left relative to the screen orientation of the phone.
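For anyone wondering what consuming an event like that might look like, here's a minimal sketch in Java. The class, field, and method names below are hypothetical illustrations of the fields Kevin describes, not the actual SDK types or signatures:

```java
// Hypothetical sketch only -- these names are NOT the real SDK API.
public class HeadTrackingSketch {

    // Models the four pieces of information the event carries.
    static final class HeadTrackingEvent {
        final long timestampMs;     // when the event was generated
        final boolean isTracking;   // face detected and head actively tracked?
        final float x, y, z;        // head position relative to screen orientation
        final float inclineDeg;     // right/left head tilt relative to screen orientation

        HeadTrackingEvent(long timestampMs, boolean isTracking,
                          float x, float y, float z, float inclineDeg) {
            this.timestampMs = timestampMs;
            this.isTracking = isTracking;
            this.x = x;
            this.y = y;
            this.z = z;
            this.inclineDeg = inclineDeg;
        }
    }

    // Example consumer: skip stale data while tracking is lost,
    // otherwise report the head pose.
    static String describe(HeadTrackingEvent e) {
        if (!e.isTracking) {
            return "tracking lost";
        }
        return "head at (" + e.x + ", " + e.y + ", " + e.z
                + "), tilt " + e.inclineDeg + " deg";
    }

    public static void main(String[] args) {
        HeadTrackingEvent e =
                new HeadTrackingEvent(1000L, true, 0.0f, 20.0f, 350.0f, -5.0f);
        System.out.println(describe(e));
    }
}
```

The key design point for gesture-style UX built on this data is the tracking flag: position and tilt are only meaningful on events where the software reports it is actively tracking the head.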
Thanks, Kevin. I understand the business case for locking it down. This is an awesome first step, and it's exciting to see all of the movement in this space from things like the Amazon Fire, the HTC One M8, and software solutions like Google's Lens Blur.