Facebook F8 Day 2: AR glasses, brain typing, skin listening, and more
Day 1 of Facebook’s annual F8 conference focused on enhancing the present. Day 2 was all about setting a vision for the future, and things got weird. The company also announced a new goal: to "create and ship new, category-defining consumer products that are social first, at scale."
New Surround 360 cameras
Facebook began by announcing two new 360-degree developer cameras, the x24 and the x6, which join the Surround 360 line the company unveiled at F8 last year.
AR glasses
Oculus’ chief scientist, Michael Abrash, took the stage to talk about AR and AR glasses. Abrash defined "full AR" as AR that is socially acceptable, with both audio and visual elements and contextually aware AI, rather than an occasionally used device for special situations. Abrash said that such an AR device is at least five years away.
Terragraph, Aquila and Tether-tenna
The company gave an update on its Terragraph and Aquila projects, and introduced Tether-tenna, a small helicopter attached to a fiber line that can be flown to create a virtual tower a few hundred feet above the ground. "This is still in the early stages of development, and lots of work is needed to ensure that it will be able to operate autonomously for months at a time, but we’re excited about the progress so far," Yael Maguire wrote.
Building 8
Facebook’s Regina Dugan, who leads the company’s mysterious Building 8 division, took the stage and demoed a few awe-inspiring, if terrifying, technologies.
Dugan demoed a "brain mouse for AR," which the company described as "a silent speech interface with the speed and flexibility of voice and privacy of text." Facebook has "a goal of creating a system capable of typing 100 words per minute, straight from the speech center of your brain — 5x faster than you can type on your smartphone today."
Dugan also played a video of an engineer who has learned to "listen" with her skin. The company is using the "Tadoma method" which was developed based on the experience of Helen Keller. "The cochlea in your ear takes in sound and separates it into frequency components that are transmitted to the brain. We can do the same work of the cochlea, but transmit the resulting frequency information, instead, via your skin."
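The cochlea analogy in Dugan's quote, separating sound into frequency components whose energies could each drive a skin actuator, can be sketched in a few lines of Python. The band count, frequency range, and log spacing below are illustrative assumptions for the sketch, not Facebook's actual design:

```python
import numpy as np

def band_energies(signal, sample_rate, n_bands=16, f_min=100.0, f_max=8000.0):
    """Split one audio frame into log-spaced frequency bands, like a crude
    cochlea. Each band's energy could, in principle, drive one skin actuator.
    All parameters here are illustrative, not from Facebook's system."""
    spectrum = np.abs(np.fft.rfft(signal))                 # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    edges = np.logspace(np.log10(f_min), np.log10(f_max), n_bands + 1)
    energies = np.empty(n_bands)
    for i in range(n_bands):
        mask = (freqs >= edges[i]) & (freqs < edges[i + 1])
        energies[i] = spectrum[mask].sum()                 # energy per band
    return energies

# A pure 1 kHz tone should concentrate its energy in a single band.
sr = 16000
t = np.arange(sr // 10) / sr                               # 0.1 s frame
tone = np.sin(2 * np.pi * 1000 * t)
e = band_energies(tone, sr)
```

In a real tactile interface these per-band energies would be refreshed every frame and mapped to vibration intensities along the arm, which is the "same work of the cochlea" idea in the quote.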
Dugan didn’t give a timeline on the skin interface, but she did say the brain-typing tech is about three years away.