At FRL, our research teams are starting to build the core infrastructure that will underpin tomorrow’s AR experiences.
How does this work? Think of the 3D spaces we showed above, and now picture this capability at planet scale. Using machine vision alongside localization and mapping technology, LiveMaps will create a shared virtual map. To deliver this technology at that scale, we envision that LiveMaps will rely on crowd-sourced information captured by tomorrow's smart devices. To populate the first generation of maps, our researchers are exploring mapping our own campuses and using small sets of geotagged public images to generate point clouds — a common technique in navigation mapping technology today.
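At the heart of building a point cloud from overlapping images is triangulation: once the same landmark has been matched in two photos taken from known camera poses, its 3D position can be recovered. The sketch below shows the standard linear (DLT) triangulation step with NumPy. The toy camera matrices and the noise-free setup are illustrative assumptions, not a description of FRL's actual pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D observations (u, v) of the same point in each image.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X, stacked into a 4x4 system A @ X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The smallest right singular vector of A is the least-squares solution.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both cameras, then recover it.
X_true = np.array([0.5, 0.2, 4.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]

X_est = triangulate(P1, P2, x1, x2)
```

A real mapping system repeats this over millions of matched features, with pose estimation and bundle adjustment refining cameras and points jointly; the two-view case above is the basic building block.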
Rather than reconstructing your surroundings in real time, AR glasses will one day tap into these 3D maps. That drastically reduces the compute power the glasses need, enabling them to run on a mobile chipset. With these 3D spaces, your avatar could teleport anywhere in the world.
In addition to teleportation, LiveMaps will one day allow you to search and share real-time information about the physical world. This will enable a powerful assistant to bring you personalized information tied to where you are, instantaneously. It will also provide an overlay that lets you anchor virtual content in the real world. Imagine getting showtimes just by looking at a movie theater's marquee, or calendar reminders from a personal assistant that can also tell when you've left behind something you'll need.
We’re still in the research phase, and we’re committed to doing that research out in the open, sharing our progress on the road to AR glasses. We’ll also continue this work with privacy in mind, engaging global experts for their feedback as we think through best practices and how people can retain control of their information. This technology could radically change the way that humans connect in the future, and we’re excited to see where it takes us.