So far, I’ve yet to give much coverage to AR (Augmented Reality) and MR (Mixed Reality) - largely because my own definition of where technology and the physical world meet is ill-defined.
The last few years have seen big shifts in the merging of both mixed and augmented reality. These shifts have been accelerated by bigger steps forward in SLAM (simultaneous localisation and mapping) and visual-inertial odometry, advances in camera sensors, and Apple and Google’s releases of their respective SDKs, ARKit and ARCore. For the purposes of this exploration, AR/MR can be divided between in-phone and headsets.
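To make the localisation idea concrete, here’s a toy Python sketch of dead reckoning - the naive version of the pose-estimation problem that SLAM and visual-inertial odometry solve robustly by fusing camera and IMU data. The function name and motion format are mine, purely illustrative:

```python
import math

def integrate_odometry(pose, motions):
    """Integrate incremental (distance, turn) measurements into a 2D
    pose (x, y, heading). Each motion is (metres_forward, radians_turned).
    Real VIO does this with noisy sensor data and drift correction."""
    x, y, theta = pose
    for dist, dturn in motions:
        theta += dturn                 # update heading first
        x += dist * math.cos(theta)    # then move along the new heading
        y += dist * math.sin(theta)
    return (x, y, theta)

# Drive 1m forward, turn 90 degrees left, drive 1m forward:
# ends up one metre ahead and one metre to the left of the start.
print(integrate_odometry((0.0, 0.0, 0.0), [(1.0, 0.0), (1.0, math.pi / 2)]))
```

Errors in each step compound, which is why raw dead reckoning drifts and why the loop-closure and mapping parts of SLAM matter.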
One of the areas in which mixed reality has been touted as potentially transformational is the physical retail experience, an industry which readers of this newsletter, or indeed any newspaper, will know is in need of transformation. Magic Leap is mixed reality’s poster child: the Florida-based company was founded in 2010, has raised over $2bn and launched its first product, Magic Leap One (Creator Edition), in August 2018.
Whilst AR/MR has produced some interesting use cases and demos, in my view there has yet to be a killer use case in the wild outside of gaming. AR-enabled retail promises enhanced in-store and at-home experiences for consumers, and better analytics and ultimately conversion for retailers. Retail has already seen some narrow applications of AR, whether it’s virtual assistance for make-up (Modiface, recently acquired by L'Oreal), product placement (a ton of apps, incl. Artsy for art, Amazon for furniture, 3co for plants) or wayfinding (HEMA video). AR’s promised transformation of retail is based around transposing limitless visual information onto the “experience” of shopping (urgh, a shopping experience). This can broadly be divided into three categories aimed at driving conversion:
- Item selection and placement: the ability to choose potentially limitless stock items and place them on yourself or in-situ
- Data layering: object recognition can add third-party information to the environment, such as which celebrity wore those jeans or which of your friends recommended you buy that turtle neck (pro tip: never buy a turtle neck)
- Experiential: retailers can now add storytelling and education to the traditional in-store retail experience
To further visualise this, check out the demo of MasterCard’s vision for the future of retail.
Magic Leap, alongside Microsoft’s HoloLens, are the forerunners in the application of MR to retail. Both companies have signed big partnerships with retailers: Magic Leap with Wayfair and HoloLens with Alibaba’s Taobao.
Alibaba has been using VR, arguably a gateway drug to AR/MR, since 2015.
🚕 Mobility - to date, AR/MR in mobility has been fairly limited, with vehicle heads-up displays providing the most utility. We’re starting to see AR/MR applied to orthogonal mobility use cases such as car sales and augmented vehicle repairs. Volkswagen has been early to use the technology, adopting it for service support and new vehicle design visualisations in a system it calls MARTA (Mobile Augmented Reality Technical Assistance). By 2021, 72% of UK car dealerships are expected to use AR to aid in selling vehicles. Though related to mobility, these aren’t the sexy use cases that get us excited.
The recent “mapping renaissance” has been triggered by autonomous vehicles’ need for more detailed and precise two- and three-dimensional maps. The collision of higher-resolution maps with AR/MR will allow for a rich graphical UI on the real world, potentially spanning wayfinding, guided tours, and augmented retail, dining and cultural experiences. Google has slowly been adapting its Maps product to become more immersive and centred around IRL augmented discovery.
The more we become accustomed to AR/MR placing objects around us, the more this becomes an AI question instead of a physical-surroundings question. A number of recent advances are allowing machines to better understand visual space. DeepMind recently announced a program which learns to visualise scenes from various angles using a GQN (generative query network). One of my favourite recent papers, ‘The Sound of Pixels’, presents an AI which can detect sound sources in video, ultimately allowing sound control through image alone.