The Next Breakthroughs in Augmented Reality

Apr 18 2018 | By Marilyn Harris | Photo: Timothy Lee Photographers

Though many consumers are already familiar with the concept of virtual reality (VR), augmented reality (AR) is a relative newcomer to the marketplace.

Aside from some video game and pilot training applications, these technologies haven’t been integrated into most people’s lives—yet. But they are already underpinning groundbreaking work among the scientific community, and Columbia Engineering researchers are developing the software interfaces that will integrate AR into everyday applications capable of contributing to health, security, interconnection, and creativity.

AR differs from VR in that, rather than creating a self-contained environment, it adds information to the real world as humans perceive it—typically by overlaying graphics or text onto a see-through headset or, in the case of the popular mobile gaming app Pokémon GO, onto a smartphone screen, where the camera view is populated with virtual creatures in a player’s real-world surroundings.

Brain-computer interfaces, such as the one pictured here, are poised to play a significant role in the future trajectory of augmented reality systems.

Columbia engineers have long played a meaningful role in developing and integrating some of the key components of AR systems. And as the technology behind these elements has advanced, so have the dreams of how AR’s application can beget a wide variety of social benefits. AR visionaries have foreseen its value in medicine, industry, tourism, defense, real estate, emergency response, and leisure. Projects underway at Columbia Engineering will help make these applications possible.

One of the earliest AR visionaries, Professor Steve Feiner has been recognized by the IEEE for his nearly three decades of pioneering contributions to the field. His lab is currently refining projects such as a wearable AR device to assist physicians during minimally invasive surgical procedures, an educational AR system that lets students document a 16th-century French manuscript, and AR interaction techniques that help users perform skilled 3D maintenance and assembly tasks.

Professor Paul Sajda focuses his research on the brain-computer interface (BCI)—the direct communication path between the wired or enhanced brain and an external device. Sajda’s Laboratory for Intelligent Imaging and Neural Computing is developing BCI platforms that leverage naturally evoked neural responses to events and objects in real-world environments. AR systems that integrate these BCIs can, for instance, enable even the most severely disabled individuals to control robotic arms or other intelligent devices.

Professor Peter Allen is developing just such a device. His expertise in robotic grasping helped create an assistive robotic system that allows severely disabled patients to control a robot arm/hand system, performing complex grasping and manipulation tasks through a novel brain-muscle computer interface (BMCI). The project was motivated not only by the technology itself but by clear and pressing clinical needs.

Ultimately, the advent of brand-new materials may well be the key to creating the practical, wearable AR devices that are needed for the technology to prosper in the mass market. Nanophotonics expert Professor Michal Lipson is spearheading a large interdisciplinary team at the forefront of just that: inventing a novel optical material that could make multifunctional, lightweight, scalable AR glasses a reality. The first application will be for the military—which has long used AR/VR systems—as it pursues such glasses for battlefield situations.