Training Computers to Understand Human Intent

Apr 18 2018 | By Marilyn Harris | Photos: Eileen Barroso (top) | Timothy Lee Photographers (bottom)

Though much effort is being directed toward the development of invasive, wireless recording of brain signals, scalp EEG remains one of the least invasive methods for enabling brain-computer interfaces (BCIs) in consumer applications. Researchers have made great strides in refining this delicate and evanescent pathway, which could one day enable people with physical challenges to access or regain functionality they otherwise lack, among other wonders. As a result, BCIs are now positioned to play a crucial role in the future trajectory of augmented and virtual reality.

Paul Sajda is developing a combined BCI-AR system that will train computers to react to a user’s cognitive state.

We’re starting now to understand how the brain responds to—and makes decisions based on—our very complex environment.

Paul Sajda
Director of the Laboratory for Intelligent Imaging and Neural Computing (LIINC)

As founder and director of Columbia University’s Laboratory for Intelligent Imaging and Neural Computing (LIINC), Paul Sajda spends a good deal of his waking day—and perhaps also his REM sleep—exploring new ways for humans to communicate with computers. The results are new breakthroughs in how people experience the world around them, through BCIs paired with augmented reality (AR) and virtual reality (VR).

“We’re starting now to understand how the brain responds to—and makes decisions based on—our very complex environment,” says Sajda, a professor of biomedical engineering, radiology and electrical engineering. “That understanding informs how we can improve human-machine interaction.”

One of LIINC’s primary missions is to use principles of reverse neuroengineering to characterize the cortical networks underlying perceptual and cognitive processes. Based on this information, Sajda’s team develops BCI platforms to leverage people’s neural responses to events and objects in real-world environments.

Although there are various techniques for limning brain reactions, Sajda’s team employs the most researched and practical method currently available, which measures the electrical signals the brain emits via electrodes placed on the scalp in a wearable headset. The team also makes use of a method Sajda developed to determine where a person’s attention is engaged by measuring barely perceptible dilation of the pupils. Together, these complementary indicators form the basis of an AR system’s understanding of human intention.
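The article does not spell out how that pupil signal is turned into an attention estimate. As a purely illustrative sketch—not LIINC’s actual pipeline, and with an invented window length and dilation threshold—one could flag attention-orienting moments whenever pupil diameter rises above a short trailing baseline:

```python
# Illustrative sketch only: flag attention-orienting events by comparing each
# pupil-diameter sample to a recent baseline. Thresholds and window lengths
# here are hypothetical, not values from Sajda's lab.
import numpy as np

def pupil_attention_events(pupil_mm, fs=60, baseline_s=2.0, dilation_thresh=0.15):
    """Return sample indices where pupil dilation exceeds a trailing baseline.

    pupil_mm        : 1-D array of pupil diameter samples (mm)
    fs              : eye-tracker sampling rate (Hz)
    baseline_s      : length of the trailing baseline window (seconds)
    dilation_thresh : dilation above baseline (mm) treated as an orienting response
    """
    pupil_mm = np.asarray(pupil_mm, dtype=float)
    win = int(baseline_s * fs)
    events = []
    for i in range(win, len(pupil_mm)):
        baseline = pupil_mm[i - win:i].mean()
        if pupil_mm[i] - baseline > dilation_thresh:
            events.append(i)
    return events

# Example: a simulated 5-second recording with a brief dilation at the end
t = np.arange(0, 5, 1 / 60)
signal = 3.0 + 0.02 * np.random.randn(t.size)   # baseline diameter ~3 mm
signal[-30:] += 0.3                             # simulated orienting response
print(pupil_attention_events(signal)[:5])
```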

His team routes this combination of brain and pupillary measurements, paired with data on heart rate variability and chemical changes on the skin, into a feedback system connected to a complex BCI. By then employing a novel machine learning application that Sajda’s lab developed, the BCI can make inferences about an individual’s cognitive state and reaction to his or her environment. The next step—still in development—will be for the computer to react to that data and, retrieving additional information from relevant databases, to offer up suggestions or context on whatever may have caught the individual’s attention.
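Sajda’s machine-learning application is not described in detail here. As a hedged sketch of what this kind of multimodal fusion can look like in general, the example below trains a standard scikit-learn logistic-regression classifier on synthetic per-window features—an EEG evidence score, pupil dilation, heart-rate variability, and skin-conductance change are invented stand-ins for the real measurements—and infers a binary cognitive state:

```python
# Illustrative sketch only: the article does not specify LIINC's model.
# Fuse hypothetical per-window physiological features and train a simple
# classifier to infer an "engaged vs. not engaged" cognitive state.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic training data: rows are time windows, columns are
# [EEG evidence score, pupil dilation (mm), HRV (ms), skin-conductance change (uS)]
n = 200
X = rng.normal(size=(n, 4))
# Invented ground truth: "engaged" windows have higher EEG score and dilation
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.normal(size=n) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Inference on a new window of measurements
new_window = np.array([[1.2, 0.4, -0.3, 0.1]])
print("P(engaged) =", model.predict_proba(new_window)[0, 1])
```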

Paul Sajda demonstrating a brain-computer interface.

In a presentation Sajda has created to showcase how such an AR system might work in an everyday setting, a man wearing a headset and carrying a smartphone walks down a street, taking notice of various people, places, and things. He briefly focuses on a woman walking past. The BCI system senses his attention orienting to her face, prompting the AR software to search the cloud for his Facebook contacts, so that her name and affiliation (e.g., a former classmate or colleague) pop up on his phone.

For disabled individuals, the potential applications for such a system are especially striking. Someone with limited motor function could remotely control numerous intelligent devices just by reacting to the environment; a deaf person could enjoy an interactive, guided tour of a city or museum.

Though Sajda notes that AR’s current stage of development for consumers is “still somewhat of a novelty,” his lab already has one project with some very near-term applications. Just as the military uses AR/VR systems to train fighter pilots to perform well under stress and in dynamic environments, BCIs integrated into AR/VR simulations can now be used to train the system itself. Sajda and Computer Science Professor Peter Allen are using that insight to teach an autonomous vehicle to adapt to a human passenger’s preferences and comfort level. In a virtual environment they devised, BCIs can register when a vehicle moving too quickly unnerves its passenger, triggering the system to modify the vehicle’s speed.
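The closed loop that paragraph describes—slow down when the passenger is unnerved, recover otherwise—can be sketched in a few lines. This is not the Sajda/Allen system; the threshold, back-off factor, and speed values below are invented for illustration:

```python
# Illustrative closed-loop sketch (not the Sajda/Allen system): in a simulated
# drive, lower the vehicle's target speed when the BCI reports passenger
# discomfort, and let it recover gradually otherwise. All values are invented.
def adapt_speed(current_speed, discomfort_prob,
                discomfort_thresh=0.7, slow_factor=0.8,
                recovery=1.0, max_speed=20.0):
    """Return the next target speed (m/s) given an inferred discomfort probability."""
    if discomfort_prob > discomfort_thresh:
        return current_speed * slow_factor           # back off when the passenger is unnerved
    return min(current_speed + recovery, max_speed)  # otherwise creep back toward the limit

speed = 15.0
for p in [0.2, 0.3, 0.9, 0.85, 0.4, 0.1]:  # simulated BCI discomfort readings
    speed = adapt_speed(speed, p)
    print(f"discomfort={p:.2f} -> target speed {speed:.1f} m/s")
```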
