Corinna Underwood writes: “According to a report at Neurogadget, a team at Kennesaw State University’s BrainLab has developed a prototype that will allow a user to interface with and give commands to Google Glass using evoked brain responses rather than swipes, nods, or voice commands. BrainLab’s Executive Director Adriane Randolph says that although the device picks up brain waves and sends feedback to Google Glass, it differs from similar BCI systems because instead of reading input from a continuous brainwave (such as alpha waves), BrainLab captures and reads a brain response known as the P300, allowing the user more control.”
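The distinction the quote draws, between decoding a continuous rhythm such as alpha waves and detecting the discrete P300 evoked response, is easiest to see in code. The sketch below is a hypothetical illustration, not BrainLab’s actual pipeline: it simulates stimulus-locked EEG epochs in NumPy and averages them, so that a P300-like positive deflection around 300 ms marks the stimulus the user attended to. All sampling rates, amplitudes, and window bounds are assumed values.

```python
import numpy as np

# Minimal sketch of the P300 idea: instead of tracking a continuous
# rhythm (e.g., alpha power), time-lock EEG to discrete stimulus events
# and look for a positive deflection ~300 ms after the attended stimulus.
# All numbers and signal shapes here are illustrative assumptions.

FS = 250            # sampling rate in Hz (assumed)
EPOCH_SEC = 0.8     # length of each stimulus-locked epoch, in seconds
N_TRIALS = 40       # stimulus repetitions averaged per decision

rng = np.random.default_rng(0)
t = np.arange(int(FS * EPOCH_SEC)) / FS  # time axis for one epoch


def simulate_epoch(target: bool) -> np.ndarray:
    """One stimulus-locked EEG epoch: noise, plus a P300-like bump
    peaking ~300 ms post-stimulus when the stimulus was attended."""
    noise = rng.normal(0.0, 5.0, t.size)  # background EEG, in microvolts
    if target:
        p300 = 8.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
        return noise + p300
    return noise


def mean_amplitude(epochs: np.ndarray, lo=0.25, hi=0.40) -> float:
    """Average voltage of the trial-averaged waveform in the 250-400 ms
    window, where the P300 component is expected to appear."""
    avg = epochs.mean(axis=0)          # averaging cancels uncorrelated noise
    window = (t >= lo) & (t <= hi)
    return float(avg[window].mean())


# Collect epochs for an attended and an unattended stimulus, then compare.
target_epochs = np.stack([simulate_epoch(True) for _ in range(N_TRIALS)])
other_epochs = np.stack([simulate_epoch(False) for _ in range(N_TRIALS)])

print(f"attended   stimulus window mean: {mean_amplitude(target_epochs):.2f} uV")
print(f"unattended stimulus window mean: {mean_amplitude(other_epochs):.2f} uV")
# The larger windowed amplitude identifies the stimulus the user intended,
# which is what lets a P300 system map attention onto a discrete command.
```

Because the P300 is event-locked rather than ongoing, each detection corresponds to a specific on-screen option, which is what the quote means by the approach “allowing the user more control” than continuous-rhythm systems.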
