Dec 11 2007
Data Analytics

Synapse to Circuit — Literally

Think radical human-computer interfaces are years away from development? Think again.

Ever wish a computer could read your mind? The Defense Advanced Research Projects Agency wants this to be more than a wish and has pumped funds into work on opposite coasts of the United States to help realize a human-computer interface that will make direct connections between human brains and computers possible.

This type of research conjures up images from sci-fi novels and video games and seems a step removed from the office or even the field work of most agencies. But the need to quickly capture, understand and sort data is the simple underlying goal of HCI research, says Paul Sajda of Columbia University in New York, who is working on one such DARPA-funded project. In that respect, the work is little different from what anyone in information technology is trying to achieve — just faster and far less tangible.

An immediate application of this technology is analysis of the huge volumes of still and video images collected by the intelligence community. “There are video cameras going up all over the world that could capture something important, but there aren’t enough expert eyes to view them,” Sajda says. “A combination of computers screening certain images and human brain-tagging them as important would create a dramatic increase in efficiency.”

For the Defense and Homeland Security departments, the need is real and now — not some future tech prospect. The reams of intelligence and battle data far outpace DOD and DHS staffs’ ability to review it in real or even near-real time. The team led by Sajda at Columbia and another team in Oregon are trying to develop tools that can meet this need.

Eye of the Beholder

At Columbia’s Laboratory for Intelligent Imaging and Neural Computing, Sajda is focusing on visual computer interface (VCI) technology to create a computer program that can analyze vast numbers of images at high speed. Instead of replacing human vision and image processing, Sajda and his colleagues are trying to tap into those vast capabilities and wed them with information technology.

“No computer vision technology comes close to [a human being’s] ability to analyze and recognize objects in the face of noise, occlusion and changing sizes of objects,” says Sajda, director of the lab, which received a $758,000 DARPA grant for the project.

Although the human brain can process and interpret images much more accurately than any computer, a person’s ability to register that information and then analyze and record results using IT is both difficult and time-consuming. Currently, the easiest ways for humans to indicate interest in an image to a computer involve pressing a key, clicking a mouse or speaking, all of which have a narrow bandwidth and slow down the process, says Sajda.

Transforming the human-computer interface into a direct connection between brain and machine promises:

• Access to individual human intelligence and experiences that were previously difficult or impossible to share in a systematic way

• A way to let machines sense their users as some now sense their environments, and adjust performance in response

• A tool for manipulating actions in computing environments

Connecting a human directly to a computer using brainwaves as input would be a much faster way of transferring information. The lab uses an electroencephalogram (EEG) to read the electrical impulses generated in the brain of a person viewing a succession of images. The trick is to intercept neural signals related to a decision about the value of an image without slowing down the display of successive images. Sajda’s group shows subjects images at rates as high as 10 per second and measures changes in brain activity as each image appears. Interesting images are automatically tagged for later, more thorough analysis.
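
The triage loop described above can be sketched in outline: present images in rapid succession, score each one from the concurrent EEG response, and tag high-scoring frames for later expert review. The sketch below is purely illustrative — the scoring values, threshold and frame names are invented stand-ins, not the Columbia lab’s actual pipeline, and a real system would derive each score from electrode recordings time-locked to each frame.

```python
# Hypothetical sketch of EEG-based rapid image triage.
# Each image gets an "interest" score (here just a made-up number);
# frames whose score crosses a threshold are tagged for slow review.

def triage(image_ids, eeg_scores, threshold=0.8):
    """Tag images whose EEG-derived interest score crosses a threshold."""
    tagged = []
    for image_id, score in zip(image_ids, eeg_scores):
        if score >= threshold:       # neural response suggests "interesting"
            tagged.append(image_id)  # queue for later, thorough analysis
    return tagged

# At 10 images per second, a minute of viewing yields 600 scored frames;
# only the tagged few go on to expert eyes.
images = ["frame_001", "frame_002", "frame_003", "frame_004"]
scores = [0.12, 0.91, 0.40, 0.85]   # invented interest scores
print(triage(images, scores))       # -> ['frame_002', 'frame_004']
```

The point of the design is the division of labor: the display never pauses for the viewer, and the computer does nothing cleverer than bookkeeping — the recognition itself stays in the human brain.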

Direct Response

Across the continent, Misha Pavel approaches human-computer interfaces from another perspective.

Working on a team at the Oregon Health and Science University in Beaverton, Pavel is trying to determine whether, by examining brain state, a machine can tell if a human is receptive to additional information — and, if so, how that information should be delivered.

The OHSU team is thinking beyond the applications that could be developed for a desktop computer, Pavel says, because eventually minicomputers will be embedded in most items, from cell phones to houses. “It’s not just computers; it could be anything.”

Pavel, a professor of biomedical engineering and director of the OHSU Point of Care Laboratory, hopes to directly connect machines to the user experience. For example, a cell phone would automatically go to voice mail when its owner is fighting heavy traffic but ring when that driver is cruising on a deserted rural interstate. Toward that end, Pavel is developing algorithms for data fusion, intelligent signal processing, image fusion, pattern recognition and speech processing.
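
Pavel’s cell-phone example boils down to a policy that maps an inferred user state to an interruption decision. A minimal sketch, with the load estimate and threshold invented for illustration — in a real system the estimate would come from the kind of sensor and signal fusion Pavel describes:

```python
# Hypothetical sketch of state-aware call handling: the phone routes an
# incoming call based on an estimate of the driver's cognitive load.
# The load value (0.0 to 1.0) is a stand-in; a real system would fuse
# sensor data to produce it.

def handle_call(cognitive_load, busy_threshold=0.6):
    """Decide whether to ring or divert to voice mail."""
    if cognitive_load >= busy_threshold:
        return "voicemail"  # driver is fighting heavy traffic
    return "ring"           # cruising on a deserted rural interstate

print(handle_call(0.9))  # -> voicemail
print(handle_call(0.2))  # -> ring
```

However the load estimate is produced, the decision itself is deliberately simple — the hard research problem is sensing the user’s state, not acting on it.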

The Next Wave

Beyond intelligence work, Sajda and Pavel see many possible applications for this kind of direct mind-to-processor technology, spanning the government and commercial landscapes. In medicine, there’s radiology — letting a physician scan hundreds of X-ray images to detect abnormalities — and elder care — directly but unobtrusively monitoring medical conditions. In air traffic control, tower officials could more quickly absorb flight, weather and airport information.

The future may not be quite so far away.
