Spytech Agency Wants Software Brains to Connect the Dots

The military's on a fast track to master every facet of the human mind. In the last year alone, we've seen attempts at boosting long-term memory, creating a new theory for intelligence and even replacing G.I.s with machines capable of complex reasoning. Next up: a computer system that can replicate, and then outdo, our own decision-making by tapping into "relevant cognitive biases."

The Pentagon's national intelligence innovators, Iarpa, are behind this latest project. The agency is hosting a conference in January to offer up more details on the program, called Integrated Cognitive-Neuroscience Architectures for Understanding Sensemaking (ICArUS).

Iarpa hopes to create a computational model of human "sensemaking," the process whereby we create hypotheses to explain a situation and predict likely outcomes. Intelligence analysts are often tasked with generating and evaluating explanations for data that's sparse or deceptive. But, as Iarpa points out, they're only human - susceptible to selective memory, bias and stress. The Pentagon and the spy agencies have tried to use software to replace the feckless fleshies. "Yet despite the centrality of sensemaking in intelligence analysis, current models of sensemaking remain descriptive and qualitative in nature and thus are of limited utility to the Intelligence Community," Iarpa notes.

Until now, the agency points out, the human brain has remained "the only known example of a general-purpose sensemaking system." Not for long: Iarpa wants a computer that would mimic human strengths, like analytic reasoning or learning from mistakes, but do it without the accompanying weaknesses. The ideal Iarpa system would first model and explain human sensemaking: why an analyst opted for one hypothesis over another. Then, the computer would improve on it, by determining whether a decision-maker was swayed by ambiguous data, deception, or even denial. Finally, the system would offer its own sensemaking hypothesis, free of any extraneous influence, instead.

Iarpa suggests that the system would help out "overburdened analysts with routine, low-level analytic tasks." But a 2001 report from the Office of the Assistant Secretary of Defense points out that sensemaking is most often compromised in high-stress situations, and, for that reason, humans are usually the weakest link:

An analysis of these cases revealed that the information systems available to the decisionmakers generally tended to perform adequately. That is, the right data were collected and put together appropriately, decisions and rationale were shared, and information was put together in a form that facilitated awareness. However, prior knowledge was relatively less influential than emotions, beliefs, cognitive factors, and mental models, all components of sensemaking.

[Photo: U.S. Army]
