CoRAVEN: Knowledge-Based Support for Intelligence Analysis
 
 

Dr. Patricia M. Jones, Dr. David C. Wilkins, Dr. Robin Bargar, Dr. Janet Sniezek, Mr. Peter Asaro, Ms. Nora Danner, Mr. Jay Eychaner, Mr. Sasha Chernyshenko, Mr. Gunnar Schrah
University of Illinois at Urbana-Champaign
Beckman Institute
405 N. Mathews
Urbana IL 61801

Dr. Caroline Hayes, Mr. Nan Tu, Mr. Hakan Ergan, Ms. Li Lu
University of Minnesota
Department of Mechanical Engineering
11 Church St. S. E.
Minneapolis MN 55455
 

ABSTRACT



Intelligence analysis is one of the major functions performed by an Army staff in battlefield management. In particular, intelligence analysts develop intelligence requirements based on the commander's information requirements, develop a collection plan, and then monitor messages from the battlefield with respect to the commander's information requirements.
 
 

The goal of the CoRAVEN project is to develop an intelligent collaborative multimedia system to support intelligence analysts. Key ingredients of our design approach include (1) significant knowledge engineering and iterative prototyping activities with domain experts, (2) graphical user interfaces to provide flexible support for the multiple tasks in which analysts are engaged, (3) the use of Bayesian belief networks as a way to structure inferences that relate observable data to the commander's information requirements, (4) sonification of data streams and alarms to support enhanced situation awareness, (5) collaboration technologies, and (6) psychological studies of reasoning and judgment under uncertainty.
 
 

This paper reports on progress in the development and evaluation of the CoRAVEN prototype. In particular, we report the results of a usability evaluation of CoRAVEN 1.1 conducted in April 1999 at the University of Illinois, the resulting requirements for redesign of the user interfaces and underlying architecture, and further empirical work with expert decision makers.
 
 
 
 

Prepared through collaborative participation in the Advanced Displays and Interactive Displays Consortium sponsored by the U.S. Army Research Laboratory under Cooperative Agreement DAAL01-96-2-0003.
 
 

The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the US Government. The US Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation thereon.
 
 




INTRODUCTION





Intelligence analysis is one of the major functions performed by the commander's staff in battlefield management. It focuses on collecting information about the enemy and on making inferences about the enemy's current location, capabilities, and future intent. More specifically, intelligence analysts work within the context defined by the commander's information requirements to develop a collection plan and to monitor real-time messages from the battlefield. In addition to being organized around the commander's information requirements, the collection plan is organized around the nature of the battlefield terrain, expressed as a collection of named areas of interest (NAIs) and other terrain abstractions such as phase lines (PLs) and lines of defensible terrain (LDTs). The collection plan uses a variety of available assets (e.g., scouts, JSTARS, UAVs, SIGINT, ELINT) to examine specific NAIs at certain times in order to draw inferences about enemy location, capabilities, and intent. The interpretation of messages from these collection assets constitutes a huge data overload problem for intelligence analysts; as many as 500 messages may arrive within an hour or less to be interpreted and assimilated into a current best hypothesis about enemy location, capabilities, and intent.
 
 

The CoRAVEN project is a proof-of-concept technology demonstration that is intended to support intelligence staff officers in this analysis and interpretation process. It currently focuses on the real-time interpretation of simulated battlefield messages (SALUTE reports), using Bayesian belief networks (BBNs) to reason about how messages act as evidence for particular information requirements. The four major types of displays provided are: (1) an interactive map display, in which users can see LDTs and NAIs change their color saturation as their associated probability values change; (2) NetViewer, a graphical viewer for BBNs that also includes a marginal probability display and a bargraph of probabilities for the top-level nodes; (3) a synchronization matrix that illustrates the collection plan and includes a scrolling text window of the individual SALUTE reports; and (4) sonification of the part of the BBN related to PIR #1, the enemy's main defense.
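To make the evidential role of a single message concrete, the following minimal sketch shows a one-step Bayesian update over the four phase-line hypotheses of PIR #1; the node names and all numeric values are invented for illustration and are not taken from the CoRAVEN networks.

    # Minimal sketch of single-evidence Bayesian updating (hypothetical numbers).
    # Hypotheses: the enemy's main defense is at PL1, PL2, PL3, or PL4 (PIR #1).
    prior = {"PL1": 0.25, "PL2": 0.25, "PL3": 0.25, "PL4": 0.25}

    # Assumed likelihoods: P(SALUTE report of dug-in armor at a forward NAI | main defense at PLi)
    likelihood = {"PL1": 0.10, "PL2": 0.60, "PL3": 0.20, "PL4": 0.10}

    def update(prior, likelihood):
        """Return the posterior distribution after observing one piece of evidence."""
        unnormalized = {h: prior[h] * likelihood[h] for h in prior}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

    posterior = update(prior, likelihood)
    # PL2 rises to 0.6 here; in CoRAVEN this kind of value drives the map color
    # saturation, the NetViewer bargraph, and the sonification.
    print(posterior)

In the actual system the belief networks are much larger, but the qualitative behavior is the same: incoming evidence raises or lowers the probabilities shown on the displays.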
 
 

In last year's symposium, we reported on the basic CoRAVEN architecture and capabilities (Jones et al., 1999). In this paper, we provide new information about (1) a usability evaluation study, (2) design improvements, (3) architectural improvements, and (4) an expert judgment study.
 
 

USABILITY EVALUATION STUDY


 
 

In April 1999, four expert intelligence analysts visited UIUC to participate in a usability evaluation study of CoRAVEN (the version evaluated at that time was designated CoRAVEN 1.1). All of the experts were retired Army officers with significant experience in intelligence work.
 
 

The general procedure used for the study is summarized in Appendix 1, and the questionnaire that subjects filled out is outlined in Appendix 2. Responses to the Likert-scale questions were scored numerically from 5 ("very") to 1 ("not at all"). Table 1 shows the mean subjective ratings for the Likert-scale questions.
 
 

Table 1. Mean subjective ratings for the Likert-scale questions, on a scale from 1 to 5, where 5 is the best score ("very familiar/useful/usable"). Questions 1 and 2 concern familiarity; Questions 3-6 each have two ratings, reported as usefulness / usability.
 
 
 
Question                                                Mean rating(s)
1. Task familiarity                                     3.5
2. Familiarity with Windows                             5.0
3-0. Overall CoRAVEN                                    4.75 / 3.25
3a. Overall map display                                 4.5 / 3.5
3b. Phase line display                                  4.75 / 3.25
3c. NAI display                                         4.5 / 3.0
3d. Data sonification                                   3.25 / 2.75
4. Overall NetViewer display                            3.5 / 2.75
4a. NetViewer bargraph display                          4.5 / 4.5
4b. NetViewer PIR tree display                          3.5 / 2.5
4c. NetViewer conditional probability table display     2.67 / 2.67
5. Synchronization matrix display                       3.0 / 3.0
6. SALUTE report display                                4.25 / 2.75

 

Overall, these data indicate that users found the familiar concepts of map displays, bargraph displays, and SALUTE reports quite useful, and their implementation in CoRAVEN 1.1 moderately usable. Lower ratings were generally given to the more complex and esoteric features (e.g., the Bayesian network displays and sonification). The synchronization matrix, while familiar, was not very useful or usable in this version of CoRAVEN because it was simply a static picture. Finally, usability ratings were presumably not confounded by unfamiliarity with the Windows environment in which CoRAVEN is implemented: all users rated themselves "very familiar" with Windows conventions related to using the mouse, multiple windows, scrolling, and so on.
 
 

Subjects' comments provided a rich source of data about particular problematic issues and ideas for redesign. A major theme in many comments was the need for cross-linking information among the displays. For example, subjects wanted to be able to click on a map object and have the associated Bayesian network nodes, synchronization matrix elements, and SALUTE reports highlighted. Supporting this kind of integration is important in problem-solving environments because it provides multiple perspectives on, and rationale for, high-level summaries and hypotheses. Similarly, users wanted explanations of why significant changes occurred during the scenario; indeed, cross-linking of information as just described is one way to provide a rich explanation without adding yet another window of text. Third, users wanted configuration control; for example, they wanted to be able to set up their own sounds for sonification, their own conventions for map color-coding, and the like.
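As a rough illustration of the cross-linking idea, a simple index could map each map object to the display elements that should be highlighted when it is clicked; the identifiers and field names below are hypothetical, not part of CoRAVEN.

    # Hypothetical cross-link index; identifiers are invented for illustration.
    from collections import defaultdict

    def empty_entry():
        return {"bbn_nodes": set(), "sync_cells": set(), "salute_reports": set()}

    cross_links = defaultdict(empty_entry)

    def link(map_object, bbn_node=None, sync_cell=None, salute_report=None):
        """Record that a map object is related to elements of the other displays."""
        entry = cross_links[map_object]
        if bbn_node:
            entry["bbn_nodes"].add(bbn_node)
        if sync_cell:
            entry["sync_cells"].add(sync_cell)
        if salute_report:
            entry["salute_reports"].add(salute_report)

    # Clicking "NAI 7" on the map would highlight everything indexed under it.
    link("NAI 7", bbn_node="PIR1/armor_at_NAI7", salute_report="SALUTE-0042")
    link("NAI 7", sync_cell=("NAI 7", "H+2 to H+4"))
    print(cross_links["NAI 7"])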
 
 

These comments and further conversations with experts led to a variety of requirements for iterative design for the next version of CoRAVEN. These design requirements are the subject of the next section.
 
 

REDESIGN FOR CoRAVEN 1.2



The design of CoRAVEN 1.2 has focused on configuration control and on cross-linking information among the spatial (map), temporal (synchronization matrix), and logical (Bayesian network) displays. With respect to configuration control, we have focused on the map and sonification displays. First, in the map display, users can now add their own Named Areas of Interest (NAIs) to the map, create their own NAI categories, and choose their own colors for NAIs and other map objects. Second, we have created a timeline user interface to support user configuration of sounds for the sonification of alarms.
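As an illustration of the kind of configuration data involved, a user-defined NAI category and NAI entry might be stored roughly as follows; the field names, colors, and coordinates are assumptions for this sketch, not the CoRAVEN file format.

    # Hypothetical map-configuration data for user-defined NAIs (not the actual format).
    nai_categories = {
        "counterattack_forces": {"color": (255, 255, 0)},   # user-chosen yellow
        "main_defense":         {"color": (255, 0, 0)},     # user-chosen red
    }

    user_defined_nais = [
        {
            "name": "NAI 12",
            "category": "counterattack_forces",
            # Invented corner coordinates of the NAI polygon (easting, northing).
            "polygon": [(351200, 4412800), (353100, 4412800),
                        (353100, 4414600), (351200, 4414600)],
        },
    ]

    def color_for(nai):
        """Look up the user-chosen base color for an NAI from its category."""
        return nai_categories[nai["category"]]["color"]

    print(color_for(user_defined_nais[0]))   # (255, 255, 0)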
 
 

The Timeline GUI currently exists as a visual arrangement tool for building message sequences from an existing audio file. It can import messages, arrange any number of them over a period of time, play the arrangement as a sound client, export the arrangement back into the audio file, and save the visual layout. The interface is intended to let users construct meaningful associations between sound events and situational events by auditioning possible associations in this editor.
 
 

The Timeline GUI is arranged in three portions: the control panel, the message panel, and the track panel. The message panel displays information about the arrangement in response to user queries. The control panel is relatively simple, with four controls: Zoom, Speed, Time Point, and Start/Reset. The Zoom slider controls the visual scale of the track layout. The Speed slider adjusts the speed at which the Timeline GUI sends messages to the sound server during playback. The Time Point slider adjusts the starting point of playback, relative to the total length. The last two controls are the Start and Reset buttons: Start alternately starts and stops playback, and Reset moves the Time Point back to the beginning of the arrangement.
 
 

Most of a user's work with the Timeline GUI takes place in the track panel. This is where users arrange the elements of a sequence by laying out 'mods' onto 'tracks'. A track is an abstract organizational element onto which mods are placed; a mod is any message element extracted from an audio file. By creating new tracks and placing mods onto them, users can arrange a sequence in any way they desire.
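A minimal sketch of this track/mod data model, assuming a simple in-memory representation (class and method names are illustrative, not the actual Timeline GUI implementation):

    # Illustrative data model for the Timeline GUI track panel (names are hypothetical).
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Mod:
        """A message element extracted from an audio file, placed at a time offset."""
        message: str        # the sound-server message this mod represents
        start_time: float   # seconds from the start of the arrangement

    @dataclass
    class Track:
        """An abstract organizational element onto which mods are placed."""
        name: str
        mods: List[Mod] = field(default_factory=list)

    @dataclass
    class Arrangement:
        tracks: List[Track] = field(default_factory=list)

        def playback_events(self, speed: float = 1.0) -> List[Tuple[float, str, str]]:
            """Merge all tracks into one time-ordered sequence of (time, track, message)."""
            events = [(mod.start_time / speed, track.name, mod.message)
                      for track in self.tracks for mod in track.mods]
            return sorted(events)

Playback would then walk this sorted event list and send each message to the sound server at its scheduled time, with the Speed slider scaling the times as shown.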
 
 

ARCHITECTURE REDESIGN



A second major initiative has been improving the infrastructure of the CoRAVEN socket architecture to make CoRAVEN more flexible and extensible. CoRAVEN is composed of a group of semi-independent applications, or agents, which communicate with each other through the socket architecture. A central controller and messaging architecture is being developed to support "plug-and-play" of components such as the GIS, BBN, and sound systems. These changes will make it easier to add agents (which is particularly important for allowing true collaborative use of CoRAVEN) and will establish a uniform message-passing protocol by which agents can communicate CoRAVEN-specific information.

Collaborative functions will operate through the same client-server socket architecture that is internal to CoRAVEN. This will provide a robust framework for multi-user interaction and for sharing displays, Bayesian inference tools, and data between analysts at remote sites.
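The following sketch suggests one way such a uniform message-passing protocol could look, with each agent registering with the central controller and exchanging newline-delimited JSON messages over a socket; the wire format, field names, and port are assumptions for illustration, not the actual CoRAVEN protocol.

    # Hypothetical agent-to-controller protocol sketch (not the actual CoRAVEN wire format).
    import json
    import socket

    def send(sock, message):
        """Send one newline-delimited JSON message."""
        sock.sendall((json.dumps(message) + "\n").encode("utf-8"))

    def connect_agent(agent_name, host="localhost", port=9000):
        """Open a socket to the central controller and announce this agent."""
        sock = socket.create_connection((host, port))
        send(sock, {"type": "register", "agent": agent_name})
        return sock

    def receive(reader):
        """Read one newline-delimited JSON message from a file-like socket wrapper."""
        line = reader.readline()
        return json.loads(line) if line else None

    # Example (requires a running controller): a GIS agent forwarding a belief update
    # so that the map and sound agents can adjust saturation and sonification.
    # sock = connect_agent("gis")
    # send(sock, {"type": "belief_update", "node": "PIR1.main_defense",
    #             "distribution": {"PL1": 0.1, "PL2": 0.6, "PL3": 0.2, "PL4": 0.1}})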
 
 

FURTHER EMPIRICAL WORK



Finally, we have conducted empirical studies related to the judgment and decision processes that characterize expert performance. We report on data collected from observations of 41 military intelligence experts at Ft. Huachuca who used portions of the CoRAVEN system. The particular interest was in how experts evaluated probabilities in the Bayesian belief network and how they updated those probabilities based on course-of-action information and priority intelligence requirements.
 
 








APPENDIX 1: PARTIAL SUBJECT INSTRUCTIONS FOR USABILITY EVALUATION STUDY



CoRAVEN Usability Study

Formative Evaluation

April 12-15, 1999
 
 

The goal of this session is to obtain your feedback on the current version of the CoRAVEN prototype (CoRAVEN 1.1). We will provide a short demonstration and training session on how to use CoRAVEN, then ask you to perform some tasks using it, and finally ask you to fill out a short questionnaire. We will also take notes during your use of CoRAVEN to gain insight into how well it supports your problem solving process. The entire session is planned for approximately 55 minutes.
 
 

Experimental Procedure…
 
 

1. Patty does this overview

2. Patty launches CoRAVEN in GIS Demo mode.

3. Subject explores CoRAVEN before running the simulation.

4. Subject monitors CoRAVEN during simulation run, makes comments.

5. Patty asks subject for answer and justification for PIR #1 (main defense).

6. Subject responds by exploring CoRAVEN.

7. Patty asks subject for answer and justification for PIR #3 (counterattack forces).

8. Subject responds by exploring CoRAVEN.

9. Patty restarts demo with sound …

10. Repeat Steps 3-6.

11. Patty administers questionnaire.
 
 

About CoRAVEN:

How to use CoRAVEN to monitor reports as they come in:

1. On the map, you see the highest-level interpretations: objects become more saturated with color as they become the most likely conclusion based on current evidence.

2. You also hear associated sonification; currently it correlates with the visual color saturation.

3. On the NetViewer, you can see a bargraph of the probabilities which are associated with the color and sound. You can also browse the Bayesian Belief Networks that show how observable states relate to inferences.

4. You can also see the exact list of SALUTE reports.
 
 

In this particular CoRAVEN scenario, you will see:

1. PIR #1 is: where is the enemy's main defense -- Phase Line 1, PL 2, PL 3, or PL 4? More likely PLs turn redder, less likely whiter.

2. Similarly, PIR #3 is: at which NAIs are the counterattack forces? More likely NAIs turn yellower, less likely whiter.

3. In the sonification demo, we just use PIR #1.

There, you hear systematic differences in sound corresponding to the changing probabilities: the volume of each track of music, or of each tone, varies with its probability.

4. For both PIRs, you can see all the options and a dynamic bargraph of their current probability values in the NetViewer window(s). You can also see all the SALUTE reports as they arrive in this text list.
 
 

Scenario:
 
 

Your brigade task force consists of three mech battalions, an armor battalion, a light infantry battalion, an attack helicopter company (OPCON), and normal CS and CSS support. The enemy uses Krasnovian style composition and doctrine. … this mission is a Movement To Contact, until intelligence assets can regain contact with the main enemy force. This will be your primary task. …
 
 

Operation Plan:


 
 

Mission: 7th Brigade attacks in zone NLT 070615 Dec 97 to seize key terrain vic OBJ JODI.
 
 

Commander's Intent: At the end of this mission I want two totally secure Lines of Communication in Zone from the Line of Departure to Tactical Assembly Areas on OBJ JODI. …
 
 

Concept of Operation: We will conduct a vigorous intelligence collection effort to gain contact with the enemy on 5 and 6 December. On the evening of 6th December we will Air Assault our light infantry battalion into zone to conduct an infiltration attack against the enemy's main defense. That attack will reduce obstacles and eliminate enemy weapon systems, and mark lanes to facilitate the passage of our main attack. …
 
 

PIR-1: Will the enemy main defense occur at PL (Phase Line) 1, 2, 3, or 4? Dependent DP: Insertion of Light Infantry at main defense.
 
 

PIR-2: Will the enemy put more than 70% of its forces in the Northern Zone? Dependent DP: Switch main attack to the Southern Zone.
 
 

PIR-3: Where is the enemy CATK reserve? When will it commit? Dependent DP: Commitment of Attack Helicopters to destroy the CATK before it can affect the ground battle.

 
 

APPENDIX 2: QUESTIONNAIRE
 
 

NOTE: Likert scales for responses have not been reproduced here because of space limitations; they are five-point scales worded as "Very x", "x", "Somewhat x", "Not very x", and "Not at all x" where x stands for the concept of interest (e.g., familiar, useful, usable).
 
 

CoRAVEN Feedback
 
 

Date:___________

Subject:_________________________________
 
 

Version of CoRAVEN (Circle one): 1.0 1.1
 
 

Comments during Exploration:

Comments/Answers to PIR #1 (enemy's main defense)

Comments/Answers to PIR #3 (counterattack forces)

Comments/Answers to PIR #1 with sound
 
 

1. Your familiarity with the task that CoRAVEN supports:
 
 

2. Your familiarity with the Windows environment (including using a mouse, closing windows, clicking, etc.):
 
 

Please rate your opinion of the usefulness and usability of the following features of CoRAVEN.
 
 

Here, consider "usefulness" as the overall utility of the functionality. In contrast, "usability" refers to your opinion of the "user-friendliness" of this implementation of the functionality. User-friendliness refers to several things: how quickly you could learn to use the feature, how intuitively the feature supported your work, how many errors, if any, you made while using this feature. (For example, you might think Bayesian belief networks are very useful, *and* that this NetViewer user interface is also very usable.)
 
 

3-0. Your overall opinion of CoRAVEN.

3a. Your opinion of the interactive map display overall.

3b. Your opinion of the phase lines display on the map.

3c. Your opinion of the NAIs display on the map.

3d. Your opinion of the data sonification that is associated with the phase lines shown on the map.
 
 

4. Your opinion of the NetViewer display overall.

4a. Your opinion of the NetViewer bargraph display.

4b. Your opinion of the NetViewer PIR tree display.

4c. Your opinion of the NetViewer conditional probability table display.

4d. Your opinion of the NetViewer marginal probability display.
 
 

5. Your opinion of the synchronization matrix display.

6. Your opinion of the SALUTE report display.

7. Please provide two or more specific ideas on how we can redesign our next version of CoRAVEN.