DARPA, in conjunction with OSD, sponsored the Unmanned Ground Vehicle Program from the late 1980s through the mid 1990s. As part of this project, Colorado State developed an advanced multisensor target recognition system. Here is a diagram of the overall system architecture.
The center box, Hypothesis Generation, is precisely the LADAR probing algorithm described here. It was originally developed by Alliant Techsystems in Minneapolis, MN. For more information about our larger RSTA system, see the "Multi-Sensor Object Recognition" entry on the vision group's projects webpage.
There is another ATR probing algorithm, developed by the Geman Brothers and refined by Night Vision Labs, for recognizing targets in IR imagery. This system, ARTM, was jointly developed by the Geman Brothers and Alliant Techsystems at roughly the same time as the LADAR probing algorithm presented here. The ARTM algorithm is considerably more sophisticated, using a decision tree to group and apply probesets hierarchically. It is, however, very similar to our algorithm in the way individual probesets are defined and evaluated relative to an image.
Fundamentally, a probeset is a collection of probes. Each probe, when evaluated, returns either a 1 or a 0. The probeset score is simply the ratio of the number of probes that return 1 to the total number of probes in the set.
Each probe consists of 4 integer values. These values are a pair of 2D coordinates, relative to the "center" of the probeset. For instance, a single probe might be {10, -1, 8, 2}. This would correspond to the pixels [10,-1] and [8,2], with the origin of the coordinate system placed at the center of the probeset. The unsigned difference of the values in these two pixels is computed. If this difference exceeds a preset threshold value, then that probe evaluates to 1; otherwise it evaluates to 0.
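A minimal sketch of this evaluation, in Python, is given below. The function names, the (row, column) ordering of the probe coordinates, and the particular threshold value are assumptions made for illustration; only the difference-and-threshold rule and the ratio score come from the description above.

```python
import numpy as np

def evaluate_probe(image, center, probe, threshold):
    """Return 1 if the unsigned difference between the two probed pixels
    exceeds the threshold, otherwise 0."""
    r0, c0 = center
    r1, c1, r2, c2 = probe            # two pixel offsets relative to the probeset center
    a = int(image[r0 + r1, c0 + c1])
    b = int(image[r0 + r2, c0 + c2])
    return 1 if abs(a - b) > threshold else 0

def probeset_score(image, center, probes, threshold):
    """Fraction of probes in the set that evaluate to 1."""
    hits = sum(evaluate_probe(image, center, p, threshold) for p in probes)
    return hits / len(probes)

# Example: the probe {10, -1, 8, 2} from the text, evaluated at an arbitrary
# center of a placeholder 24 x 120 range image (threshold chosen arbitrarily).
image = np.random.randint(0, 256, size=(24, 120), dtype=np.uint8)
print(probeset_score(image, (12, 60), [(10, -1, 8, 2)], threshold=20))
```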
In the following image, a handmade probeset is shown overlaying some LADAR imagery. Each probe in the set appears as a dumbbell shaped figure. As the image implies, these probesets straddle the silhouette of the object being searched for. Each probe is color coded to show whether it is satisfied (active) or not (inactive): red probes are inactive while green probes are active. When the probeset is on-target, as in the left figure, all of the probes are active and the score returned is 1.0. When the probeset is off-target, many of the probes are inactive and the probeset returns a value less than 1.0: in this case 6/13 (0.46).
As an example with real data and a real probeset, the result of running the most successful probeset on a LADAR image of the M113 is shown below. The LADAR image is part of Array 5 of the Fort Carson Dataset. A color image of the same array and target is also available.
Pseudo Colored LADAR Image
Note that the LADAR image here is shown at half the resolution of the result image below. The result image is defined only for pixels in the LADAR data where the entire probeset fits within the image. The LADAR image is 120 pixels wide by 24 pixels high. This data was collected with a LADAR produced in the early 1980s; a modern LADAR would produce higher resolution data with far less noise.
Pseudo Colored Result of Probing, i.e. Probe Scores.
The bright red spot represents the area where the probeset returned a score of nearly 1.0, i.e. a perfect match.
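Under the same assumptions as the sketch above, the result image could be produced by evaluating the probeset at every pixel where all of its probes fall inside the LADAR data and recording the probeset score at each such center. Pixels where the probeset does not fit are left undefined. The sketch below illustrates this; the helper names and the use of NaN to mark undefined pixels are assumptions for illustration.

```python
import numpy as np

def probeset_score(image, center, probes, threshold):
    """As in the earlier sketch: fraction of probes whose unsigned pixel
    difference exceeds the threshold."""
    r0, c0 = center
    hits = 0
    for r1, c1, r2, c2 in probes:
        if abs(int(image[r0 + r1, c0 + c1]) - int(image[r0 + r2, c0 + c2])) > threshold:
            hits += 1
    return hits / len(probes)

def probe_score_image(image, probes, threshold):
    """Score map over the image, defined only where the entire probeset fits."""
    rows, cols = image.shape
    offsets = np.array(probes).reshape(-1, 2)     # all (row, col) offsets in the set
    rmin, cmin = offsets.min(axis=0)
    rmax, cmax = offsets.max(axis=0)

    scores = np.full((rows, cols), np.nan)        # NaN marks undefined pixels
    for r in range(max(0, -rmin), min(rows, rows - rmax)):
        for c in range(max(0, -cmin), min(cols, cols - cmax)):
            scores[r, c] = probeset_score(image, (r, c), probes, threshold)
    return scores
```

In a score image produced this way, the bright spot in the figure would correspond to the centers where the probeset score approaches 1.0.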