Sun 13 Mar 2005
Celeste Biever, NewScientist.com news service (23:07, 10 March 2005), via Slashdot, reports:
Lawyers, judges and jurors could soon explore crime scenes in three dimensions in the courtroom, in the same way that video gamers explore virtual worlds.
Software called instant Scene Modeler (iSM) recreates a scene as an interactive 3D model from a few hundred frames captured by a special video camera. Users can zoom in on any object in the 3D model, measure distances between objects and view the scene from different angles.
Currently, investigators try to recreate the scene of a crime in court by sifting through photos or sketches, but this approach is limited and time-consuming, explains Piotr Jasiobedzki, iSM’s project manager at MD Robotics in Toronto, Canada. The software could also assist detectives during their investigations.
“This is the first time this kind of technology has been applied to solving crime,” says Linda Shapiro, a computer vision researcher at the University of Washington in Seattle.
iSM could also be used to help geologists explore mines remotely, or even allow space scientists to investigate the Martian landscape, says Jasiobedzki. A police force in Canada is currently testing the technology, and the company is in talks with mining companies.
Double-barrelled camera
The system uses a gun-shaped stereo-camera that consists of two ordinary video cameras aligned at a set distance from each other. This enables the depth of the captured scene to be calculated at every point, just as a pair of eyes gauges distances.
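In a calibrated stereo pair, depth falls out of the horizontal shift (the disparity) of each point between the left and right images. The Python sketch below is a minimal illustration of that relationship under a simple pinhole-camera model; the focal length, baseline and disparity figures are placeholders, not the iSM camera’s real parameters.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Depth (in metres) of a point seen by both cameras of a stereo pair.

    focal_length_px -- focal length of each camera, in pixels
    baseline_m      -- distance between the two camera centres, in metres
    disparity_px    -- horizontal shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("point must appear in both images with positive disparity")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: a 6 cm baseline, 700-pixel focal length and
# 20-pixel disparity put the point about 2.1 m from the camera.
print(depth_from_disparity(700, 0.06, 20))  # -> 2.1

Nearby objects produce large disparities and distant ones small disparities, which is why stereo depth estimates grow noisier with range.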
Stereo-cameras have been used in the past to give autonomous robots 3D obstacle detection. For example, many of the robotic vehicles that took part in the DARPA Grand Challenge robot race in 2004 were equipped with them. But that approach yields only a series of snapshot views of a scene from fixed points.
iSM is different because it creates a virtual model of the scene that can then be explored from any angle. It does this using a set of algorithms called SIFT (Scale-Invariant Feature Transform), developed by David Lowe, a computer vision expert at the University of British Columbia in Vancouver, Canada.
SIFT very quickly identifies common features in sequential images, Lowe told New Scientist, allowing the separate 3D views to be stitched into a single virtual 3D world. That world is rendered by an ordinary gaming graphics card inside a laptop or PC.
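To give a sense of how such stitching can work: once the same features have been matched in two overlapping views, their 3D positions (known from the stereo depth) pin down the rotation and translation between the two camera poses. The sketch below shows one standard way to recover that rigid transform (the Kabsch algorithm); it is a generic illustration with made-up points, not MD Robotics’ actual pipeline.

import numpy as np

def rigid_transform(src, dst):
    """Find R, t such that R @ src[i] + t matches dst[i] for paired 3D points."""
    src_centred = src - src.mean(axis=0)
    dst_centred = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_centred.T @ dst_centred)
    d = np.sign(np.linalg.det(vt.T @ u.T))           # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t

# Toy example: the second view is the first rotated 10 degrees about the
# Z axis and shifted half a metre sideways.
rng = np.random.default_rng(0)
view1 = rng.uniform(-1.0, 1.0, size=(50, 3))
angle = np.radians(10)
rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                [np.sin(angle),  np.cos(angle), 0.0],
                [0.0, 0.0, 1.0]])
view2 = view1 @ rot.T + np.array([0.5, 0.0, 0.0])
r, t = rigid_transform(view1, view2)
assert np.allclose(r @ view1.T + t[:, None], view2.T)

Chaining such pairwise transforms frame after frame is what lets a hand-held camera walk through a room and leave behind one consistent model.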
SIFT is also used in Sony’s robotic dog Aibo. It allows Aibo to recognise its charging station, and to “go fetch” objects, by comparing what its camera sees with reference images stored in its memory. Evolution Robotics in Pasadena, California, uses SIFT in anti-theft software that can spot objects being sneaked through a checkout in the lower tray of a shopping cart.
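The recognition step in that kind of application can be as simple as counting good feature matches against a stored reference image. The sketch below uses OpenCV’s SIFT implementation with Lowe’s ratio test; the file names and the match threshold are illustrative placeholders, not anything from Sony’s or Evolution Robotics’ products.

import cv2

def object_present(reference_path, frame_path, min_good_matches=15):
    """Return True if the stored reference object appears in the camera frame."""
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    _, ref_desc = sift.detectAndCompute(reference, None)
    _, frame_desc = sift.detectAndCompute(frame, None)

    # Lowe's ratio test: keep a match only if it is clearly better than the
    # second-best candidate, which throws away ambiguous correspondences.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(ref_desc, frame_desc, k=2)
            if m.distance < 0.75 * n.distance]
    return len(good) >= min_good_matches

# Hypothetical usage: has the charging station come into view?
# print(object_present("charging_station.png", "current_frame.png"))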