Post by Cctv on Oct 26, 2006 23:24:26 GMT -5
www.newscientisttech.com/article.ns?id=dn10387
www.springerlink.com/content/cxy0r0wmenkdqm0n/
Offering someone a stick of gum or a cigarette, for example, can look similar to threatening them with a knife. To cope with this, Park and Aggarwal chose to build up a profile for each type of behaviour.
Park calls it a "semantic analysis" of the interaction. This means several different factors are considered. For example, when identifying two people shaking hands, their hands must not only be close, but must also move in synchrony.
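The handshake rule described above can be sketched in a few lines: two cues must hold at once, the hands being close and the hands moving in synchrony. This is only an illustrative sketch; the function names, coordinates and thresholds are invented for the example, not taken from Park and Aggarwal's system.

```python
import math

# Hypothetical handshake check: proximity AND synchronous motion.
# Thresholds are illustrative scene units, not values from the paper.

def hands_close(p1, p2, max_dist=0.15):
    """True if the two hand positions are within max_dist of each other."""
    return math.dist(p1, p2) <= max_dist

def moving_in_synchrony(v1, v2, max_diff=0.05):
    """True if the two hand velocities agree within max_diff on each axis."""
    return all(abs(a - b) <= max_diff for a, b in zip(v1, v2))

def looks_like_handshake(p1, v1, p2, v2):
    # Both cues must hold simultaneously, as the article describes.
    return hands_close(p1, p2) and moving_in_synchrony(v1, v2)

# Two hands nearly touching and moving together
print(looks_like_handshake((0.50, 1.0), (0.02, -0.01),
                           (0.55, 1.0), (0.03, -0.01)))  # True
```

Requiring both cues is what separates a handshake from, say, two people merely standing near each other or waving at a distance.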
They meticulously coded a description of these key characteristics, which the software searches for when analysing a scene. This allows it to assign a probability that a particular activity is being observed. At the moment, the system has to capture the interaction from side-on to make its evaluation.
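One simple way to picture the probability assignment is to treat each behaviour profile as a set of coded cues and score an observation by the fraction of a profile's cues it satisfies. The profiles and cue names below are invented for illustration and are not the actual encoding used by the researchers.

```python
# Hypothetical profile matching: score each behaviour by the fraction of
# its coded cues present in the observed scene. Cue names are invented.

PROFILES = {
    "handshake": {"hands_close", "hands_synchronous", "facing_each_other"},
    "punch":     {"arm_extended_fast", "contact_torso_or_head"},
}

def score(profile_cues, observed_cues):
    """Fraction of the profile's cues satisfied by the observation."""
    return len(profile_cues & observed_cues) / len(profile_cues)

observed = {"hands_close", "hands_synchronous", "facing_each_other"}
best = max(PROFILES, key=lambda name: score(PROFILES[name], observed))
print(best, score(PROFILES[best], observed))  # handshake 1.0
```

A real system would weight cues and track them over time, but the idea is the same: each activity gets a probability, and the highest-scoring profile is reported.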
Hugging and punching

"The system works quite accurately," says Park. Tests were carried out on six different pairs of people performing a total of 54 staged interactions, including hugging, punching, kicking and shaking hands. On average, the system identified these activities correctly 80% of the time.