Automated Detection and Classification of Positive vs. Negative Robot Interactions with Children with Autism

Motivation

Recent feasibility studies involving children with autism spectrum disorders (ASD) interacting with socially assistive robots have shown that some children react positively to robots while others react negatively. It is unlikely that children with ASD will enjoy any robot 100% of the time, so it is important to develop methods for detecting negative child behaviors in order to minimize distress and facilitate effective human-robot interaction. The goal of this project is to describe and validate a non-heuristic method for determining whether a child is interacting positively or negatively with a robot, based on Gaussian mixture models (GMMs) and a naive Bayes classifier applied to overhead camera observations.

The unconstrained nature of the free-play task used as part of ASD therapy is intended to engage children across a wide range of the autism spectrum, including lower-functioning children with less mature communication abilities. In human-robot implementations of the free-play task, the child and robot can interact however the child chooses, with no specific task, game rules, or other constraints. However, autonomous operation of the robot in such a free-form social setting presents a range of challenges, including understanding the social behavior that occurs during the experiment session quickly enough to formulate appropriate real-time robot responses. In addition, the unconstrained nature of the interaction means that any a priori categorization of the child's behavior can be quickly and frequently confounded, especially given the heterogeneous nature of the ASD population.

As part of our development of an autonomous robot for free-play settings, we aim to show that automatic behavior coding can discriminate between children who are attempting to interact socially with a robot and those who are not. We present results using data from a pilot study involving eight children with ASD.
Approach

We conducted a feasibility study with children with ASD that provided the data reported here. The study consisted of a free-play scenario involving a robot, a child, and a parent. All of the recruited children were diagnosed with ASD. The robot moved autonomously around the room and was able to gesture, make non-verbal vocalizations, and blow bubbles. Its autonomous behavior was designed to encourage social interaction.


A total of 100 minutes of experiment time was recorded across all sessions and participants, 60 of which involved human-robot interaction and the rest interaction with a non-robotic toy. A preliminary data coding showed that some children had a positive impression of the robot and made several attempts to engage it socially. In particular, these children played with the robot when it blew bubbles and spoke to it to encourage it to interact with them; some beckoned the robot to follow them around the room. In contrast, other children had a negative reaction to the robot. Negative reactions ranged from avoiding the robot, to backing up against the walls of the experiment space, to seeking comfort from the parent.

We equipped the experiment space at the Boone Fetter Clinic of Children's Hospital Los Angeles with an overhead camera. Using an overhead vision system we developed, the positions of the child, robot, and parent are automatically determined. From these position data we computed a spatio-temporal model of social behavior based on an 8-dimensional feature vector of distances and velocities between the child and the robot, the parent, and the wall. The feature vectors were clustered into 50 clusters by fitting a Gaussian mixture model (GMM) with expectation-maximization, and the clusters were then assigned to behavior classes by a naive Bayes classifier trained on human-rated annotations of the behaviors described above. The model was trained on 20% of the recorded data and tested on the remaining 80%.
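
To make the pipeline concrete, the following sketch shows one way the steps above could be implemented with scikit-learn. It is illustrative only: the exact composition of the 8-dimensional feature vector (here, three distances, their rates of change, and the child's and robot's speeds), the use of cluster responsibilities as naive Bayes inputs, and the placeholder data are our assumptions; the 50-component GMM and the 20%/80% train/test split follow the description above.

```python
# Minimal sketch of the clustering-and-labeling pipeline, assuming
# scikit-learn. Feature composition and placeholder data are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.naive_bayes import GaussianNB


def frame_features(child, robot, parent, wall, prev, dt):
    """Build one 8-D feature vector from two consecutive tracker frames.

    child/robot/parent are 2-D overhead positions; wall is the nearest
    point on the room boundary; prev maps the same names to the previous
    frame's positions. The 3 distances + 3 distance rates + 2 speeds
    breakdown is an assumption consistent with the text.
    """
    dists = np.array([np.linalg.norm(child - robot),
                      np.linalg.norm(child - parent),
                      np.linalg.norm(child - wall)])
    prev_dists = np.array([np.linalg.norm(prev["child"] - prev["robot"]),
                           np.linalg.norm(prev["child"] - prev["parent"]),
                           np.linalg.norm(prev["child"] - prev["wall"])])
    speeds = np.array([np.linalg.norm(child - prev["child"]),
                       np.linalg.norm(robot - prev["robot"])]) / dt
    return np.concatenate([dists, (dists - prev_dists) / dt, speeds])


# Placeholder data standing in for real tracker output: X holds one 8-D
# feature vector per video frame, y a human-coded behavior class per frame.
rng = np.random.default_rng(0)
X = rng.random((5000, 8))
y = rng.integers(0, 4, size=5000)

# Cluster the feature space into 50 states with an EM-fitted GMM.
gmm = GaussianMixture(n_components=50, covariance_type="full",
                      max_iter=200, random_state=0).fit(X)

# Assign behavior classes to clusters via naive Bayes on the per-frame
# cluster responsibilities, training on 20% of the data as in the study.
resp = gmm.predict_proba(X)
split = int(0.2 * len(X))
nb = GaussianNB().fit(resp[:split], y[:split])
print(f"test accuracy: {nb.score(resp[split:], y[split:]):.3f}")
```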

The approach achieves a 91.4% accuracy rate in classifying robot-interaction, parent-interaction, avoidance, and wall-hiding behaviors, and demonstrates that these classes are sufficient for distinguishing between positive and negative reactions of the child to the robot.
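
Collapsing the four behavior classes onto a binary valence signal is then straightforward. The mapping below is a hypothetical sketch based on the negative reactions described earlier (avoidance, backing up against the wall, and seeking comfort from the parent); the class names are ours.

```python
# Hypothetical mapping from the four behavior classes to a binary
# positive/negative reaction signal, based on the reactions described above.
VALENCE = {
    "robot_interaction":  "positive",
    "parent_interaction": "negative",  # seeking comfort from the parent
    "avoidance":          "negative",  # avoiding the robot
    "wall":               "negative",  # backing up against the wall
}
```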

The goal of this work was to automatically distinguish between positive and negative reactions of children with ASD to a robot by classifying child behavior into approach, interaction, parent, and wall classes. The overhead camera system was sufficient for collecting data containing the relevant features for classification: we were able to extract the positions of all experiment participants and to derive motion information from an effective set of distance features.

We have shown that the GMM-based method for state clustering can efficiently and effectively cluster the 8-dimensional feature space. These states are readily labeled using annotated training data and could be used for partial behavior transcription. Potential concerns we are exploring further include the over-generalization that can accompany human labeling and the over-specialization that may result from the heterogeneity of the participant population.
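
As an illustration of how labeled states could support partial behavior transcription, the sketch below assigns each GMM state the majority class among its annotated frames and leaves never-annotated states unlabeled. The majority-vote rule and all names are our assumptions, not necessarily the study's method.

```python
# Sketch: label GMM states from annotated frames, then use the labels to
# produce a partial per-frame behavior transcription. Majority voting is
# an assumed rule, not necessarily the method used in the study.
from collections import Counter, defaultdict


def label_states(states, annotations):
    """Map each GMM state index to the most common annotated class.

    states: per-frame cluster indices (e.g. gmm.predict(X));
    annotations: dict from a subset of frame indices to behavior classes.
    """
    votes = defaultdict(Counter)
    for frame, cls in annotations.items():
        votes[states[frame]][cls] += 1
    return {s: c.most_common(1)[0][0] for s, c in votes.items()}


def transcribe(states, state_labels):
    """Per-frame transcription; None marks states never seen in the
    annotated data, which is what makes the transcription 'partial'."""
    return [state_labels.get(s) for s in states]


# Example: three states, two of them covered by annotations.
states = [0, 0, 1, 2, 1, 0]
labels = label_states(states, {0: "interaction", 1: "interaction", 2: "wall"})
print(transcribe(states, labels))
# -> ['interaction', 'interaction', 'wall', None, 'wall', 'interaction']
```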
Publications

A larger list of relevant publications can be found here.
Support

This project is funded in part by the Okawa Foundation, the Institute for Creative Technologies, the Dan Marino Foundation through the Marino Autism Research Institute (MARI), the USC Provost's Center for Interdisciplinary Research, and AnthroTronix, Inc.

Collaborators
Contact

David Feil-Seifer