A deep learning approach for automatic video coding of deictic gestures in children with autism

Marilina Mastrogiuseppe
2023-01-01

Abstract

Autism is a heterogeneous neurodevelopmental condition characterized by impairments in social communication, along with restrictive and repetitive patterns of interests and behaviors and sensory atypicalities. Early impairments in gestural communication, especially in deictic gestures, are significantly associated with autism and are strong predictors of language development. Although the involvement of deictic gestures in autism has been acknowledged, it has not been sufficiently explored with artificial intelligence. To address this, the paper proposes an automatic digital coding approach based on deep learning models. Using a transformer architecture, a multi-frame modeling strategy was implemented and applied to 37 video clips of naturalistic mother-child interactions to recognize four main deictic gestures: pointing, giving, showing, and requesting. The system was trained and validated on 31 clips, internally tested on 6 clips, and externally tested on 5 additional clips, using Python. The preprocessing phase uses a 1024-dimensional feature extractor based on DenseNet pretrained on ImageNet. Preliminary results showed 100% accuracy on the training set, 80% on the validation set, and 67% on the internal testing set. These findings suggest that the proposed system is a promising approach for the automatic analysis of deictic gestures. In future work, we plan to validate our model on a larger number of samples to achieve higher and more reliable performance.
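The abstract describes a multi-frame pipeline: per-frame 1024-dimensional DenseNet features feed a transformer that classifies a clip into one of four deictic gestures. The sketch below is a hypothetical PyTorch reconstruction of that shape of model, not the authors' implementation; all layer sizes, the positional-embedding scheme, and mean-pooling over frames are assumptions, and the DenseNet feature extraction step is stood in for by random tensors.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: each clip is a sequence of per-frame 1024-d DenseNet
# features (extraction not shown); a transformer encoder models the frames
# jointly and the pooled representation is classified into one of the four
# deictic gestures named in the abstract.
GESTURES = ["pointing", "giving", "showing", "requesting"]

class DeicticGestureTransformer(nn.Module):
    def __init__(self, feat_dim=1024, d_model=256, n_heads=4,
                 n_layers=2, n_classes=len(GESTURES), max_frames=64):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)           # 1024 -> model dim
        self.pos = nn.Parameter(torch.zeros(1, max_frames, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=512,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, feats):                              # (B, T, 1024)
        x = self.proj(feats) + self.pos[:, :feats.size(1)]
        x = self.encoder(x)                                # multi-frame modeling
        return self.head(x.mean(dim=1))                    # pool over frames

# Stand-in input: a batch of 2 clips, 16 frames each, random features in place
# of real DenseNet-on-ImageNet activations.
model = DeicticGestureTransformer()
logits = model(torch.randn(2, 16, 1024))
print(logits.shape)  # torch.Size([2, 4])
```

In practice the per-frame features would come from a DenseNet pretrained on ImageNet (e.g. the 1024-channel output of DenseNet-121's final pooling layer), frozen or fine-tuned; those choices are not specified in the abstract.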
ISBN: 979-8-3503-2297-2
Use this identifier to cite or link to this document: https://hdl.handle.net/11368/3047338