Automated Creativity Assessment Around the World: Validating Semantic Distance Across Multiple Non-English Contexts.
Agnoli S.
2022-01-01
Abstract
Traditionally, creativity research has involved asking human raters to judge responses to verbal creativity tasks, such as the Alternate Uses Task (AUT). These manual scoring practices have been useful to the field, but they have notable limitations, including labor-intensiveness and subjectivity, which can potentially threaten experimental reliability and validity. To address these challenges, creativity researchers are increasingly employing automated scoring approaches, including computational models of semantic distance. In English samples, semantic distance correlates positively with human ratings of creativity on the AUT, as well as other markers of creativity, such as openness to experience and creative achievement. However, semantic distance has only been validated in English-speaking samples, with very little psychometric work available in the many other languages of the world. In a multi-lab study, we seek to validate semantic distance across many non-English datasets, including Arabic, Chinese, French, German, Hebrew, Italian, Polish, Russian, and Spanish. We gathered AUT responses and human creativity ratings, as well as criterion measures for validation (e.g., openness to experience, creative achievement). We will use a deep learning-based language model, Bidirectional Encoder Representations from Transformers (BERT)—publicly available in over 100 languages—to compute semantic distance scores and validate this automated metric with our behavioral data. These nine languages will be incorporated into the openly available SemDis platform, with the goal of facilitating greater diversity and accessibility in automated creativity assessment.
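For readers curious about how such scores are obtained in practice, the sketch below illustrates one plausible way to compute semantic distance between an AUT prompt and a response using multilingual BERT (the publicly available `bert-base-multilingual-cased` checkpoint, which covers 100+ languages). It assumes the Hugging Face `transformers` library with PyTorch, mean-pooled token embeddings, and cosine distance; the actual SemDis pipeline and the study's scoring procedure may differ, and the example texts are purely illustrative.

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # multilingual BERT, 100+ languages

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()


def embed(text: str) -> torch.Tensor:
    """Mean-pooled BERT embedding for a single piece of text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state         # (1, n_tokens, 768)
    mask = inputs["attention_mask"].unsqueeze(-1).float()   # 0 for padding positions
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)     # (1, 768)


def semantic_distance(prompt: str, response: str) -> float:
    """Semantic distance = 1 - cosine similarity of prompt and response embeddings."""
    sim = torch.nn.functional.cosine_similarity(embed(prompt), embed(response))
    return 1.0 - sim.item()


# Illustrative use with an English AUT item ("brick"); the same code accepts
# prompts and responses in any language covered by the multilingual checkpoint.
print(semantic_distance("brick", "build a wall"))
print(semantic_distance("brick", "grind it into pigment for cave paintings"))
```

Under this scheme, more remote (and typically more creative) uses should yield larger distances than common uses, which is the behavior the multi-lab study aims to validate against human ratings in each language.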
File | Description | Type | License | Size | Format
---|---|---|---|---|---
SfNC 2022 - Full Program-1.pdf (open access) | Cover, index, abstract | Published version | Digital Rights Management not defined | 468.94 kB | Adobe PDF