An Assessment of ChatGPT’s Responses to Common Patient Questions About Lung Cancer Surgery: A Preliminary Clinical Evaluation of Accuracy and Relevance
Marina Troian;Stefano Lovadina;Aneta Aleksova;Elisa Baratella;Maurizio Cortale
2025-01-01
Abstract
Background: Chatbots based on artificial intelligence (AI) and machine learning are rapidly growing in popularity. Patients may use these technologies to ask questions regarding surgical interventions, preoperative assessments, and postoperative outcomes. The aim of this study was to determine whether ChatGPT could appropriately answer some of the most frequently asked questions posed by patients about lung cancer surgery. Methods: Sixteen frequently asked questions about lung cancer surgery were posed to the chatbot in a single conversation, without follow-up questions or repetition of the same questions. Each answer was evaluated for appropriateness and accuracy using an evidence-based approach by a panel of specialists with relevant clinical experience. The responses were assessed using a four-point Likert scale (i.e., “strongly agree, satisfactory”, “agree, requires minimal clarification”, “disagree, requires moderate clarification”, and “strongly disagree, requires substantial clarification”). Results: All answers provided by the chatbot were judged to be satisfactory, evidence-based, and generally unbiased, only seldom requiring minimal clarification. Moreover, the information was delivered in language deemed easy to read and comprehensible to most patients. Conclusions: ChatGPT could effectively provide evidence-based answers to the most commonly asked questions about lung cancer surgery. The chatbot presented information in language considered understandable by most patients. Therefore, this resource may be a valuable adjunctive tool for preoperative patient education.