Background
Informed consent is crucial in healthcare, yet the process is often flawed, particularly for vulnerable groups. This study explores how artificial intelligence, specifically Large Language Model (LLM) based chatbots, can enhance informed consent in plastic surgery by improving comprehension and decision-making.
Methods
This study, conducted in accordance with the Declaration of Helsinki and reported per STROBE guidelines, compared informed consent documents generated by ChatGPT-3.5 and ChatGPT-4 with traditional consent forms for open carpal tunnel release. Readability was measured using the Flesch Reading Ease score, the Flesch–Kincaid Grade Level, and the Coleman–Liau Index. A panel of plastic surgeons evaluated accuracy and completeness against standards from the American Society of Plastic Surgeons (ASPS) and Queensland Health, and assessed the overall quality of the LLM outputs using the QAMAI tool and a Likert scale.
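For reference, the readability indices named above are conventionally defined as follows; these standard formulas are not restated in the study itself and are given here only as a sketch of what the scores measure:

```latex
\begin{align*}
\text{Flesch Reading Ease} &= 206.835 - 1.015\left(\frac{\text{total words}}{\text{total sentences}}\right) - 84.6\left(\frac{\text{total syllables}}{\text{total words}}\right)\\[4pt]
\text{Flesch--Kincaid Grade Level} &= 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59\\[4pt]
\text{Coleman--Liau Index} &= 0.0588\,L - 0.296\,S - 15.8
\end{align*}
```

where \(L\) is the average number of letters per 100 words and \(S\) is the average number of sentences per 100 words. Higher Flesch Reading Ease scores indicate easier text, whereas the Flesch–Kincaid and Coleman–Liau values approximate the US school grade level required to understand it.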
Results
ChatGPT-3.5 provided essential procedural details but lacked depth in patient-specific considerations. ChatGPT-4 provided greater detail, offering a clearer account of the medical condition, treatment, alternatives, and additional patient-centred considerations, which enhanced engagement and decision-making. ChatGPT-4 also achieved higher DISCERN and QAMAI scores, indicating better clarity, accuracy, and relevance. However, both versions addressed legal and financial aspects less thoroughly than the Queensland Health and ASPS documents, indicating the need for further development.
Conclusion
Analysing ChatGPT-generated consent forms revealed a high degree of accuracy and comprehensiveness. ChatGPT-4 outperformed ChatGPT-3.5 across the metrics assessed, and its more detailed approach substantially enhances patient understanding and decision-making in medical procedures. However, remaining challenges in ensuring content accuracy and in addressing legal and financial aspects underscore the need for careful integration into clinical practice.