Artificial intelligence (AI) is intelligence demonstrated by machines. In healthcare, the main goal of AI is not to replace healthcare professionals, but to enable a better patient experience and better inform clinical decision-making, thereby improving patient safety and the reliability and efficiency of clinicians.[1-3]
The ChatGPT (Generative Pre-trained Transformer) series, culminating in GPT-4 in 2023, uses deep learning to generate human-like text, revolutionizing conversational interfaces such as chatbots.[4] Its capabilities range from analyzing patient data and interpreting complex medical literature to offering health information and improving text writing, indicating the promising potential of future GPT versions.[4]
COPE (Committee on Publication Ethics) is committed to educating and supporting editors, publishers, universities, research institutes, and all those involved in publication ethics.[5]
WAME (World Association of Medical Editors) is an association of editors of peer-reviewed medical journals from countries throughout the world who seek to foster international cooperation among, and education of, medical journal editors.[6]
• COPE joins organizations such as WAME in stating that AI tools cannot be listed as an author of a paper.
• AI tools cannot meet the requirements for authorship, as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.
• Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used.
• All authors are fully responsible for the content of their manuscript, including those parts produced by an AI tool, and are thus liable for any breach of publication ethics.