
Artificial Intelligence Can Generate Fraudulent but Authentic-Looking Scientific Medical Articles: Pandora's Box Has Been Opened

M. Májovský, M. Černý, M. Kasal, M. Komarc, D. Netuka

Journal of Medical Internet Research. 2023; 25: e46924 [epub 2023-05-31]

Language: English; Country: Canada

Document type: journal articles, research supported by grant

Persistent link: https://www.medvik.cz/link/bmc23011348

BACKGROUND: Artificial intelligence (AI) has advanced substantially in recent years, transforming many industries and improving the way people live and work. In scientific research, AI can enhance the quality and efficiency of data analysis and publication. However, AI has also opened up the possibility of generating high-quality fraudulent papers that are difficult to detect, raising important questions about the integrity of scientific research and the trustworthiness of published papers. OBJECTIVE: The aim of this study was to investigate the capabilities of current AI language models in generating high-quality fraudulent medical articles. We hypothesized that modern AI models can create highly convincing fraudulent papers that can easily deceive readers and even experienced researchers. METHODS: This proof-of-concept study used ChatGPT (Chat Generative Pre-trained Transformer) powered by the GPT-3 (Generative Pre-trained Transformer 3) language model to generate a fraudulent scientific article related to neurosurgery. GPT-3 is a large language model developed by OpenAI that uses deep learning algorithms to generate human-like text in response to prompts given by users. The model was trained on a massive corpus of text from the internet and is capable of generating high-quality text in a variety of languages and on various topics. The authors posed questions and prompts to the model and refined them iteratively as the model generated the responses. The goal was to create a completely fabricated article including the abstract, introduction, material and methods, discussion, references, charts, etc. Once the article was generated, it was reviewed for accuracy and coherence by experts in the fields of neurosurgery, psychiatry, and statistics and compared to existing similar articles. 
RESULTS: The study found that the AI language model can create a highly convincing fraudulent article that resembles a genuine scientific paper in terms of word usage, sentence structure, and overall composition. The AI-generated article included standard sections such as introduction, material and methods, results, and discussion, as well as a data sheet. It consisted of 1992 words and 17 citations, and the whole process of article creation took approximately 1 hour without any special training of the human user. However, there were some concerns and specific mistakes identified in the generated article, specifically in the references. CONCLUSIONS: The study demonstrates the potential of current AI language models to generate completely fabricated scientific articles. Although the papers look sophisticated and seemingly flawless, expert readers may identify semantic inaccuracies and errors upon closer inspection. We highlight the need for increased vigilance and better detection methods to combat the potential misuse of AI in scientific research. At the same time, it is important to recognize the potential benefits of using AI language models in genuine scientific writing and research, such as manuscript preparation and language editing.

Citation provided by Crossref.org

000      
00000naa a2200000 a 4500
001      
bmc23011348
003      
CZ-PrNML
005      
20230815090613.0
007      
ta
008      
230718s2023 xxc f 000 0|eng||
009      
AR
024    7_
$a 10.2196/46924 $2 doi
035    __
$a (PubMed)37256685
040    __
$a ABA008 $b cze $d ABA008 $e AACR2
041    0_
$a eng
044    __
$a xxc
100    1_
$a Májovský, Martin $u Department of Neurosurgery and Neurooncology, First Faculty of Medicine, Charles University, Prague, Czech Republic $1 https://orcid.org/0000000177255181 $7 xx0228525
245    10
$a Artificial Intelligence Can Generate Fraudulent but Authentic-Looking Scientific Medical Articles: Pandora's Box Has Been Opened / $c M. Májovský, M. Černý, M. Kasal, M. Komarc, D. Netuka
650    _2
$a lidé $7 D006801
650    12
$a umělá inteligence $7 D001185
650    12
$a algoritmy $7 D000465
650    _2
$a jazyk (prostředek komunikace) $7 D007802
650    _2
$a sémantika $7 D012660
650    _2
$a analýza dat $7 D000078332
655    _2
$a časopisecké články $7 D016428
655    _2
$a práce podpořená grantem $7 D013485
700    1_
$a Černý, Martin $u Department of Neurosurgery and Neurooncology, First Faculty of Medicine, Charles University, Prague, Czech Republic $1 https://orcid.org/0000000286010554 $7 xx0304973
700    1_
$a Kasal, Matěj $u Department of Psychiatry, Faculty of Medicine in Pilsen, Charles University, Pilsen, Czech Republic $1 https://orcid.org/0000000164458983
700    1_
$a Komarc, Martin $u Institute of Biophysics and Informatics, First Faculty of Medicine, Charles University, Prague, Czech Republic $u Department of Methodology, Faculty of Physical Education and Sport, Charles University, Prague, Czech Republic $1 https://orcid.org/0000000341065217 $7 pna20191026198
700    1_
$a Netuka, David $u Department of Neurosurgery and Neurooncology, First Faculty of Medicine, Charles University, Prague, Czech Republic $1 https://orcid.org/0000000186094789 $7 xx0061783
773    0_
$w MED00007388 $t Journal of medical Internet research $x 1438-8871 $g Roč. 25, č. - (2023), s. e46924
856    41
$u https://pubmed.ncbi.nlm.nih.gov/37256685 $y Pubmed
910    __
$a ABA008 $b sig $c sign $y p $z 0
990    __
$a 20230718 $b ABA008
991    __
$a 20230815090610 $b ABA008
999    __
$a ok $b bmc $g 1963644 $s 1197613
BAS    __
$a 3
BAS    __
$a PreBMC-MEDLINE
BMC    __
$a 2023 $b 25 $c - $d e46924 $e 20230531 $i 1438-8871 $m JMIR. Journal of medical internet research $n J Med Internat Res $x MED00007388
LZP    __
$a Pubmed-20230718
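The field listing above follows MARC 21 conventions: a three-character field tag with indicators on one line, followed by a line of `$`-prefixed one-character subfield codes and their values (e.g. `$a 10.2196/46924 $2 doi` for the DOI field). As a minimal sketch of how such a subfield line can be split into code/value pairs (the function name is hypothetical, not part of any Medvik or MARC tooling):

```python
import re

def parse_subfields(line):
    """Split a MARC subfield line like '$a eng $2 doi' into (code, value) pairs."""
    pairs = []
    # Each subfield starts with '$' followed by a one-character code;
    # the value runs until the next '$' or the end of the line.
    for match in re.finditer(r"\$(\w)\s+([^$]*)", line):
        code, value = match.group(1), match.group(2).strip()
        pairs.append((code, value))
    return pairs

print(parse_subfields("$a 10.2196/46924 $2 doi"))
# → [('a', '10.2196/46924'), ('2', 'doi')]
```

Note that real MARC interchange records delimit subfields with a control character rather than a literal `$`; the sketch only handles the human-readable rendering shown here.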
