Large language models are changing the landscape of academic publications. A positive transformation?
Language: English; Country: Czech Republic; Media: print
Document type: Journal Article, Review
PubMed: 38981715
PII: 136673
- Keywords
- large language models (LLMs), neural networks, academic writing, artificial intelligence, transformer architecture, scientific research automation, publishing ethics, detection of AI-generated text
- MeSH
- Language
- Humans
- Neural Networks, Computer *
- Publishing ethics
- Natural Language Processing
- Check Tag
- Humans
- Publication type
- Journal Article
- Review
The advent of large language models (LLMs) based on neural networks marks a significant shift in academic writing, particularly in medical sciences. These models, including OpenAI's GPT-4, Google's Bard, and Anthropic's Claude, enable more efficient text processing through transformer architecture and attention mechanisms. LLMs can generate coherent texts that are indistinguishable from human-written content. In medicine, they can contribute to the automation of literature reviews, data extraction, and hypothesis formulation. However, ethical concerns arise regarding the quality and integrity of scientific publications and the risk of generating misleading content. This article provides an overview of how LLMs are changing medical writing, the ethical dilemmas they bring, and the possibilities for detecting AI-generated text. It concludes with a focus on the potential future of LLMs in academic publishing and their impact on the medical community.
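The transformer architecture and attention mechanisms the abstract refers to can be illustrated with a minimal sketch of scaled dot-product attention, the core operation of transformer-based LLMs. This is a standard textbook formulation, not code from the article; the dimensions and random inputs are purely illustrative:

```python
# Minimal sketch of scaled dot-product attention, the building block of
# transformer LLMs (GPT-4, Claude, etc.). Shapes and inputs are illustrative.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable row-wise softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of value vectors

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 query tokens, embedding dimension 8
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one context-mixed vector per token: (4, 8)
```

In a full transformer, this operation runs in parallel across many heads and layers, letting each token's representation draw on the whole context; that is what enables the coherent long-form text generation the review discusses.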