The advent of large language models (LLMs) based on neural networks marks a significant shift in academic writing, particularly in medical sciences. These models, including OpenAI's GPT-4, Google's Bard, and Anthropic's Claude, enable more efficient text processing through transformer architecture and attention mechanisms. LLMs can generate coherent texts that are often difficult to distinguish from human-written content. In medicine, they can contribute to the automation of literature reviews, data extraction, and hypothesis formulation. However, ethical concerns arise regarding the quality and integrity of scientific publications and the risk of generating misleading content. This article provides an overview of how LLMs are changing medical writing, the ethical dilemmas they bring, and the possibilities for detecting AI-generated text. It concludes with a focus on the potential future of LLMs in academic publishing and their impact on the medical community.
- MeSH
- Language MeSH
- Humans MeSH
- Neural Networks, Computer MeSH
- Publishing * ethics MeSH
- Natural Language Processing * MeSH
- Check Tag
- Humans MeSH
- Publication Type
- Review MeSH
This letter explores the potential of artificial intelligence models, specifically ChatGPT, for content analysis, namely for categorizing social media posts. The primary focus is on Twitter posts with the hashtag #plasticsurgery. By integrating Python with the OpenAI API, the study presents a purpose-built prompt for categorizing tweet content. Looking forward, the use of AI in content analysis offers promising opportunities for advancing the understanding of complex social phenomena. Level of Evidence V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine Ratings, please refer to the Table of Contents or the online Instructions to Authors at http://www.springer.com/00266 .
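The Python-plus-OpenAI-API workflow the letter describes can be sketched in a few lines. This is an illustrative reconstruction using only the standard library; the category list, prompt wording, and model name are assumptions for illustration, not the letter's actual prompt.

```python
# Sketch of prompt-based tweet categorization via the OpenAI REST API.
# Categories, prompt wording, and model name are illustrative assumptions.
import json
import os
import urllib.request

CATEGORIES = ["patient experience", "advertisement", "education", "other"]

def build_prompt(tweet: str) -> str:
    """Compose a single-label classification prompt for one tweet."""
    return (
        "Classify the following #plasticsurgery tweet into exactly one of "
        f"these categories: {', '.join(CATEGORIES)}.\n"
        f"Tweet: {tweet}\n"
        "Answer with the category name only."
    )

def parse_label(reply: str) -> str:
    """Map the model's free-text reply onto a known category; default to 'other'."""
    reply = reply.strip().lower()
    return next((c for c in CATEGORIES if c in reply), "other")

def categorize(tweet: str, model: str = "gpt-4o-mini") -> str:
    """One API round-trip per tweet; requires OPENAI_API_KEY in the environment."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": build_prompt(tweet)}],
    }
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return parse_label(body["choices"][0]["message"]["content"])
```

Constraining the model to answer with a category name only, and mapping any off-list reply to a catch-all category, keeps the downstream counts well defined even when the model is verbose.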
- MeSH
- Humans MeSH
- Social Media * MeSH
- Artificial Intelligence MeSH
- Check Tag
- Humans MeSH
- Publication Type
- Letter MeSH
We investigate the accuracy and reliability of ChatGPT, an artificial intelligence model developed by OpenAI, in providing nutritional information for dietary planning and weight management. The results show a reasonable level of accuracy, with energy values having the highest level of conformity: 97% of the artificial-intelligence values fall within a 40% difference from United States Department of Agriculture data. ChatGPT also displayed consistency in the nutritional data it provided, as indicated by relatively low coefficient-of-variation values for each nutrient. The model further proved efficient in generating a daily meal plan within a specified caloric limit, with all meals falling within a 30% bound of the United States Department of Agriculture's caloric values. These findings suggest that ChatGPT can provide reasonably accurate and consistent nutritional information. Further research is recommended to assess the model's performance across a broader range of foods and meals.
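The two metrics in this abstract, percent difference from USDA reference values and the coefficient of variation across repeated answers, can be computed as below. The numbers are made-up examples for illustration, not data from the study.

```python
# Sketch of the conformity and consistency checks described above.
from statistics import mean, stdev

def within_percent(ai_value: float, usda_value: float, tolerance_pct: float) -> bool:
    """True if ai_value deviates from usda_value by at most tolerance_pct percent."""
    return abs(ai_value - usda_value) / usda_value * 100 <= tolerance_pct

def coefficient_of_variation(samples: list[float]) -> float:
    """CV in percent: sample standard deviation relative to the mean."""
    return stdev(samples) / mean(samples) * 100

# Hypothetical example: five repeated ChatGPT answers for one food's energy value.
ai_answers = [250.0, 245.0, 255.0, 248.0, 252.0]   # kcal, illustrative only
usda_energy = 240.0                                 # kcal, illustrative only

print(within_percent(mean(ai_answers), usda_energy, 40))   # conformity check
print(round(coefficient_of_variation(ai_answers), 2))      # low CV = consistent
```

A low coefficient of variation across repeated queries is what the abstract means by "consistency": the model returns nearly the same value each time it is asked.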
- MeSH
- Meals MeSH
- Humans MeSH
- Nutritionists * MeSH
- Reproducibility of Results MeSH
- Artificial Intelligence * MeSH
- Nutrients MeSH
- Check Tag
- Humans MeSH
- Publication Type
- Journal Article MeSH
- Geographic Names
- United States MeSH
Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them. However, the creativity of evolution is not limited to the natural world: Artificial organisms evolving in computational environments have also elicited surprise and wonder from the researchers studying them. The process of evolution is an algorithmic process that transcends the substrate in which it occurs. Indeed, many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions, exposed unrecognized bugs in their code, produced unexpected adaptations, or engaged in behaviors and produced outcomes uncannily convergent with ones found in nature. Such stories routinely reveal surprise and creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead, they are often treated as mere obstacles to be overcome, rather than as results that warrant study in their own right. Bugs are fixed, experiments are refocused, and one-off surprises are collapsed into a single data point. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This article is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories.
In doing so, we also present substantial evidence that the existence and importance of evolutionary surprises extend beyond the natural world and may indeed be a universal property of all complex evolving systems.
BACKGROUND: Artificial intelligence (AI) has advanced substantially in recent years, transforming many industries and improving the way people live and work. In scientific research, AI can enhance the quality and efficiency of data analysis and publication. However, AI has also opened up the possibility of generating high-quality fraudulent papers that are difficult to detect, raising important questions about the integrity of scientific research and the trustworthiness of published papers. OBJECTIVE: The aim of this study was to investigate the capabilities of current AI language models in generating high-quality fraudulent medical articles. We hypothesized that modern AI models can create highly convincing fraudulent papers that can easily deceive readers and even experienced researchers. METHODS: This proof-of-concept study used ChatGPT (Chat Generative Pre-trained Transformer) powered by the GPT-3 (Generative Pre-trained Transformer 3) language model to generate a fraudulent scientific article related to neurosurgery. GPT-3 is a large language model developed by OpenAI that uses deep learning algorithms to generate human-like text in response to prompts given by users. The model was trained on a massive corpus of text from the internet and is capable of generating high-quality text in a variety of languages and on various topics. The authors posed questions and prompts to the model and refined them iteratively as the model generated the responses. The goal was to create a completely fabricated article including the abstract, introduction, material and methods, discussion, references, charts, etc. Once the article was generated, it was reviewed for accuracy and coherence by experts in the fields of neurosurgery, psychiatry, and statistics and compared to existing similar articles. 
RESULTS: The study found that the AI language model could create a highly convincing fraudulent article that resembled a genuine scientific paper in terms of word usage, sentence structure, and overall composition. The AI-generated article included standard sections such as introduction, material and methods, results, and discussion, as well as a data sheet. It consisted of 1992 words and 17 citations, and the whole process of article creation took approximately 1 hour without any special training of the human user. However, some concerns and specific mistakes were identified in the generated article, particularly in the references. CONCLUSIONS: The study demonstrates the potential of current AI language models to generate completely fabricated scientific articles. Although such papers look sophisticated and seemingly flawless, expert readers may identify semantic inaccuracies and errors upon closer inspection. We highlight the need for increased vigilance and better detection methods to combat the potential misuse of AI in scientific research. At the same time, it is important to recognize the potential benefits of using AI language models in genuine scientific writing and research, such as manuscript preparation and language editing.
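The references were the weakest point of the generated article, and reference checking is one place where simple automated screening can help. The sketch below flags suspicious reference strings with crude heuristics; the patterns and thresholds are assumptions for illustration, not the detection method of the study.

```python
# Heuristic screening of reference strings for hallmarks of fabrication.
# Patterns and thresholds are illustrative assumptions, not validated rules.
import re

DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+\b")     # crude DOI pattern
YEAR_RE = re.compile(r"\b(19|20)\d{2}\b")       # plausible publication years

def flag_reference(ref: str) -> list[str]:
    """Return a list of heuristic warnings for one reference string."""
    warnings = []
    if not DOI_RE.search(ref):
        warnings.append("no DOI")
    years = [int(m.group()) for m in YEAR_RE.finditer(ref)]
    if not years:
        warnings.append("no publication year")
    elif any(y > 2024 for y in years):          # cutoff is an assumption
        warnings.append("implausible future year")
    if len(ref) < 40:
        warnings.append("suspiciously short")
    return warnings
```

Such string-level checks only catch malformed citations; verifying that a well-formed DOI actually resolves to the claimed paper still requires a lookup against a bibliographic database.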
- MeSH
- Algorithms * MeSH
- Data Analysis MeSH
- Language MeSH
- Humans MeSH
- Semantics MeSH
- Artificial Intelligence * MeSH
- Check Tag
- Humans MeSH
- Publication Type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH