This paper describes options for evaluating Dupuytren's contracture by subjective and objective methods. Various classification schemes, named after their authors, exist for objective evaluation of the disease, including graphical representations. Subjective assessment was performed with patient questionnaires. The QuickDASH, with a small adaptation for Dupuytren's contracture, is the most commonly used questionnaire; the Southampton Dupuytren's Scoring Scheme appears to be more specific to the disease. These classifications allow the success of treatment to be evaluated and the prognosis of the disease to be determined. The analysis is based on a PubMed search covering the years 1967-2022, from which 28 relevant articles were retrieved. Based on this analysis, the Tubiana classification appears to be the most appropriate for patients with Dupuytren's contracture; among patient questionnaires, the Southampton Dupuytren's Scoring Scheme best meets these parameters.
- Keywords
- ChatGPT, DASH, Dupuytren's contracture, Michigan Hand Questionnaire, OpenAI, QuickDASH, Southampton Dupuytren's Scoring Scheme (SDSS), classification, objective and subjective scoring
- MeSH
- Dupuytren Contracture * diagnosis MeSH
- Humans MeSH
- PubMed MeSH
- Check Tag
- Humans MeSH
- Publication Type
- Journal Article MeSH
- Review MeSH
This letter explores the potential of artificial intelligence models, specifically ChatGPT, for content analysis, namely for categorizing social media posts. The primary focus is on Twitter posts with the hashtag #plasticsurgery. By integrating Python with the OpenAI API, the study provides a designed prompt to categorize tweet content. Looking forward, the use of AI in content analysis presents promising opportunities for advancing the understanding of complex social phenomena. Level of Evidence V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine Ratings, please refer to the Table of Contents or the online Instructions to Authors at http://www.springer.com/00266 .
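A minimal sketch of the categorization step described in this letter, assuming the openai Python SDK. The category labels, model name, and prompt wording below are illustrative placeholders, not the study's actual protocol:

```python
# Sketch: categorizing a tweet with the OpenAI API (hypothetical categories).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["patient experience", "marketing", "education", "other"]  # hypothetical

def categorize_tweet(text: str) -> str:
    """Ask the model to assign exactly one category label to a tweet."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the letter does not name one
        messages=[
            {"role": "system",
             "content": "Classify the tweet into exactly one of: "
                        + ", ".join(CATEGORIES) + ". Reply with the label only."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # near-deterministic labels suit content analysis
    )
    return response.choices[0].message.content.strip()

print(categorize_tweet("Just had my consultation! #plasticsurgery"))
```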
- MeSH
- Humans MeSH
- Social Media * MeSH
- Artificial Intelligence MeSH
- Check Tag
- Humans MeSH
- Publication Type
- Letter MeSH
We investigate the accuracy and reliability of ChatGPT, an artificial intelligence model developed by OpenAI, in providing nutritional information for dietary planning and weight management. The results show a reasonable level of accuracy, with energy values having the highest level of conformity: 97% of the artificial intelligence values fall within a 40% difference from United States Department of Agriculture data. Additionally, ChatGPT displayed consistency in its provision of nutritional data, as indicated by relatively low coefficient of variation values for each nutrient. The artificial intelligence model also proved efficient in generating a daily meal plan within a specified caloric limit, with all the meals falling within a 30% bound of the United States Department of Agriculture's caloric values. These findings suggest that ChatGPT can provide reasonably accurate and consistent nutritional information. Further research is recommended to assess the model's performance across a broader range of foods and meals.
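A minimal sketch of the consistency check this abstract describes, assuming repeated model answers have already been parsed into numbers. The foods, values, and USDA reference figures below are hypothetical; the 40% bound is the one quoted in the abstract:

```python
# Sketch: coefficient of variation across repeated queries, plus the
# conformity check against a (hypothetical) USDA reference value.
import statistics

# Hypothetical repeated ChatGPT answers for 100 g of cooked white rice
repeated_values = {
    "energy_kcal": [130, 128, 131, 129, 130],
    "protein_g": [2.7, 2.6, 2.7, 2.8, 2.7],
}
usda_reference = {"energy_kcal": 130, "protein_g": 2.7}  # hypothetical USDA values

for nutrient, values in repeated_values.items():
    mean = statistics.mean(values)
    cv = statistics.stdev(values) / mean      # low CV = consistent answers
    ref = usda_reference[nutrient]
    within = abs(mean - ref) / ref <= 0.40    # the 40% conformity bound
    print(f"{nutrient}: mean={mean:.2f}, CV={cv:.3f}, within 40% of USDA: {within}")
```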
- Keywords
- AI, Artificial intelligence, ChatGPT, Dietary planning, Nutritional data, Weight management
- MeSH
- Meals MeSH
- Humans MeSH
- Nutritionists * MeSH
- Reproducibility of Results MeSH
- Artificial Intelligence * MeSH
- Nutrients MeSH
- Check Tag
- Humans MeSH
- Publication Type
- Journal Article MeSH
- Geographic Names
- United States MeSH
This study explores the capabilities of large language models to replicate the behavior of individuals with underdeveloped cognitive and language skills. Specifically, we investigate whether these models can simulate child-like language and cognitive development while solving false-belief tasks, namely change-of-location and unexpected-content tasks. The GPT-3.5-turbo and GPT-4 models by OpenAI were prompted to simulate children (N = 1296) aged one to six years. This simulation was instantiated through three types of prompts: plain zero-shot, chain-of-thought, and primed-by-corpus. We evaluated the correctness of responses to assess the models' capacity to mimic the cognitive skills of the simulated children. Both models displayed a pattern of increasing correctness and rising language complexity, corresponding to the gradual enhancement of linguistic and cognitive abilities described in the vast body of research literature on child development. GPT-4 generally exhibited a closer alignment with the developmental curve observed in 'real' children, but it displayed hyper-accuracy under certain conditions, notably with the primed-by-corpus prompt type. Task type, prompt type, and the choice of language model influenced developmental patterns, while temperature and the gender of the simulated parent and child did not consistently impact results. We also analyzed linguistic complexity, examining utterance length and Kolmogorov complexity; these analyses revealed a gradual increase in linguistic complexity corresponding to the age of the simulated children, regardless of other variables. These findings show that the language models are capable of downplaying their abilities to achieve a faithful simulation of prompted personas.
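A minimal sketch of the linguistic-complexity measurement mentioned above. Kolmogorov complexity is uncomputable, so compressed size is a standard proxy; whether the authors used exactly this compressor is an assumption, and the example utterances are invented:

```python
# Sketch: approximating Kolmogorov complexity of a simulated child's
# utterance by its zlib-compressed byte length.
import zlib

def complexity_proxy(utterance: str) -> int:
    """Approximate Kolmogorov complexity as compressed byte length."""
    return len(zlib.compress(utterance.encode("utf-8")))

# Invented model outputs for simulated two- and six-year-olds
utterances = {
    "age 2": "ball there",
    "age 6": "I think the ball is in the basket because Sally put it there",
}

for age, text in utterances.items():
    print(age, "| words:", len(text.split()),
          "| compressed bytes:", complexity_proxy(text))
```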
- MeSH
- Language * MeSH
- Cognition * MeSH
- Humans MeSH
- Linguistics MeSH
- Aptitude MeSH
- Child Development MeSH
- Check Tag
- Humans MeSH
- Publication Type
- Journal Article MeSH
BACKGROUND: Artificial intelligence (AI) has advanced substantially in recent years, transforming many industries and improving the way people live and work. In scientific research, AI can enhance the quality and efficiency of data analysis and publication. However, AI has also opened up the possibility of generating high-quality fraudulent papers that are difficult to detect, raising important questions about the integrity of scientific research and the trustworthiness of published papers. OBJECTIVE: The aim of this study was to investigate the capabilities of current AI language models in generating high-quality fraudulent medical articles. We hypothesized that modern AI models can create highly convincing fraudulent papers that can easily deceive readers and even experienced researchers. METHODS: This proof-of-concept study used ChatGPT (Chat Generative Pre-trained Transformer) powered by the GPT-3 (Generative Pre-trained Transformer 3) language model to generate a fraudulent scientific article related to neurosurgery. GPT-3 is a large language model developed by OpenAI that uses deep learning algorithms to generate human-like text in response to prompts given by users. The model was trained on a massive corpus of text from the internet and is capable of generating high-quality text in a variety of languages and on various topics. The authors posed questions and prompts to the model and refined them iteratively as the model generated the responses. The goal was to create a completely fabricated article including the abstract, introduction, material and methods, discussion, references, charts, etc. Once the article was generated, it was reviewed for accuracy and coherence by experts in the fields of neurosurgery, psychiatry, and statistics and compared to existing similar articles. RESULTS: The study found that the AI language model can create a highly convincing fraudulent article that resembles a genuine scientific paper in terms of word usage, sentence structure, and overall composition. The AI-generated article included standard sections such as introduction, material and methods, results, and discussion, as well as a data sheet. It consisted of 1992 words and 17 citations, and the whole process of article creation took approximately 1 hour without any special training of the human user. However, there were some concerns and specific mistakes identified in the generated article, specifically in the references. CONCLUSIONS: The study demonstrates the potential of current AI language models to generate completely fabricated scientific articles. Although the papers look sophisticated and seemingly flawless, expert readers may identify semantic inaccuracies and errors upon closer inspection. We highlight the need for increased vigilance and better detection methods to combat the potential misuse of AI in scientific research. At the same time, it is important to recognize the potential benefits of using AI language models in genuine scientific writing and research, such as manuscript preparation and language editing.
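A minimal sketch of the iterative prompting loop the methods section describes (pose a prompt, inspect the output, refine, repeat), assuming the openai Python SDK. The prompts shown are generic placeholders for legitimate drafting, not the study's actual fraud-producing ones:

```python
# Sketch: iterative prompt refinement with accumulated chat history, so
# each follow-up prompt revises the model's previous answer.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a scientific writing assistant."}]

def ask(prompt: str) -> str:
    """Send a prompt, keep the exchange in history for the next refinement."""
    history.append({"role": "user", "content": prompt})
    r = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    answer = r.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

draft = ask("Draft a structured abstract for a neurosurgery study.")
draft = ask("Revise it: add one methods sentence and keep it under 250 words.")
print(draft)
```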
- Keywords
- ChatGPT, artificial intelligence, ethics, fraudulent medical articles, language models, neurosurgery, publications
- MeSH
- Algorithms * MeSH
- Data Analysis MeSH
- Language MeSH
- Humans MeSH
- Semantics MeSH
- Artificial Intelligence * MeSH
- Check Tag
- Humans MeSH
- Publication Type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
The advent of large language models (LLMs) based on neural networks marks a significant shift in academic writing, particularly in medical sciences. These models, including OpenAI's GPT-4, Google's Bard, and Anthropic's Claude, enable more efficient text processing through transformer architecture and attention mechanisms. LLMs can generate coherent texts that are indistinguishable from human-written content. In medicine, they can contribute to the automation of literature reviews, data extraction, and hypothesis formulation. However, ethical concerns arise regarding the quality and integrity of scientific publications and the risk of generating misleading content. This article provides an overview of how LLMs are changing medical writing, the ethical dilemmas they bring, and the possibilities for detecting AI-generated text. It concludes with a focus on the potential future of LLMs in academic publishing and their impact on the medical community.
- Keywords
- large language models (LLMs), neural networks, academic writing, artificial intelligence, transformer architecture, scientific research automation, publishing ethics, detection of AI-generated text
- MeSH
- Language MeSH
- Humans MeSH
- Neural Networks, Computer * MeSH
- Publishing ethics MeSH
- Natural Language Processing MeSH
- Check Tag
- Humans MeSH
- Publication Type
- Journal Article MeSH
- Review MeSH
This letter explores the capability of AI, specifically OpenAI's ChatGPT, in interpreting human behavior and its potential implications for mental health care. Data were collected from the Reddit forum "AmItheAsshole" (AITA) to assess the congruence between AI's verdict and the collective human opinion on this platform. AITA, with its vast range of interpersonal situations, provides rich insights into human behavioral evaluation and perception. Two key research questions were addressed: the degree of alignment between ChatGPT's judgment and collective verdicts of Redditors, and the consistency of ChatGPT in evaluating the same AITA post repeatedly. The results exhibited a promising level of agreement between ChatGPT and human verdicts. It also demonstrated high consistency across repeated evaluations of the same posts. These findings hint at the significant potential of AI in mental health care provision, underscoring the importance of continued research and development in this field.
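A minimal sketch of the letter's two measurements: agreement between the model's verdict and the Reddit majority, and consistency of repeated verdicts on the same post. The model name, the example post, and the use of the forum's standard verdict labels are assumptions:

```python
# Sketch: repeated AITA verdicts from the model, majority vote, and a
# simple consistency score (share of repeats agreeing with the majority).
from collections import Counter
from openai import OpenAI

client = OpenAI()
VERDICTS = ["YTA", "NTA", "ESH", "NAH"]  # the forum's customary labels

def model_verdict(post: str) -> str:
    r = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model
        messages=[
            {"role": "system",
             "content": "Judge this AITA post. Reply with exactly one of: "
                        + ", ".join(VERDICTS)},
            {"role": "user", "content": post},
        ],
    )
    return r.choices[0].message.content.strip()

post = "I told my roommate to stop borrowing my laptop without asking..."
repeats = [model_verdict(post) for _ in range(5)]
majority, count = Counter(repeats).most_common(1)[0]
print("verdicts:", repeats, "| consistency:", count / len(repeats))
# Agreement would then compare `majority` with the Redditors' top-voted verdict.
```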
OpenAI's Chat Generative Pre-trained Transformer (ChatGPT) technology enables conversational interactions with applications across various fields, including sport. Here, ChatGPT's proficiency in designing a 12-week resistance training programme from specific prompts was investigated. The GPT3.5 and GPT4.0 versions were asked to design 12-week resistance training programmes for hypothetical male and female subjects (20 years old, no injury, 'intermediate' resistance training experience). Subsequently, GPT4.0 was asked to design an 'advanced' training programme for the same profiles. The proposed training programmes were compared with established guidelines and literature (e.g., the National Strength and Conditioning Association textbook) and discussed. ChatGPT suggested 12-week training programmes comprising three 4-week phases, each with a different objective (e.g., hypertrophy/strength). GPT3.5 proposed a weekly frequency of ~3 sessions, a load intensity of 70-85% of one repetition maximum, a repetition range of 4-8 (2-4 sets), rest intervals of 90-120 s, and an exercise tempo of 2/0/2 (eccentric/pause/concentric; a fourth digit, where present, denotes the pause after the concentric phase). GPT4.0 proposed intermediate and advanced programmes with a frequency of 5 or 4 sessions, 60-90% or 70-95% intensity, 3-5 or 3-6 sets, and 5-12 or 3-12 repetitions, respectively, with rest intervals of 60-180 s (intermediate) or 60-300 s (advanced) and exercise tempos of 2/1/2 for the intermediate programme and 3/0/1/0, 2/0/1/0, and 1/0/1/0 for the advanced programmes. All derived programmes were objectively similar regardless of sex. ChatGPT generated training programmes that would likely require additional fine-tuning before application. GPT4.0 synthesised more information than GPT3.5 in response to the prompt and demonstrated awareness of training experience (intermediate vs advanced). ChatGPT may serve as a complementary tool for writing 'draft' programmes, but it likely requires human expertise to maximise training programme effectiveness.
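A minimal sketch of the prompting procedure this abstract describes, assuming the openai Python SDK. The exact prompt wording is an assumption; the study's comparison against NSCA guidelines was done by the authors, not in code:

```python
# Sketch: requesting 12-week resistance training programmes from two model
# versions for a fixed hypothetical subject profile, for side-by-side review.
from openai import OpenAI

client = OpenAI()

profile = ("20-year-old male, no injuries, intermediate resistance "
           "training experience")
prompt = (f"Design a 12-week resistance training programme for a {profile}. "
          "Specify weekly frequency, exercises, sets, repetitions, load "
          "(%1RM), tempo, and rest intervals for each phase.")

for model in ["gpt-3.5-turbo", "gpt-4"]:  # stand-ins for GPT3.5 and GPT4.0
    r = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(r.choices[0].message.content[:500])  # inspect the opening of each plan
```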
- Keywords
- Chatbot, Exercise prescription, Individualised training, Periodisation, Programming, Strength training
- Publication Type
- Journal Article MeSH