Industrial applications of large language models

Sci Rep. 2025 Apr 21;15(1):13755. Epub 2025 Apr 21.

Language: English. Country: England, Great Britain. Medium: electronic

Document type: journal article

Persistent link: https://www.medvik.cz/link/pmid40258923
Links

PubMed: 40258923
PubMed Central: PMC12012124
DOI: 10.1038/s41598-025-98483-1
PII: 10.1038/s41598-025-98483-1
Knihovny.cz E-resources

Large language models (LLMs) are artificial intelligence (AI)-based computational models designed to understand and generate human-like text. With billions of training parameters, LLMs excel at identifying intricate language patterns, enabling remarkable performance across a variety of natural language processing (NLP) tasks. Since the introduction of transformer architectures, they have been reshaping industry with their text-generation capabilities. LLMs play an innovative role across various industries by automating NLP tasks. In healthcare, they assist in diagnosing diseases, personalizing treatment plans, and managing patient data. In the automotive industry, LLMs enable predictive maintenance. They also power recommendation systems and consumer-behavior analysis. In education, LLMs facilitate research and offer personalized learning experiences. In finance and banking, LLMs are used for fraud detection, customer-service automation, and risk management. By automating tasks, improving accuracy, and providing deeper insights, LLMs are driving significant advancements across industries. Despite these advancements, LLMs face challenges such as ethical concerns, biases in training data, and significant computational resource requirements, which must be addressed to ensure impartial and sustainable deployment. This study provides a comprehensive analysis of LLMs, their evolution, and their diverse applications across industries, offering researchers valuable insight into their transformative potential and the accompanying limitations.
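
To make the kind of NLP task automation described above concrete, here is a minimal sketch, not taken from the paper, showing how a pre-trained transformer model can be applied to one such task: sentiment classification of customer feedback, as used in consumer-behavior analysis and customer-service automation. It assumes the open-source Hugging Face `transformers` library (with a backend such as PyTorch) is installed and uses its default sentiment-analysis model; the example texts are invented for illustration.

```python
# Illustrative sketch only; assumes the Hugging Face `transformers`
# package and a model backend (e.g., PyTorch) are installed.
from transformers import pipeline

# Load a pre-trained transformer fine-tuned for sentiment analysis
# (the pipeline downloads its default model on first use).
classifier = pipeline("sentiment-analysis")

# Hypothetical customer-feedback snippets.
reviews = [
    "The new dashboard makes fraud alerts much easier to triage.",
    "Support took three days to answer a simple billing question.",
]

# Each result is a dict such as {'label': 'NEGATIVE', 'score': 0.998}.
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8} ({result['score']:.2f}): {review}")
```

The same pipeline pattern extends to other tasks the abstract mentions, such as summarization or question answering, by swapping the task name and model.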
