- Review
- Open access
Promises and perils of generative artificial intelligence: a narrative review informing its ethical and practical applications in clinical exercise physiology
BMC Sports Science, Medicine and Rehabilitation volume 17, Article number: 131 (2025)
Abstract
Generative Artificial Intelligence (GenAI) is transforming various sectors, including healthcare, offering both promising opportunities and notable risks. The infancy and rapid development of GenAI raise questions regarding its effective, safe, and ethical use by health professionals, including clinical exercise physiologists. This narrative review aims to explore existing interdisciplinary literature and summarise the ethical and practical considerations of integrating GenAI into clinical exercise physiology practice. Specifically, it examines the ‘promises’ of improved exercise programming and healthcare delivery, as well as the ‘perils’ related to data privacy, person-centred care, and equitable access. Recommendations for the responsible integration of GenAI in clinical exercise physiology are described, in addition to recommendations for future research to address gaps in knowledge. Future directions, including the roles and responsibilities of specific stakeholder groups, are discussed, highlighting the need for clear professional guidelines to facilitate the safe and ethical deployment of GenAI into clinical exercise physiology practice. Synthesis of the current literature serves as an essential step in guiding strategies to ensure the safe, ethical, and effective integration of GenAI in clinical exercise physiology, providing a foundation for future guidelines, training, and research to enhance service delivery while maintaining high standards of practice.
Introduction
In today’s rapidly evolving technological landscape, artificial intelligence (AI), particularly Generative AI (GenAI), is profoundly impacting various industries. Excitingly, but also concerningly, the speed at which GenAI is evolving exceeds what many industries can effectively manage. This leads to uncertainties surrounding its utility, including aspects such as implementation, accuracy, appropriateness, risk control, and suitability. The use of GenAI in healthcare presents both significant opportunities and unique challenges, with the potential to either revolutionise clinical practice or jeopardise therapeutic relationships and the quality of care.
Clinical exercise physiology is one healthcare discipline yet to fully exploit the potential of GenAI. Anecdotally, clinical exercise physiologists are adopting GenAI tools in aspects of practice management, program design, and research. Although AI and GenAI assessments of virtual rehabilitation protocols have been trialled with promising outcomes [1], they remain far from mainstream practice. One possible reason is the notable infancy of research on AI and GenAI in clinical exercise physiology practice, which means there is currently a lack of robust evidence to support the development of clinical guidelines or position statements on how to ethically, practically, and safely implement GenAI tools. This uncertainty leaves the current workforce questioning how, or if, they should utilise these technologies in practice.
To help navigate these uncertainties and the nascent evidence base, this narrative review of the literature addresses two key questions:
a) Does the rapid integration of GenAI technologies pose an unacceptable and unmanageable risk to clinical practice, or should it be embraced as a means to drive innovation and enhance client outcomes?
b) How should clinical exercise physiologists navigate the swift pace of technological advancement if and when integrating GenAI into their practices?
A narrative review is an appropriate approach to answer these questions, since narrative reviews are a form of non-systematic literature synthesis that provides a comprehensive overview of a particular topic by integrating findings from various studies [2]. Unlike systematic reviews, which follow strict protocols for selecting and analysing studies, narrative reviews offer greater flexibility in their approach, allowing authors to explore broader perspectives and the evolving context of a subject [2, 3]. Narrative reviews also inspire future research directions by identifying gaps or inconsistencies in a body of knowledge [4].
Further, a narrative review approach enables the effective capture of insights from diverse disciplines, fostering a more holistic understanding to address our research questions [5]. In this review, studies were selected based on broad manuscript types and methodologies, including clinical trials, observational studies, expert opinions, and reviews, provided they addressed these research questions. Given the interdisciplinary nature of the topic, the review focused on literature at the intersection of clinical exercise physiology, healthcare delivery, and GenAI, while drawing from related fields such as technology, psychology, and exercise science. Relevant articles were identified through a structured but flexible search strategy using common academic databases, including PubMed, Embase, Scopus, and Google Scholar. Search terms included combinations of keywords such as “generative artificial intelligence”, “GenAI”, “clinical exercise physiology”, “AI in healthcare”, “therapeutic communication”, and “AI ethics”. Reference lists of relevant articles were also reviewed to capture additional sources. The literature was synthesised thematically, focusing on three main areas: (1) the promises of GenAI in clinical exercise physiology, (2) the potential pitfalls and ethical considerations, and (3) practical implications for integration into clinical practice.
This review prioritised studies that explored how GenAI could improve exercise programming, enhance healthcare delivery, or integrate AI tools into clinical practice, ensuring the findings were applicable to the evolving role of GenAI in clinical exercise physiology. Studies published within the last five years were emphasised to reflect the rapidly evolving nature of AI and its increasing relevance to healthcare and exercise interventions. To enhance the rigour of this narrative review, the lead authors (OL, RS, DH, NHH) undertook three rounds of collaborative review and discussion, drawing on their clinical and academic expertise to interpret the literature and ensure alignment with the realities of clinical exercise physiology practice.
Hereafter, this narrative review focuses specifically on clinical exercise physiology, highlighting the unique applications of GenAI within this profession’s scope of practice. However, the authors acknowledge that many of the points raised extend to other healthcare disciplines, offering potentially valuable insights applicable to nursing, medicine, public health, and other allied health professions such as psychology and occupational therapy.
Artificial intelligence and generative AI in healthcare
Conceptually, AI traces its origins to 1950, when the question was posed, “Can machines think?” [6], before being formally established as an academic discipline in the mid-1950s. Rapid advancements in modern technology have transitioned AI from a theoretical concept to an applied technology integrated into numerous aspects of daily life. Today, AI is utilised across many industries and fields, from construction [7] and agriculture [8] to advertising [9] and engineering [10], as well as higher education [11].
Artificial intelligence has been broadly defined as technology that enables computers and machines to simulate human intelligence and problem-solving capabilities [12]; however, ambiguity and disagreement still exist regarding the exact meaning and definition of AI between fields [13]. The functionality of AI is constantly increasing with its ability to process vast amounts of data at unprecedented speeds, recognise patterns, make decisions, and even predict outcomes, which enhances efficiency, productivity and innovation [14]. In some industries, AI is used for tasks such as automating routine processes, providing insights through data analysis, improving customer interactions through chatbots, and aiding in complex problem-solving scenarios [15]. In healthcare, AI has extended towards supporting specialised tasks such as diagnosis and treatment, image analysis (e.g., skin cancer detection), device automation, and patient monitoring, which seek to increase the efficiency and accuracy of healthcare services [16,17,18]. For example, recent research has demonstrated that AI-enabled robots have the capacity to assist older adults in aged care facilities, facilitating greater independence and reducing the burden on caregivers [19]. Additionally, AI is particularly beneficial in managing unstructured data, making it valuable for administrative and data processing-related tasks [20].
An emerging subset of AI rising in prominence is Generative AI (GenAI), which autonomously generates new content such as text, images, audio, and video [21]. This technology leverages complex algorithms and models, along with extensive databases, to produce outputs. By processing input data through advanced machine learning techniques, GenAI creates innovative content and solutions that extend well beyond traditional computational capabilities. GenAI encompasses both open-source systems, which are freely available, and closed or private systems, which have restricted access. GenAI has gained significant attention with the advancement of large language models (LLMs), engineered to understand and generate human-like text. Popular GenAI LLM platforms now demonstrate advanced capabilities in natural language processing across a wide range of tasks, such as language translation, summarisation, and engaging in coherent, contextually aware conversations [22].
Emerging research highlights several impactful applications of GenAI within the healthcare landscape, including improving administrative workflows, enhancing clinical documentation, and facilitating patient outreach [23]. Recent efforts in applying GenAI to other aspects of healthcare practice range from providing clinical administration support to offering professional education resources for providers and patients [24]. Popular GenAI applications, including speech recognition technology and other AI-powered solutions, help automate clinical documentation by providing real-time transcription and coding capabilities, saving clinicians’ time, improving accuracy, and enhancing clinical workflows [23]. With such applications focusing largely on enhancing administrative functions, as opposed to direct patient care, other studies have started exploring how these advanced language models can adopt best-practice therapeutic communication approaches, employing skills such as person-centred care and expressing empathy to enhance clinical interactions [25]. For example, findings suggest that popular GenAI tools are perceived as supportive and empathetic in contexts like mental healthcare [26]. This integration holds considerable promise, with potential for augmenting client care, facilitating medical research, and alleviating the workload on healthcare professionals. Notably, recent studies have also begun examining the role of GenAI in supporting healthcare delivery within low- and middle-income countries, where such tools have the potential to bridge workforce shortages, enhance triage systems, and improve access to timely health information for underserved populations [27].
Despite GenAI’s potential in healthcare, it poses risks that require careful consideration. Biases and inaccuracies in training data, often derived from mainstream populations, can result in healthcare models that fail to account for the specific genetic, cultural, or socioeconomic factors that influence the health of marginalised communities. This can significantly impact healthcare provision, raising broader ethical concerns, particularly around fairness and equity, as nuances in individual presentations are often overlooked, leading, for example, to misdiagnoses and inappropriate care plans [28]. This underscores the principle of “poor data in, poor decisions out” and the need to avoid the ‘black-box’ phenomenon, where algorithmic decisions are accepted without understanding the underlying process [29]. Studies have demonstrated this in practice: for instance, GenAI tools have been shown to underrepresent darker skin tones in medical imagery [30], leading to potential diagnostic inaccuracies; demonstrate gendered variation in risk estimation for coronary artery disease [31]; and systematically underperform in predicting outcomes for racially and ethnically marginalised populations [32]. These findings highlight the consequences of biased training data and reinforce the need for transparency and inclusive model development. Other concerns related to GenAI in healthcare include data privacy, particularly given the sensitive nature of personal health information. Unauthorised access and misuse of such information are likely to compromise patient confidentiality and erode trust in AI-driven healthcare solutions [33]. Another identified risk is the possible over-reliance on such tools without adequate human oversight, which may diminish the accountability and transparency of clinical decisions and make identifying and correcting errors challenging [34].
Ethical challenges also arise from the opaque nature of AI algorithms, where the clinical reasoning for specific outputs may not be easily interpretable by clinicians or clients, further complicating their integration into clinical practice [35]. Owing to these potentially serious consequences, the utility of GenAI in healthcare requires careful consideration and risk mitigation through regulation, informed use, and informed consent. Addressing these factors may avoid misguided clinical decisions, compromised patient safety and privacy, and the exacerbation of existing healthcare disparities.
Understanding the perceptions of healthcare recipients (i.e., clients or service-users) and health service providers regarding the use of GenAI in healthcare is crucial, as these perspectives offer valuable insights into both the potential benefits and the barriers to adoption. For example, a recent survey of 1000 US adults revealed concerns about the source and reliability of GenAI’s healthcare information, yet 71% of respondents believed that GenAI will be integrated into healthcare provider/patient interactions within the next ten years [36]. Other research, involving qualitative perspectives, found that while many patients appreciated the convenience and accessibility of AI-led chatbot services, concerns remained about impersonality, data privacy, and the inability of such tools to replicate human empathy [37]. From the provider perspective, two recent cross-sectional studies [38, 39] found allied health professionals recognised the potential for GenAI to support administrative tasks but expressed hesitations about its integration into direct patient care, particularly regarding trust, accountability, and the preservation of person-centred practice. These findings suggest that while there is growing openness to GenAI integration, both users and professionals are cautious about its limitations in clinical decision-making, communication, and ethical use. As such, the healthcare sector must be well-equipped and proactive, as well as timely and accurately informed about emerging advances in GenAI and how they may be ethically and practically integrated into care. Accordingly, health professionals must understand the implications of GenAI in their specific disciplines. Clinical exercise physiology is one field where the integration of GenAI holds promise, though it should be approached with caution.
The following sections provide a review of the relevant literature and highlight potential applications and challenges of integrating GenAI into clinical exercise physiology practice.
Clinical exercise physiology
Clinical exercise physiologists are allied health professionals who specialise in prescribing exercise and physical activity to help prevent and treat a wide range of health conditions and injuries. With an advanced understanding of anatomy, biomechanics, and physiology as well as skills in health behaviour change, clinical exercise physiologists support individuals who have, or are at risk of, acute, sub-acute, or chronic conditions, injuries, and disabilities through evidence-based physical activity interventions. These conditions typically fall under the core domains of cardiovascular, metabolic, neurological, musculoskeletal, cancer, renal, respiratory, and mental health, as well as other emerging and niche areas of practice where exercise has demonstrated benefits, such as gender-specific conditions, autoimmune conditions, and polycystic ovary syndrome [40, 41]. Accordingly, clinical exercise physiologists may operate across a wide variety of clinical settings, ranging from public and private hospitals (acute and sub-acute) to primary care, private practice, not-for-profit organisations, aged-care centres, and community exercise facilities [42]. Similar to clinical psychologists, social workers, physiotherapists, and occupational therapists, clinical exercise physiologists are university-qualified allied health professionals accredited by Exercise and Sports Science Australia (ESSA) in Australia, the American College of Sports Medicine (ACSM) in the United States of America, the Canadian Society for Exercise Physiology (CSEP) in Canada, the British Association of Sport and Exercise Sciences (BASES) in the United Kingdom, and Sport and Exercise Science New Zealand (SESNZ) in New Zealand. Global standards have now been developed for clinical exercise physiology by the International Confederation of Sport and Exercise Science Practice (ICSESP; https://icsesp.global/).
In other countries where clinical exercise physiology has yet to gain recognition, the scope of practice overseeing the provision of exercise and physical activity in clinical spaces may be occupied by other allied health professions, such as ‘kinesiologists’, ‘physical therapists’ or ‘psychomotor therapists’. For the purposes of this narrative review, we will refer to the profession as clinical exercise physiologists.
Applications of GenAI in clinical exercise physiology practice
Evidence examining the application of GenAI in clinical exercise physiology practice is lacking; however, emerging research has highlighted the use of GenAI in exercise prescription for both clinical and non-clinical populations. For instance, a recent systematic review [43] identified promising applications of GenAI tools for personalising training programs, particularly in sports and exercise contexts, yet emphasised the need for expert oversight due to variability in prescription accuracy and safety considerations. In other studies, Zaleski and colleagues [44] evaluated a GenAI chatbot’s ability to generate exercise recommendations for all clinical and non-clinical populations reported in the American College of Sports Medicine (ACSM) Guidelines for Exercise Testing and Prescription (a gold-standard resource worldwide) [45]. The chatbot in their study achieved 41.2% comprehensiveness (i.e., the chatbot’s ability to report all exercise prescription variables included for each population group in the ACSM guidelines) and 90.7% accuracy (i.e., how well the variables reported aligned with ACSM recommendations), with most inaccuracies related to exercise pre-participation medical clearance and screening. Despite these positive outcomes, recommendations were predominantly written at a tertiary reading level, highlighting a key limitation in accessibility and a potential bias against some populations. With more input and learning, the comprehensiveness and accuracy of these tools may improve, allowing for updates, further training, or modifications to improve the application (i.e., relevance of output) and utility (i.e., accessibility) for priority populations. Importantly, existing guidelines, such as the ACSM guidelines used in this study, provide a framework that exercise professionals then use to inform their clinical judgment and personalise interventions for individual circumstances and health considerations; a function which AI and GenAI cannot yet replicate.
Therefore, AI-generated exercise prescriptions should complement, not replace, the essential supervision and oversight required for the safe and effective implementation of exercise interventions.
In another study, Dergaa et al. [46] critically assessed a popular GenAI system, ChatGPT-4, for its exercise prescriptions across various health conditions and fitness goals. They found that the AI-generated programs generally prioritised safety over effectiveness, lacked precision in addressing individual health conditions (e.g., asthma, type 2 diabetes, or anxiety/stress), and did not sufficiently adapt recommendations through ongoing interactions with the client. These findings are at odds with those of Zaleski and colleagues and may be due to differences in the versions of ChatGPT used, the nature and detail of the prompts or instructions provided, or the specific focus of the studies. Dergaa’s (2024) study involved case studies with individual goals, health outcomes, and medications [46], whereas Zaleski’s (2024) study aligned recommendations to general conditions based on the ACSM guidelines [44]. Despite these differences, both studies concluded that GenAI technologies may serve as supplemental tools to enhance accessibility to exercise guidance but are not recommended to replace personalised, progressive prescriptions by exercise professionals.
In non-clinical populations, a study by Masagca (2024) compared the effectiveness of AI-generated versus human-generated calisthenic exercise programs on physical fitness outcomes in 87 untrained college students [47]. While the AI-designed programs improved certain fitness components (i.e., flexibility and muscular endurance), the human-generated programs produced more comprehensive and effective results overall, including improvements in cardiovascular fitness and greater gains in flexibility and muscular endurance. These findings align with Dergaa and colleagues (2024), reinforcing concerns about the precision of AI-generated programs in addressing individual health or gender-specific needs [46]. Together, these findings highlight the critical role of qualified and experienced exercise professionals in offering ongoing engagement, observation, communication, demonstration, and adaptation of exercise programming for clinical and non-clinical populations, which, at present, cannot be replicated by GenAI. Given the potential for clinical exercise physiologists to engage and partner with GenAI, and considering the current absence of established guidelines, we aim to explore the potential applications of GenAI in clinical exercise physiology practice. This involves examining how its strengths can be harnessed to enhance clinical exercise physiology services, while addressing the foreseeable challenges and limitations. By analysing the promises and perils of GenAI, this review aims to provide key practical recommendations and guide industry stakeholders - including practising clinical exercise physiologists, organisations, governing bodies, and researchers - in the safe, ethical, and effective use of GenAI tools, ultimately informing the development of robust implementation strategies within this emerging field.
Prospects and considerations of GenAI in clinical exercise physiology practice
In this section, we draw on existing evidence and expert opinion to highlight key prospects and considerations surrounding the utility of GenAI in clinical exercise physiology practice, focusing on promises (i.e., advantages and opportunities) and perils (i.e., risks and challenges).
Promises in clinical exercise physiology practice
Integrating GenAI technology presents many promises for clinical exercise physiologists, spanning from practice management and administrative applications to enhancing the accessibility and effectiveness of client-facing interactions. This section uses currently available evidence to highlight ways this technology may be harnessed to improve the therapeutic experience of clinical exercise physiologists and clients. Please note that the following promises, summarised in Table 1, represent possibilities rather than certainties; they are based on limited evidence and take into account ongoing advancements in GenAI technologies.
Importantly, these promises are contingent on several factors, including the type of GenAI software used, the quality and accuracy of the data provided, and the professional’s knowledge to effectively prompt GenAI tools to generate the intended outcomes. This highlights the need to consider the challenges and limitations associated with integrating GenAI into clinical exercise physiology practice.
Challenges and limitations of generative AI in clinical exercise physiology
Despite the numerous ways in which GenAI could enhance clinical exercise physiology practice and improve health outcomes, several immediate challenges also exist. These challenges, summarised in Table 2, and the accompanying ethical considerations, outlined in Table 3, are informed by evidence from the literature, with specific references cited alongside each item where available, and broader syntheses attributed to supporting studies [35, 43, 57,58,59,60].
Given the evolving nature of GenAI technology, it must be acknowledged that challenges and limitations are likely to be actively addressed by developers and users, who aim to minimise these issues and improve the applicability of GenAI tools in clinical exercise physiology practice.
Ethical considerations
Considering the above limitations, there remain many questions and concerns surrounding specific ethical considerations of GenAI. Potential ethical considerations surrounding GenAI in clinical exercise physiology practice are presented in Table 3, drawing from key existing literature previously described.
Despite the significant challenges and ethical considerations surrounding the use of GenAI in healthcare, many clinical exercise physiologists may still choose to incorporate it into their practice, even in the absence of clear guidelines. This reflects the perceived benefits of its adoption, despite its limitations. Therefore, clinical exercise physiologists should cautiously embrace this technological shift, adhering to emerging regulations as they develop, while recognising the difficulties national and global regulatory bodies face in keeping pace with its rapid evolution. At present, no AI-specific guidelines exist for clinical exercise physiology, unlike other allied health fields such as radiology [61] and physiotherapy [62], which have begun developing frameworks and recommendations for safe and ethical AI use. This raises important considerations for the clinical implications and future directions of responsibly integrating GenAI into clinical exercise physiology practice. Importantly, little is known about how clinical exercise physiologists themselves perceive the use of GenAI in practice, with no studies, to date, examining their attitudes, experiences, or decision-making processes when engaging with GenAI tools. Understanding these perspectives represents a critical gap and should be prioritised in future research, particularly through qualitative or mixed methods approaches that capture both clinician and client viewpoints.
Discussion
Generative AI is a relatively new development in healthcare, and its application within the discipline of clinical exercise physiology is still in its infancy, with limited credible research or anecdotal evidence supporting its successful implementation to date. While GenAI functionalities hold promise, they are currently most effective as supplementary tools, given ongoing concerns regarding their suitability, comprehensiveness, and accuracy. These concerns highlight the need for cautious integration of GenAI into clinical exercise physiology practices. Below, we discuss the clinical implications of GenAI use within clinical exercise physiology practice, including key practical recommendations, future directions and calls for research.
Clinical implications
Artificial intelligence technologies, including GenAI, offer significant potential to complement clinical exercise physiology practice. While GenAI can enhance practice by offering tools for risk stratification, personalised exercise programs, and improved accessibility, it is crucial that clinical exercise physiologists maintain their role as the primary decision-makers when providing direct care. Clinical exercise physiologists must ensure AI-driven recommendations are carefully reviewed and aligned with evidence-based practices before implementation. The responsible integration of GenAI into clinical practice requires ongoing education, the development of guidelines, and adherence to ethical standards to safeguard the quality of care and maintain the trust of clients. As such, the future of clinical exercise physiology practice will likely involve a careful balance between leveraging the strengths of GenAI and preserving the essential human elements of personalised care.
Future directions
Key future directions for the integration of GenAI into clinical exercise physiology practice are summarised in Table 4, highlighting the roles and responsibilities of specific stakeholder groups who may contribute to these initiatives.
Table 4. Future directions for the integration of GenAI into clinical exercise physiology practice, highlighting the roles and responsibilities of specific stakeholder groups who may contribute to these initiatives. (a) Australian Medical Association (AMA) (2023). Artificial Intelligence in Healthcare - Position Statement. Retrieved 19 August 2024 from https://www.ama.com.au/sites/default/files/2023-08/Artificial%20Intelligence%20in%20Healthcare%20-%20AMA.pdf. (b) Royal Australian College of General Practitioners (2024). Artificial Intelligence in Primary Care - Position Statement. Retrieved 12 August from https://www.racgp.org.au/advocacy/position-statements/view-all-position-statements/clinical-and-practice-management/artificial-intelligence-in-primary-care. (c) Tertiary Education Quality and Standards Agency (2024). Artificial Intelligence. Retrieved 12 August from https://www.teqsa.gov.au/guides-resources/higher-education-good-practice-hub/artificial-intelligence.
Calls for research
Further research is crucial to equip both practitioners and clients with the skills and knowledge required to harness the transformative capabilities of GenAI, thereby enhancing their healthcare interactions. Research into the application of GenAI in clinical exercise physiology should prioritise several key areas listed below.
1. From our search, we found a lack of research exploring the perspectives of clinical exercise physiologists and their clients on the use of GenAI in practice, highlighting the need for mixed methods studies to capture its real-world implications.
2. There is also a need to pilot GenAI tools in clinical settings to assess their integration, ensuring safe, ethical, and person-centred implementation. This may include randomised controlled trials (RCTs) comparing AI-generated versus human-supervised exercise prescriptions to evaluate outcomes such as safety, effectiveness, patient satisfaction, and clinician workload.
3. Studies should investigate the feasibility and acceptability of GenAI-driven chatbots for delivering consultations, health education, and motivational interviewing [63]. This includes evaluating the service-user (client) experience and co-designing GenAI platforms with service-users, ensuring that these tools are not only informed by best practices but are also practical and beneficial in real-world applications. Further collaborative transdisciplinary research between AI researchers, clinical exercise physiologists, and other healthcare professionals is also necessary to enhance the relevance and applicability of GenAI solutions across various contexts. This will help tailor GenAI tools to the specific needs of diverse populations and care contexts, including culturally diverse communities, people living with chronic conditions, and those experiencing mental illness, including exposure to trauma.
-
4.
It is essential to validate the effectiveness of GenAI-generated exercise physiology-related recommendations, information and considerations, particularly for individuals with complex presentations that mirror real-world scenarios, such as those involving multimorbidity, polypharmacy, and unique psychosocial needs, personal preferences, goals, and resources. This research should compare GenAI outputs against existing evidence-based guidelines to ensure accuracy and clinical relevance [63].
-
5.
Further research into the generation and application of synthetic data is crucial, given its potential to enhance GenAI’s capacity for personalised healthcare solutions while safeguarding patient privacy and ensuring robust, ethically sound practices [56].
-
6.
Integrating diverse client-based datasets, including genetic, cultural, social, lifestyle, and environmental factors, into GenAI systems is imperative. This integration will support the development of more personalised and effective interventions that genuinely address individual client needs, thus advancing the field of clinical exercise physiology.
Conclusion
The integration of GenAI into clinical exercise physiology practice offers substantial promise but also presents significant challenges and ethical considerations. While GenAI has the potential to enhance the efficiency, accessibility, and scalability of clinical exercise physiology services, the current technology lacks the depth required for fully personalised, person-centred care. The absence of robust guidelines and standardisation exacerbates these challenges, placing additional responsibility on clinical exercise physiologists to navigate the evolving landscape cautiously. By embracing GenAI as a supplementary tool rather than a replacement for clinical judgment, clinical exercise physiologists could harness its benefits while safeguarding the therapeutic alliance and ensuring high-quality care; however, further research into its feasibility, acceptability, reliability, and safety is needed before it is routinely adopted in clinical practice. Moving forward, it is essential to establish clear guidelines, provide continuous professional development, and promote interdisciplinary research to address existing knowledge gaps. This will ensure that GenAI is integrated into clinical exercise physiology practice responsibly and ethically, contributing to enhanced client outcomes while preserving the core values of personalised care.
Data availability
No datasets were generated or analysed during the current study.
References
Karlov M, Abedi A, Khan SS. Rehabilitation exercise quality assessment through supervised contrastive learning with hard and soft negatives. Med Biol Eng Comput. 2024.
Sukhera J. Narrative reviews: flexible, rigorous, and practical. J Grad Med Educ. 2022;14(4):414–7.
Rother ET. Systematic literature review X narrative review. Acta Paulista De Enfermagem. 2007;20(2):v–vi.
Paré G, Kitsiou S. Methods for literature reviews. Handbook of eHealth evaluation: an evidence-based approach [Internet]. Victoria (BC): University of Victoria; 2017.
Wiles R, Crow G, Pain H. Innovation in qualitative research methods: a narrative review. Qualitative Res. 2011;11(5):587–604.
Wilkes MV. Can machines think? Proc IRE. 1953;41:1230–4.
Ghimire P, Kim K, Acharya M. Opportunities and challenges of generative AI in construction industry: focusing on adoption of Text-Based models. Buildings. 2024;14(1):220.
Javaid M, Haleem A, Khan IH, Suman R. Understanding the potential applications of artificial intelligence in agriculture sector. Adv Agrochem. 2023;2(1):15–30.
Ford J, Jain V, Wadhwani K, Gupta DG. AI advertising: an overview and guidelines. J Bus Res. 2023;166:114124.
Nti IK, Adekoya AF, Weyori BA, Nyarko-Boateng O. Applications of artificial intelligence in engineering and manufacturing: a systematic review. J Intell Manuf. 2022;33(6):1581–601.
Bhullar PS, Joshi M, Chugh R. ChatGPT in higher education - a synthesis of the literature and a future research agenda. Educ Inf Technol. 2024;29(16):21501–22.
IBM. Artificial intelligence (AI). 2024. Available from: https://www.ibm.com/topics/artificial-intelligence
van Rooij I, Guest O, Adolfi FG, de Haan R, Kolokolova A, Rich P. Reclaiming AI as a theoretical tool for cognitive science. 2023.
Dwivedi YK, Hughes L, Ismagilova E, Aarts G, Coombs C, Crick T et al. Artificial Intelligence (AI): Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy. 2021.
Russell SJ, Norvig P. Artificial intelligence: a modern approach. Pearson; 2016.
Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–8.
Marchetti MA, Nazir ZH, Nanda JK, Dusza SW, D’Alessandro BM, DeFazio J, et al. 3D Whole-body skin imaging for automated melanoma detection. J Eur Acad Dermatol Venereol. 2023;37(5):945–50.
Kumar Y, Koul A, Singla R, Ijaz MF. Artificial intelligence in disease diagnosis: a systematic literature review, synthesizing framework and future research agenda. J Ambient Intell Humaniz Comput. 2023;14(7):8459–86.
Sawik B, Tobis S, Baum E, Suwalska A, Kropińska S, Stachnik K, et al. Robots for elderly care: review, multi-criteria optimization model and qualitative case study. Healthcare (Basel). 2023;11(9).
Baviskar D, Ahirrao S, Potdar V, Kotecha K. Efficient automated processing of the unstructured documents using artificial intelligence: A systematic literature review and future directions. IEEE Access. 2021;9:72894–936.
Chavan J, Mankar C, Patil V. Opportunities in research for generative artificial intelligence (GenAI), challenges and future direction: A study. Int Res J Eng Technol. 2024;11(02):446–51.
OpenAI. ChatGPT. 2024.
Zhang P, Kamel Boulos MN. Generative AI in medicine and healthcare: promises, opportunities and challenges. Future Internet. 2023;15(9):286.
Yu P, Xu H, Hu X, Deng C. Leveraging generative AI and large language models: a comprehensive roadmap for healthcare integration. Healthcare. 2023;11(20):2776.
Ayers JW, Poliak A, Dredze M, Leas EC, Zhu Z, Kelley JB, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. 2023;183(6):589–96.
Maurya RK, Montesinos S, Bogomaz M, DeDiego AC. Assessing the use of ChatGPT as a psychoeducational tool for mental health practice. Counselling Psychother Res. 2025;25(1):e12759.
Wang X, Sanders HM, Liu Y, Seang K, Tran BX, Atanasov AG, et al. ChatGPT: promise and challenges for deployment in low- and middle-income countries. Lancet Reg Health West Pac. 2023;41:100905.
Abràmoff MD, Tarver ME, Loyo-Berrios N, Trujillo S, Char D, Obermeyer Z, et al. Considerations for addressing bias in artificial intelligence for health equity. NPJ Digit Med. 2023;6(1):170.
Schlagwein D, Willcocks L. 'ChatGPT et al.': the ethics of using (generative) artificial intelligence in research and science. J Inform Technol. 2023;38(3):232–8.
O’Malley A, Veenhuizen M, Ahmed A. Ensuring appropriate representation in artificial intelligence-generated medical imagery: protocol for a methodological approach to address skin tone bias. JMIR AI. 2024;3:e58275.
Achtari M, Salihu A, Muller O, Abbé E, Clair C, Schwarz J, Fournier S. Gender Bias in AI’s perception of cardiovascular risk. J Med Internet Res. 2024;26:e54242.
Mihan A, Pandey A, Van Spall HGC. Artificial intelligence bias in the prediction and detection of cardiovascular disease. Npj Cardiovasc Health. 2024;1(1):31.
Chen Y, Esmaeilzadeh P. Generative AI in medical practice: In-Depth exploration of privacy and security challenges. J Med Internet Res. 2024;26:e53008.
Ahmad K, Maabreh M, Ghaly M, Khan K, Qadir J, Al-Fuqaha A. Developing future human-centered smart cities: critical analysis of smart City security, data management, and ethical challenges. Comput Sci Rev. 2022;43:100452.
Gerke S, Minssen T, Cohen G. Chapter 12 - Ethical and legal challenges of artificial intelligence-driven healthcare. In: Bohr A, Memarzadeh K, editors. Artificial intelligence in healthcare. Academic; 2020. pp. 295–336.
Wolters Kluwer. The future of generative AI in healthcare is driven by consumer trust. 2023 [cited 18 August 2024]. Available from: https://www.wolterskluwer.com/en/expert-insights/the-future-of-generative-ai-in-healthcare-is-driven-by-consumer-trust
Nadarzynski T, Miles O, Cowie A, Ridge D. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. Digit Health. 2019;5:2055207619871808.
Daniyal M, Qureshi M, Marzo RR, Aljuaid M, Shahid D. Exploring clinical specialists’ perspectives on the future role of AI: evaluating replacement perceptions, benefits, and drawbacks. BMC Health Serv Res. 2024;24(1):587.
Hoffman J, Hattingh L, Shinners L, Angus RL, Richards B, Hughes I, Wenke R. Allied health professionals’ perceptions of artificial intelligence in the clinical setting: Cross-Sectional survey. JMIR Form Res. 2024;8:e57204.
Sabag A, Patten RK, Moreno-Asso A, Colombo GE, Dafauce Bouzo X, Moran LJ, et al. Exercise in the management of polycystic ovary syndrome: a position statement from Exercise and Sports Science Australia. J Sci Med Sport. 2024;27(10):668–77.
ESSA. ESSA Accredited Exercise Physiologist (AEP) Scope of Practice. 2021.
Wang YC, Chou MY, Liang CK, Peng LN, Chen LK, Loh CH. Post-Acute care as a key component in a healthcare system for older adults. Ann Geriatr Med Res. 2019;23(2):54–62.
Puce L, Bragazzi NL, Currà A, Trompetto C. Harnessing generative artificial intelligence for exercise and training prescription: applications and implications in sports and physical activity - a systematic literature review. Appl Sci. 2025;15(7):3497.
Zaleski AL, Berkowsky R, Craig KJT, Pescatello LS. Comprehensiveness, accuracy, and readability of exercise recommendations provided by an AI-Based chatbot: mixed methods study. JMIR Med Educ. 2024;10:e51308.
Riebe D, Ehrman JK, Liguori G, Magal M. ACSM’s guidelines for exercise testing and prescription. American College of Sports Medicine; 2018.
Dergaa I, Saad HB, El Omri A, Glenn JM, Clark CCT, Washif JA, et al. Using artificial intelligence for exercise prescription in personalised health promotion: A critical evaluation of OpenAI’s GPT-4 model. Biol Sport. 2024;41(2):221–41.
Masagca RC. The AI coach: A 5-week AI-generated calisthenics training program on health-related physical fitness components of untrained collegiate students. J Hum Sport Exerc. 2024;20(1):39–56.
Chamberlin J, Kocher MR, Waltz J, Snoddy M, Stringer NFC, Stephenson J, et al. Automated detection of lung nodules and coronary artery calcium using artificial intelligence on low-dose CT scans for lung cancer screening: accuracy and prognostic value. BMC Med. 2021;19(1):55.
Aggarwal A, Tam CC, Wu D, Li X, Qiao S. Artificial Intelligence–Based chatbots for promoting health behavioral changes: systematic review. J Med Internet Res. 2023;25:e40789.
Zhang J, Oh YJ, Lange P, Yu Z, Fukuoka Y. Artificial intelligence chatbot behavior change model for designing artificial intelligence chatbots to promote physical activity and a healthy diet: viewpoint. J Med Internet Res. 2020;22(9):e22845.
Eftekhari H. Transcribing in the digital age: qualitative research practice utilizing intelligent speech recognition technology. Eur J Cardiovasc Nurs. 2024;23(5):553–60.
Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23(1):689.
Nova K. Generative AI in healthcare: advancements in electronic health records, facilitating medical languages, and personalized patient care. J Adv Analytics Healthc Manage. 2023;7(1):115–31.
Huang M-H, Rust RT. Artificial intelligence in service. J Service Res. 2018;21(2):155–72.
Wang YC, Xue J, Wei C, Kuo CCJ. An overview on generative AI at scale with Edge–Cloud computing. IEEE Open J Commun Soc. 2023;4:2952–71.
Giuffrè M, Shung DL. Harnessing the power of synthetic data in healthcare: innovation, application, and privacy. Npj Digit Med. 2023;6(1):186.
Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44–56.
Wilson J, Heinsch M, Betts D, Booth D, Kay-Lambkin F. Barriers and facilitators to the use of e-health by older adults: a scoping review. BMC Public Health. 2021;21(1):1556.
Reddy S, Fox J, Purohit MP. Artificial intelligence-enabled healthcare delivery. J R Soc Med. 2019;112(1):22–8.
Umesh C, Mahendra M, Bej S, Wolkenhauer O, Wolfien M. Challenges and applications in generative AI for clinical tabular data in physiology. Pflügers Archiv - Eur J Physiol. 2025;477(4):531–42.
Royal Australian and New Zealand College of Radiologists (RANZCR). Ethical principles for AI in medicine (Version 2.0). 2023. Available from: https://www.ranzcr.com/college/document-library/ethical-principles-for-ai-in-medicine
Chartered Society of Physiotherapy (CSP). Statement of principles to apply to the use of artificial intelligence (AI) in physiotherapy. 2025. Available from: https://www.csp.org.uk/professional-clinical/professional-guidance/statement-principles-apply-use-ai-physiotherapy?check_logged_in=1
Schubert MC, Lasotta M, Sahm F, Wick W, Venkataramani V. Evaluating the multimodal capabilities of generative AI in complex clinical diagnostics. medRxiv. 2023. https://doi.org/10.1101/2023.11.01.23297938.
Acknowledgements
Not applicable.
Funding
No funding was provided for the development of this narrative review.
Author information
Contributions
OL conceived the study idea, which was further developed in structure with NHH, DH, and RS. The original draft with background literature was performed and written by OL and AL. OL was responsible for the visualization of author feedback. All other authors, including JM, RC, AB, GW and JW provided expert review and commentary on the draft document throughout multiple iterations.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
NHH is a Senior Editorial Board Member of BMC Cancer, and DH is an Editorial Board Member of BMC Psychology, both of which are in the BMC Series. All other authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Lederman, O., Llana, A., Murray, J. et al. Promises and perils of generative artificial intelligence: a narrative review informing its ethical and practical applications in clinical exercise physiology. BMC Sports Sci Med Rehabil 17, 131 (2025). https://doi.org/10.1186/s13102-025-01182-7