
The surgical clinical training measurement: developing and evaluating the quality of surgical clinical training among Syrian surgical residents

Abstract

Background

Evaluation tools for training programs vary, necessitating a standardized tool for assessing surgical clinical training quality to enhance program effectiveness, pinpoint improvement areas, and ensure resident readiness for independent practice. We present a new tool designed to provide a reliable and consistent framework for evaluating the effectiveness of surgical clinical training.

Methods

The Surgical Clinical Training Measurement (SCTM) was developed using the modified Delphi method to evaluate ten variables, including core competencies specific to surgical training. It employs a 5-point Likert scale, with scores ranging from 40 to 200. General surgery residents completed the SCTM twice to evaluate training levels, and results were categorized based on score ranges. Statistical analysis in SPSS included descriptive statistics, group comparisons, internal consistency assessments, correlations, and reliability tests to evaluate the SCTM scores, demographic characteristics, and language versions. ANOVA, Chi-Square, Cohen's Kappa, and Spearman's rho tests were employed for data analysis.

Results

Seventy-four general surgery residents at Aleppo University Hospital participated in this study. The SCTM scores indicated a mean total score of 131.42, with most residents falling into the good training category. Analysis showed no significant differences in total scores across specialty years, but post-hoc tests revealed differences between specific years. The SCTM demonstrated strong reliability, with a Kappa value of 0.884 indicating high agreement between the English and Arabic versions (p < 0.05). Test-retest reliability was also high (r = 0.964, p < 0.01). Internal consistency was excellent across various domains, reinforcing its validity in surgical education. The analysis of variables showed different levels of reliability and mean scores among the various factors. The Pre-Operative Clinical variable had the highest performance, while the Evidence-Based Quality Clinical Training variable indicated the most potential for improvement. The strong positive correlations between various domains of the SCTM emphasize the interconnected nature of skill development, with proficiency in patient care closely linked to competency in other areas such as Medical Knowledge, Practice-Based Learning and Improvement, and Evidence-Based Quality Clinical Training.

Conclusion

The SCTM offers a standardized and cohesive method for evaluating the quality of surgical clinical training. It is a valuable resource for program directors, educators, and residents to assess and enhance training programs and identify specific areas for improvement. Additional research is required to validate the SCTM in different settings and explore its applicability in other fields.

Clinical trial number

Not applicable.


Introduction

Surgical residency training is a cornerstone in preparing future surgeons to provide safe and effective patient care. The quality of this training significantly impacts the development of essential skills, knowledge, and attitudes required for independent practice.

Evaluation of the quality of clinical training for surgical residents involves assessing various aspects of their training program to ensure it meets desired standards and produces competent surgeons [1,2,3]. This process aims to measure the training program’s effectiveness and identify improvement areas.

While various assessment tools and methods exist for evaluating surgical training, they often focus on specific aspects of training, such as technical skills or knowledge acquisition, and may lack a comprehensive and integrated approach to assessing the overall quality of training [4, 5].

Furthermore, the subjective nature of evaluations and the variability in training resources and methodologies across institutions complicate the development of universally applicable tools for assessing surgical clinical training quality [6,7,8,9].

A scoping review of 68 articles emphasizes the necessity for evidence-based indicators in surgical training, especially in low-resource settings, through quantitative and qualitative studies. It also focuses on benefits to trainees and patients, prioritizing training success, career progression, and patient safety [10]. Similarly, the systematic review of 42 studies points out a shift towards competency-based training in surgery and the need for further investigation on its impact on clinical outcomes. It advocates for the transition from technical proficiency to clinical competency and the development of validated assessments to support continuous surgical education and skill improvement [11].

Ensuring high-quality surgical training is essential for preparing competent and skilled surgeons. This highlights the need for a comprehensive and standardized tool to evaluate the quality of surgical clinical training. This study aims to develop a tool to effectively measure various dimensions of surgical training, including technical skills, decision-making, communication, professionalism, and patient safety, thereby contributing to the overall enhancement of surgical training standards. To guide this research, the following questions will be addressed: What specific dimensions of surgical training can be accurately measured? How can these measurements influence the improvement of training practices? What benchmarks can be established to ensure consistency in surgical training quality?

Methods

Study design and ethical approval

This study was performed under ethical approval from the ethics committee at the Faculty of Medicine, University of Aleppo, and The Syrian Virtual University (SVU) (Number: 4289/0). Informed consent was obtained from all participants before participating.

The Surgical Clinical Training Measurement (SCTM) was designed using a systematic approach following the modified Delphi method [2, 6], a structured communication technique that gathers expert opinions through a series of sessions. This process involved the following steps:

First, a panel of expert surgeons, educators, and researchers in the field of surgery was identified to participate in the Delphi process based on their expertise and experience in surgical education and training. An initial version of the SCTM was then developed through a comprehensive literature review on surgical training, competency assessment, and clinical skill development, involving the identification of relevant studies, articles, and guidelines on surgical training. This literature search ensured that the SCTM was evidence-based. The methodology for designing the scale and a review of the tools and sources used to define the evaluation criteria in surgical training are in Appendix A [1, 4, 5, 7, 10,11,12,13,14,15,16,17,18].

The Delphi process for the measurement consisted of multiple rounds of data collection and analysis where experts reviewed the SCTM and provided feedback on its content, clarity, comprehensiveness, and relevance. The measurement was revised and refined iteratively based on experts’ feedback until a consensus was reached. Pilot testing was then conducted with surgical residents to evaluate the feasibility, reliability, and validity of the final version of the SCTM.

Furthermore, this study was conducted at Aleppo University Hospital following the STROBE guidelines for cross-sectional studies [19].

Variables defining and measurement

The SCTM consisted of 40 items that assessed ten variables. These included the original six general core competencies developed by the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Medical Specialties (ABMS) for practicing physicians: Patient Care, Medical Knowledge, Practice-Based Learning and Improvement, Interpersonal and Communication Skills, Professionalism, and Systems-Based Practice [20, 21]. Additionally, the SCTM integrates four surgery-specific variables: pre-, peri-, and post-operative care, as well as evidence-based quality clinical training in the Department of Surgery. Definitions for each variable and its components are detailed in Appendix A.

The SCTM included specific criteria aligned with each competency. It was designed as a 40-item tool using a 5-point Likert scale (1 = Strongly Disagree, 2 = Disagree, 3 = Uncertain, 4 = Agree, and 5 = Strongly Agree), with overall scores ranging from 40 to 200. Two of the items were negatively worded [22].
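As a worked illustration of this scoring rule, the sketch below totals a 40-item response set on the 5-point scale, reverse-scoring negatively worded items. The paper does not identify which two items are negatively worded, so the default indices here are placeholders only:

```python
def score_sctm(responses, negative_items=(0, 1)):
    """Total a 40-item, 5-point Likert questionnaire (range 40-200).

    responses: 40 raw answers, each in 1..5.
    negative_items: zero-based indices of negatively worded items,
    reverse-scored (1 <-> 5, i.e. 6 - raw) before summing. The default
    indices are placeholders, not the actual items from the instrument.
    """
    if len(responses) != 40:
        raise ValueError("the SCTM has exactly 40 items")
    total = 0
    for i, raw in enumerate(responses):
        if not 1 <= raw <= 5:
            raise ValueError(f"item {i}: answer {raw} outside 1..5")
        total += (6 - raw) if i in negative_items else raw
    return total
```

With every answer at the scale midpoint (3), the total is 120, the upper bound of the "Moderate training" band described below.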

Participants

Data collection was conducted through a validated online questionnaire (Google Forms) in both Arabic and English versions between January 31 and February 15, 2024, with a two-week interval for retest reliability. The measurement also gathered demographic details such as age, gender, and residency year.

The overall SCTM scores ranged from 40 to 200 and were divided into four performance levels:

  • 40–80: Low-quality training.

  • 81–120: Moderate training.

  • 121–160: Good training.

  • 161–200: Superior training.

This classification aligns with the Surgical Theatre Educational Environment Measures (STEEM) framework, which measures the surgical work environment [4, 15].

Additionally, each item and variable was categorized based on its mean score as follows:

  • 1–1.80: Low-quality training.

  • 1.81–2.60: Moderate training.

  • 2.61–3.40: Good training.

  • 3.41–4.20: Superior training.

  • 4.21–5: Excellent training.

The results were interpreted at the item, variable, and overall levels to provide a comprehensive understanding of training quality.
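The two banding schemes above reduce to simple threshold lookups; a minimal sketch using exactly the cut-points listed:

```python
def classify_total(score):
    """Map an overall SCTM total (40-200) to its four-level band."""
    if not 40 <= score <= 200:
        raise ValueError("total score must lie in 40..200")
    if score <= 80:
        return "Low-quality training"
    if score <= 120:
        return "Moderate training"
    if score <= 160:
        return "Good training"
    return "Superior training"


def classify_mean(mean_score):
    """Map an item or variable mean (1-5) to its five-level band."""
    if not 1 <= mean_score <= 5:
        raise ValueError("mean score must lie in 1..5")
    if mean_score <= 1.80:
        return "Low-quality training"
    if mean_score <= 2.60:
        return "Moderate training"
    if mean_score <= 3.40:
        return "Good training"
    if mean_score <= 4.20:
        return "Superior training"
    return "Excellent training"
```

For example, the mean total score of 131.42 reported in the Results falls in the "Good training" band, matching the majority classification of participants.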

Statistical analysis

The data were analyzed using SPSS software (version 26.0). Descriptive statistics, including means, standard deviations, frequencies, and percentages, were used to summarize quantitative and categorical variables. Total SCTM scores were analyzed using the following statistical methods: group comparison of total SCTM scores between male and female residents was conducted using the Pearson Chi-Square test [23], and one-way ANOVA was employed to assess significant variations in SCTM scores across residency years. The reliability and consistency of the SCTM were evaluated using multiple statistical tests. Cohen's Kappa measured agreement between the Arabic and English versions of the measurement [24]. Test-retest reliability was assessed using Spearman's rho, while internal consistency for each variable was determined using Cronbach's Alpha, with a threshold of ≥ 0.6 considered acceptable [25]. Correlations between variables were examined using Pearson correlation to identify potential associations. These methods ensured a robust and comprehensive analysis of the data, providing insights into the reliability and validity of the SCTM.
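For reference, the internal-consistency statistic used here is Cronbach's Alpha, α = k/(k−1) · (1 − Σ item variances / variance of totals) for k items. A dependency-free sketch follows; the item scores are invented for illustration, not study data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of k columns, each holding one item's scores across the
    same n respondents. Uses sample variances (n - 1 denominator), as
    SPSS does. alpha = k/(k-1) * (1 - sum(item vars) / var(totals)).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent, summed across items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

Perfectly parallel items yield α = 1.0; uncorrelated or opposing items drive α toward (or below) zero, which is why reverse-scoring negatively worded items before analysis matters.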

Results

Participants and descriptive data

The study included 74 general surgery residents from Aleppo University Hospital, with a mean age of 27 years (range: 24–31). The sample comprised 63 males (85%) and 11 females (15%). Participants were distributed across the residency years as follows: 28 (37.8%) in their 1st year, 14 (18.9%) in their 2nd year, 13 (17.6%) in their 3rd year, 14 (18.9%) in their 4th year, and 5 (6.8%) in their 5th year. The mean total SCTM score was 131.42 (SD = 22.64), indicating generally positive satisfaction with the quality of surgical clinical training. Most participants (55.4%) scored in the "Good training" category, 24 residents (32.4%) were classified as "Moderate training," 7 residents (9.5%) achieved "Superior training," and 2 residents (2.7%) fell into the "Low-quality training" category. The SCTM items for each question are summarized in Table 1. Additional details regarding the Arabic version, along with the objective of each item and its corresponding indicator, are presented in Appendix A.

Table 1 Items’ results

The lowest-scoring items were Q34 (mean = 1.65) and Q39 (mean = 1.49), reflecting the limited involvement of international university consultants in final interviews and the lack of virtual reality simulators for surgical training. Conversely, high scores were observed for items assessing comprehensive history (Q1: 4.27), thorough physical examination (Q2: 4.62), requesting unnecessary tests (Q3: 4.30), active supervisor participation in advanced operations (Q12: 4.36), supervisor oversight during elective surgeries (Q15: 4.45), electronic patient discharge (Q19: 4.43), and resident participation in clinical case presentations (Q31: 4.38). (Appendix A).

The analysis of variance (ANOVA) revealed no statistically significant differences in total SCTM scores across the specialty years (F = 2.002, P = 0.104). However, post hoc analyses (Tukey’s HSD, LSD, and Dunnett T3) identified two significant differences: between 1st and 3rd-year residents (mean difference = -16.047, P = 0.034) and between 3rd and 5th-year residents (mean difference = 26.754, P = 0.017). Pearson’s Chi-Square test showed no significant association between SCTM scores and gender (χ² = 2.249, df = 3, P = 0.522).

Reliability of the measurement

The SCTM demonstrated strong reliability across various measures. The Kappa value of 0.884 indicated a high level of agreement between the English and Arabic versions of the SCTM (p < 0.05). Test-retest reliability, measured by Pearson’s correlation coefficient, was also high (r = 0.964, p < 0.01, 2-tailed), confirming the SCTM’s consistency over time.
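The between-version agreement statistic reported here is Cohen's (unweighted) kappa on paired categorical ratings. A minimal sketch; the band labels below are invented examples, not the study's rating data:

```python
def cohen_kappa(a, b):
    """Unweighted Cohen's kappa for two paired categorical rating lists,
    e.g. the training band assigned by the Arabic vs. English version
    for each respondent. kappa = (p_o - p_e) / (1 - p_e), where p_o is
    observed agreement and p_e is chance agreement from the marginals."""
    if len(a) != len(b):
        raise ValueError("ratings must be paired")
    n = len(a)
    categories = set(a) | set(b)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)
```

Kappa of 1.0 means perfect agreement; values above roughly 0.8 (such as the 0.884 reported) are conventionally read as almost perfect agreement.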

Internal consistency analysis using Cronbach’s Alpha showed excellent reliability for Medical Knowledge (α = 0.843), Practice-Based Learning and Improvement (α = 0.829), Interpersonal and Communication Skills (α = 0.803), Systems-Based Practice (α = 0.846), Post-Operative Clinical variable (α = 0.815), and Evidence-Based Clinical Training (α = 0.893). Patient Care (α = 0.677), Professionalism (α = 0.677), and Pre-Operative Clinical variable (α = 0.784) demonstrated good reliability, while the Peri-Operative Clinical variable (α = 0.517) showed acceptable reliability.

Variables’ analysis

The study assessed multiple domains of surgical clinical training, revealing a range of performance levels. The Pre-Operative Clinical domain achieved the highest score (mean = 3.82, SD = 0.59, total percentage = 76.49%), reflecting strong preparation during this phase. Conversely, Evidence-Based Quality Clinical Training recorded the lowest score (mean = 2.69, SD = 0.77, total percentage = 53.80%), indicating a critical area for improvement.

Other domains, such as Patient Care, Medical Knowledge, Interpersonal and Communication Skills, and Practice-Based Learning and Improvement, demonstrated moderate performance, with mean scores ranging from 2.83 to 3.34. The Peri-Operative Clinical domain showed relatively higher performance (mean = 3.47, SD = 0.52, total percentage = 69.31%), highlighting a robust focus on this phase of training (Table 2).

Table 2 Variables analysis

Pearson correlation analysis (significance level = 0.01) revealed strong positive relationships between the evaluated domains, emphasizing the interconnected nature of clinical skills. For instance, Patient Care exhibited significant correlations with Medical Knowledge (r = 0.902), Practice-Based Learning and Improvement (r = 0.961), and Evidence-Based Quality Clinical Training (r = 0.907). Similarly, Medical Knowledge strongly correlated with Evidence-Based Quality Clinical Training (r = 0.940), underscoring the role of evidence-based practices in enhancing knowledge acquisition.
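The domain-level associations above use the sample Pearson correlation, r = Σ(xᵢ−x̄)(yᵢ−ȳ) / (√Σ(xᵢ−x̄)² · √Σ(yᵢ−ȳ)²). A minimal sketch on invented score lists:

```python
def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two score lists."""
    if len(x) != len(y):
        raise ValueError("score lists must be paired")
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

Values near +1 (such as the r = 0.902 to 0.961 reported between domains) indicate that residents scoring high in one domain almost always score high in the other.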

Overall, the correlations indicate that improvements in one domain, such as Evidence-Based Training, will positively impact other aspects of clinical performance, reinforcing the need for a holistic approach to surgical education (Appendix B).

Discussion

Main results and comparison with prior work

Studies in the medical field frequently emphasize variables such as clinical skills, knowledge acquisition, professionalism, communication skills, and patient care as critical components of quality clinical training. Evaluation methods described in these studies often rely on feedback from trainees, program supervisors, or a combination of both, underscoring the value of multiple perspectives in assessing the effectiveness and impact of clinical training programs [10, 18, 26, 27]. We propose that the SCTM serves as a unifying framework that evaluates individual competencies and emphasizes their interconnectedness. By employing a comprehensive and standardized approach, SCTM overcomes the limitations of traditional tools that frequently focus on isolated aspects of a trainee’s development.

The findings of this study highlight the SCTM as an impactful tool for evaluating the quality of surgical education, providing a comprehensive and standardized approach that addresses limitations in existing evaluation methods. Conventional evaluation methods in surgical education, such as STEEM [4], the JCST Trainee Survey [12], and the universal global rating scale [5], tend to focus on singular aspects of a trainee's development. These tools often measure specific competencies in isolation, such as clinical skills or knowledge acquisition, neglecting the interconnected nature of surgical training. This narrow focus can lead to incomplete evaluations, hindering the ability to appreciate how multiple competencies work together in real-world surgical practice.

Moreover, the existing tools often lack standardization and fail to provide a comprehensive framework for assessors. This inconsistency can result in subjective interpretations of trainee performance and inadequate feedback, making it difficult for programs to identify strengths and areas for improvement effectively.

The study revealed an overall good satisfaction level (mean SCTM score = 131.42), indicating the general effectiveness of the training program. However, significant gaps were identified in domains such as Evidence-Based Quality Clinical Training (mean = 2.69) and Medical Knowledge (mean = 2.83). These results underscore the pressing need to incorporate evidence-based practices into surgical education, which is critical for equipping residents with decision-making capabilities grounded in research and best practices. For instance, the low scores in Evidence-Based Quality Clinical Training stem from insufficient curriculum emphasis on Evidence-Based practices, limited research exposure, lack of mentorship, and inadequate assessment methods. To improve, programs should revise curricula, offer workshops, create research opportunities, utilize advanced assessments, encourage quality improvement projects, and foster an Evidence-Based practices culture.

On the other hand, strong performance was observed in the Pre-Operative Clinical domain (mean = 3.82, 76.49%), reflecting active preparation during this phase. Similarly, the Peri-Operative Clinical domain (mean = 3.47, 69.31%) demonstrated relatively high scores, emphasizing effective surgical theater training. These findings suggest that while foundational skills and operative preparation are adequately addressed, advanced competencies such as integrating evidence-based approaches remain areas for targeted improvement.

The SCTM’s rigorous design is based on established educational frameworks, including the Accreditation Council for Graduate Medical Education (ACGME) core competencies. It incorporates ten key variables, including examples such as Patient Care and Interpersonal and Communication Skills, as well as surgical-specific domains like Pre-, Peri-, and Post-Operative care. The SCTM ensures a balanced evaluation of technical, cognitive, and professional aspects of surgical training. This multidimensionality is pivotal for capturing the complex nature of surgical education, which demands simultaneous proficiency in diverse competencies.

Moreover, the SCTM demonstrated strong psychometric properties, with reliability measures (Cronbach’s Alpha values ranging from 0.517 to 0.893) supporting its consistency and accuracy. High test-retest reliability (r = 0.964, p < 0.01) and strong agreement between the English and Arabic versions (Kappa = 0.884) affirm its adaptability across different linguistic and cultural contexts, making it a versatile tool for global implementation.

The correlations observed between domains, such as Patient Care and Evidence-Based Training (r = 0.907), highlight the interconnected nature of surgical competencies. These relationships suggest that enhancing evidence-based practices could positively influence other critical domains, such as medical knowledge and patient outcomes. The SCTM’s ability to identify these interrelationships enables educators to develop targeted interventions that enhance the effectiveness of training across all domains. Furthermore, the identification of significant differences between training years (e.g., 1st and 3rd years, P = 0.034) underscores the importance of tailoring educational strategies to the evolving needs of residents. Programs must prioritize stage-specific enhancements, such as early integration of technology-based learning tools (e.g., virtual reality simulators) and fostering international collaborations to bridge identified gaps in resources and mentorship. Importantly, the Pearson’s Chi-Square test results indicate no significant association between SCTM scores and gender, suggesting that self-assessments of competency are relatively uniform across genders within this sample.

The findings of this study also underscore the SCTM’s role as a transformative tool in global surgical education, offering a comprehensive evaluation that bridges existing gaps in assessment methods. By integrating essential competencies and surgical-specific variables, the SCTM provides a holistic view of clinical training quality, essential for enhancing surgical programs worldwide.

Limitations

The measurement was applied exclusively to general surgery residents in a single institutional setting, which limits its generalizability to other specialties or training programs. Furthermore, the study relied solely on trainee self-assessments, excluding input from program supervisors, which could provide a more balanced and comprehensive evaluation. Self-assessments may be subject to biases such as overconfidence, social desirability, or lack of self-awareness, which can distort the accuracy of the results. The measurement did not assess surgical technical skills, such as the ability to handle instruments and the efficiency in performing procedures. Additionally, it did not differentiate between surgical environments in terms of experience and resources for laparoscopic and open surgery.

To address these limitations, future research should focus on validating the SCTM across diverse specialties and clinical settings, integrating supervisor perspectives, and ensuring its comprehensiveness in assessing surgical skills across different surgical environments, including both laparoscopic and open surgery.

Conclusions

The SCTM provides a standardized and cohesive method for evaluating the quality of surgical clinical training. It is a valuable resource for program directors, educators, and residents to assess and enhance training programs and identify specific areas for improvement. It helps provide accessible, high-quality training for residents, thoroughly preparing them for independent surgical practice. Additional research is necessary to validate its use in diverse settings, explore its applicability to other medical fields, and assess its impact on surgical performance and patient outcomes.

Data availability

The data used for this article is shown in Appendix C.

Abbreviations

SCTM:

Surgical Clinical Training Measurement

ACGME:

Accreditation Council for Graduate Medical Education

ABMS:

American Board of Medical Specialties

STEEM:

The Surgical Theater Educational Environment Measures

JCST:

The Joint Committee on Surgical Training

References

  1. Yousef MA, Dashash M. A suggested scientific research environment measure SREM in medical faculties. Heliyon. 2023;9:e12701. https://www.ncbi.nlm.nih.gov/pubmed/36685438.

  2. Ataya J, Jamous I, Dashash M. Measurement of humanity among health professionals: development and validation of the medical humanity scale using the Delphi method. JMIR Formative Res. 2023;7:e44241. https://www.ncbi.nlm.nih.gov/pubmed/37129940.

  3. Dashash M, Boubou M. Measurement of empathy among health professionals during Syrian crisis using the Syrian empathy scale. BMC Med Educ. 2021;21:409. https://www.ncbi.nlm.nih.gov/pubmed/34325698.

  4. Dimoliatis DK, Jelastopulu I. Surgical theatre (operating room) measure STEEM (OREEM) scoring overestimates educational environment: the 1-to-L bias. Univers J Educational Res. 2013;1:247–54.

  5. Doyle JD, Webber EM, Sidhu RS. A universal global rating scale for the evaluation of technical skills in the operating room. Am J Surg. 2007;193(5 Spec Iss):551–5. https://www.ncbi.nlm.nih.gov/pubmed/17434353.

  6. Gawande AA, Zinner MJ, Studdert DM, Brennan TA. Analysis of errors reported by surgeons at three teaching hospitals. Surgery. 2003;133:614–21. https://www.ncbi.nlm.nih.gov/pubmed/12796727.

  7. Morrison J. ABC of learning and teaching in medicine: evaluation. BMJ. 2003;326:385–7. https://www.wiley.com/en-us/ABC+of+Learning+and+Teaching+in+Medicine%2C+3rd+Edition-p-9781118892176.

  8. Marwan Y, Luo L, Toobaie A, Benaroch T, Snell L. Operating room educational environment in Canada: perceptions of surgical residents. J Surg Educ. 2021;78:60–8. https://www.ncbi.nlm.nih.gov/pubmed/32741693.

  9. Berrani H, Abouqal R, Izgua AT. Moroccan residents' perceptions of the hospital learning environment measured with the French version of the postgraduate hospital educational environment measure. J Educ Eval Health Prof. 2020;17:4. https://www.ncbi.nlm.nih.gov/pubmed/32000301.

  10. Shaban L, Mkandawire P, O'Flynn E, Mangaoang D, Mulwafu W, Stanistreet D. Quality metrics and indicators for surgical training: a scoping review. J Surg Educ. 2023;80:1302–10. https://www.ncbi.nlm.nih.gov/pubmed/37481412.

  11. Pakkasjärvi N, Anttila H, Pyhältö K. What are the learning objectives in surgical training: a systematic literature review of the surgical competence framework. BMC Med Educ. 2024;24:119. https://www.ncbi.nlm.nih.gov/pubmed/38321437.

  12. Lund J, Jones K, et al. JCST Trainee Survey. Joint Committee on Surgical Training; 2011. https://www.jcst.org/quality-assurance/trainee-survey/

  13. ACGME. ACGME program requirements for graduate medical education in general surgery. 2017:1–31. http://www.acgme.org/portals/0/pfassets/programrequirements/440_general_surgery_2016.pdf

  14. Somani B, Brouwers T, Veneziano D, Gözen A, Ahmed K, Liatsikos E, et al. Standardization in surgical education (SISE): development and implementation of an innovative training program for urologic surgery residents and trainers by the European School of Urology in collaboration with the ESUT and EULIS sections of the EAU. Eur Urol. 2021;79:433–4. https://linkinghub.elsevier.com/retrieve/pii/S0302-2838(20)30951-9.

  15. Al-Qahtani MF, Al-Sheikh M. Assessment of educational environment of surgical theatre at a teaching hospital of a Saudi university: using surgical theatre educational environment measures. Oman Med J. 2012;27:217–23. https://www.ncbi.nlm.nih.gov/pubmed/22811771.

  16. Benamer HT, Alsuwaidi L, Khan N, Jackson L, Lakshmanan J, Ho SB, et al. Clinical learning environments across two different healthcare settings using the undergraduate clinical education environment measure. BMC Med Educ. 2023;23:495. https://www.ncbi.nlm.nih.gov/pubmed/37407987.

  17. Singh P, Aggarwal R, Pucher PH, Duisberg AL, Arora S, Darzi A. Defining quality in surgical training: perceptions of the profession. Am J Surg. 2014;207:628–36. https://www.ncbi.nlm.nih.gov/pubmed/24300670.

  18. Miles S, Swift L, Leinster SJ. The Dundee ready education environment measure (DREEM): a review of its adoption and use. Med Teach. 2012;34:e620–34. https://www.ncbi.nlm.nih.gov/pubmed/22471916.

  19. Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, et al. Strengthening the reporting of observational studies in epidemiology (STROBE): explanation and elaboration. PLoS Med. 2007;4:1628–54. https://pubmed.ncbi.nlm.nih.gov/25046751/.

  20. Kavic MS. Competency and the six core competencies. JSLS. 2002;6:95–7. https://www.ncbi.nlm.nih.gov/pubmed/12113429

  21. AAMC. Competency-based medical education (CBME). 2024. https://www.aamc.org/about-us/mission-areas/medical-education/cbme

  22. Jebb AT, Ng V, Tay L. A review of key Likert scale development advances: 1995–2019. Front Psychol. 2021;12:637547. https://pubmed.ncbi.nlm.nih.gov/34017283/.

  23. Morshed S, Tornetta P, Bhandari M. Analysis of observational studies: a guide to understanding statistical methods. J Bone Joint Surg. 2009;91(Suppl 3):50–60. https://journals.lww.com/jbjsjournal/fulltext/2009/05003/analysis_of_observational_studies__a_guide_to.9.aspx.

  24. Chang CH. Cohen's kappa for capturing discrimination. Int Health. 2014;6:125–9. https://pubmed.ncbi.nlm.nih.gov/24691677/.

  25. Bujang MA, Omar ED, Baharum NA. A review on sample size determination for Cronbach's alpha test: a simple guide for researchers. Malaysian J Med Sci. 2018;25:85–99. https://pmc.ncbi.nlm.nih.gov/articles/PMC6422571/.

  26. Soemantri D, Herrera C, Riquelme A. Measuring the educational environment in health professions studies: a systematic review. Med Teach. 2010;32:947–52. https://www.ncbi.nlm.nih.gov/pubmed/21090946.

  27. Karami S, Sadati L, Khanegha ZN, Rahimzadeh M. Iranian measure of operating theatre educational climate (IMOTEC): validity and reliability study. Strides Dev Med Educ J. 2022;19. https://sdme.kmu.ac.ir/article_92071.html


Acknowledgements

The authors would like to thank Dr. Lama Kadoura and Dr. Ahmad Y. Arnaout for their assistance, and all participants who agreed to take part in this study.

Funding

None.

Author information


Contributions

Ahmad Ghazal: study coordination, study design, methodology, validation, data analysis, data interpretation, writing (original draft) and reviewing. Mayssoon Dashash: scientific supervision and validation.

Corresponding author

Correspondence to Ahmad Ghazal.

Ethics declarations

Ethics approval and consent to participate

This study was performed under ethical approval from the ethics committee at the Faculty of Medicine, University of Aleppo, and The Syrian Virtual University (SVU) (Number: 4289/0). Informed consent was obtained from all participants before participating. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Ghazal, A., Dashash, M. The surgical clinical training measurement: developing and evaluating the quality of surgical clinical training among Syrian surgical residents. BMC Med Educ 25, 459 (2025). https://doi.org/10.1186/s12909-025-07043-8
