Kaya M, et al. Comparison of AI-generated and clinician-designed multiple-choice questions in emergency medicine exam: a psychometric analysis.
Noda R, et al. GPT-4's performance in supporting physician decision-making in nephrology multiple-choice questions.
Kim JK, et al. Use of artificial intelligence-generated multiple-choice questions for the examination of surgical subspecialty residents: report of feasibility and psychometric analysis.
Giuffrè M, et al. Guideline-enhanced large language models outperform physician test-takers on EASL Campus quiz multiple-choice questions.
Kim B, Kang J, Kim MY, Ahn J. Prompt engineering for single-best-answer multiple-choice questions in licensing examinations: a narrative review with a case study involving the Korean Medical Licensing Examination.
Bruneti Severino JV, et al. Benchmarking open-source large language models on Portuguese Revalida multiple-choice questions.
Gebremichael MW, et al. Item analysis of multiple choice questions from assessment of health sciences students, Tigray, Ethiopia.
Awalurahman HW, Budi I. Automatic distractor generation in multiple-choice questions: a systematic literature review.
Kıyak YS, et al. Can ChatGPT generate surgical multiple-choice questions comparable to those written by a surgeon?
Al-Thani SN, et al. Comparative performance of ChatGPT, Gemini, and final-year emergency medicine clerkship students in answering multiple-choice questions: implications for the use of AI in medical education.