Results 11 to 20 of about 1,038,246
This work presents a comparative analysis of various machine learning (ML) methods for predicting item difficulty in English reading comprehension tests using text features extracted from item wordings.
Lubomír Štěpánek +2 more
doaj +3 more sources
The development and evaluation of valid assessments of scientific reasoning are an integral part of research in science education. In the present study, we used the linear logistic test model (LLTM) to analyze how item features related to text complexity
Moritz Krell, Samia Khan, Jan van Driel
doaj +1 more source
Extending Item Response Theory to Online Homework [PDF]
Item Response Theory becomes an increasingly important tool when analyzing "Big Data" gathered from online educational venues. However, the mechanism was originally developed in traditional exam settings, and several of its assumptions are infringed ...
Kortemeyer, Gerd
core +7 more sources
This study aims to analyze an assessment instrument, mainly the characteristics of the test items, using the Quest program. This study is a descriptive quantitative study in one school in Yogyakarta.
Ikhsanudin Ikhsanudin +3 more
doaj +1 more source
Parameters and Models of Item Response Theory (IRT): A Review of Literature
Introduction: Item response theory (IRT) has received much attention in the validation of assessment instruments because it allows the estimation of students’ ability from any set of items.
Gyamfi Abraham, Acquaye Rosemary
doaj +1 more source
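The ability estimation described in the IRT review above is usually built on a logistic item response function; a minimal sketch of the two-parameter logistic (2PL) model is shown below (the function name and parameter values are illustrative, not taken from the paper):

```python
import math

def irt_2pl(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that a test
    taker with ability theta answers correctly an item with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty, the probability of a correct
# response is exactly 0.5; higher ability raises that probability.
p_at_difficulty = irt_2pl(theta=0.0, a=1.0, b=0.0)
p_above = irt_2pl(theta=2.0, a=1.0, b=0.0)
```

In practice the ability and item parameters are estimated jointly from response data (e.g. by marginal maximum likelihood); the function above only evaluates the model for given parameters.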
Repeated retrieval practice and item difficulty: Does criterion learning eliminate item difficulty effects? [PDF]
A wealth of previous research has established that retrieval practice promotes memory, particularly when retrieval is successful. Although successful retrieval promotes memory, it remains unclear whether successful retrieval promotes memory equally well for items of varying difficulty. Will easy items still outperform difficult items on a final test if
Kalif E. Vaughn +2 more
openaire +2 more sources
Psychometric Properties of 3-, 4-, and 5-Option Item Tests: Do Test Takers’ Personality Traits Make a Difference? [PDF]
Prior research has yielded mixed results regarding what contributes to psychometrically sound multiple-choice (MC) items. The purpose of the present study was, therefore, twofold: (a) to compare 3-, 4-, and 5-option MC tests in terms of ...
Fatemeh Khaleghi, Rajab Esfandiari
doaj +1 more source
Item learning in cognitive skill training: Effects of item difficulty [PDF]
Item difficulty effects in skill learning were examined by giving participants extensive training with repeated alphabet arithmetic problems that varied in addend size (e.g., C-D = ? is easy; C-J = ? is harder). Recognition memory for the items, as measured by interpolated recognition tests, was acquired early in training and was unaffected by item ...
William J. Hoyer +2 more
openaire +2 more sources
ANALYSIS OF DIFFICULTY LEVEL OF PHYSICS NATIONAL EXAMINATION’S QUESTIONS
This study aimed to determine: (1) the difficulty level of items in the 2013 physics National Exam, and (2) the physics materials that were difficult and very difficult.
Yusrizal Yusrizal
doaj +1 more source
Exploring the Influence of Item Characteristics in a Spatial Reasoning Task
Well-designed spatial assessments can incorporate multiple sources of complexity that reflect important aspects of spatial reasoning. When these aspects are systematically included in spatial reasoning items, researchers can use psychometric models to ...
Qingzhou Shi +2 more
doaj +1 more source