Results 1 to 10 of about 27,265

Language models outperform cloze predictability in a cognitive model of reading. [PDF]

open access: yes (PLoS Computational Biology)
Although word predictability is commonly considered an important factor in reading, sophisticated accounts of predictability in theories of reading are lacking.
Adrielli Tina Lopes Rego   +2 more
doaj   +3 more sources

Mask and Cloze: Automatic Open Cloze Question Generation Using a Masked Language Model [PDF]

open access: yes (IEEE Access, 2023)
This paper reports the first trial applying a masked language model and the “Gini coefficient” to the field of English study. We propose an algorithm named CLOZER that generates open cloze questions assessing knowledge of English ...
Shoya Matsumori   +5 more
doaj   +4 more sources

Cloze probability, predictability ratings, and computational estimates for 205 English sentences, aligned with existing EEG and reading time data. [PDF]

open access: yes (Behav Res Methods, 2023)
We release a database of cloze probability values, predictability ratings, and computational estimates for a sample of 205 English sentences (1726 words), aligned with previously released word-by-word reading time data (both self-paced reading and eye ...
de Varda AG, Marelli M, Amenta S.
europepmc   +2 more sources

Impact of writing skills on the fixed-ratio Cloze test scores (original title: Interferência da habilidade de escrita no resultado do teste de Cloze por razão fixa)

open access: yes (Revista Iberoamericana de Psicología, 2021)
The Cloze test is widely used to assess the reading comprehension of schoolchildren. Studies indicate that fixed-ratio Cloze results are related to writing, suggesting that this format depends on that linguistic skill ...
Adriana Satico Ferraz   +2 more
doaj   +2 more sources

Linguistic Prediction in Autism Spectrum Disorder [PDF]

open access: yes (Brain Sciences)
Background: Autism spectrum disorder has been argued to involve impairments in domain-general predictive abilities. There is strong evidence that individuals with ASD have trouble navigating the dynamic world due to an inability to predict the outcomes ...
Aimee O’Shea, Paul E. Engelhardt
doaj   +2 more sources

Cloze testing for comprehension assessment: The HyTeC-cloze [PDF]

open access: yes (Language Testing, 2019)
Although there are many methods available for assessing text comprehension, the cloze test is not widely acknowledged as one of them. Critiques of cloze testing center on its supposedly limited ability to measure comprehension beyond the sentence. However, these critiques do not hold for all types of cloze tests; the particular configuration of a ...
Suzanne Kleijn   +2 more
openaire   +4 more sources

Morphosyntactic but not lexical corpus-based probabilities can substitute for cloze probabilities in reading experiments.

open access: yes (PLoS ONE, 2021)
During reading or listening, people can generate predictions about the lexical and morphosyntactic properties of upcoming input based on available context.
Anastasiya Lopukhina   +2 more
doaj   +2 more sources

Gotta: Generative Few-shot Question Answering by Prompt-based Cloze Data Augmentation [PDF]

open access: yes (SDM, 2023)
Few-shot question answering (QA) aims at precisely discovering answers to a set of questions from context passages while only a few training samples are available.
Xiusi Chen   +4 more
semanticscholar   +1 more source

PASTA: Table-Operations Aware Fact Verification via Sentence-Table Cloze Pre-training [PDF]

open access: yes (Conference on Empirical Methods in Natural Language Processing, 2022)
Fact verification has attracted a lot of attention recently, e.g., in journalism, marketing, and policymaking, as misinformation and disinformation can sway one’s opinion and affect one’s actions.
Zihui Gu   +5 more
semanticscholar   +1 more source

Exploiting Cloze-Questions for Few-Shot Text Classification and Natural Language Inference [PDF]

open access: yes (Conference of the European Chapter of the Association for Computational Linguistics, 2020)
Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with “task descriptions” in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this ...
Timo Schick, Hinrich Schütze
semanticscholar   +1 more source
