Results 11 to 20 of about 27,265

Capital Gains: Effects of Word Class and Sentence Position on Capitalization Use Across Age. [PDF]

open access: yes; Child Dev
ABSTRACT Learning to capitalize in English requires identifying a word's type and sentence position. In two cloze studies (2021–2022), Australian students of all genders (95% White, monolingual) spelled words with one and two capitalization cues (proper nouns, sentence‐initial words) and no‐cue control words.
Hawkey E, Palmer MA, Kemp N.
europepmc   +2 more sources

Entity Cloze By Date: What LMs Know About Unseen Entities [PDF]

open access: yes; NAACL-HLT, 2022
Language models (LMs) are typically trained once on a large-scale corpus and used for years without being updated. However, in a dynamic world, new entities constantly arise.
Yasumasa Onoe   +3 more
semanticscholar   +1 more source

Cloze Test Helps: Effective Video Anomaly Detection via Learning to Complete Video Events [PDF]

open access: yes; ACM Multimedia, 2020
As a vital topic in media content interpretation, video anomaly detection (VAD) has made fruitful progress via deep neural networks (DNNs). However, existing methods usually follow a reconstruction or frame-prediction routine. They suffer from two gaps: (1)
Guang Yu   +6 more
semanticscholar   +1 more source

Generación de código para la elaboración de preguntas tipo cloze en Moodle usando Wolfram Mathematica (Code generation for creating cloze questions in Moodle using Wolfram Mathematica)

open access: yes; Revista Digital Matemática, Educación e Internet, 2023
This paper presents a software package developed by the authors in the Wolfram Language for the automatic generation of code, with the goal of designing embedded (cloze) questions on Moodle learning platforms.
Enrique Vílchez Quesada   +1 more
doaj   +1 more source

Video Cloze Procedure for Self-Supervised Spatio-Temporal Learning [PDF]

open access: yes; AAAI Conference on Artificial Intelligence, 2020
We propose a novel self-supervised method, referred to as Video Cloze Procedure (VCP), to learn rich spatial-temporal representations. VCP first generates “blanks” by withholding video clips and then creates “options” by applying spatio-temporal ...
Dezhao Luo   +6 more
semanticscholar   +1 more source

Zero-shot Commonsense Question Answering with Cloze Translation and Consistency Optimization [PDF]

open access: yes; AAAI Conference on Artificial Intelligence, 2022
Commonsense question answering (CQA) aims to test if models can answer questions regarding commonsense knowledge that everyone knows. Prior works that incorporate external knowledge bases have shown promising results, but knowledge bases are expensive to
Zi-Yi Dou, Nanyun Peng
semanticscholar   +1 more source

Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward [PDF]

open access: yes; Annual Meeting of the Association for Computational Linguistics, 2020
Sequence-to-sequence models for abstractive summarization have been studied extensively, yet the generated summaries commonly suffer from fabricated content, and are often found to be near-extractive.
Luyang Huang, Lingfei Wu, Lu Wang
semanticscholar   +1 more source

A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories [PDF]

open access: yes; North American Chapter of the Association for Computational Linguistics, 2016
Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding causal and correlational relationships between events ...
N. Mostafazadeh   +7 more
semanticscholar   +1 more source

Pre-Training Transformers as Energy-Based Cloze Models [PDF]

open access: yes; Conference on Empirical Methods in Natural Language Processing, 2020
We introduce Electric, an energy-based cloze model for representation learning over text. Like BERT, it is a conditional generative model of tokens given their contexts.
Kevin Clark   +3 more
semanticscholar   +1 more source

Debiasing the Cloze Task in Sequential Recommendation with Bidirectional Transformers [PDF]

open access: yes; Knowledge Discovery and Data Mining, 2022
Bidirectional Transformer architectures are state-of-the-art sequential recommendation models that use a bi-directional representation capacity based on the Cloze task, a.k.a. Masked Language Modeling.
Khalil Damak, Sami Khenissi, O. Nasraoui
semanticscholar   +1 more source