Results 11 to 20 of about 22,713,326

Generative Multimodal Models are In-Context Learners [PDF]

open access: yes (Computer Vision and Pattern Recognition, 2023)
The human ability to easily solve multimodal tasks in context (i.e., with only a few demonstrations or simple instructions) is what current multimodal systems have largely struggled to imitate.
Quan Sun   +10 more
semanticscholar   +1 more source

Identifying Opportunities for Collective Curation During Archaeological Excavations

open access: yes (International Journal of Digital Curation, 2020)
Archaeological excavations are carried out by interdisciplinary teams that create, manage, and share data as they unearth and analyse material culture. These team-based settings are ripe for collective curation during these data lifecycle stages.
Ixchel Faniel   +5 more
doaj   +1 more source

Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection [PDF]

open access: yes (Neural Information Processing Systems, 2023)
Neural sequence models based on the transformer architecture have demonstrated remarkable in-context learning (ICL) abilities, where they can perform new tasks when prompted with training and test examples, without any parameter update to the ...
Yu Bai   +4 more
semanticscholar   +1 more source

Teachers’ Responses to Bullying Questionnaire: A Validation Study in Two Educational Contexts

open access: yes (Frontiers in Psychology, 2022)
Given the high prevalence and dramatic impact of being bullied at school, it is crucial to get more insight into how teachers can reduce bullying. So far, few instruments have measured elementary teachers’ responses to bullying.
Fleur Elisabeth van Gils   +6 more
doaj   +1 more source

Peer Effects on Engagement and Disengagement: Differential Contributions From Friends, Popular Peers, and the Entire Class

open access: yes (Frontiers in Psychology, 2021)
School engagement and disengagement are important predictors of school success that are grounded in the social context of the classroom. This study used multilevel analysis to examine the contributions of the descriptive norms of friends, popular ...
Nina Steenberghs   +3 more
doaj   +1 more source

Secondary traumatisation, burn-out and functional impairment: findings from a study of Danish child protection workers

open access: yes (European Journal of Psychotraumatology, 2020)
Background: Child-protection workers are at elevated risk for secondary traumatization. However, research in the area of secondary traumatization has been hampered by two major obstacles: the use of measures that have unclear or inadequate psychometric ...
M. Louison Vang   +6 more
doaj   +1 more source

Psychosocial changes during COVID-19 lockdown on nursing home residents, their relatives and clinical staff: a prospective observational study

open access: yes (BMC Geriatrics, 2023)
Background: Previous works have observed an increase in depression and other psychological disorders among nursing home residents as a consequence of coronavirus disease 2019 (COVID-19) lockdown; however, there are few studies that have performed a ...
Adriana Catarina De Souza Oliveira   +6 more
doaj   +1 more source

What Can Transformers Learn In-Context? A Case Study of Simple Function Classes [PDF]

open access: yes (Neural Information Processing Systems, 2022)
In-context learning refers to the ability of a model to condition on a prompt sequence consisting of in-context examples (input-output pairs corresponding to some task) along with a new query input, and generate the corresponding output.
Shivam Garg   +3 more
semanticscholar   +1 more source
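
The abstract above defines in-context learning as conditioning on a prompt of input-output demonstration pairs followed by a new query input. As a minimal sketch of that prompt format under illustrative assumptions (a simple linear function f(x) = 3x, a plain-text layout, and a hypothetical helper make_icl_prompt, none of which are the paper's exact setup):

```python
# Minimal sketch (not the paper's setup): build an in-context prompt from
# input-output demonstration pairs, leaving the query output blank for the
# model to complete.
import random

def make_icl_prompt(num_examples: int = 4, slope: float = 3.0, query_x: float = 5.0) -> str:
    """Return a prompt of (input, output) demonstrations for f(x) = slope * x."""
    lines = []
    for _ in range(num_examples):
        x = random.randint(0, 9)
        lines.append(f"input: {x} output: {slope * x:g}")
    lines.append(f"input: {query_x:g} output:")  # the query the model must complete
    return "\n".join(lines)

if __name__ == "__main__":
    print(make_icl_prompt())
```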

Transformers learn in-context by gradient descent [PDF]

open access: yes (International Conference on Machine Learning, 2022)
At present, the mechanisms of in-context learning in Transformers are not well understood and remain mostly an intuition. In this paper, we suggest that training Transformers on auto-regressive objectives is closely related to gradient-based meta ...
J. Oswald   +6 more
semanticscholar   +1 more source
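
The entry above relates in-context learning in Transformers to gradient-based meta-learning. As a rough illustration of the kind of update involved, the sketch below runs plain gradient descent on a least-squares loss over in-context (x, y) pairs; the data, loss, learning rate, and the helper gd_step_on_context are illustrative assumptions, not the paper's construction of how a forward pass implements such updates.

```python
# Rough sketch: one gradient-descent step on a least-squares loss computed over
# the in-context (x, y) pairs; repeated steps fit the context examples.
import numpy as np

def gd_step_on_context(xs: np.ndarray, ys: np.ndarray, w: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One gradient-descent step of w on 0.5 * mean((xs @ w - ys)^2)."""
    residual = xs @ w - ys             # prediction errors on the context set
    grad = xs.T @ residual / len(ys)   # gradient of the mean squared error
    return w - lr * grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = rng.normal(size=(8, 3))       # 8 in-context inputs with 3 features
    w_true = np.array([1.0, -2.0, 0.5])
    ys = xs @ w_true
    w = np.zeros(3)
    for _ in range(50):
        w = gd_step_on_context(xs, ys, w)
    print(w)                           # approaches w_true on this context
```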

In-context Learning and Induction Heads [PDF]

open access: yes (arXiv.org, 2022)
"Induction heads"are attention heads that implement a simple algorithm to complete token sequences like [A][B] ... [A] ->[B]. In this work, we present preliminary and indirect evidence for a hypothesis that induction heads might constitute the mechanism ...
Catherine Olsson   +25 more
semanticscholar   +1 more source
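
The completion rule described above, filling in [A][B] ... [A] with [B], can be mimicked by a simple backwards lookup over the token sequence. The sketch below is only a toy illustration of that rule (the helper induction_completion is hypothetical); actual induction heads realise it through attention patterns inside a Transformer rather than an explicit scan.

```python
# Toy sketch of the induction-head completion rule: when the current token [A]
# appeared earlier in the sequence, predict the token [B] that followed it.
from typing import Optional

def induction_completion(tokens: list[str]) -> Optional[str]:
    """Predict the next token by copying what followed the most recent earlier
    occurrence of the final token; return None if there is no earlier match."""
    current = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):  # scan the prefix backwards
        if tokens[i] == current:
            return tokens[i + 1]              # the token that followed [A] before
    return None

if __name__ == "__main__":
    print(induction_completion(["A", "B", "C", "A"]))  # -> "B"
```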
