Results 1 to 10 of about 13,789,012

The interpretation of computational model parameters depends on the context [PDF]

open access: yes · eLife, 2022
Reinforcement Learning (RL) models have revolutionized the cognitive and brain sciences, promising to explain behavior from simple conditioning to complex problem solving, to shed light on developmental and individual differences, and to anchor cognitive …
Maria Katharina Eckstein   +5 more
doaj   +6 more sources
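
To make the notion of "computational model parameters" in the entry above concrete, here is a minimal Q-learning sketch with a learning rate and a softmax inverse temperature; the parameter names and values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def q_learning_step(q_values, action, reward, alpha=0.3):
    """One Q-learning update: move the chosen action's value toward the reward.
    alpha is the learning rate, a typical free parameter fit to behavior."""
    q_values = q_values.copy()
    q_values[action] += alpha * (reward - q_values[action])
    return q_values

def softmax_policy(q_values, beta=5.0):
    """Choice probabilities from Q-values; beta is the inverse temperature."""
    logits = beta * (q_values - q_values.max())  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Illustrative two-armed bandit: the agent just took action 0 and received reward 1.
q = np.zeros(2)
q = q_learning_step(q, action=0, reward=1.0)
print(softmax_policy(q))  # action 0 is now preferred
```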

Investigating Context Parameters in Technology Term Recognition [PDF]

open access: yes · Proceedings of the COLING Workshop on Synchronic and Diachronic Approaches to Analyzing Technical Language, 2014
We propose and evaluate the task of technology term recognition: a method to extract technology terms at a synchronic level from a corpus of scientific publications. The proposed method is built on the principles of terminology extraction and distributional semantics. It is realized as a regression task in a vector space model.
Behrang QasemiZadeh   +1 more
openaire   +2 more sources
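
The entry above frames term recognition as a regression task in a vector space model. As a rough illustration of that idea (not the authors' exact features or model), a candidate term's vector-space features can be mapped to a termhood score with a simple regressor; the toy features and scikit-learn's Ridge are assumed stand-ins.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy vector-space features for candidate terms (e.g., distributional context
# statistics reduced to a few dimensions) and gold "termhood" scores.
X_train = np.array([
    [0.9, 0.1, 0.7],   # "neural network"
    [0.2, 0.8, 0.1],   # "the results"
    [0.8, 0.2, 0.6],   # "support vector machine"
])
y_train = np.array([1.0, 0.0, 1.0])  # 1 = technology term, 0 = not

model = Ridge(alpha=1.0).fit(X_train, y_train)

X_new = np.array([[0.85, 0.15, 0.65]])  # features for a new candidate term
print(model.predict(X_new))  # higher score -> more likely a technology term
```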

A history of masculinities in South Africa: Context and parameters

open access: yes · Contree, 1999
The history of masculinity is a relatively new genre into which South African academics have begun to move. Gender identity and identity formation have received growing attention worldwide since the 1980s.
Ron Viney
doaj   +4 more sources

Influence of Context on Item Parameters in Forced-Choice Personality Assessments [PDF]

open access: yes · Educational and Psychological Measurement, 2016
A fundamental assumption in computerized adaptive testing is that item parameters are invariant with respect to context—items surrounding the administered item. This assumption, however, may not hold in forced-choice (FC) assessments, where explicit comparisons are made between items included in the same block. We empirically examined the influence of …
Yin Lin, Anna Brown
openaire   +3 more sources

Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning [PDF]

open access: yes · Neural Information Processing Systems, 2022
Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously-unseen task without any gradient-based training by feeding a small number of training examples as part of the input.
Haokun Liu   +6 more
semanticscholar   +1 more source
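
For readers unfamiliar with the in-context learning (ICL) setup this paper compares against, here is a minimal sketch of a few-shot prompt built from labeled examples; the task, examples, and prompt format are illustrative assumptions.

```python
# Minimal few-shot in-context learning prompt: the "training" happens purely
# in the input text, with no gradient updates to the model.
examples = [
    ("The movie was a delight.", "positive"),
    ("I want my money back.", "negative"),
]
test_input = "The plot dragged but the acting was superb."

prompt = "".join(f"Review: {x}\nSentiment: {y}\n\n" for x, y in examples)
prompt += f"Review: {test_input}\nSentiment:"

print(prompt)
# The prompt would then be fed to a pre-trained language model, which is
# expected to continue with "positive" or "negative".
```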

Generative Multimodal Models are In-Context Learners [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2023
The human ability to easily solve multimodal tasks in context (i.e., with only a few demonstrations or simple instructions) is what current multimodal systems have largely struggled to imitate.
Quan Sun   +10 more
semanticscholar   +1 more source

Identification of E-Learning Quality Parameters in Indian Context to Make It More Effective and Acceptable [PDF]

open access: yes · Proceedings on Engineering Sciences, 2020
In India, eLearning is showing an increasing trend over the past two decades. Cost-effectiveness and learning flexibility are the key differentiators in favor of Indian eLearning progress.
Shirshendu Roy   +2 more
doaj   +1 more source

Can We Edit Factual Knowledge by In-Context Learning? [PDF]

open access: yes · Conference on Empirical Methods in Natural Language Processing, 2023
Previous studies have shown that large language models (LLMs) like GPTs store massive factual knowledge in their parameters. However, the stored knowledge could be false or outdated.
Ce Zheng   +6 more
semanticscholar   +1 more source
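
A rough sketch of what in-context knowledge editing can look like: the corrected fact is supplied in the prompt rather than written into the model's parameters. The prompt wording and the example fact are assumptions for illustration, not the authors' protocol.

```python
# In-context knowledge editing sketch: prepend the corrected fact as context
# so the model answers from the prompt instead of its (possibly outdated) weights.
new_fact = "The CEO of ExampleCorp is Jane Doe."   # hypothetical edited fact
question = "Who is the CEO of ExampleCorp?"

edited_prompt = (
    f"Fact: {new_fact}\n"
    "Answer the question using the fact above.\n"
    f"Question: {question}\nAnswer:"
)

print(edited_prompt)
# A capable LLM is expected to answer "Jane Doe" from the context,
# without any update to its stored parameters.
```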

Learning To Retrieve Prompts for In-Context Learning [PDF]

open access: yes · North American Chapter of the Association for Computational Linguistics, 2021
In-context learning is a recent paradigm in natural language understanding, where a large pre-trained language model (LM) observes a test instance and a few training examples as its input, and directly decodes the output without any update to its ...
Ohad Rubin   +2 more
semanticscholar   +1 more source
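
The entry above concerns choosing which training examples to place in the prompt. As a simplified stand-in for the learned retriever studied in the paper, the sketch below ranks candidates by TF-IDF cosine similarity to the test instance; the task and data are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_examples = [
    ("Book a table for two at 7pm.", "restaurant_booking"),
    ("What's the weather in Oslo tomorrow?", "weather_query"),
    ("Reserve a room for Friday night.", "hotel_booking"),
]
test_instance = "Can you get me a dinner reservation for tonight?"

# Rank training examples by similarity to the test instance.
texts = [x for x, _ in train_examples]
vec = TfidfVectorizer().fit(texts + [test_instance])
sims = cosine_similarity(vec.transform([test_instance]), vec.transform(texts))[0]

# Use the top-2 most similar examples as in-context demonstrations.
top = sims.argsort()[::-1][:2]
prompt = "".join(f"Input: {texts[i]}\nLabel: {train_examples[i][1]}\n\n" for i in top)
prompt += f"Input: {test_instance}\nLabel:"
print(prompt)
```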

In-Context Unlearning: Language Models as Few Shot Unlearners [PDF]

open access: yes · International Conference on Machine Learning, 2023
Machine unlearning, the study of efficiently removing the impact of specific training instances on a model, has garnered increased attention in recent years due to regulatory guidelines such as the Right to be Forgotten.
Martin Pawelczyk   +2 more
semanticscholar   +1 more source
