Results 61 to 70 of about 517,396
Few-Shot Class-Incremental Learning [PDF]
The ability to incrementally learn new classes is crucial to the development of real-world artificial intelligence systems. In this paper, we focus on a challenging but practical few-shot class-incremental learning (FSCIL) problem. FSCIL requires CNN models to incrementally learn new classes from very few labelled samples, without forgetting the ...
Tao, Xiaoyu +5 more
openaire +2 more sources
Screening Routine Clinical Notes for Epilepsy Surgery Candidates Using Large Language Models
ABSTRACT Objective Epilepsy surgery is severely underutilized despite proven efficacy, with substantial under‐referral of eligible patients in routine clinical practice. This study evaluated the potential role of large language models (LLMs) as decision‐support tools for screening unstructured clinical notes to identify epilepsy surgery candidates and ...
Uriel Fennig +9 more
wiley +1 more source
Hybrid attentive prototypical network for few-shot action recognition
Most previous few-shot action recognition works tend to process temporal and spatial video features separately, resulting in insufficient extraction of comprehensive features.
Zanxi Ruan +3 more
doaj +1 more source
Plain Template Insertion: Korean-Prompt-Based Engineering for Few-Shot Learners
Prompt-based learning is a method that enables language models to interpret natural language by recalling previously acquired knowledge and the training objective.
Jaehyung Seo +7 more
doaj +1 more source
Few-Shot Bayesian Imitation Learning with Logical Program Policies
Humans can learn many novel tasks from a very small number (1--5) of demonstrations, in stark contrast to the data requirements of nearly tabula rasa deep learning methods.
Allen, Kelsey R. +4 more
core +1 more source
What Do Large Language Models Know About Materials?
If large language models (LLMs) are to be used inside the material discovery and engineering process, they must be benchmarked for the accuracy of their intrinsic material knowledge. The current work introduces 1) a reasoning process through the processing–structure–property–performance chain and 2) a tool for benchmarking knowledge of LLMs concerning ...
Adrian Ehrenhofer +2 more
wiley +1 more source
Graph representation learning has attracted tremendous attention due to its remarkable performance in many real-world applications. However, prevailing supervised graph representation learning models for specific tasks often suffer from the label sparsity issue, as data labeling is always time- and resource-consuming.
Zhang, Chuxu +6 more
openaire +2 more sources
Additive manufacturing provides precise control over the placement of continuous fibres within polymer matrices, enabling customised mechanical performance in composite components. This article explores processing strategies, mechanical testing, and modelling approaches for additively manufactured continuous fibre‐reinforced composites.
Cherian Thomas, Amir Hosein Sakhaei
wiley +1 more source
CLIP-Driven Prototype Network for Few-Shot Semantic Segmentation
Recent research has shown that visual–text pretrained models perform well in traditional vision tasks. CLIP, as the most influential work, has garnered significant attention from researchers.
Shi-Cheng Guo +4 more
doaj +1 more source
A Meta-Learning Approach for Custom Model Training
Transfer-learning and meta-learning are two effective methods to apply knowledge learned from large data sources to new tasks. In few-class, few-shot target task settings (i.e.
Abrishami, Mohammad Saeed +3 more
core +1 more source