Results 121 to 130 of about 339,627
LLM‐based keyword augmentation for title‐driven evidence selection: A practical approach
Abstract Keyword‐based search is widely used in digital forensic investigations, yet its effectiveness depends strongly on investigator experience, leading to inconsistent results and missed evidence. While previous studies have explored machine learning and large language models (LLMs) to address this, practical deployment is often constrained by ...
Sanghyun Yoo, Doowon Jeong
wiley +1 more source
Generalisable deep learning framework to overcome catastrophic forgetting
Generalisation across multiple tasks is a major challenge in deep learning for medical imaging applications, as it can lead to catastrophic forgetting.
Zaenab Alammar +5 more
doaj +1 more source
Alleviating Catastrophic Forgetting in Continual Learning
Machine learning has enjoyed rapid and substantial advances in the past few years. However, machine learning models cannot learn continually as humans do. Humans are continual learners: they accumulate knowledge, use previous knowledge to learn better from new experiences, and retain what they learned from earlier ones.
openaire +2 more sources
On the Dangers of Large‐Language Model Mediated Learning for Human Capital
ABSTRACT Against the dominant view in HRM concerning the value‐creating use of large language models (LLMs) in relation to Human Capital, our provocation asks whether LLMs will enhance or compromise Human Capital at work in the long run. We feel compelled to ask this question because Human Capital represents employees' accumulated learning experiences, ...
Dirk Lindebaum +2 more
wiley +1 more source
Class incremental learning (CIL) is a specific scenario in incremental learning. It aims to continuously learn new classes from the data stream, which suffers from the challenge of catastrophic forgetting.
Yan Xian, Hong Yu, Ye Wang, Guoyin Wang
doaj +1 more source
Continual Learning (CL), the ability of a model to learn new tasks without forgetting previously acquired knowledge, remains a critical challenge in artificial intelligence. This is particularly true for Vision Transformers (ViTs) that utilize Multilayer ...
Zahid Ullah, Jihie Kim
doaj +1 more source
ABSTRACT In 2019, the Dadan Archaeological Project (CNRS/RCU/AFALULA) identified a Late Antique village 1 km south of ancient Dadan in the al‐ʿUlā valley (northwest Saudi Arabia). Three excavation seasons at this site (2021–2023) have uncovered a massive building constructed in the late third or early fourth century.
Jérôme Rohmer +11 more
wiley +1 more source
Map-based experience replay: a memory-efficient solution to catastrophic forgetting in reinforcement learning
Hafez MB, Immisch T, Weber T, Wermter S.
europepmc +1 more source
What is (de)politicization and what is wrong with it?
Abstract This article attempts to clarify the meaning of (de)politicization. Politicization sometimes refers to the inappropriate intrusion of partisan loyalties in nonpolitical social domains (affective politicization). Politicization can also constitute an ideal of civic agency and energy (contestatory politicization).
Dimitrios Halikias
wiley +1 more source
With dynamically evolving indoor environments, class-incremental learning (CIL) plays a crucial role in enabling indoor localization systems to adapt to new indoor areas.
Akhil Singampalli +2 more
doaj +1 more source