Results 61 to 70 of about 1,257,085

Recognize nuance when interpreting monitoring results

open access: yes, Methods in Ecology and Evolution
We recently published a study discussing the pitfalls of non‐probability sampling when selecting monitoring sites. We demonstrated that selecting sites based on abundance can often lead to biased inference, and we suggested that researchers use ...
Christopher J. W. McClure   +1 more
doaj   +1 more source

Entropy-Guided KV Caching for Efficient LLM Inference

open access: yes, Mathematics
Large language models (LLMs), built upon Transformer architectures, have demonstrated remarkable performance in a wide range of natural language processing tasks.
Heekyum Kim, Yuchul Jung
doaj   +1 more source

Hierarchical Context enabled Recurrent Neural Network for Recommendation

open access: yes, 2019
A long user history inevitably reflects the transitions of personal interests over time. Analyzing user history requires a robust sequential model to anticipate the transitions and decays of user interests.
Ji, Mingi   +3 more
core   +1 more source

ResDecode: Accelerating Large Language Models Inference via Residual Decoding Heads

open access: yes, Big Data Mining and Analytics
Large language Models (LLMs) have immense potential to enhance the capabilities of Cyber-Physical-Social Intelligence (CPSI) systems, enabling them to better engage with complex cyber, physical, and social environments.
Ziqian Zeng   +7 more
doaj   +1 more source

Edge-Ready Romanian Language Models: Training, Quantization, and Deployment

open access: yes, AI
We present RoBaseLM-S (125 M) and RoBaseLM-M (260 M), two compact Romanian decoder-only language models trained from scratch on a 4.3 B-token curated corpus.
T. A. Diac   +5 more
doaj   +1 more source

Membership Inference Attack against Long-Context Large Language Models

open access: yes
Recent advances in Large Language Models (LLMs) have enabled them to overcome their context window limitations and demonstrate exceptional retrieval and reasoning capabilities on longer contexts. Question-answering systems augmented with Long-Context Language Models (LCLMs) can automatically search massive external data and incorporate it into their ...
Wang, Zixiong   +3 more
openaire   +2 more sources

DCDRNet: Detail–Context Decoupled Representation Learning Network for Efficient Crack Segmentation

open access: yes, Algorithms
Accurate crack segmentation is critical for automated infrastructure inspection but remains challenging due to the inherent conflict between preserving fine-grained geometric details and modeling global semantic context. Existing deep learning approaches
Rihua Huang, Miaolin Feng, Yandong Hu
doaj   +1 more source

ICNet for Real-Time Semantic Segmentation on High-Resolution Images

open access: yes, 2018
We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications, yet poses the fundamental difficulty of reducing a large portion of the computation for pixel-wise label inference.
C Liu   +8 more
core   +1 more source

FlashBack: Efficient Retrieval-Augmented Language Modeling for Long Context Inference

open access: yes
Retrieval-Augmented Language Modeling (RALM), which integrates large language models (LLMs) with relevant documents from an external corpus, is a proven method for enabling an LLM to generate information beyond the scope of its pre-training corpus. Previous work that utilizes retrieved content by simply prepending it to the input incurs a high runtime cost ...
Liu, Runheng   +4 more
openaire   +2 more sources

LSTMConvSR: Joint Long–Short-Range Modeling via LSTM-First–CNN-Next Architecture for Remote Sensing Image Super-Resolution

open access: yes, Remote Sensing
The inability of existing super-resolution methods to jointly model short-range and long-range spatial dependencies in remote sensing imagery limits reconstruction efficacy.
Qiwei Zhu   +3 more
doaj   +1 more source