Results 11 to 20 of about 8,464,096 (315)
Tuning Retinex Parameters [PDF]
Our goal is to understand how the Retinex parameters affect the predictions of the model. A simplified Retinex computation is specified in the recent MATLAB™ implementation; however, there remain several free parameters that introduce significant variability into the model’s predictions. We extend previous work on specifying these parameters.
Florian Ciurea, Brian Funt
openaire +3 more sources
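A minimal sketch of a generic single-scale Retinex computation, assuming a Gaussian surround; this is an illustration of the kind of free parameter (the surround scale sigma) the entry refers to, not the paper's MATLAB implementation.

```python
# Hypothetical single-scale Retinex sketch; sigma is an illustrative free parameter,
# not one of the parameters specified in the paper's MATLAB implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image: np.ndarray, sigma: float = 80.0) -> np.ndarray:
    """Return log(image) - log(Gaussian surround); sigma is the tunable scale."""
    image = image.astype(np.float64) + 1.0          # avoid log(0)
    surround = gaussian_filter(image, sigma=sigma)  # local illumination estimate
    return np.log(image) - np.log(surround)

# Varying sigma changes how much local contrast the output retains.
img = np.random.rand(64, 64) * 255
out_small = single_scale_retinex(img, sigma=15.0)
out_large = single_scale_retinex(img, sigma=120.0)
```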
Automatic configuration of the Cassandra database using irace [PDF]
Database systems play a central role in modern data-centered applications. Their performance is thus a key factor in the efficiency of data processing pipelines. Modern database systems expose several parameters that users and database administrators can ...
Moisés Silva-Muñoz +2 more
doaj +2 more sources
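A toy sketch of the general idea of automatic parameter configuration: declare a parameter space, benchmark candidate settings, and keep the best. This is a random-search stand-in, not the iterated racing performed by irace, and the benchmark function and parameter names are illustrative assumptions.

```python
# Toy illustration of automatic configuration by search over a declared parameter
# space; irace itself performs iterated racing (in R) -- this is only a random-search
# stand-in with a made-up benchmark function and illustrative parameter names.
import random

PARAM_SPACE = {
    "concurrent_reads": range(8, 129, 8),
    "concurrent_writes": range(8, 129, 8),
    "compaction_throughput_mb": range(16, 257, 16),
}

def benchmark(config: dict) -> float:
    """Stand-in for running a workload against the configured database."""
    return random.random()             # replace with a real throughput measurement

def random_search(budget: int = 50) -> dict:
    best, best_score = None, float("-inf")
    for _ in range(budget):
        cfg = {name: random.choice(list(vals)) for name, vals in PARAM_SPACE.items()}
        score = benchmark(cfg)
        if score > best_score:
            best, best_score = cfg, score
    return best

print(random_search())
```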
Global warming is progressing worldwide, and it is important to control greenhouse gas emissions from both an adaptation and a mitigation perspective.
Kaito Furuhashi, Takashi Nakaya
doaj +1 more source
Full Parameter Fine-tuning for Large Language Models with Limited Resources [PDF]
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) but demand massive GPU resources for training. Lowering the threshold for LLMs training would encourage greater participation from researchers, benefiting both academia ...
Kai Lv +5 more
semanticscholar +1 more source
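A hedged sketch of one way to lower memory during full-parameter training: fuse an SGD step into the backward pass so each parameter's gradient is consumed and freed as soon as it is produced, instead of keeping gradients for all parameters alive at once. This illustrates the general low-memory idea, not necessarily the paper's exact optimizer, and it assumes PyTorch 2.1+ for the post-accumulate hook.

```python
# Sketch: apply the update inside each parameter's gradient hook and free the
# gradient immediately, so full gradients for all parameters never coexist.
# Illustrative only; requires PyTorch >= 2.1 for register_post_accumulate_grad_hook.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
lr = 1e-3

def sgd_in_backward(param):
    # Fires right after this parameter's gradient has been accumulated.
    with torch.no_grad():
        param.add_(param.grad, alpha=-lr)   # step for this parameter only
    param.grad = None                        # free the gradient right away

for p in model.parameters():
    p.register_post_accumulate_grad_hook(sgd_in_backward)

x, y = torch.randn(32, 512), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                              # parameters are updated during backward
```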
Parameter-efficient fine-tuning of large-scale pre-trained language models
With the prevalence of pre-trained language models (PLMs) and the pre-training–fine-tuning paradigm, it has been continuously shown that larger models tend to yield better performance. However, as PLMs scale up, fine-tuning and storing all the parameters ...
Ning Ding +19 more
semanticscholar +1 more source
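As one concrete example of the parameter-efficient family the entry surveys, here is a minimal LoRA-style layer: the pretrained weight stays frozen and only a small low-rank delta is trained. This is a generic sketch under that assumption, not code from the paper.

```python
# Minimal LoRA-style linear layer: freeze the pretrained weight, train only a
# low-rank update (generic illustration of parameter-efficient fine-tuning).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)          # frozen pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")             # only the low-rank factors
```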
Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks [PDF]
State-of-the-art parameter-efficient fine-tuning methods rely on introducing adapter modules between the layers of a pretrained language model. However, such modules are trained separately for each task and thus do not enable sharing information across ...
Rabeeh Karimi Mahabadi +3 more
semanticscholar +1 more source
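A generic bottleneck adapter of the kind the snippet describes: a small module inserted after a transformer sub-layer and trained while the backbone stays frozen. The paper additionally generates adapter weights with a shared hypernetwork across tasks, which this sketch does not attempt to reproduce.

```python
# Generic bottleneck adapter: down-project, nonlinearity, up-project, residual.
# Illustrative only; the shared-hypernetwork weight generation is not shown.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))   # residual preserves the backbone output

h = torch.randn(2, 16, 768)                           # (batch, seq, hidden)
print(Adapter()(h).shape)
```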
AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning [PDF]
Deep neural networks have seen great success in recent years; however, training a deep model is often challenging as its performance heavily depends on the hyper-parameters used.
Krishnateja Killamsetty +6 more
semanticscholar +1 more source
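A toy illustration of the cost-saving idea behind subset-based hyper-parameter tuning: evaluate candidate settings on a small subset first and run the full training only for the winner. The paper selects that subset with gradient-based criteria; here the subset, model, and scoring function are purely illustrative.

```python
# Toy sketch: cheap hyper-parameter screening on a small data subset, then one
# full run with the best candidate. Subset selection here is random, not the
# paper's gradient-based method; the scoring function is a stand-in.
import random

def train_and_score(lr: float, data: list) -> float:
    """Stand-in for training a model with learning rate `lr` on `data`."""
    return -abs(lr - 0.01) + random.gauss(0, 0.001)     # fake score, peaked near lr=0.01

full_data = list(range(100_000))
subset = random.sample(full_data, 1_000)                # cheap proxy for the full set

candidates = [0.3, 0.1, 0.03, 0.01, 0.003, 0.001]
best_lr = max(candidates, key=lambda lr: train_and_score(lr, subset))
final_score = train_and_score(best_lr, full_data)       # single full run with the winner
print(best_lr, final_score)
```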
Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning [PDF]
The Mixture of Experts (MoE) is a widely known neural architecture where an ensemble of specialized sub-models optimizes overall performance with a constant computational cost.
Ted Zadouri +5 more
semanticscholar +1 more source
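A minimal mixture-of-experts layer with top-1 routing, illustrating the architecture the snippet describes: many expert sub-networks, but each token activates only one, so compute stays roughly constant as experts are added. This is a generic sketch; the paper's extremely parameter-efficient variant replaces full experts with much lighter ones.

```python
# Minimal top-1-routed MoE layer (generic illustration, not the paper's variant).
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    def __init__(self, hidden: int = 256, num_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(hidden, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, 4 * hidden), nn.GELU(), nn.Linear(4 * hidden, hidden))
            for _ in range(num_experts)
        )

    def forward(self, x):                                # x: (tokens, hidden)
        scores = self.gate(x).softmax(dim=-1)
        expert_idx = scores.argmax(dim=-1)               # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                out[mask] = expert(x[mask]) * scores[mask, i].unsqueeze(-1)
        return out

print(Top1MoE()(torch.randn(8, 256)).shape)
```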
A Self-Adaptive Heuristic Algorithm for Combinatorial Optimization Problems [PDF]
This paper introduces a new self-tuning mechanism for the local search heuristic used to solve combinatorial optimization problems. Parameter tuning of heuristics makes them difficult to apply, as parameter tuning is itself an optimization problem.
Cigdem Alabas-Uslu, Berna Dengiz
doaj +1 more source
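A toy local search whose single control parameter adapts from search feedback instead of being tuned by hand, illustrating the idea of self-tuning; the adaptation rule and objective here are illustrative assumptions, not the paper's specific mechanism.

```python
# Toy self-adaptive local search: the step size (the only control parameter)
# widens after improving moves and shrinks after failed ones. Illustrative only.
import random

def objective(x: float) -> float:
    return (x - 3.0) ** 2                # simple quadratic to minimize

x, step = 0.0, 1.0                       # `step` is the self-adapted parameter
best = objective(x)
for _ in range(200):
    candidate = x + random.uniform(-step, step)
    value = objective(candidate)
    if value < best:
        x, best = candidate, value
        step *= 1.1                      # progress: widen the neighborhood
    else:
        step *= 0.95                     # no progress: narrow the neighborhood
print(round(x, 3), round(best, 6))
```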
Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning [PDF]
Prompt tuning, in which a base pretrained model is adapted to each task via conditioning on learned prompt vectors, has emerged as a promising approach for efficiently adapting large language models to multiple downstream tasks. However, existing methods ...
Zhen Wang +5 more
semanticscholar +1 more source
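A generic prompt-tuning sketch matching the mechanism the snippet names: a frozen backbone is conditioned on a small set of learnable prompt vectors prepended to the input embeddings, and only the prompt is trained. The multitask prompt sharing and decomposition proposed in the paper are not shown.

```python
# Generic prompt tuning: learnable prompt vectors prepended to input embeddings,
# backbone frozen. Illustrative; the paper's multitask decomposition is omitted.
import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    def __init__(self, backbone: nn.Module, hidden: int = 768, prompt_len: int = 20):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)                        # backbone stays frozen
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, embeddings):                         # (batch, seq, hidden)
        batch = embeddings.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.backbone(torch.cat([prompts, embeddings], dim=1))

backbone = nn.TransformerEncoder(nn.TransformerEncoderLayer(768, 12, batch_first=True), 2)
model = PromptTunedEncoder(backbone)
print(model(torch.randn(4, 32, 768)).shape)                # (4, 52, 768)
```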

