Results 11 to 20 of about 8,464,096 (315)

Tuning Retinex Parameters [PDF]

open access: yesJournal of Electronic Imaging, 2004
Our goal is to understand how the Retinex parameters affect the predictions of the model. A simplified Retinex computation is specified in the recent MATLAB™ implementation; however, there remain several free parameters that introduce significant variability into the model’s predictions. We extend previous work on specifying these parameters.
Florian Ciurea, Brian Funt
openaire   +3 more sources
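
To illustrate the kind of free parameter the entry above refers to, the following is a minimal sketch of a generic single-scale Retinex computation in Python, where the Gaussian surround scale sigma is one tunable parameter. This is a common textbook formulation given for illustration only, not the simplified MATLAB implementation discussed in the paper.

# Minimal single-scale Retinex sketch (illustrative only; not the paper's
# simplified MATLAB implementation). The surround scale `sigma` is one
# example of a free parameter whose value changes the model's output.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0, eps=1e-6):
    # Work in log space; add eps to avoid log(0).
    image = image.astype(np.float64) + eps
    if image.ndim == 3:  # blur each color channel separately
        surround = np.stack(
            [gaussian_filter(image[..., c], sigma) for c in range(image.shape[-1])],
            axis=-1)
    else:
        surround = gaussian_filter(image, sigma)
    return np.log(image) - np.log(surround + eps)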

Automatic configuration of the Cassandra database using irace [PDF]

open access: yesPeerJ Computer Science, 2021
Database systems play a central role in modern data-centered applications. Their performance is thus a key factor in the efficiency of data processing pipelines. Modern database systems expose several parameters that users and database administrators can
Moisés Silva-Muñoz   +2 more
doaj   +2 more sources
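
The entry above concerns automatic configuration with irace. As a rough illustration of the general idea (searching a parameter space against a measured objective), here is a simple random-search tuner in Python. The parameter names, ranges, and benchmark function are hypothetical placeholders; this is not the irace algorithm itself.

# Illustrative random-search configurator (NOT irace): samples candidate
# configurations from a hypothetical Cassandra-like parameter space and
# keeps the one with the best measured score.
import random

PARAM_SPACE = {  # hypothetical parameters and ranges
    "concurrent_reads": range(8, 129),
    "concurrent_writes": range(8, 129),
    "compaction_throughput_mb": range(16, 257),
}

def run_benchmark(config):
    # Stand-in objective for illustration; replace with a real workload
    # that applies `config` to the database and returns ops/sec.
    return -abs(config["concurrent_reads"] - 64) - abs(config["concurrent_writes"] - 64)

def random_search(n_trials=50, seed=0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {name: rng.choice(list(space)) for name, space in PARAM_SPACE.items()}
        score = run_benchmark(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score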

Investigating the Effects of Parameter Tuning on Machine Learning for Occupant Behavior Analysis in Japanese Residential Buildings

open access: yesBuildings, 2023
Global warming is currently progressing worldwide, and it is important to control greenhouse gas emissions from the perspective of adaptation and mitigation.
Kaito Furuhashi, Takashi Nakaya
doaj   +1 more source

Full Parameter Fine-tuning for Large Language Models with Limited Resources [PDF]

open access: yesAnnual Meeting of the Association for Computational Linguistics, 2023
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) but demand massive GPU resources for training. Lowering the threshold for LLM training would encourage greater participation from researchers, benefiting both academia ...
Kai Lv   +5 more
semanticscholar   +1 more source

Parameter-efficient fine-tuning of large-scale pre-trained language models

open access: yesNature Machine Intelligence, 2023
With the prevalence of pre-trained language models (PLMs) and the pre-training–fine-tuning paradigm, it has been consistently shown that larger models tend to yield better performance. However, as PLMs scale up, fine-tuning and storing all the parameters
Ning Ding   +19 more
semanticscholar   +1 more source
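
As a hedged illustration of the parameter-efficient idea surveyed in the entry above, the sketch below freezes a pretrained layer and trains only a small low-rank update (a LoRA-style correction). It uses PyTorch and is a generic example, not the specific delta-tuning methods evaluated in the paper.

# Generic parameter-efficient fine-tuning sketch (PyTorch): freeze the
# pretrained weight and learn only a small low-rank correction down @ up.
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    def __init__(self, pretrained: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = pretrained
        for p in self.base.parameters():  # frozen backbone weights
            p.requires_grad = False
        in_f, out_f = pretrained.in_features, pretrained.out_features
        self.down = nn.Parameter(torch.randn(in_f, rank) * 0.01)  # trainable
        self.up = nn.Parameter(torch.zeros(rank, out_f))          # trainable

    def forward(self, x):
        return self.base(x) + x @ self.down @ self.up

layer = LowRankLinear(nn.Linear(768, 768))
trainable = [p for p in layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # only the low-rank factors are trained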

Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks [PDF]

open access: yesAnnual Meeting of the Association for Computational Linguistics, 2021
State-of-the-art parameter-efficient fine-tuning methods rely on introducing adapter modules between the layers of a pretrained language model. However, such modules are trained separately for each task and thus do not enable sharing information across ...
Rabeeh Karimi Mahabadi   +3 more
semanticscholar   +1 more source
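
The entry above describes adapter modules whose weights come from a shared hypernetwork. A minimal sketch of that idea in PyTorch follows; layer sizes, the task-embedding dimension, and the module names are assumptions for illustration, not the paper's exact architecture.

# Minimal sketch: one shared generator maps a learned task embedding to
# the down/up projection weights of a bottleneck adapter.
import torch
import torch.nn as nn

class HyperAdapter(nn.Module):
    def __init__(self, d_model=768, bottleneck=32, task_dim=64, n_tasks=4):
        super().__init__()
        self.task_emb = nn.Embedding(n_tasks, task_dim)  # one embedding per task
        # Shared hypernetwork: produces all adapter weights from the task embedding.
        self.generator = nn.Linear(task_dim, 2 * d_model * bottleneck)
        self.d_model, self.bottleneck = d_model, bottleneck

    def forward(self, hidden, task_id):
        w = self.generator(self.task_emb(task_id))       # flat weight vector
        d, b = self.d_model, self.bottleneck
        down = w[: d * b].view(d, b)
        up = w[d * b:].view(b, d)
        return hidden + torch.relu(hidden @ down) @ up   # residual bottleneck adapter

adapter = HyperAdapter()
h = torch.randn(2, 16, 768)                              # (batch, seq, d_model)
out = adapter(h, torch.tensor(1))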

AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning [PDF]

open access: yesNeural Information Processing Systems, 2022
Deep neural networks have seen great success in recent years; however, training a deep model is often challenging as its performance heavily depends on the hyper-parameters used.
Krishnateja Killamsetty   +6 more
semanticscholar   +1 more source

Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning [PDF]

open access: yesInternational Conference on Learning Representations, 2023
The Mixture of Experts (MoE) is a widely known neural architecture where an ensemble of specialized sub-models optimizes overall performance with a constant computational cost.
Ted Zadouri   +5 more
semanticscholar   +1 more source
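
Since the entry above describes the MoE architecture in a single sentence, here is a minimal sketch of an MoE layer with top-1 routing in PyTorch; sizes and routing details are illustrative assumptions and do not reproduce the paper's parameter-efficient variant.

# Minimal Mixture-of-Experts sketch with top-1 routing (PyTorch).
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, d_model=256, d_hidden=512, n_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts))

    def forward(self, x):                             # x: (tokens, d_model)
        gate = torch.softmax(self.router(x), dim=-1)  # (tokens, n_experts)
        weight, idx = gate.max(dim=-1)                # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                out[mask] = weight[mask, None] * expert(x[mask])
        return out

moe = TinyMoE()
tokens = torch.randn(10, 256)
print(moe(tokens).shape)                              # torch.Size([10, 256])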

A Self-Adaptive Heuristic Algorithm for Combinatorial Optimization Problems [PDF]

open access: yesInternational Journal of Computational Intelligence Systems, 2014
This paper introduces a new self-tuning mechanism for the local search heuristic for solving combinatorial optimization problems. Parameter tuning of heuristics makes them difficult to apply, as parameter tuning is itself an optimization problem.
Cigdem Alabas-Uslu, Berna Dengiz
doaj   +1 more source
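
To make the self-tuning idea concrete, a sketch follows: a local search whose single control parameter (the perturbation strength) adapts according to whether recent moves improved the solution. This is a generic illustration of self-adaptive parameter control, not the specific mechanism proposed in the paper.

# Generic self-adaptive local search sketch: the perturbation strength
# adjusts itself from feedback, so no manual parameter tuning is needed.
import random

def self_adaptive_search(cost, initial, n_iters=1000, seed=0):
    rng = random.Random(seed)
    current, best = list(initial), list(initial)
    strength = 1                                     # self-adapted parameter
    for _ in range(n_iters):
        candidate = list(current)
        for _ in range(strength):                    # apply `strength` random swaps
            i, j = rng.randrange(len(candidate)), rng.randrange(len(candidate))
            candidate[i], candidate[j] = candidate[j], candidate[i]
        if cost(candidate) < cost(current):
            current = candidate
            strength = max(1, strength - 1)          # improving: intensify
        else:
            strength = min(len(initial), strength + 1)   # stuck: diversify
        if cost(current) < cost(best):
            best = list(current)
    return best

# Toy example: sort a permutation by minimizing its number of inversions.
data = list(range(20))
random.shuffle(data)
inversions = lambda s: sum(s[i] > s[j] for i in range(len(s)) for j in range(i + 1, len(s)))
print(inversions(self_adaptive_search(inversions, data)))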

Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning [PDF]

open access: yesInternational Conference on Learning Representations, 2023
Prompt tuning, in which a base pretrained model is adapted to each task via conditioning on learned prompt vectors, has emerged as a promising approach for efficiently adapting large language models to multiple downstream tasks. However, existing methods
Zhen Wang   +5 more
semanticscholar   +1 more source
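
As the entry above defines prompt tuning in one sentence, here is a minimal sketch in PyTorch: a frozen embedding table with a small matrix of learnable prompt vectors prepended to each input. Dimensions and names are illustrative assumptions; the paper's multitask transfer procedure is not shown.

# Minimal prompt-tuning sketch (PyTorch): backbone embeddings stay frozen
# and only the prepended prompt vectors are trained.
import torch
import torch.nn as nn

class PromptedEmbedding(nn.Module):
    def __init__(self, vocab_size=30522, d_model=768, prompt_len=20):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.tok_emb.weight.requires_grad = False            # frozen backbone embeddings
        self.prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)  # trainable

    def forward(self, input_ids):                            # (batch, seq)
        tokens = self.tok_emb(input_ids)                     # (batch, seq, d_model)
        prompt = self.prompt.unsqueeze(0).expand(tokens.size(0), -1, -1)
        return torch.cat([prompt, tokens], dim=1)            # prepend prompt vectors

emb = PromptedEmbedding()
ids = torch.randint(0, 30522, (2, 16))
print(emb(ids).shape)                                        # torch.Size([2, 36, 768])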
