Results 281 to 290 of about 8,464,096 (315)
Some of the following articles may not be open access.
Conference on Empirical Methods in Natural Language Processing, 2023
Federated learning (FL) is a promising paradigm to enable collaborative model training with decentralized data. However, the training process of Large Language Models (LLMs) generally incurs the update of significant parameters, which limits the ...
Tianshi Che +7 more
semanticscholar +1 more source
Bridging Vision and Language Encoders: Parameter-Efficient Tuning for Referring Image Segmentation
IEEE International Conference on Computer Vision, 2023
Parameter Efficient Tuning (PET) has gained attention for reducing the number of parameters while maintaining performance and providing better hardware resource savings, but few studies investigate dense prediction tasks and interaction between ...
Zunnan Xu +5 more
semanticscholar +1 more source
A Summary of Parameter Tuning of Active Disturbance Rejection Controller
Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering), 2022
ADRC (active disturbance rejection controller) technology is a new practical technology that does not rely on the mathematical model of the controlled object and has strong robustness.
Bing-Tuan Gao +3 more
semanticscholar +1 more source
When parameter tuning actually is parameter control
Proceedings of the 13th annual conference on Genetic and evolutionary computation, 2011
In this paper, we show that sequential parameter optimization (SPO), a method that was designed for (offline) parameter tuning, can be successfully used as a controller for multistart approaches of evolutionary algorithms (EA). We demonstrate this by replacing the restart heuristic of the IPOP-CMA-ES with the SPO algorithm. Experiments on the BBOB 2010 ...
Simon Wessing +2 more
openaire +1 more source
Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning
arXiv.org
Existing research has shown that large language models (LLMs) exhibit remarkable performance in language understanding and generation. However, when LLMs are continuously fine-tuned on complex and diverse domain-specific downstream tasks, the inference ...
Weijieying Ren +4 more
semanticscholar +1 more source
International Journal of Web Services Research, 2019
The QoS-aware service composition problem has drawn great attention in recent years. As it is NP-hard, high time complexity is inevitable if global optimization algorithms (such as integer programming) are adopted. Researchers have applied various evolutionary algorithms to decrease the time complexity by searching for a near-optimum solution.
Ruilin Liu, Zhongjie Wang, Xiaofei Xu
openaire +1 more source
SplitLoRA: A Split Parameter-Efficient Fine-Tuning Framework for Large Language Models
arXiv.org
The scalability of large language models (LLMs) in handling high-complexity models and large-scale datasets has led to tremendous successes in pivotal domains.
Zheng Lin +8 more
semanticscholar +1 more source
Parameter-Efficient Fine-Tuning with Discrete Fourier Transform
International Conference on Machine Learning
Low-rank adaptation (LoRA) has recently gained much interest in fine-tuning foundation models. It effectively reduces the number of trainable parameters by incorporating low-rank matrices $A$ and $B$ to represent the weight change, i.e., $\Delta W = BA$ ...
Ziqi Gao +6 more
semanticscholar +1 more source
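The snippet above states the core LoRA mechanism this work builds on: the pretrained weight W is kept frozen and only the low-rank factors B and A are trained, so the effective weight becomes W + ΔW with ΔW = BA. A minimal NumPy sketch of that idea follows; the dimensions, rank, and initialization are illustrative assumptions, not values from the cited paper.

import numpy as np

# Minimal LoRA-style update sketch (illustrative shapes, not from the paper):
# the frozen weight W is adapted by a trainable low-rank product delta_W = B @ A.
d, k, r = 768, 768, 8              # layer dimensions and a small rank r << min(d, k)
W = np.random.randn(d, k)          # pretrained weight, frozen during fine-tuning
A = np.random.randn(r, k) * 0.01   # trainable factor A (r x k)
B = np.zeros((d, r))               # trainable factor B (d x r), zero-init so delta_W starts at 0

delta_W = B @ A                    # the weight change Delta W = BA from the abstract
x = np.random.randn(k)
y = (W + delta_W) @ x              # forward pass with the adapted weight

# Only d*r + r*k parameters are trained instead of d*k:
print((A.size + B.size) / W.size)  # about 0.02 of the full weight's parameters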
arXiv.org
This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies - full-parameter fine-tuning and parameter-efficient tuning - within the context of medical Large Language Models (LLMs).
Clément Christophe +15 more
semanticscholar +1 more source
2016
Having shown that EMAS approaches are effective in solving selected benchmark and real-life problems, it would be interesting to gain insight into the exact features of the most important mechanism of EMAS, i.e., the distributed selection based on the existence of a non-renewable resource.
Aleksander Byrski +1 more
openaire +1 more source

