Results 101 to 110 of about 226,203
PARA: Parameter-Efficient Fine-tuning with Prompt-Aware Representation Adjustment
accepted by ACL ...
Liu, Zequan +4 more
openaire +2 more sources
Biofabrication aims at providing innovative technologies and tools for the fabrication of tissue‐like constructs for tissue engineering and regenerative medicine applications. By integrating multiple biofabrication technologies, such as 3D (bio)printing with fiber fabrication methods, it would be more realistic to reconstruct native tissue's ...
Waseem Kitana +2 more
wiley +1 more source
Pre-trained foundation models, trained on large-scale datasets, have demonstrated significant success in a variety of downstream vision tasks. Parameter-efficient fine-tuning (PEFT) methods aim to adapt these foundation models to new domains by updating ...
Jiuyu Zhang, Fan Lei, Xijian Fan
doaj +1 more source
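The PEFT recipe this snippet describes (adapt a frozen foundation model to a new domain by updating only a small set of parameters) can be illustrated with a minimal PyTorch sketch; the toy backbone, feature dimension, and class count below are placeholder assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained vision backbone; in practice this would be
# e.g. a ViT loaded from a checkpoint.
backbone = nn.Sequential(nn.Linear(768, 768), nn.GELU(), nn.Linear(768, 768))

# Freeze every pretrained parameter.
for p in backbone.parameters():
    p.requires_grad = False

# The only trainable parameters: a small task head for the new domain.
head = nn.Linear(768, 10)  # hypothetical 10-class downstream task

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

x = torch.randn(4, 768)              # batch of precomputed image features
labels = torch.randint(0, 10, (4,))

logits = head(backbone(x))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()                      # gradients flow only into the head
optimizer.step()
```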
GeoLoRA: Geometric integration for parameter efficient fine-tuning
Low-Rank Adaptation (LoRA) has become a widely used method for parameter-efficient fine-tuning of large-scale, pre-trained neural networks. However, LoRA and its extensions face several challenges, including the need for rank adaptivity, robustness, and computational efficiency during the fine-tuning process. We introduce GeoLoRA, a novel approach that ...
Schotthöfer, Steffen +4 more
openaire +2 more sources
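For readers unfamiliar with the LoRA baseline that GeoLoRA builds on: the pre-trained weight stays frozen and a trainable rank-r update B @ A is added on top. A minimal sketch with a fixed rank follows (GeoLoRA's rank adaptivity is not shown; the dimensions are illustrative):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus a trainable low-rank update B @ A (rank r)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        out_f, in_f = base.weight.shape
        # Standard LoRA init: A small random, B zero, so the update starts at 0.
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(512, 512), r=8)
y = layer(torch.randn(2, 512))   # only A and B receive gradients
```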
Quantum Emitters in Hexagonal Boron Nitride: Principles, Engineering and Applications
Quantum emitters in hexagonal boron nitride have emerged as a promising candidate for quantum information science. This review examines the fundamentals of these quantum emitters, including their level structures, defect engineering, and their possible chemical structures.
Thi Ngoc Anh Mai +8 more
wiley +1 more source
Background. Building on previous research, this study explores Large Language Models (LLMs), with an emphasis on fine-tuning and assessing LLaMA-3.1 for instructional tasks. LLaMA-3.1, which is a new-generation model and has ...
Bohdan Pavlyshenko, Ivan Bulka
doaj +1 more source
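As a concrete illustration of this kind of LLaMA fine-tuning, the sketch below uses the Hugging Face transformers and peft libraries; the checkpoint id and LoRA hyperparameters are assumptions for illustration, not the study's actual configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-3.1-8B"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # a common choice for LLaMA-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the full model
```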
Multi-modal parameter-efficient fine-tuning via graph neural network
With the advent of the era of foundation models, pre-training and fine-tuning have become common paradigms. Recently, parameter-efficient fine-tuning has garnered widespread attention due to its better balance between the number of learnable parameters and performance.
Bin Cheng, Jiaxuan Lu
openaire +2 more sources
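The snippet does not spell out the architecture, so the following is only a speculative sketch of the general idea (not the paper's method): per-modality adapter outputs treated as graph nodes and fused by one step of message passing, with all dimensions and the dense adjacency chosen arbitrarily.

```python
import torch
import torch.nn as nn

class ModalityGraphAdapter(nn.Module):
    """Speculative sketch: mix per-modality features with one graph step."""
    def __init__(self, dim: int, n_modalities: int):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        # Dense adjacency over modality nodes, row-normalized.
        adj = torch.ones(n_modalities, n_modalities)
        self.register_buffer("adj", adj / adj.sum(-1, keepdim=True))

    def forward(self, nodes):           # nodes: (batch, n_modalities, dim)
        messages = self.msg(nodes)      # per-node transform
        mixed = self.adj @ messages     # aggregate neighbor messages
        return nodes + mixed            # residual keeps frozen features intact

adapter = ModalityGraphAdapter(dim=256, n_modalities=3)  # e.g. image/text/audio
fused = adapter(torch.randn(4, 3, 256))
```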
This study presents novel anti‐counterfeiting tags with multilevel security features that utilize additional disguise features. They combine luminescent nanosized Ln‐MOFs with conductive polymers into multifunctional mixed‐matrix membranes and powder composites. The materials exhibit visible/NIR emission and matrix‐based conductivity even as black bodies.
Moritz Maxeiner +9 more
wiley +1 more source
Efficient Adaptation: Enhancing Multilingual Models for Low-Resource Language Translation
This study focuses on neural machine translation for the Turkish-English (TR-EN) pair, which is considered low-resource. We investigated fine-tuning strategies for pre-trained language models.
Ilhami Sel, Davut Hanbay
doaj +1 more source
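A minimal sketch of such fine-tuning with the Hugging Face transformers library, assuming the publicly available Helsinki-NLP Marian TR-EN checkpoint and a toy sentence pair (the study's actual model and data are not specified in the snippet):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "Helsinki-NLP/opus-mt-tr-en"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# Tokenize a Turkish source with its English target as labels.
batch = tokenizer(["Merhaba dünya"], text_target=["Hello world"],
                  return_tensors="pt", padding=True)
loss = model(**batch).loss        # standard seq2seq cross-entropy
loss.backward()
# In real training the optimizer is created once, outside the loop.
torch.optim.AdamW(model.parameters(), lr=2e-5).step()
```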
Parameter Efficient Fine-tuning via Explained Variance Adaptation
Foundation models (FMs) are pre-trained on large-scale datasets and then fine-tuned for a specific downstream task. The most common fine-tuning method is to update pre-trained weights via low-rank adaptation (LoRA). Existing initialization strategies for LoRA often rely on singular value decompositions (SVDs) of gradients or weight matrices.
Paischer, Fabian +5 more
openaire +2 more sources
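To make the SVD-based initialization concrete: one common variant (a generic sketch, not EVA's specific method) factors the pre-trained weight itself and seeds the LoRA matrices with its top-r singular directions.

```python
import torch

W = torch.randn(512, 512)                 # stand-in for a pretrained weight
r = 8
U, S, Vh = torch.linalg.svd(W, full_matrices=False)

# Split the top-r component between the two LoRA factors.
B = U[:, :r] * S[:r].sqrt()               # (out_features, r)
A = S[:r].sqrt().unsqueeze(1) * Vh[:r]    # (r, in_features)
residual = W - B @ A                      # frozen remainder of the weight

# A and B become the trainable LoRA parameters; the layer computes
# residual @ x + B @ (A @ x), which equals W @ x at initialization.
print(torch.dist(W, residual + B @ A))    # ~0
```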

