
PARA: Parameter-Efficient Fine-tuning with Prompt-Aware Representation Adjustment

open access: yes | Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
accepted by ACL ...
Liu, Zequan   +4 more
openaire   +2 more sources

3D (Bio) Printing Combined Fiber Fabrication Methods for Tissue Engineering Applications: Possibilities and Limitations

open access: yes | Advanced Functional Materials, EarlyView.
Biofabrication aims at providing innovative technologies and tools for the fabrication of tissue‐like constructs for tissue engineering and regenerative medicine applications. By integrating multiple biofabrication technologies, such as 3D (bio) printing with fiber fabrication methods, it would be more realistic to reconstruct native tissue's ...
Waseem Kitana   +2 more
wiley   +1 more source

Parameter-Efficient Fine-Tuning for Individual Tree Crown Detection and Species Classification Using UAV-Acquired Imagery

open access: yes | Remote Sensing
Pre-trained foundation models, trained on large-scale datasets, have demonstrated significant success in a variety of downstream vision tasks. Parameter-efficient fine-tuning (PEFT) methods aim to adapt these foundation models to new domains by updating ...
Jiuyu Zhang, Fan Lei, Xijian Fan
doaj   +1 more source
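
The PEFT idea this entry describes, adapting a large pre-trained vision model by updating only a small number of parameters, is illustrated below in a minimal sketch: freeze a pre-trained backbone and train only a new classification head. The backbone (ResNet-50) and the class count are illustrative assumptions, not the models or data used in the paper.

```python
# Minimal PEFT sketch: freeze a pre-trained backbone, train only a small head.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

NUM_SPECIES = 12  # hypothetical number of tree species classes

model = resnet50(weights=ResNet50_Weights.DEFAULT)

# Freeze every pre-trained parameter ...
for param in model.parameters():
    param.requires_grad = False

# ... then replace the classifier head; only these weights will be updated.
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

# The optimizer sees only the trainable (head) parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```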

GeoLoRA: Geometric integration for parameter efficient fine-tuning

open access: yes
Low-Rank Adaptation (LoRA) has become a widely used method for parameter-efficient fine-tuning of large-scale, pre-trained neural networks. However, LoRA and its extensions face several challenges, including the need for rank adaptivity, robustness, and computational efficiency during the fine-tuning process. We introduce GeoLoRA, a novel approach that ...
Schotthöfer, Steffen   +4 more
openaire   +2 more sources
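
For reference, the standard LoRA update that GeoLoRA builds on augments a frozen weight W0 with a trainable low-rank product B @ A, scaled by alpha/r. The sketch below shows that generic construction; rank and scaling values are illustrative, and this is not GeoLoRA itself.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():  # W0 (and bias) stay frozen
            p.requires_grad = False
        # Low-rank factors: A is random, B is zero, so training starts at W0.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(768, 768)
y = layer(torch.randn(4, 768))  # only A and B receive gradients
```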

Quantum Emitters in Hexagonal Boron Nitride: Principles, Engineering and Applications

open access: yes | Advanced Functional Materials, EarlyView.
Quantum emitters in hexagonal boron nitride have emerged as a promising candidate for quantum information science. This review examines the fundamentals of these quantum emitters, including their level structures, defect engineering, and their possible chemical structures.
Thi Ngoc Anh Mai   +8 more
wiley   +1 more source

PARAMETER EFFICIENT FINE-TUNING AND OVERFITTING IN GPT LARGE LANGUAGE MODELS: A METRIC-BASED COMPARISON

open access: yes | Електроніка та інформаційні технології (Electronics and Information Technologies)
Background. Building upon previous research, this study explores Large Language Models (LLMs), with an emphasis on the fine-tuning and assessment of LLaMA-3.1 for instructional tasks. LLaMA-3.1, which is a new generation model and has ...
Bohdan Pavlyshenko, Ivan Bulka
doaj   +1 more source
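
A hedged sketch of the kind of setup this entry implies: LoRA fine-tuning via the Hugging Face `peft` library, plus the train/validation perplexity comparison a metric-based overfitting study would track. The checkpoint id and hyperparameters are assumptions for illustration, not the paper's configuration.

```python
import math
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical checkpoint id; substitute the model actually being studied.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a small fraction is trainable

def perplexity(model, batch):
    # `batch` holds input_ids/attention_mask; labels = inputs for a causal LM.
    with torch.no_grad():
        loss = model(**batch, labels=batch["input_ids"]).loss
    return math.exp(loss.item())

# A growing gap between train and validation perplexity over fine-tuning
# steps is the overfitting signal such a metric comparison would look for.
```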

Multi-modal parameter-efficient fine-tuning via graph neural network

open access: yes | Applied Intelligence
With the advent of the era of foundation models, pre-training and fine-tuning have become common paradigms. Recently, parameter-efficient fine-tuning has garnered widespread attention due to its better balance between the number of learnable parameters and performance.
Bin Cheng, Jiaxuan Lu
openaire   +2 more sources
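
The "balance between the number of learnable parameters and performance" mentioned above is usually reported as the trainable-parameter fraction. A small helper for computing it, with a toy module standing in for any real model:

```python
import torch.nn as nn

def trainable_fraction(model: nn.Module) -> float:
    """Share of parameters that would actually be updated by fine-tuning."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return trainable / total

# Example: a toy module with one frozen and one trainable layer.
toy = nn.Sequential(nn.Linear(10, 10), nn.Linear(10, 10))
for p in toy[0].parameters():
    p.requires_grad = False
print(f"{trainable_fraction(toy):.2%}")  # 50.00% for this toy case
```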

NanoMOF‐Based Multilevel Anti‐Counterfeiting by a Combination of Visible and Invisible Photoluminescence and Conductivity

open access: yes | Advanced Functional Materials, EarlyView.
This study presents novel anti‐counterfeiting tags with multilevel security features and additional disguise features. They combine luminescent nanosized Ln‐MOFs with conductive polymers into multifunctional mixed‐matrix membranes and powder composites. The materials exhibit visible/NIR emission and matrix‐based conductivity even as black bodies.
Moritz Maxeiner   +9 more
wiley   +1 more source

Efficient Adaptation: Enhancing Multilingual Models for Low-Resource Language Translation

open access: yes | Mathematics
This study focuses on the neural machine translation task for the TR-EN language pair, which is considered a low-resource language pair. We investigated fine-tuning strategies for pre-trained language models.
Ilhami Sel, Davut Hanbay
doaj   +1 more source
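
A hedged sketch of the fine-tuning setup such a TR-EN study implies: load a pre-trained translation model and run one supervised training step. The checkpoint name below is an assumption for illustration, not necessarily the model used in the paper.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "Helsinki-NLP/opus-mt-tr-en"  # assumed TR->EN checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

src = tokenizer(["Bu bir deneme cümlesidir."], return_tensors="pt")
tgt = tokenizer(text_target=["This is a test sentence."], return_tensors="pt")

# Seq2seq models compute the cross-entropy loss when labels are provided.
loss = model(**src, labels=tgt["input_ids"]).loss
loss.backward()  # one gradient step of full fine-tuning
```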

Parameter Efficient Fine-tuning via Explained Variance Adaptation

open access: yes
Foundation models (FMs) are pre-trained on large-scale datasets and then fine-tuned for a specific downstream task. The most common fine-tuning method is to update pre-trained weights via low-rank adaptation (LoRA). Existing initialization strategies for LoRA often rely on singular value decompositions (SVD) of gradients or weight matrices.
Paischer, Fabian   +5 more
openaire   +2 more sources
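
The SVD-based LoRA initialization the abstract refers to can be sketched generically: factor a weight matrix and keep the top-r components, those explaining the most variance, as the starting point for the adapter factors. The details below are assumptions illustrating the generic idea, not the paper's exact EVA procedure.

```python
import torch

def svd_lora_init(W: torch.Tensor, r: int):
    """Return (A, B) such that B @ A is the best rank-r approximation of W."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    B = U[:, :r] * S[:r]   # (out, r), columns scaled by singular values
    A = Vh[:r, :]          # (r, in)
    return A, B

W = torch.randn(512, 512)
A, B = svd_lora_init(W, r=16)
print(torch.linalg.matrix_rank(B @ A))  # 16: a rank-r reconstruction of W
```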
