Results 81 to 90 of about 33,388 (259)
GeoLoRA: Geometric integration for parameter efficient fine-tuning
Low-Rank Adaptation (LoRA) has become a widely used method for parameter-efficient fine-tuning of large-scale, pre-trained neural networks. However, LoRA and its extensions face several challenges, including the need for rank adaptivity, robustness, and computational efficiency during the fine-tuning process. We introduce GeoLoRA, a novel approach that
Schotthöfer, Steffen +4 more
openaire +2 more sources
Quantum Emitters in Hexagonal Boron Nitride: Principles, Engineering and Applications
Quantum emitters in hexagonal boron nitride have emerged as a promising candidate for quantum information science. This review examines the fundamentals of these quantum emitters, including their level structures, defect engineering, and their possible chemical structures.
Thi Ngoc Anh Mai +8 more
wiley +1 more source
Background. Building upon previous research, this study conducts an exploration into Large Language Models (LLMs), with an emphasis on the fine-tuning and assessment of LLaMA-3.1 for instructional tasks. LLaMA-3.1, which is a new generation model and has
Bohdan Pavlyshenko, Ivan Bulka
doaj +1 more source
PARA: Parameter-Efficient Fine-tuning with Prompt-Aware Representation Adjustment
accepted by ACL ...
Liu, Zequan +4 more
openaire +2 more sources
This study presents novel anti‐counterfeiting tags with multilevel security features that utilize additional disguise features. They combine luminescent nanosized Ln‐MOFs with conductive polymers into multifunctional mixed‐matrix membranes and powder composites. The materials exhibit visible/NIR emission and matrix‐based conductivity even as black bodies.
Moritz Maxeiner +9 more
wiley +1 more source
Efficient Adaptation: Enhancing Multilingual Models for Low-Resource Language Translation
This study focuses on the neural machine translation task for the TR-EN language pair, which is considered a low-resource language pair. We investigated fine-tuning strategies for pre-trained language models.
Ilhami Sel, Davut Hanbay
doaj +1 more source
Parameter Efficient Fine-tuning via Explained Variance Adaptation
Foundation models (FMs) are pre-trained on large-scale datasets and then fine-tuned for a specific downstream task. The most common fine-tuning method is to update pre-trained weights via low-rank adaptation (LoRA). Existing initialization strategies for LoRA often rely on singular value decompositions (SVD) of gradients or weight matrices.
Paischer, Fabian +5 more
openaire +2 more sources
Synchrotron Radiation for Quantum Technology
Materials and interfaces underpin quantum technologies, with synchrotron and FEL methods key to understanding and optimizing them. Advances span superconducting and semiconducting qubits, 2D materials, and topological systems, where strain, defects, and interfaces govern performance.
Oliver Rader +10 more
wiley +1 more source
Atomic Size Misfit for Electrocatalytic Small Molecule Activation
This review explores the application and mechanisms of atomic size misfit in catalysis for small molecule activation, focusing on how structural defects and electronic properties can effectively lower the energy barriers of chemical bonds in molecules like H2O, CO2, and N2.
Ping Hong +3 more
wiley +1 more source
MPVT: An Efficient Multi-Modal Prompt Vision Tracker for Visual Target Tracking
Visual target tracking is a fundamental task in computer vision. Combining multi-modal information with tracking leverages complementary cues, which improves the precision and robustness of trackers.
Jianyu Xie +6 more
doaj +1 more source