Results 91 to 100 of about 33,388 (259)

Parameter-Efficient Fine-Tuning with Discrete Fourier Transform

open access: yes
Accepted by ICML ...
Gao, Ziqi   +6 more
openaire   +2 more sources

Multi-modal parameter-efficient fine-tuning via graph neural network

open access: yes (Applied Intelligence)
With the advent of the era of foundation models, pre-training and fine-tuning have become common paradigms. Recently, parameter-efficient fine-tuning has garnered widespread attention for the favorable trade-off it offers between the number of learnable parameters and performance.
Bin Cheng, Jiaxuan Lu
openaire   +2 more sources

Copper‐based Materials for Photo and Electrocatalytic Process: Advancing Renewable Energy and Environmental Applications

open access: yes (Advanced Functional Materials, EarlyView)
Cu‐based catalysts, a cornerstone in advancing sustainable energy technologies, are comprehensively reviewed in this manuscript, highlighting their potential in photo‐ and electrocatalysis. The review covers metallic copper, copper oxides, copper sulfides, copper halide perovskites, copper‐based metal–organic frameworks (MOFs), and covalent organic frameworks (COFs), ...
Jéssica C. de Almeida   +16 more
wiley   +1 more source

PRISM-Med: Parameter-Efficient Robust Interdomain Specialty Model for Medical Language Tasks

open access: yes (IEEE Access)
Language Models (LMs) have shown remarkable potential in healthcare applications, yet their widespread adoption faces challenges in achieving consistent performance across diverse medical specialties while maintaining parameter efficiency.
Jieui Kang, Hyungon Ryu, Jaehyeong Sim
doaj   +1 more source

Parameter-Efficient Subspace Optimization for LLM Fine-Tuning

open access: yes
This paper develops a new perspective on parameter-efficient fine-tuning for LLMs, inspired by the classical theory of subspace minimization. We introduce a unifying framework, Parameter-Efficient Subspace Optimization (PESO), which not only recovers many existing methods such as LoRA but also bridges them with the principled algorithmic and ...
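The PESO snippet above mentions recovering LoRA as a special case of subspace optimization. As a point of reference for what such a low-rank subspace update looks like, here is a minimal LoRA-style sketch in NumPy (an illustration only; the variable names, dimensions, and rank are assumptions, not drawn from the paper): instead of updating a full d_out × d_in weight W, two small factors B (d_out × r) and A (r × d_in) are trained, so only r · (d_in + d_out) parameters are learnable.

```python
import numpy as np

d_in, d_out, r = 64, 64, 4  # rank r chosen for illustration
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01 # small random init
B = np.zeros((d_out, r))                  # zero init: adapter starts as a no-op

def adapted_forward(x):
    # Equivalent to (W + B @ A) @ x, without forming the full d_out x d_in update.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
full_params = d_out * d_in            # 4096
lora_params = r * (d_in + d_out)      # 512
print(lora_params / full_params)      # trainable fraction: 0.125
```

With B initialized to zero, the adapted model exactly matches the frozen base model at the start of fine-tuning, which is the standard LoRA design choice; training then moves weights only within the rank-r subspace spanned by the factors.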
Lou, Yuchen, Ye, Zeqi, Chen, Minshuo
openaire   +2 more sources

Engineering Porous Hollow Metal‐Poly(Heptazine Imide) Spheres: An Optimized Synthetic Strategy for Controlling Surface, Morphology, and Properties

open access: yes (Advanced Functional Materials, EarlyView)
Hollow poly(heptazine imide) spheres are prepared through a novel approach that integrates hard templating with ionothermal synthesis. This method enables precise control over surface area, pore volume, hydrophilicity, light absorption, band position, and metal composition. These tunable properties facilitate the customized design of semiconductors for ...
Lingli Ni   +10 more
wiley   +1 more source

SumLLaMA: Efficient Contrastive Representations and Fine-Tuned Adapters for Bug Report Summarization

open access: yes (IEEE Access)
In software maintenance, concise summaries of bug reports are crucial, significantly enhancing developer efficiency and ultimately improving software quality and user experience. Large language models (LLMs) have become the standard method for bug report ...
Bangmeng Xiang, Yunna Shao
doaj   +1 more source

SAM-PARSER: Fine-Tuning SAM Efficiently by Parameter Space Reconstruction

open access: yes (Proceedings of the AAAI Conference on Artificial Intelligence)
Segment Anything Model (SAM) has received remarkable attention as it offers a powerful and versatile solution for object segmentation in images. However, fine-tuning SAM for downstream segmentation tasks under different scenarios remains a challenge, as the varied characteristics of different scenarios naturally require diverse model parameter spaces.
Peng, Zelin   +4 more
openaire   +2 more sources

All‐in‐One Analog AI Hardware: On‐Chip Training and Inference with Conductive‐Metal‐Oxide/HfOx ReRAM Devices

open access: yes (Advanced Functional Materials, EarlyView)
An all‐in‐one analog AI accelerator is presented, enabling on‐chip training, weight retention, and long‐term inference acceleration. It leverages a BEOL‐integrated CMO/HfOx ReRAM array with low‐voltage operation (<1.5 V), multi‐bit capability over 32 states, low programming noise (10 nS), and near‐ideal weight transfer.
Donato Francesco Falcone   +11 more
wiley   +1 more source

Task-Agnostic Adaptive Activation Scaling Network for LLMs

open access: yes (IEEE Access)
The advent of Large Language Models (LLMs) has revolutionized Natural Language Processing (NLP), offering unprecedented capabilities for understanding and generating human-like texts.
Ni Jia   +4 more
doaj   +1 more source
