Results 211 to 220 of about 3,342,957
The Expressive Power of Low-Rank Adaptation
International Conference on Learning Representations, 2023
Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method that leverages low-rank adaptation of weight matrices, has emerged as a prevalent technique for fine-tuning pre-trained models such as large language models and diffusion models ...
Yuchen Zeng, Kangwook Lee
semanticscholar +1 more source
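For readers unfamiliar with the method this paper analyzes, a minimal NumPy sketch of the LoRA update follows; the dimensions, rank, and scaling factor are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Hypothetical sizes: hidden dimension d, adapter rank r.
d, r = 64, 4
rng = np.random.default_rng(0)

W0 = rng.standard_normal((d, d))         # frozen pre-trained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized

def lora_forward(x, alpha=8.0):
    # Effective weight is W0 + (alpha / r) * B @ A; only A and B are trained.
    return x @ (W0 + (alpha / r) * (B @ A)).T

x = rng.standard_normal((2, d))
y = lora_forward(x)  # equals x @ W0.T at initialization because B == 0
```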
Rank Transformations as a Bridge between Parametric and Nonparametric Statistics
American Statistician, 1981
W. Conover, R. Iman
exaly +2 more sources
DoRA: Weight-Decomposed Low-Rank Adaptation
International Conference on Machine Learning
Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its variants have gained considerable popularity because they avoid additional inference costs.
Shih-Yang Liu +6 more
semanticscholar +1 more source
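DoRA's weight decomposition splits a pre-trained weight into a magnitude and a direction component and applies the low-rank update only to the direction. A rough NumPy sketch of that idea follows, with hypothetical dimensions and a column-wise norm assumed as the magnitude term.

```python
import numpy as np

d, r = 64, 4
rng = np.random.default_rng(0)
W0 = rng.standard_normal((d, d))                 # frozen pre-trained weight
A = rng.standard_normal((r, d)) * 0.01           # trainable low-rank factors
B = np.zeros((d, r))
m = np.linalg.norm(W0, axis=0, keepdims=True)    # trainable per-column magnitude

def dora_weight():
    # Low-rank update changes only the direction; m rescales each column.
    V = W0 + B @ A
    return m * V / np.linalg.norm(V, axis=0, keepdims=True)
```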
Rankings and Ranking Functions
Canadian Journal of Mathematics, 1981
Suppose that n competitors compete in r races and in each race they are awarded placings 1, 2, 3, …, n − 1, n. After the r races each competitor has a result consisting of his r placings. Let such a result be written (α_j)_{1 ≤ j ≤ r}, where for convenience the positive integers α_j are arranged in ascending order.
openaire +1 more source
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
International Conference on Machine Learning
Training Large Language Models (LLMs) presents significant memory challenges, predominantly due to the growing size of weights and optimizer states. Common memory-reduction approaches, such as low-rank adaptation (LoRA), add a trainable low-rank matrix ...
Jiawei Zhao +5 more
semanticscholar +1 more source
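GaLore's core idea, as the snippet describes it, is to project gradients into a low-rank subspace rather than adding low-rank adapters to the weights. A simplified NumPy sketch follows, assuming an SVD-based projector and a plain SGD step purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((512, 256))   # full-rank gradient of a weight matrix
r = 8                                  # illustrative projection rank

# Periodically recompute a rank-r projector from the gradient's left singular vectors.
U, _, _ = np.linalg.svd(G, full_matrices=False)
P = U[:, :r]                           # (512, r) projection matrix

G_low = P.T @ G                        # optimizer state lives in the (r, 256) subspace
update = 1e-3 * G_low                  # e.g. a plain SGD step in the projected space
full_update = P @ update               # project back before applying to the weight
```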
Background: Research rankings based on bibliometrics today dominate governance in academia and determine careers in universities. Method: Analytical approach to capture the incentives by users of rankings and by suppliers of rankings, both on an individual and an aggregate level. Result: Rankings may produce unintended negative side effects.
Bruno S. Frey, Margit Osterloh
openaire +3 more sources
LoRA+: Efficient Low Rank Adaptation of Large Models
International Conference on Machine Learning
In this paper, we show that Low Rank Adaptation (LoRA) as originally introduced in Hu et al. (2021) leads to suboptimal finetuning of models with large width (embedding dimension). This is due to the fact that adapter matrices A and B in LoRA are updated ...
Soufiane Hayou, Nikhil Ghosh, Bin Yu
semanticscholar +1 more source
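The fix suggested by the LoRA+ abstract is to update the adapter matrices A and B with different learning rates. A toy sketch follows, with a hypothetical learning-rate ratio and plain gradient-descent updates standing in for a real optimizer.

```python
import numpy as np

d, r = 64, 4
lr, ratio = 1e-4, 16.0                   # hypothetical base rate and B-over-A step-size ratio
rng = np.random.default_rng(0)
A = rng.standard_normal((r, d)) * 0.01   # down-projection adapter
B = np.zeros((d, r))                     # up-projection adapter, zero-initialized

def loraplus_step(A, B, grad_A, grad_B):
    # LoRA+ gives B a larger step size than A instead of one shared learning rate.
    return A - lr * grad_A, B - (lr * ratio) * grad_B

A, B = loraplus_step(A, B, np.ones_like(A), np.ones_like(B))  # dummy gradients
```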
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017
Most existing bottom-up algorithms measure the foreground saliency of a pixel or region based on its contrast within a local context or the entire image, whereas a few methods focus on segmenting out background regions and thereby salient objects.
Lihe Zhang +4 more
openaire +2 more sources
Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks
Interspeech, 2018
Time Delay Neural Networks (TDNNs), also known as one-dimensional Convolutional Neural Networks (1-d CNNs), are an efficient and well-performing neural network architecture for speech recognition.
Daniel Povey +6 more
semanticscholar +1 more source
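The factorization described here replaces a large TDNN weight matrix with the product of a small unconstrained factor and a bottleneck factor kept semi-orthogonal. A NumPy sketch follows, using an SVD projection for the semi-orthogonality constraint; the paper uses its own iterative update, so this is only an approximation of the idea with made-up sizes.

```python
import numpy as np

out_dim, in_dim, bottleneck = 512, 1024, 128
rng = np.random.default_rng(0)
A = rng.standard_normal((out_dim, bottleneck)) * 0.01    # unconstrained factor
B = rng.standard_normal((bottleneck, in_dim)) * 0.01     # semi-orthogonal factor

def reorthogonalize(B):
    # Project B back onto the semi-orthogonal manifold (B @ B.T = I) via SVD.
    U, _, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ Vt

B = reorthogonalize(B)
W = A @ B   # effective low-rank layer weight of shape (out_dim, in_dim)
```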
Learning a Low Tensor-Train Rank Representation for Hyperspectral Image Super-Resolution
IEEE Transactions on Neural Networks and Learning Systems, 2019
Hyperspectral images (HSIs) with high spectral resolution typically have only low spatial resolution. On the contrary, multispectral images (MSIs) with much lower spectral resolution can be obtained at higher spatial resolution.
Renwei Dian, Shutao Li, Leyuan Fang
semanticscholar +1 more source
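Tensor-train (TT) ranks, which this paper constrains for HSI–MSI fusion, come from decomposing a multi-way array into a chain of small cores. A generic TT-SVD sketch on a toy 3-way tensor follows; the tensor, ranks, and decomposition routine are illustrative assumptions, not the paper's fusion algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 8, 8))   # toy 3-way tensor standing in for an HSI patch
r1 = r2 = 4                           # hypothetical TT-ranks

# Sequential truncated SVDs yield three TT cores G1, G2, G3.
U1, S1, V1t = np.linalg.svd(X.reshape(8, 64), full_matrices=False)
G1 = U1[:, :r1]                                           # (8, r1) first core
rest = (np.diag(S1[:r1]) @ V1t[:r1]).reshape(r1 * 8, 8)
U2, S2, V2t = np.linalg.svd(rest, full_matrices=False)
G2 = U2[:, :r2].reshape(r1, 8, r2)                        # (r1, 8, r2) middle core
G3 = np.diag(S2[:r2]) @ V2t[:r2]                          # (r2, 8) last core
```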

