Results 1 to 10 of about 12,418,444
Several recent empirical studies demonstrate that important machine learning tasks, such as training deep neural networks, exhibit a low-rank structure, where most of the variation in the loss function occurs in only a few directions of the input space ...
Romain Cosson +4 more
doaj +2 more sources
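As a rough illustration of the low-rank structure mentioned in the entry above, the sketch below stacks per-batch gradients of a small network and inspects their singular-value spectrum; the toy model, the random data, and the choice of gradient SVD as the diagnostic are assumptions for illustration, not the paper's method, and "input space" is read here as the space the loss is defined on.

```python
# Hypothetical diagnostic: check whether per-batch gradients of a small MLP
# concentrate in a few directions (an assumed proxy for "low-rank structure").
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.MSELoss()

grads = []
for _ in range(32):                       # 32 random mini-batches
    x, y = torch.randn(128, 20), torch.randn(128, 1)
    model.zero_grad()
    loss_fn(model(x), y).backward()
    g = torch.cat([p.grad.flatten() for p in model.parameters()])
    grads.append(g)

G = torch.stack(grads)                    # (num_batches, num_params)
s = torch.linalg.svdvals(G)               # singular values of the gradient matrix
explained = (s ** 2).cumsum(0) / (s ** 2).sum()
print("directions needed for 90% of gradient energy:",
      int((explained < 0.9).sum()) + 1)
```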
Neural oscillation in low-rank SNNs: bridging network dynamics and cognitive function [PDF]
Neural oscillations, particularly gamma oscillations, are fundamental to cognitive processes such as attention, perception, and decision-making. Experimental studies have shown that the phase of gamma oscillations modulates neuronal response selectivity ...
Bin Li +5 more
doaj +2 more sources
Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models [PDF]
Public large-scale text-to-image diffusion models, such as Stable Diffusion, have gained significant attention from the community. These models can be easily customized for new concepts using low-rank adaptations (LoRAs).
Yuchao Gu +12 more
semanticscholar +1 more source
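The entry above notes that diffusion models can be customized for new concepts with LoRAs; the sketch below shows only the naive baseline of merging several independently trained low-rank updates into one base weight, which is an assumption for illustration and not the decentralized fusion scheme Mix-of-Show proposes.

```python
# Naive multi-LoRA merge into a single base weight matrix (assumed baseline,
# not the Mix-of-Show fusion method); all tensors are random stand-ins.
import torch

d_out, d_in, rank = 64, 64, 4
W_base = torch.randn(d_out, d_in)

# Two "concept" LoRAs, each a pair (B, A) with B @ A as the low-rank update.
lora_1 = (torch.randn(d_out, rank) * 0.01, torch.randn(rank, d_in))
lora_2 = (torch.randn(d_out, rank) * 0.01, torch.randn(rank, d_in))

def merge(W, loras, scales):
    """Add each scaled low-rank update B @ A onto the frozen base weight."""
    W_merged = W.clone()
    for (B, A), s in zip(loras, scales):
        W_merged += s * (B @ A)
    return W_merged

W_multi = merge(W_base, [lora_1, lora_2], scales=[0.8, 0.8])
print(W_multi.shape)
```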
Suppressing the spontaneous combustion of lignite is of great significance for its safe transportation and efficient utilization. Taking Shengli lignite as the research object, two different macerals, inertinite and huminite, were selected by ...
Yanjun WANG +7 more
doaj +1 more source
LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning [PDF]
The low-rank adaptation (LoRA) method can greatly reduce the number of trainable parameters for fine-tuning large language models (LLMs); however, it still requires expensive activation memory to update the low-rank weights.
Longteng Zhang +4 more
semanticscholar +1 more source
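To make the activation-memory point in the entry above concrete, here is a minimal LoRA-style linear layer in PyTorch; the `freeze_A` flag sketches my reading of the LoRA-FA idea (keep the down-projection A fixed so only the small r-dimensional projection, not the full input activation, is needed for B's gradient). The hyperparameters and the freeze behaviour are assumptions, not the paper's exact recipe.

```python
# Minimal LoRA linear layer: frozen base weight plus a scaled low-rank update
# (alpha / rank) * B @ A.  With freeze_A=True only B is trained (LoRA-FA-style).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d_in, d_out, rank=8, alpha=16, freeze_A=True):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)                  # frozen pretrained weights
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero-init: no change at start
        self.scale = alpha / rank
        if freeze_A:
            self.A.requires_grad_(False)             # LoRA-FA: keep A fixed

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(768, 768)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # only B
```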
QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models [PDF]
Recent years have witnessed rapid development of large language models (LLMs). Despite their strong ability in many language-understanding tasks, the heavy computational burden largely restricts the application of LLMs, especially when one needs to ...
Yuhui Xu +8 more
semanticscholar +1 more source
DyLoRA: Parameter-Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation [PDF]
With the ever-growing size of pretrained models (PMs), fine-tuning them has become more expensive and resource-hungry. As a remedy, low-rank adapters (LoRA) keep the main pretrained weights of the model frozen and just introduce some learnable truncated ...
Mojtaba Valipour +3 more
semanticscholar +1 more source
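The entry above describes the standard LoRA setup (frozen pretrained weights plus learnable truncated low-rank modules); the sketch below illustrates one way a "dynamic" adapter could be trained, sampling a rank b at each step and using only the first b components, so the adapter can later be truncated to any rank. The sampling scheme and details are assumptions in the spirit of DyLoRA, not its exact procedure.

```python
# Rank-sampling sketch: draw a rank b each step and use only the first b rows
# of A and columns of B, so one adapter supports many ranks after training.
import torch

d_in, d_out, r_max = 768, 768, 8
A = torch.randn(r_max, d_in) * 0.01
B = torch.zeros(d_out, r_max)

def lora_delta(x, b):
    """Low-rank update using only the first b components of the adapter."""
    return x @ A[:b].T @ B[:, :b].T

x = torch.randn(4, d_in)
b = int(torch.randint(1, r_max + 1, (1,)))   # sampled rank for this step
print(lora_delta(x, b).shape)
```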
Robust Recovery of Subspace Structures by Low-Rank Representation [PDF]
In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as ...
Guangcan Liu +5 more
semanticscholar +1 more source
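For context on the entry above, the low-rank representation (LRR) approach to subspace clustering is commonly written as the following nuclear-norm-regularized self-representation program (standard formulation from the LRR literature; the exact variant used in the paper may differ):

```latex
% Commonly cited LRR objective; \lambda is a trade-off weight.
\min_{Z,\,E} \; \|Z\|_{*} \;+\; \lambda \,\|E\|_{2,1}
\qquad \text{s.t.} \qquad X = XZ + E
```

Here the nuclear norm encourages a low-rank representation Z, and the l2,1 norm on E models sample-specific outliers; an affinity matrix such as |Z*| + |Z*|^T is then typically passed to spectral clustering to recover the subspaces.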
LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation [PDF]
Transformer models have achieved remarkable results in various natural language tasks, but they are often prohibitively large, requiring massive memory and computational resources. To reduce the size and complexity of these models, we propose LoSparse (
Yixiao Li +6 more
semanticscholar +1 more source
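As a generic illustration of the low-rank plus sparse idea named in the entry above, the sketch below approximates a weight matrix by a truncated SVD plus the largest-magnitude entries of the residual; the rank, sparsity level, and pruning criterion are illustrative assumptions and not LoSparse's actual training procedure.

```python
# Generic low-rank + sparse approximation of a weight matrix: truncated SVD
# plus a sparse residual keeping only the largest-magnitude entries.
import torch

torch.manual_seed(0)
W = torch.randn(512, 512)

rank, keep_frac = 16, 0.01
U, S, Vh = torch.linalg.svd(W, full_matrices=False)
W_lowrank = U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank]

residual = W - W_lowrank
k = int(keep_frac * residual.numel())                    # entries to keep
thresh = residual.abs().flatten().kthvalue(residual.numel() - k).values
W_sparse = torch.where(residual.abs() > thresh, residual,
                       torch.zeros_like(residual))

approx = W_lowrank + W_sparse
print("relative error:", (torch.norm(W - approx) / torch.norm(W)).item())
```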
Research Progress on Electro-Chemical Oxidation of Low-Rank Coal to Humic Acid
[Introduction] With the vigorous development of renewable energy, the clean and efficient utilization of low-rank coal not only improves resource utilization and economic value but also has great social significance.
Ruzhan BAI +6 more
doaj +1 more source

