Results 291 to 300 of about 13,997,719 (325)
Some of the following articles may not be open access.
Compressing Large Language Models using Low Rank and Low Precision Decomposition
Neural Information Processing Systems
The prohibitive sizes of Large Language Models (LLMs) today make it difficult to deploy them on memory-constrained edge devices. This work introduces CALDERA, a new post-training LLM compression algorithm that harnesses the inherent low-rank ...
R. Saha +4 more
semanticscholar +1 more source
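The abstract above combines a low-precision part with a low-rank part. As a rough sketch of that general idea (not the CALDERA algorithm itself; the uniform quantizer and the rank are arbitrary choices here), one can quantize a matrix and then correct the quantization residual with a truncated SVD:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weight matrix with a decaying spectrum (trained weights are
# often approximately low rank plus noise).
U, _ = np.linalg.qr(rng.standard_normal((64, 64)))
V, _ = np.linalg.qr(rng.standard_normal((64, 64)))
s = 1.0 / (1.0 + np.arange(64.0))
W = (U * s) @ V.T

def quantize(M, levels=16):
    """Uniform scalar quantization to a small number of levels."""
    lo, hi = M.min(), M.max()
    step = (hi - lo) / (levels - 1)
    return lo + step * np.round((M - lo) / step)

Q = quantize(W)                      # low-precision part
R = W - Q                            # quantization residual
Ur, sr, Vr = np.linalg.svd(R)
k = 8
L = (Ur[:, :k] * sr[:k]) @ Vr[:k]    # rank-k correction of the residual

err_q = np.linalg.norm(W - Q)        # quantization alone
err_ql = np.linalg.norm(W - (Q + L)) # quantization + low-rank correction
print(err_ql < err_q)
```

Since the rank-k truncation is the best rank-k approximation of the residual, the combined error is strictly smaller whenever the residual has rank above k.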
Mixture-of-Subspaces in Low-Rank Adaptation
Conference on Empirical Methods in Natural Language Processing
In this paper, we introduce a subspace-inspired Low-Rank Adaptation (LoRA) method, which is computationally efficient, easy to implement, and readily applicable to large language, multimodal, and diffusion models. Initially, we equivalently decompose the ...
Taiqiang Wu +3 more
semanticscholar +1 more source
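For context, the plain LoRA update that this paper builds on (the mixture-of-subspaces variant further decomposes the adapter; only vanilla LoRA is sketched here) adds a scaled low-rank product to a frozen weight:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 32, 4, 8.0

W = rng.standard_normal((d, d))         # frozen pretrained weight
A = 0.01 * rng.standard_normal((r, d))  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Base path plus the low-rank update, scaled by alpha / r as in LoRA.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((1, d))
# With B zero-initialized the adapter starts inactive:
# the output equals the base model's output.
assert np.allclose(lora_forward(x), x @ W.T)
```

Only A and B (2 * d * r parameters) are trained, instead of the d * d entries of W.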
Loki: Low-Rank Keys for Efficient Sparse Attention
Neural Information Processing Systems
Inference on large language models (LLMs) can be expensive in terms of the compute and memory costs involved, especially when long sequence lengths are used.
Prajwal Singhania +4 more
semanticscholar +1 more source
Low-rank physical model recovery from low-rank signal approximation
2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017
This work presents a mathematical approach for recovering a physical model from a low-rank approximation of measured data obtained via the singular value decomposition (SVD). The general form of a low-rank physical model of the data is often known, so the presented approach learns the proper rotation and scaling matrices from the singular vectors and ...
Charles Ethan Hayes +2 more
openaire +1 more source
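The low-rank approximation that this entry starts from is the standard SVD truncation. A minimal sketch (the rank-3 signal and noise level here are invented for illustration) of recovering a low-rank signal from noisy measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical "measured data": a rank-3 signal plus small noise.
signal = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
data = signal + 0.01 * rng.standard_normal((50, 40))

U, s, Vt = np.linalg.svd(data, full_matrices=False)
k = 3
low_rank = (U[:, :k] * s[:k]) @ Vt[:k]   # rank-k SVD truncation

# The truncation recovers the signal to roughly the noise level.
rel_err = np.linalg.norm(low_rank - signal) / np.linalg.norm(signal)
print(rel_err)
```

The paper's contribution then operates on the factors U and Vt, learning rotations and scalings that map them onto a physically meaningful parametrization.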
Robust Kernel Low-Rank Representation
IEEE Transactions on Neural Networks and Learning Systems, 2016
Recently, low-rank representation (LRR) has shown promising performance in many real-world applications such as face clustering. However, LRR may not achieve satisfactory results when dealing with the data from nonlinear subspaces, since it is originally designed to handle the data from linear subspaces in the input space.
Shijie Xiao +3 more
openaire +2 more sources
Low-rank revealing UTV decompositions
Numerical Algorithms, 1997
Much attention has been paid to the UTV decompositions of a high-rank matrix; however, only a little to the low-rank case. Low-rank matrices arise when a small number of parameters suffices to describe a system. The high-rank revealing algorithms are not suited for such problems, and hence there is a need for algorithms which handle the low-rank ...
Ricardo D. Fierro +1 more
openaire +2 more sources
Low Rank Solution of Lyapunov Equations
SIAM Journal on Matrix Analysis and Applications, 2002
The Cholesky factor-alternating direction implicit algorithm is presented to compute a low-rank approximation to the solution X of the Lyapunov equation AX + XA^T = -BB^T with large matrix A and right-hand side of low rank. The algorithm requires only matrix-vector products and linear solvers.
Jing-Rebecca Li, Jacob White
openaire +2 more sources
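The phenomenon this paper exploits can be checked on a toy problem: with a low-rank right-hand side, the Lyapunov solution itself is numerically low rank. The sketch below solves the equation densely via Kronecker vectorization (the CF-ADI algorithm of the paper instead builds a tall factor Z with X approximately Z Z^T without ever forming X); the matrix sizes are arbitrary test choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Stable A and a rank-1 right-hand side, as in the low-rank setting.
A = -2.0 * np.eye(n) + 0.02 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))

# Dense reference solve of A X + X A^T = -B B^T: row-major vec gives
# vec(A X) = kron(A, I) vec(X) and vec(X A^T) = kron(I, A) vec(X).
I = np.eye(n)
M = np.kron(A, I) + np.kron(I, A)
X = np.linalg.solve(M, (-B @ B.T).ravel()).reshape(n, n)

# Residual check, then inspect the spectrum of the solution.
assert np.allclose(A @ X + X @ A.T, -B @ B.T)
s = np.linalg.svd(X, compute_uv=False)
print(np.sum(s > 1e-8 * s[0]), "numerical rank out of", n)
```

The singular values of X decay rapidly, which is exactly why a low-rank factored iteration using only matrix-vector products can be accurate while the dense Kronecker solve above costs O(n^6).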
2014
This note reviews differential equations on manifolds of matrices or tensors of low rank. They serve to approximate, in a low-rank format, large time-dependent matrices and tensors that are either given explicitly via their increments or are unknown solutions of differential equations.
openaire +1 more source
2019
A principal components analysis models high-dimensional data points with an accurate, low-dimensional model. Now form a data matrix from the approximate points. This data matrix must have low rank (because the model is low dimensional) and it must be close to the original data matrix (because the model is accurate). This suggests modelling data with a ...
openaire +1 more source
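The "low rank and close to the data" view in the abstract above is made precise by the Eckart-Young theorem: the SVD truncation is the closest rank-k matrix, with spectral-norm error equal to the (k+1)-th singular value. A quick numerical check (the data matrix and rank are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((100, 20))   # hypothetical centered data matrix

U, s, Vt = np.linalg.svd(data, full_matrices=False)
k = 5
model = (U[:, :k] * s[:k]) @ Vt[:k]     # closest rank-k matrix (Eckart-Young)

# Spectral-norm error of the best rank-k approximation is sigma_{k+1};
# the Frobenius error is the root sum of the remaining squared singular values.
spec_err = np.linalg.norm(data - model, 2)
assert np.isclose(spec_err, s[k])
fro_err = np.linalg.norm(data - model)
assert np.isclose(fro_err, np.sqrt(np.sum(s[k:] ** 2)))
```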
Nonlinearly Structured Low-Rank Approximation
2014
Polynomially structured low-rank approximation problems occur in algebraic curve fitting, e.g., conic section fitting, subspace clustering (generalized principal component analysis), and nonlinear and parameter-varying system identification.
Ivan Markovsky, Konstantin Usevich
openaire +2 more sources
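The conic-fitting example mentioned in the abstract can be sketched in one standard formulation (not necessarily the paper's): lift each point (x, y) to (x^2, xy, y^2, x, y, 1); points on a conic make the lifted data matrix rank deficient, and its null vector gives the conic coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 50)
x, y = np.cos(t), np.sin(t)            # noiseless points on the unit circle

# Veronese lift: points on a conic satisfy a linear relation among these
# monomials, so the lifted matrix has rank at most 5 out of 6.
D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
_, s, Vt = np.linalg.svd(D)
c = Vt[-1]                             # right singular vector of smallest s

assert s[-1] < 1e-10                   # exact rank deficiency (no noise)
c = c / c[0]                           # normalize the x^2 coefficient to 1
print(np.round(c, 6))                  # coefficients of x^2 + y^2 - 1 = 0
```

With noisy points the lifted matrix is only approximately rank deficient, which is precisely the structured low-rank approximation problem the paper studies: the low-rank matrix must keep the polynomial (Veronese) structure, so plain SVD truncation is no longer exact.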

