Results 251 to 260 of about 189,088 (295)
Some of the following articles may not be open access.

Algorithm for sparse representation minimizing mean square error of power spectrograms

2015 IEEE International Conference on Digital Signal Processing (DSP), 2015
Sparse representation approximates a target signal by a linear combination of a small number of sample signals, and it is used in various research fields. In this paper, we evaluate the approximation error of signals by the mean square error of power spectrograms (P-MSE).
Yuma Tanaka et al.

Bias Adjustment Minimizing the Asymptotic Mean Square Error

Communications in Statistics - Theory and Methods, 2013
A method of bias adjustment which minimizes the asymptotic mean square error is presented for an estimator typically given by maximum likelihood. Generally, this adjustment includes unknown population values. However, in some examples, the adjustment can be done without population values.

Steady-state error in adaptive mean-square minimization

IEEE Transactions on Information Theory, 1970
This paper considers the steady-state mean-square error when an adaptive linear estimator is used on a stationary time series. The estimator weights are adjusted periodically by moving a small increment in the direction of the estimated gradient. Under very general conditions the asymptotic mean-square error is bounded and under more restrictive ...

Supermodular mean squared error minimization for sensor scheduling in optimal Kalman Filtering

2017 American Control Conference (ACC), 2017
We consider the problem of scheduling a set of sensors to observe the state of a discrete-time linear system subject to a limited energy budget. Our goal is to devise a sensor schedule that minimizes the mean squared error (MSE) of an optimal estimator (i.e., the Kalman Filter). Both the minimum-MSE and the minimum-cardinality optimal sensor scheduling
Prince Singh et al.

Minimization of mean-square error for data transmitted via group codes

IEEE Transactions on Information Theory, 1969
We show how to find solutions to the problem considered by Mitryayev [1] in the case where the loss power function is quadratic. This problem is to minimize mean-square error when digital data is represented by group code combinations and the a priori probability distribution is uniform.
Crimmins, T. R. et al.

An iterative approach to minimize the mean squared error in ridge regression

Computational Statistics, 2015
Wong, Ka Yiu, Chiu, Sung Nok

Error Entropy and Mean Square Error Minimization Algorithms for Neural Identification of Supercritical Extraction Process

2008 10th Brazilian Symposium on Neural Networks, 2008
In this paper, artificial neural networks (ANNs) are used to model an extraction process that uses a supercritical fluid as the solvent; its pilot installation is located at the Institute of Experimental and Technological Biology (IBET) in Oeiras, Lisbon, Portugal.
Rosana Paula de Oliveira Soares et al.

Proportionate-Type Normalized Least Mean Square Algorithms With Gain Allocation Motivated by Mean-Square-Error Minimization for White Input

IEEE Transactions on Signal Processing, 2011
In the past, ad hoc methods have been used to choose gains in proportionate-type normalized least mean-square algorithms without strong theoretical underpinnings. In this correspondence, a theoretical framework and motivation for adaptively choosing gains is presented, such that the mean-square error will be minimized at any given time. As a result of ...
Kevin Wagner, Milos Doroslovacki

PtNLMS algorithm obtained by minimization of mean square error modeled by exponential functions

2010 Conference Record of the Forty Fourth Asilomar Conference on Signals, Systems and Computers, 2010
Using the proportionate-type steepest descent algorithm we represent the current weight deviations in terms of initial weight deviations. Then we attempt to minimize the mean square output error with respect to the gains at a given instant. The corresponding optimal average gains are found using a water-filling procedure.
Kevin T. Wagner, Milos I. Doroslovacki

Linear model averaging by minimizing mean-squared forecast error unbiased estimator

Model Assisted Statistics and Applications, 2016
This paper presents a new ordinary least squares model averaging method which is proposed to be a preferable alternative to Mallows Model Averaging (MMA), Bayesian Model Averaging (BMA) and naïve simple forecast average. The method is developed to deal with possibly non-nested models and selects forecast weights by minimizing the unbiased estimator of ...
