Results 301 to 310 of about 1,223,755 (370)
Some of the following articles may not be open access.
Maximum Likelihood Estimation and Quasi-Maximum Likelihood Estimation
Foundations of Modern Econometrics, 2020
openaire +2 more sources
International Journal of Robust and Nonlinear Control, 2021
Variable‐gain nonlinearity is a piecewise‐linear characteristic to describe the process with different gains in different input regions. This article studies the parameter estimation issue of the input nonlinear controlled autoregressive moving average ...
Ximei Liu, Yamin Fan
semanticscholar +1 more source
Chinese Sociological Review, 2013
Advanced statistical models rely on maximum likelihood (ML) estimators to estimate unknown parameters. Given the complexity and highly technical nature of the numerical approaches embedded in ML, textbooks typically offer oversimplified descriptions of ML, omitting important details from the discussion.
openaire +1 more source
2019
This chapter recalls the basics of the estimation method consisting in maximizing the likelihood associated to the observations. The resulting estimators enjoy convenient theoretical properties, being optimal in a wide variety of situations. The maximum likelihood principle will be used throughout the next chapters to fit the supervised learning models.
Michel Denuit +2 more
openaire +2 more sources
Moment Estimators and Maximum Likelihood
Biometrika, 1958
where ∫φr²(x) P(x; θ) dx = θr, ∫φr(x) φs(x) P(x; θ) dx = 0 (r ≠ s). (2) To avoid undue complication at this stage we assume P(x; θ) is continuous throughout its range. We reconsider the restrictions on P in a subsequent section.
openaire +1 more source
International Journal of Adaptive Control and Signal Processing, 2019
For a special class of nonlinear systems (i.e., bilinear systems) with autoregressive moving average noise, this paper gives the input‐output representation of the bilinear systems through eliminating the state variables in the model. Based on the obtained ...
Meihang Li, Ximei Liu, F. Ding
semanticscholar +1 more source
2014
In Chap. 2 you learned that ordinary least squares (OLS) estimation minimizes the squared discrepancy between observed values and fitted ones. This procedure is primarily a descriptive tool, as it identifies the weights we use in our sample to best predict y from x.
openaire +2 more sources
1997
In the last chapter attention was given to the determination of the state vector ξ for given observations Y and known parameters A. In this chapter the maximum likelihood estimation of the parameters λ = (θ′, ρ′, ξ₀′)′ of an MS-VAR model is considered.
openaire +1 more source
Estimating unknown parameters in uncertain differential equation by maximum likelihood estimation
Soft Computing - A Fusion of Foundations, Methodologies and Applications, 2022
Yang Liu, Baoding Liu
semanticscholar +1 more source
1982
This chapter deals with maximum likelihood estimation based on n independent observations X1, ..., Xn from the distribution N⁻(λ, χ, ψ).
openaire +1 more source

