Results 101 to 110 of about 199,140 (151)
Some of the following articles may not be open access.
Robust Maximum Likelihood Estimation
INFORMS Journal on Computing, 2019
In many applications, statistical estimators serve to derive conclusions from data, for example, in finance, medical decision making, and clinical trials. However, the conclusions are typically dependent on uncertainties in the data. We use robust optimization principles to provide robust maximum likelihood estimators that are protected against data ...
Dimitris Bertsimas, Omid Nohadani
openaire +2 more sources
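The snippet above only names the approach; as a hedged illustration of the general robust-optimization idea (not necessarily the exact formulation used by Bertsimas and Nohadani), a robust maximum likelihood estimator can be written as a max-min problem over an uncertainty set \(\mathcal{U}\) around the recorded observations:

\[ \hat{\theta}_{\mathrm{robust}} = \arg\max_{\theta}\; \min_{(\tilde{x}_1,\dots,\tilde{x}_n) \in \mathcal{U}} \; \sum_{i=1}^{n} \log f(\tilde{x}_i; \theta), \]

where \(f(\cdot\,;\theta)\) is the assumed density and \(\mathcal{U}\) limits how far the perturbed observations \(\tilde{x}_i\) may deviate from the data actually recorded.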
Chinese Sociological Review, 2013
Advanced statistical models rely on maximum likelihood (ML) estimators to estimate unknown parameters. Given the complexity and highly technical nature of the numerical approaches embedded in ML, textbooks typically offer oversimplified descriptions of ML, omitting important details from the discussion.
openaire +1 more source
2019
This chapter recalls the basics of the estimation method that consists of maximizing the likelihood associated with the observations. The resulting estimators enjoy convenient theoretical properties, being optimal in a wide variety of situations. The maximum likelihood principle will be used throughout the next chapters to fit the supervised learning models.
Michel Denuit +2 more
openaire +2 more sources
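As a minimal sketch of the maximum likelihood principle this chapter describes (the data and starting values below are illustrative assumptions, not taken from the book), the following Python snippet fits the mean and standard deviation of a normal distribution by numerically minimizing the negative log-likelihood:

# Maximum likelihood for a normal sample: minimize the negative log-likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=200)        # simulated observations (assumption)

def neg_log_likelihood(params):
    mu, log_sigma = params                           # optimize log(sigma) so sigma stays positive
    return -np.sum(norm.logpdf(x, loc=mu, scale=np.exp(log_sigma)))

result = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)                             # close to the sample mean and sample sd

The same recipe carries over to the supervised learning models mentioned in the abstract: write down the log-likelihood implied by the model and hand its negative to a numerical optimizer.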
Moment Estimators and Maximum Likelihood
Biometrika, 1958
where \(\int q_r^2(x)\, P(x;\theta)\, dx = \theta_r\), \(\int q_r(x)\, q_s(x)\, P(x;\theta)\, dx = 0\) \((r \neq s)\). (2) To avoid undue complication at this stage we assume \(P(x;\theta)\) is continuous throughout its range. We reconsider the restrictions on P in a subsequent section.
openaire +1 more source
2014
In Chap. 2 you learned that ordinary least squares (OLS) estimation minimizes the squared discrepancy between observed values and fitted ones. This procedure is primarily a descriptive tool, as it identifies the weights we use in our sample to best predict y from x.
openaire +2 more sources
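As a minimal sketch of the OLS idea described above (with toy data that are purely illustrative), the weights that minimize the squared discrepancy between observed and fitted values can be computed with a standard least-squares solver:

# Ordinary least squares: choose beta to minimize ||y - X @ beta||^2.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=100)
y = 3.0 + 0.5 * x + rng.normal(scale=1.0, size=100)  # true intercept 3.0, slope 0.5 (assumption)

X = np.column_stack([np.ones_like(x), x])            # design matrix with an intercept column
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)     # minimizes the residual sum of squares
print(beta_hat)                                      # approximately [3.0, 0.5]

Under normally distributed errors these OLS weights coincide with the maximum likelihood estimates, which connects this descriptive tool to the likelihood-based estimation discussed in the surrounding results.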
1997
In the last chapter attention was given to the determination of the state vector ξ for given observations Y and known parameters A. In this chapter the maximum likelihood estimation of the parameters \(\lambda = (\theta', \rho', \xi_0')'\) of an MS-VAR model is considered.
openaire +1 more source
1982
This chapter deals with maximum likelihood estimation based on n independent observations \(X_1, \dots, X_n\) from the distribution \(N^{-}(\lambda, \chi, \Psi)\).
openaire +1 more source
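Assuming \(N^{-}(\lambda, \chi, \Psi)\) denotes the generalized inverse Gaussian distribution (the notation suggests this, but the snippet itself does not say), its density on \(x > 0\) is

\[ f(x; \lambda, \chi, \Psi) = \frac{(\Psi/\chi)^{\lambda/2}}{2 K_{\lambda}(\sqrt{\chi\Psi})}\, x^{\lambda-1} \exp\!\left(-\tfrac{1}{2}\bigl(\chi x^{-1} + \Psi x\bigr)\right), \]

where \(K_{\lambda}\) is the modified Bessel function of the third kind; the maximum likelihood estimates then maximize \(\sum_{i=1}^{n} \log f(X_i; \lambda, \chi, \Psi)\) jointly over the three parameters.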
1971
It is possible to develop a number of systems of estimation and nowhere does this seem to be more true than for the estimation of genetic crossover fractions. Several of these fail dismally because of inaccuracy and inefficiency (Fisher and Balmukand, 1928).
openaire +1 more source

