Results 251 to 260 of about 1,930,208 (286)
Some of the following articles may not be open access.

The Uniform Distribution as a Universal Prior

IEEE Transactions on Information Theory, 2004
In this correspondence, we discuss the properties of the uniform prior as a universal prior, i.e., a prior that induces a mutual information that is simultaneously close to the capacity for all channels. We determine bounds on the amount of the mutual information loss in using the uniform prior instead of the capacity-achieving prior. Specifically, for
N. Shulman, M. Feder
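The correspondence above bounds the gap between channel capacity and the mutual information induced by a uniform input prior. A minimal numerical sketch of that gap, using a hypothetical binary asymmetric channel and a grid search in place of a proper capacity solver such as Blahut-Arimoto (all values here are illustrative assumptions, not from the paper):

```python
import numpy as np

def mutual_information(p_x, W):
    """I(X;Y) in bits for input distribution p_x and channel matrix W[x, y]."""
    p_xy = p_x[:, None] * W              # joint distribution P(x, y)
    p_y = p_xy.sum(axis=0)               # output marginal P(y)
    # Where P(x, y) = 0 the contribution to I is 0; set the ratio to 1 there.
    ratio = np.where(p_xy > 0, p_xy / (p_x[:, None] * p_y[None, :]), 1.0)
    return float(np.sum(p_xy * np.log2(ratio)))

# Hypothetical binary asymmetric channel: crossover 0.1 one way, 0.3 the other.
W = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Mutual information induced by the uniform input prior.
I_uniform = mutual_information(np.array([0.5, 0.5]), W)

# Capacity approximated by a fine grid search over binary input priors.
grid = np.linspace(1e-6, 1 - 1e-6, 10001)
C = max(mutual_information(np.array([p, 1 - p]), W) for p in grid)

print(f"I(uniform) = {I_uniform:.4f} bits, capacity ≈ {C:.4f} bits")
```

For mildly asymmetric channels like this one the loss C − I(uniform) is small, which is the kind of behavior the correspondence quantifies.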

A Note on the Uniform Prior Distribution for Reliability

IEEE Transactions on Reliability, 1970
The uniform prior distribution is a mathematically acceptable prior distribution for reliability R(t) = exp(-λt). Certain other considerations, however, lead to the conclusion that the uniform prior distribution on R(t) should be used with extreme caution.

Uniform Priors for Impulse Responses

Econometrica, 2022
There has been a call for caution regarding the standard procedure for Bayesian inference in set‐identified structural vector autoregressions on the grounds that the common practice of using a uniform prior over the set of orthogonal matrices induces a non‐uniform prior for individual impulse responses or other quantities of interest.
Arias, Jonas E.   +2 more

DERIVING PROPER UNIFORM PRIORS FOR REGRESSION COEFFICIENTS

AIP Conference Proceedings, 2011
In problems of model comparison between competing regression models, one must take care not to use improper priors. Improper priors introduce inverse infinities in the evidence factors, which do not cancel if one proceeds to compute the posterior probabilities of models which have different numbers of regression coefficients.
N. van Erp   +4 more

Bayesian classification and feature reduction using uniform dirichlet priors

IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics), 2003
In this paper, a method of classification referred to as the Bayesian data reduction algorithm (BDRA) is developed. The algorithm is based on the assumption that the discrete symbol probabilities of each class are a priori uniformly Dirichlet distributed, and it employs a "greedy" approach (which is similar to a backward sequential feature search) for ...
R. R. Lynch, P. K. Willett
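The uniform Dirichlet assumption in the abstract above has a well-known consequence: the posterior mean of the discrete symbol probabilities is the add-one (Laplace) smoothed estimate. A minimal sketch of just that step, not of the BDRA itself:

```python
import numpy as np

def posterior_symbol_probs(counts):
    """Posterior mean of K discrete symbol probabilities under a uniform
    Dirichlet(1, ..., 1) prior: (n_k + 1) / (N + K), i.e. add-one smoothing."""
    counts = np.asarray(counts, dtype=float)
    return (counts + 1.0) / (counts.sum() + counts.size)

# Example: K = 3 symbols observed with counts 3, 0, 1 (N = 4).
probs = posterior_symbol_probs([3, 0, 1])
print(probs)  # each entry is (count + 1) / 7
```

Note that the unseen symbol still receives nonzero probability, which is what makes the uniform Dirichlet prior attractive when the user cannot confidently identify the data-generating distribution.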

Strong Inconsistency from Uniform Priors

Journal of the American Statistical Association, 1976
Behavior of this type will be called strong inconsistency. Box and Tiao [2, p. 310] might, apparently, tolerate such behavior by the simple device of ignoring (1.3), which, based on a sampling distribution, is regarded as irrelevant to the Bayesian inference. In support of this, Box and Tiao [2, p.

Bayesian Learning for Classification using a Uniform Dirichlet Prior

2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2019
In Bayesian learning, designs based on noninformative priors are appropriate when the user cannot confidently identify the data-generating distribution. While such learners cannot achieve the performance of those based on a well-matched subjective prior, they impart a robustness against poor prior selection.
Paul Rademacher, Milos Doroslovacki

Uniform priors on convex sets improve risk

Statistics & Probability Letters, 2004

On the implicit uniform BIC prior

Economics Bulletin, 2014
I show how to find the uniform prior implicit in using the Bayesian Information Criterion to consider a hypothesis about a single normally distributed parameter. The ratio of the width of the implicit prior to the standard deviation of the parameter estimate is √(2πn) for large samples.
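The √(2πn) ratio quoted in the abstract above is easy to evaluate for concrete sample sizes; a quick sketch (sample sizes chosen for illustration only):

```python
import math

def bic_implicit_prior_width_ratio(n):
    """Ratio of the BIC's implicit uniform prior width to the standard
    deviation of the parameter estimate, sqrt(2 * pi * n), per the abstract."""
    return math.sqrt(2.0 * math.pi * n)

for n in (100, 1000, 10000):
    print(f"n = {n:6d}: ratio ≈ {bic_implicit_prior_width_ratio(n):.2f}")
```

The ratio grows with n, so for large samples the implicit prior is very diffuse relative to the sampling uncertainty of the estimate.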
