Results 61 to 70 of about 807,232 (206)
Bayesian Information Extraction Network
Dynamic Bayesian networks (DBNs) offer an elegant way to integrate various aspects of language in one model. Many existing algorithms developed for learning and inference in DBNs are applicable to probabilistic language modeling.
Peshkin, Leonid; Pfeffer, Avi
core +5 more sources
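As a toy illustration of the snippet above, the simplest dynamic Bayesian network is a hidden Markov model, whose forward algorithm computes sequence likelihoods exactly. The states, probabilities, and observations below are made up for the sketch; the brute-force sum over hidden paths is included only to show the two computations agree.

```python
import itertools

# Toy 2-state HMM, the simplest dynamic Bayesian network:
# hidden states emitting symbols from a two-word "vocabulary".
states = [0, 1]
init = [0.6, 0.4]                      # P(s_0)
trans = [[0.7, 0.3], [0.4, 0.6]]       # P(s_t | s_{t-1})
emit = [[0.5, 0.5], [0.1, 0.9]]        # P(obs | state), obs in {0, 1}

def forward_likelihood(obs):
    """Sequence likelihood via the forward algorithm."""
    alpha = [init[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * trans[r][s] for r in states) * emit[s][o]
                 for s in states]
    return sum(alpha)

def brute_force_likelihood(obs):
    """Same quantity by summing over every hidden-state path."""
    total = 0.0
    for path in itertools.product(states, repeat=len(obs)):
        p = init[path[0]] * emit[path[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= trans[path[t - 1]][path[t]] * emit[path[t]][obs[t]]
        total += p
    return total

print(forward_likelihood([1, 0, 1]))  # matches brute_force_likelihood([1, 0, 1])
```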
Reliability is critical for complex engineering systems. Traditionally, reliability analysis and fault diagnosis of complex engineering systems are based on reliability block diagrams and fault trees.
Shen Chen +4 more
doaj +1 more source
A Bayesian Approach to Learning Bayesian Networks with Local Structure [PDF]
Recently several researchers have investigated techniques for using data to learn Bayesian networks containing compact representations for the conditional probability distributions (CPDs) stored at each node. The majority of this work has concentrated on using decision-tree representations for the CPDs.
arxiv
The Relevance of Bayesian Layer Positioning to Model Uncertainty in Deep Bayesian Active Learning [PDF]
One of the main challenges of deep learning tools is their inability to capture model uncertainty. While Bayesian deep learning can be used to tackle the problem, Bayesian neural networks often require more time and computational power to train than deterministic networks.
arxiv
An Empirical-Bayes Score for Discrete Bayesian Networks
Bayesian network structure learning is often performed in a Bayesian setting, by evaluating candidate structures using their posterior probabilities for a given data set.
Scutari, Marco
core
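The posterior scoring mentioned above rests on the marginal likelihood of a candidate structure. A minimal sketch for a single discrete node is the Bayesian-Dirichlet score under a symmetric Dirichlet prior; the counts and the prior strength below are illustrative, not from the paper.

```python
from math import lgamma, exp

def bd_score(counts, alpha=1.0):
    """Log marginal likelihood of a discrete node's observed state
    counts under a symmetric Dirichlet(alpha) prior per state."""
    n = sum(counts)
    a_total = alpha * len(counts)
    score = lgamma(a_total) - lgamma(a_total + n)
    for n_k in counts:
        score += lgamma(alpha + n_k) - lgamma(alpha)
    return score

# Binary node observed once in state 0 and 3 times in state 1:
print(exp(bd_score([1, 3])))  # → 0.05
```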
Bayesian networks provide an elegant formalism for representing and reasoning about uncertainty using probability theory. They are a probabilistic extension of propositional logic and, hence, inherit some of the limitations of propositional logic, such as the difficulty of representing objects and relations.
arxiv
On the Relative Expressiveness of Bayesian and Neural Networks [PDF]
A neural network computes a function. A central property of neural networks is that they are "universal approximators": for a given continuous function, there exists a neural network that can approximate it arbitrarily well, given enough neurons (and some additional assumptions).
arxiv
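A concrete special case of the universal-approximation claim above: one-hidden-layer ReLU networks compute piecewise-linear functions, and some targets they represent exactly rather than approximately. The two-neuron network below computes |x| via the identity |x| = relu(x) + relu(-x); the weights are the obvious hand-picked ones, not learned.

```python
def relu(z):
    """Rectified linear unit."""
    return max(0.0, z)

def abs_net(x):
    """One hidden layer, two ReLU neurons, computing |x| exactly:
    hidden weights (+1, -1), output weights (1, 1), zero biases."""
    h1 = relu(1.0 * x)
    h2 = relu(-1.0 * x)
    return 1.0 * h1 + 1.0 * h2

print(abs_net(-2.5))  # → 2.5
```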
A review of word-sense disambiguation methods and algorithms: Introduction
The word-sense disambiguation task is a classification task, where the goal is to predict the meaning of words and phrases with the help of surrounding text.
Tatiana Kaushinis +14 more
doaj +1 more source
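The "classification with the help of surrounding text" framing above can be sketched with a toy Lesk-style disambiguator: pick the sense whose signature words overlap most with the context. The senses and signature sets below are invented for illustration, not drawn from any real lexicon.

```python
# Hypothetical sense inventory for the ambiguous word "bank".
SENSES = {
    "bank/finance": {"money", "loan", "deposit", "account"},
    "bank/river":   {"water", "shore", "fishing", "mud"},
}

def disambiguate(context_words):
    """Return the sense with the largest signature/context overlap."""
    context = set(w.lower() for w in context_words)
    return max(SENSES, key=lambda sense: len(SENSES[sense] & context))

print(disambiguate("she opened an account at the bank".split()))
# → bank/finance
```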
A Bayesian method for the induction of probabilistic networks from data [PDF]
Gregory F. Cooper, Edward H. Herskovits
openalex +1 more source
Bayesian topology identification of linear dynamic networks [PDF]
In networks of dynamic systems, one challenge is to identify the interconnection structure on the basis of measured signals. Inspired by a Bayesian approach in [1], in this paper, we explore a Bayesian model selection method for identifying the connectivity of networks of transfer functions, without the need to estimate the dynamics.
arxiv
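Bayesian model selection of the kind the abstract above describes compares candidate structures by their marginal likelihoods. The sketch below is a generic Bayes-factor illustration on coin-flip data, not the paper's method: a fixed-parameter model against one whose parameter is integrated over a uniform prior, with equal prior model probabilities.

```python
from math import comb

def marginal_fixed(heads, tails, p=0.5):
    """Marginal likelihood of the data under a fixed-parameter model."""
    return p**heads * (1 - p)**tails

def marginal_uniform(heads, tails):
    """Marginal likelihood with p integrated over a Uniform(0,1) prior:
    integral of p^h (1-p)^t dp = h! t! / (h + t + 1)!."""
    n = heads + tails
    return 1.0 / ((n + 1) * comb(n, heads))

h, t = 7, 3
m1, m2 = marginal_fixed(h, t), marginal_uniform(h, t)
posterior_m1 = m1 / (m1 + m2)   # equal prior model probabilities
print(round(posterior_m1, 3))   # → 0.563
```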