Results 321 to 330 of about 6,198,727 (341)
Some of the following articles may not be open access.
Strong Logic for Weak Memory: Reasoning About Release-Acquire Consistency in Iris (Artifact)
Dagstuhl Artifacts Ser., 2017
This artifact provides the soundness proofs for the encodings in Iris of the RSL and GPS logics, as well as verifications of all standard examples known to be verifiable in those logics.
Jan-Oliver Kaiser +4 more
semanticscholar +1 more source
On the Strong Time Consistency of the Core
Automation and Remote Control, 2018
Time consistency is one of the desirable properties for any solution of a cooperative dynamic game. If a solution is time-consistent, the players have no reason to break the cooperative agreement. In this paper, we consider the core as the solution and establish conditions for its strong time consistency.
openaire +1 more source
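The core the abstract builds on is defined by two static conditions: efficiency, and no coalition being able to improve on the allocation. A minimal sketch (not from the paper; the game values and the `in_core` helper are illustrative assumptions) that checks these conditions for a small transferable-utility game:

```python
from itertools import combinations

def in_core(n, v, alloc):
    """Check the (static) core conditions for an n-player TU game:
    efficiency, plus no coalition S with sum(alloc over S) < v(S).
    `v` maps frozensets of players to coalition values."""
    if abs(sum(alloc) - v[frozenset(range(n))]) > 1e-9:
        return False                       # not efficient
    for r in range(1, n):
        for S in combinations(range(n), r):
            if sum(alloc[i] for i in S) < v[frozenset(S)] - 1e-9:
                return False               # coalition S would deviate
    return True

# Symmetric 3-player game: singletons earn 0, pairs 0.5, the grand coalition 1.
v = {frozenset(S): {1: 0.0, 2: 0.5, 3: 1.0}[len(S)]
     for r in range(1, 4) for S in combinations(range(3), r)}
print(in_core(3, v, (1/3, 1/3, 1/3)))   # equal split satisfies every coalition
print(in_core(3, v, (1.0, 0.0, 0.0)))   # pair {1, 2} gets 0 < 0.5, so not in core
```

The paper's question of *strong time* consistency concerns how such core allocations behave along the trajectory of a dynamic game, which this static check does not capture.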
, 2013
We establish strong consistency of the least squares estimates in multiple regression models, discarding the usual assumption that the errors have null mean value. Instead, we require them to be i.i.d. with absolute moment of order r, 0 < r < 1.
J. Lita da Silva, J. Mexia
semanticscholar +1 more source
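The claim that least squares remains consistent when the errors are not centered can be checked numerically. A minimal simulation sketch (illustrative only, and not the paper's setting: it uses Gaussian errors, which have all moments) — with an intercept in the design, the error mean is absorbed by the intercept and the slope estimate still converges:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
x = rng.uniform(-1, 1, n)
# Errors with nonzero mean (0.5): the usual zero-mean assumption fails.
e = rng.standard_normal(n) + 0.5
y = 2.0 + 3.0 * x + e

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
# The slope estimate stays close to 3.0; the fitted intercept
# absorbs the error mean (close to 2.0 + 0.5 = 2.5).
print(beta)
```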
Spatial autoregression model: strong consistency
Statistics & Probability Letters, 2003
Let (α̂_n, β̂_n) denote the Gauss–Newton estimator of the parameter (α, β) in the autoregression model Z_{i,j} = αZ_{i−1,j} + βZ_{i,j−1} − αβZ_{i−1,j−1} + e_{i,j}. It is shown in an earlier paper that when α = β = 1, {n^{3/2}(α̂_n − α, β̂_n − β)} converges in distribution to a bivariate normal random vector.
G.D. Richardson +3 more
openaire +2 more sources
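The model factors as (1 − αB_row)(1 − βB_col)Z = e, so for |α|, |β| < 1 an ordinary least squares regression of Z_{i,j} on its three neighbours already recovers (α, β, −αβ) on simulated data. The sketch below is illustrative (the grid size, parameters, and the plain OLS fit are assumptions; it does not implement the paper's Gauss–Newton estimator):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_field(n, alpha, beta, sigma=1.0):
    """Simulate Z[i,j] = a*Z[i-1,j] + b*Z[i,j-1] - a*b*Z[i-1,j-1] + e[i,j]
    row by row, with zero boundary values."""
    Z = np.zeros((n + 1, n + 1))
    e = sigma * rng.standard_normal((n + 1, n + 1))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            Z[i, j] = (alpha * Z[i - 1, j] + beta * Z[i, j - 1]
                       - alpha * beta * Z[i - 1, j - 1] + e[i, j])
    return Z

Z = simulate_field(200, alpha=0.6, beta=0.3)

# Regress each interior Z[i,j] on its three lagged neighbours.
y = Z[1:, 1:].ravel()
X = np.column_stack([Z[:-1, 1:].ravel(),    # Z[i-1, j]
                     Z[1:, :-1].ravel(),    # Z[i, j-1]
                     Z[:-1, :-1].ravel()])  # Z[i-1, j-1]
coef = np.linalg.lstsq(X, y, rcond=None)[0]
print(coef)  # close to (alpha, beta, -alpha*beta) = (0.6, 0.3, -0.18)
```

The unit-root case α = β = 1 treated in the abstract is exactly where this naive regression's behavior becomes delicate and the n^{3/2} rate appears.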
Strong cache consistency in integration systems
2010 International Conference on Mechanical and Electrical Technology, 2010
Nowadays, a common problem that many corporations and organizations face is how to access multiple, disparate information sources easily and uniformly, including databases, object stores, knowledge bases, file systems, digital libraries, information retrieval systems, and electronic mail systems.
Xiong Yongchun, Zhang Wei
openaire +2 more sources
On strong Hellinger consistency of posterior distributions
Journal of Nonparametric Statistics, 2012
We establish a sufficient condition ensuring strong Hellinger consistency of posterior distributions. We also prove a strong Hellinger consistency theorem for the pseudoposterior distributions based on the likelihood ratio with power ...
Bo Ranneby, Yang Xing
openaire +2 more sources
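For reference, the Hellinger distance in which this consistency is stated is H(p, q) = (1/√2)·‖√p − √q‖₂. A minimal sketch for discrete distributions (the continuous case in the paper replaces the sum with an integral over densities):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability vectors:
    H(p, q) = sqrt(0.5 * sum_i (sqrt(p_i) - sqrt(q_i))**2), in [0, 1]."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

print(hellinger([0.5, 0.5], [0.5, 0.5]))  # identical distributions: 0.0
print(hellinger([1.0, 0.0], [0.0, 1.0]))  # disjoint supports: 1.0
```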
Strong consistency of factorial K-means clustering
, 2013
Factorial k-means (FKM) clustering is a method for clustering objects in a low-dimensional subspace. The advantage of this method is that the partition of objects and the low-dimensional subspace reflecting the cluster structure are obtained ...
Y. Terada
semanticscholar +1 more source
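FKM alternates between a k-means step in the projected space XA and an update of the orthonormal loadings A; given the cluster memberships, the optimal A spans the eigenvectors of X'(I − P_U)X with smallest eigenvalues, where P_U projects onto the cluster-indicator space. The sketch below follows that alternating scheme but is an illustrative reconstruction under these assumptions, not Terada's algorithm or any part of the consistency proof:

```python
import numpy as np

rng = np.random.default_rng(2)

def fkm(X, k, q, n_iter=50):
    """Minimal factorial k-means sketch: alternate a k-means sweep in the
    projected space with an eigen-update of the orthonormal loadings A."""
    n, p = X.shape
    A = np.linalg.qr(rng.standard_normal((p, q)))[0]   # random orthonormal start
    labels = rng.integers(0, k, n)
    for _ in range(n_iter):
        Y = X @ A                                      # project to q dimensions
        # Centroid / assignment step (one k-means sweep in the subspace).
        M = np.stack([Y[labels == c].mean(axis=0) if np.any(labels == c)
                      else Y[rng.integers(n)] for c in range(k)])
        labels = np.argmin(((Y[:, None, :] - M[None]) ** 2).sum(-1), axis=1)
        # Loadings step: q eigenvectors of X'(I - P_U)X with smallest eigenvalues.
        P = np.zeros((n, n))
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size:
                P[np.ix_(idx, idx)] = 1.0 / idx.size   # projector onto indicators
        S = X.T @ (np.eye(n) - P) @ X
        _, V = np.linalg.eigh(S)                       # eigenvalues ascending
        A = V[:, :q]
    return labels, A

# Toy data: 3 tight clusters living in the first 2 of 5 dimensions.
base = np.array([[0, 0], [5, 0], [0, 5]])[np.repeat(np.arange(3), 30)]
X = np.hstack([base + 0.2 * rng.standard_normal((90, 2)),
               rng.standard_normal((90, 3))])
labels, A = fkm(X, k=3, q=2)
```

This naive version keeps the full n × n projector for clarity and inherits FKM's known sensitivity to initialization; in practice multiple restarts would be used.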
On the rate of strong consistency of Lorenz curves
Statistics & Probability Letters, 1997
Assuming the finiteness of only the second moment, we prove that the LIL (law of the iterated logarithm) for Lorenz curves holds, provided that the underlying distribution function and its inverse are continuous. The proof is crucially based on a limit theorem for the general Vervaat process.
Miklós Csörgő, Ričardas Zitikis
openaire +2 more sources
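The Lorenz curve maps the lowest fraction p of a population to its share of the total. A minimal sketch of the empirical version the strong consistency statements are about (the helper name and example values are illustrative):

```python
import numpy as np

def lorenz_curve(x):
    """Empirical Lorenz curve evaluated at 0, 1/n, ..., 1:
    L(k/n) = (sum of the k smallest values) / (total sum)."""
    s = np.sort(np.asarray(x, dtype=float))
    cum = np.cumsum(s)
    return np.concatenate([[0.0], cum / cum[-1]])

L = lorenz_curve([1, 1, 2, 4])   # total = 8
print(L)                          # [0.0, 0.125, 0.25, 0.5, 1.0]
```

The LIL in the abstract gives the almost-sure rate at which this empirical curve approaches the population Lorenz curve.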
Strong, Balanced, and Consistent Lattices
1991
Faigle [1980b] introduced the notion of a strong join-irreducible in lattices of finite length (see Definition 16.2 below). Using the concepts a′ and a+ (see Section 10), we present two extensions of Faigle's concept of a strong join-irreducible, namely the notion of a strong element and the notion of a strict element.
openaire +2 more sources
Strong universal consistency of neural network classifiers
IEEE Transactions on Information Theory, 1993
In statistical pattern recognition, a classifier is called universally consistent if its error probability converges to the Bayes risk as the size of the training data grows, for all possible distributions of the random variable pair of the observation vector and its class. It is proven that if a one-layered neural network with a properly chosen number of ...
Gábor Lugosi, András Faragó
openaire +2 more sources