An important step when designing an empirical study is to justify the sample size that will be collected. The key aim of a sample size justification is to explain how the collected data are expected to provide valuable information, given the inferential goals of the researcher.
openaire +3 more sources
Sample Size Considerations for Hierarchical Populations [PDF]
In applied sciences in general, and in particular when dealing with animal health and welfare assessments, one is often confronted with the collection of correlated (non-independent) data.
European Food Safety Authority
doaj +1 more source
A systematic review of the “promising zone” design
Sample size calculations require assumptions regarding treatment response and variability. Incorrect assumptions can result in under- or overpowered trials, posing ethical concerns.
Julia M. Edwards+3 more
doaj +1 more source
High-low level support vector regression prediction approach (HL-SVR) for data modeling with input parameters of unequal sample sizes [PDF]
Support vector regression (SVR) has been widely used to reduce the high computational cost of computer simulation. SVR assumes the input parameters have equal sample sizes, but unequal sample sizes are often encountered in engineering practice. To solve this issue, a new prediction approach based on SVR, named the high-low-level SVR approach (HL-SVR ...
arxiv +1 more source
Sample size estimation in research: Necessity or compromise?
An adequately powered sample is essential for accurate parameter estimation and meaningful significance testing. It is important to balance sample size with practical considerations such as cost and feasibility.
Samir Kumar Praharaj, Shahul Ameen
doaj +1 more source
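The abstract's point about adequately powered samples can be illustrated with the standard normal-approximation formula for a two-sample comparison of means; this sketch is not from the paper itself, and the function name and defaults are illustrative.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size: float, alpha: float = 0.05,
                          power: float = 0.80) -> int:
    """Normal-approximation sample size per group for a two-sided
    two-sample comparison of means, with effect_size given as Cohen's d."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)            # quantile matching the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A medium effect (d = 0.5) at alpha = 0.05 and 80% power needs ~63 per group.
print(sample_size_per_group(0.5))
```

Note the trade-off the abstract mentions: halving the detectable effect size roughly quadruples the required sample, which is where cost and feasibility constraints bite.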
Sample size calculations for indirect standardization
Indirect standardization, and its associated parameter, the standardized incidence ratio, is a commonly used tool in hospital profiling for comparing the incidence of negative outcomes between an index hospital and a larger population of reference ...
Yifei Wang, Philip Chu
doaj +1 more source
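Indirect standardization as described in this abstract reduces to applying stratum-specific reference rates to the index hospital's case mix; a minimal sketch, with purely illustrative numbers:

```python
def standardized_incidence_ratio(observed: int, strata_sizes, reference_rates) -> float:
    """SIR = observed events / expected events, where expected events come
    from applying reference-population rates to the index hospital's
    stratum sizes (indirect standardization)."""
    expected = sum(n * r for n, r in zip(strata_sizes, reference_rates))
    return observed / expected

# Two age strata: 100 and 200 patients, reference rates 5% and 10%.
# Expected = 5 + 20 = 25 events; 30 observed gives SIR = 1.2.
print(standardized_incidence_ratio(30, [100, 200], [0.05, 0.10]))
```

An SIR above 1 indicates more events than the reference population would predict for this case mix; the paper's contribution concerns how large a sample is needed before such a ratio is informative.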
Permutation-based group sequential analyses for cognitive neuroscience
Cognitive neuroscientists have been grappling with two related experimental design problems. First, the complexity of neuroimaging data (e.g. often hundreds of thousands of correlated measurements) and analysis pipelines demands bespoke, non-parametric ...
John P. Veillette+2 more
doaj
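The non-parametric analyses this abstract refers to are built on permutation logic. As a much simpler stand-in for the paper's group sequential procedure, here is a basic two-sided permutation test for a difference in means (the data and helper name are illustrative):

```python
import random

def permutation_pvalue(x, y, n_perm=10000, seed=0):
    """Two-sided permutation p-value for a difference in means:
    pool the samples, repeatedly relabel them, and count how often the
    permuted difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        xs, ys = pooled[:len(x)], pooled[len(x):]
        if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction keeps p > 0

x = [2.1, 2.5, 2.3, 2.8]
y = [1.0, 1.2, 0.9, 1.1]
print(permutation_pvalue(x, y))
```

Because no parametric distribution is assumed, this style of test adapts to the correlated, high-dimensional measurements the abstract describes; the group sequential extension additionally allows interim looks at the data.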
The failure rate of phase III trials is approximately 42-45%, and most failures are due to a lack of efficacy. Some of these efficacy failures are expected, owing to type I errors in phase II and type II errors in phase III.
Daniele De Martini
doaj +1 more source
Exploring Consequences of Simulation Design for Apparent Performance of Statistical Methods. 1: Results from simulations with constant sample sizes [PDF]
Contemporary statistical publications rely on simulation to evaluate performance of new methods and compare them with established methods. In the context of meta-analysis of log-odds-ratios, we investigate how the ways in which simulations are implemented affect such conclusions.
arxiv
Mesoscopic sensitivity of speckles in disordered nonlinear media to changes of disordered potential [PDF]
We show that the sensitivity of wave speckle patterns in disordered nonlinear media to changes of the scattering potential increases with sample size. For a large enough sample this quantity diverges, which implies that for a given coherent wave incident on a sample there are multiple solutions for the spatial distribution of the wave's density. The number
arxiv +1 more source