Results 221 to 230 of about 30,302 (260)
Unraveling Cytomegalovirus Drug Resistance in Transplant Patients by Targeting Deep Sequencing. [PDF]
Alemán S +7 more
europepmc +1 more source
Optimising HIV drug resistance testing laboratory networks in Kenya: insights from systems engineering modelling. [PDF]
Wang Y +7 more
europepmc +1 more source
Optimizing concrete pump maintenance in the construction sector using enhanced MCDM techniques. [PDF]
Deepak D +6 more
europepmc +1 more source
Some of the following articles may not be open access.
Proceedings of the 2003 International Conference on Machine Learning and Cybernetics (IEEE Cat. No.03EX693), 2004
When designing steady-state computer simulation experiments, one may be faced with the choice of batching observations in one long run or replicating a number of smaller runs. Both methods are potentially useful in the course of undertaking simulation output analysis.
Christos Alexopoulos, David Goldsman
openaire +1 more source
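The two approaches compared in the abstract above can be sketched side by side. The following is a minimal illustration (not the authors' method): the batch-means estimator splits one long run into non-overlapping batches and treats the batch averages as approximately independent observations, while the replication estimator uses one average per independent shorter run. Function names and the normal-approximation confidence interval are illustrative assumptions.

```python
import numpy as np

def batch_means_ci(run, n_batches, z=1.96):
    """Estimate the steady-state mean from one long run using
    non-overlapping batch means, with an approximate normal CI."""
    run = np.asarray(run, dtype=float)
    b = len(run) // n_batches                      # batch size (remainder dropped)
    means = run[:b * n_batches].reshape(n_batches, b).mean(axis=1)
    grand = means.mean()
    half = z * means.std(ddof=1) / np.sqrt(n_batches)
    return grand, (grand - half, grand + half)

def replication_ci(runs, z=1.96):
    """Same estimate from several independent shorter runs:
    one sample mean per replication, CI across replications."""
    means = np.array([np.mean(r) for r in runs])
    grand = means.mean()
    half = z * means.std(ddof=1) / np.sqrt(len(means))
    return grand, (grand - half, grand + half)
```

With equal batch and replication counts the point estimates coincide; the methods differ in how initialization bias and dependence between batches affect the interval, which is the trade-off the paper analyzes.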
Batch-to-batch Optimization of Batch Crystallization Processes
Chinese Journal of Chemical Engineering, 2008
In practice, several process parameters are either unknown or uncertain. Therefore, an optimal control profile calculated from developed process models with respect to such process parameters may not give optimal performance when implemented on real processes.
Woranee Paengjuntuek +2 more
openaire +1 more source
Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017
Efficiency of large-scale learning is a hot topic in both academia and industry. The stochastic gradient descent (SGD) algorithm and its extension, mini-batch SGD, allow the model to be updated without scanning the whole data set. However, the use of an approximate gradient introduces uncertainty, slowing the decrease of the objective function.
Peifeng Yin, Ping Luo, Taiga Nakamura
openaire +1 more source
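The update scheme described in the abstract above can be sketched in a few lines. This is a generic mini-batch SGD loop for linear least squares, not the paper's proposal: each step computes the gradient on a small random batch, so the model is updated without scanning the whole data set; the learning rate, batch size, and loss are illustrative assumptions.

```python
import numpy as np

def minibatch_sgd(X, y, lr=0.1, batch_size=32, epochs=50, seed=0):
    """Fit linear least squares with mini-batch SGD: each update uses
    the gradient of the squared loss on a small random batch."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        order = rng.permutation(n)              # shuffle once per epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            # approximate gradient: batch estimate of the full-data gradient
            grad = 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)
            w -= lr * grad
    return w
```

Because the batch gradient is only an estimate of the full gradient, successive updates are noisy; this is the "uncertainty issue" the abstract refers to.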

