Results 91 to 100 of about 524,166
Succinct Explanations With Cascading Decision Trees [PDF]
The decision tree is one of the most popular classical machine learning models, dating back to the 1980s. In many practical applications, however, decision trees tend to generate decision paths of excessive depth. Long decision paths often cause overfitting and make models difficult to interpret.
arxiv
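The depth problem this abstract describes is easy to reproduce; below is a minimal sketch, assuming scikit-learn (my own illustration, not the paper's cascading method), showing how an unconstrained tree grows deep and fits the training set better than it generalizes:

```python
# Minimal sketch: unconstrained trees grow long decision paths and tend
# to overfit; capping depth is the simplest remedy. scikit-learn is used
# for illustration only -- this is not the paper's cascading approach.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for depth in (None, 4):  # None = grow until leaves are pure
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(f"max_depth={depth}: train={tree.score(X_tr, y_tr):.2f}, "
          f"test={tree.score(X_te, y_te):.2f}, depth={tree.get_depth()}")
```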
Permutation Decision Trees [PDF]
The decision tree is a well-understood machine learning model based on minimizing impurity at its internal nodes. The most common impurity measures are Shannon entropy and Gini impurity. These impurity measures are insensitive to the order of the training data, and hence the final tree is invariant to any permutation of the data.
arxiv
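The permutation invariance mentioned here follows from the fact that both measures depend only on class counts. A short sketch of the two measures (my own illustration, not the paper's code):

```python
# Sketch: Shannon entropy and Gini impurity depend only on class counts,
# so any shuffling of the training data leaves them unchanged.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

labels = ["a", "a", "b", "b", "b", "c"]
shuffled = labels[::-1]  # any permutation yields identical class counts
assert math.isclose(entropy(labels), entropy(shuffled))
assert math.isclose(gini(labels), gini(shuffled))
print(f"entropy={entropy(labels):.3f}, gini={gini(labels):.3f}")
```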
In pursuit of modern data management techniques, this study presents an in‐lab pipeline combining electronic laboratory notebooks (eLabFTW) and Python scripts for creating semantically enriched, interoperable, machine‐actionable data. Automating data mapping enhances usability, collaboration, and unified knowledge representation.
Markus Schilling+7 more
wiley +1 more source
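As an illustration of what "semantically enriched, machine-actionable" data can look like, here is a generic sketch mapping a notebook record to JSON-LD; the field names and schema.org vocabulary are assumptions for illustration, not the authors' actual eLabFTW pipeline:

```python
# Generic sketch: map an electronic-lab-notebook record to JSON-LD so it
# becomes machine-actionable. The field names and schema.org terms are
# illustrative assumptions, not the paper's mapping.
import json

def to_jsonld(record: dict) -> str:
    doc = {
        "@context": "https://schema.org",
        "@type": "Dataset",
        "name": record["title"],
        "dateCreated": record["date"],
        "creator": {"@type": "Person", "name": record["author"]},
        "variableMeasured": record["measured"],
    }
    return json.dumps(doc, indent=2)

print(to_jsonld({
    "title": "Tensile test, sample 42",
    "date": "2024-05-01",
    "author": "Jane Doe",
    "measured": "yield strength",
}))
```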
In this study, the mechanical response of Y‐shaped core sandwich beams under compressive loading is investigated, using deep feed‐forward neural networks (DFNNs) for predictive modeling. The DFNN model accurately captures stress–strain behavior, influenced by design parameters and loading rates.
Ali Khalvandi+4 more
wiley +1 more source
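The general shape of such a predictive model can be sketched with a small feed-forward regressor; the synthetic data, features, and layer widths below are assumptions, since the paper's DFNN architecture is not given here:

```python
# Minimal sketch of a deep feed-forward regressor for stress prediction.
# Synthetic data and network size are assumptions; this does not
# reproduce the paper's actual DFNN.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))  # e.g. design parameters + loading rate
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(500)  # stress proxy

model = MLPRegressor(hidden_layer_sizes=(64, 64, 64), max_iter=2000,
                     random_state=0).fit(X, y)
print(f"R^2 on training data: {model.score(X, y):.3f}")
```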
Decision Tree Instability and Active Learning [PDF]
Decision tree learning algorithms produce accurate models that can be interpreted by domain experts. However, these algorithms are known to be unstable: they can produce drastically different hypotheses from training sets that differ only slightly. This instability undermines the objective of extracting knowledge from the trees.
Robert C. Holte, Kenneth Dwyer
openaire +2 more sources
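The instability described here can be demonstrated by perturbing the training set slightly and comparing the resulting trees; a minimal sketch (my own illustration, not the authors' experiment):

```python
# Sketch: small perturbations of the training set can yield structurally
# different trees. Illustration only, not the authors' experiment.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
t1 = DecisionTreeClassifier(random_state=0).fit(X, y)
t2 = DecisionTreeClassifier(random_state=0).fit(X[5:], y[5:])  # drop 5 rows

# Compare the root split chosen by each tree.
print("root feature/threshold:",
      (int(t1.tree_.feature[0]), round(float(t1.tree_.threshold[0]), 2)),
      (int(t2.tree_.feature[0]), round(float(t2.tree_.threshold[0]), 2)))
```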
Machine Learning‐Guided Discovery of Factors Governing Deformation Twinning in Mg–Y Alloys
This study uses interpretable machine learning to identify the key microstructural and processing parameters related to twinning in magnesium-yttrium (Mg–Y) alloys. It finds that, using only grain size, grain orientation, and total applied strain, grains can be classified with 84% accuracy according to whether they contain a twin.
Peter Mastracco+8 more
wiley +1 more source
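The kind of interpretable three-feature classifier described can be sketched as follows; the data, the toy twinning rule, and the feature names are made up for illustration and are not the study's dataset or model:

```python
# Sketch: an interpretable twin/no-twin classifier on three features
# (grain size, grain orientation, applied strain). Synthetic data only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(1, 100, 300),    # grain size (um)
                     rng.uniform(0, 90, 300),     # orientation (deg)
                     rng.uniform(0, 0.1, 300)])   # total applied strain
y = ((X[:, 0] > 30) & (X[:, 2] > 0.03)).astype(int)  # toy twinning rule

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["grain_size", "orientation", "strain"]))
```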
PAC-learning a decision tree with pruning
Empirical studies have shown that the performance of decision tree induction usually improves when the trees are pruned. Whether these results hold in general, and to what extent pruning improves the accuracy of a concept, have not been investigated theoretically. This paper provides a theoretical study of pruning.
Department of Management Information Systems, Kookmin University, Seoul, South Korea (host institution) +2 more
openaire +3 more sources
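In practice, the pruning this paper analyzes theoretically is often realized as cost-complexity pruning; a minimal sketch under that assumption (not the paper's analysis):

```python
# Sketch: cost-complexity pruning, a standard practical realization of
# decision tree pruning. Illustration only, not the paper's theory.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for alpha in (0.0, 0.005, 0.02):  # larger alpha => heavier pruning
    t = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0).fit(X_tr, y_tr)
    print(f"alpha={alpha}: leaves={t.get_n_leaves()}, "
          f"test acc={t.score(X_te, y_te):.2f}")
```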
End-to-End Learning of Decision Trees and Forests [PDF]
Conventional decision trees have a number of favorable properties, including a small computational footprint, interpretability, and the ability to learn from little training data. However, they lack a key quality that has helped fuel the deep learning revolution: that of being end-to-end trainable. Kontschieder et al.
Thomas M. Hehn+2 more
openaire +3 more sources
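End-to-end trainability in tree models usually comes from replacing hard splits with soft, sigmoid-gated routing; a minimal sketch of one soft decision node, illustrating the general idea rather than this paper's specific model:

```python
# Sketch: a single soft decision node. Replacing the hard split
# 1[x_j > t] with a sigmoid makes routing differentiable, which is what
# enables end-to-end gradient training of tree models.
import numpy as np

def soft_node(x, w, b, left_leaf, right_leaf):
    """Route x softly: p = sigmoid(w.x + b); output blends both leaves."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # probability of going right
    return (1 - p) * left_leaf + p * right_leaf

x = np.array([0.4, -1.2])
print(soft_node(x, w=np.array([1.0, 0.5]), b=0.0,
                left_leaf=0.0, right_leaf=1.0))
```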
Beyond Order: Perspectives on Leveraging Machine Learning for Disordered Materials
This article explores how machine learning (ML) revolutionizes the study and design of disordered materials by uncovering hidden patterns, predicting properties, and optimizing multiscale structures. It highlights key advancements, including generative models, graph neural networks, and hybrid ML‐physics methods, addressing challenges like data ...
Hamidreza Yazdani Sarvestani+4 more
wiley +1 more source
Learning Decision Trees for Unbalanced Data [PDF]
Learning from unbalanced datasets is a challenging problem in which traditional learning algorithms may perform poorly. The objective functions used to learn the classifiers typically favor the larger, less important classes in such problems.
David A. Cieslak, Nitesh V. Chawla
openaire +2 more sources
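One standard counter to the objective-function bias described here is class weighting; the sketch below shows that generic fix, not the specific method this paper proposes (which alters the splitting criterion itself):

```python
# Sketch: class weighting, a common generic fix for objectives that
# favor the majority class. Not the paper's proposed method.
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for cw in (None, "balanced"):
    t = DecisionTreeClassifier(class_weight=cw, random_state=0).fit(X_tr, y_tr)
    print(f"class_weight={cw}: minority F1 = "
          f"{f1_score(y_te, t.predict(X_te)):.2f}")
```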