Results 161 to 170 of about 343,225 (391)
Structurally Prune Anything: Any Architecture, Any Framework, Any Time [PDF]
Neural network pruning serves as a critical technique for enhancing the efficiency of deep learning models. Unlike unstructured pruning, which only sets specific parameters to zero, structured pruning eliminates entire channels, thus yielding direct computational and storage benefits.
arxiv
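To make the distinction in the abstract above concrete, here is a minimal sketch, assuming a PyTorch setting and using `torch.nn.utils.prune` (an illustration, not the paper's method); layer sizes and pruning amounts are arbitrary.

```python
# Illustration only: unstructured vs. structured pruning on two identical layers.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv_u = nn.Conv2d(16, 32, kernel_size=3)
conv_s = nn.Conv2d(16, 32, kernel_size=3)

# Unstructured: zero the 30% smallest-magnitude individual weights.
# Tensor shape is unchanged, so savings only materialize with sparse kernels.
prune.l1_unstructured(conv_u, name="weight", amount=0.3)

# Structured: zero 25% of whole output channels (dim=0), ranked by L2 norm.
# Entire filters go away, which maps to real FLOP/storage savings once the
# zeroed channels are physically removed from the architecture.
prune.ln_structured(conv_s, name="weight", amount=0.25, n=2, dim=0)

wu, ws = conv_u.weight, conv_s.weight
print("unstructured element sparsity:", float((wu == 0).float().mean()))
print("structured zeroed channels:",
      int((ws.reshape(ws.size(0), -1).abs().sum(dim=1) == 0).sum()))
```

Only the structured variant translates into direct latency and memory savings on dense hardware, which is the point the abstract makes.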
Irrespective of the specific definition of fairness in a machine learning application, pruning the underlying model affects it. We investigate and document the emergence and exacerbation of undesirable per-class performance imbalances, across tasks and architectures, for almost one million categories considered across over 100K image classification ...
openaire +2 more sources
STUN: Structured-Then-Unstructured Pruning for Scalable MoE Pruning [PDF]
Mixture-of-experts (MoEs) have been adopted for reducing inference costs by sparsely activating experts in large language models (LLMs). Despite this reduction, the massive number of experts in MoEs still makes them expensive to serve. In this paper, we study how to address this by pruning MoEs.
arxiv
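The snippet stops short of the method itself, so the following is only a generic sketch of the underlying idea of expert pruning (drop the experts the router rarely selects), not STUN's algorithm; the `TinyMoE` module, the calibration tensor, and the `keep` budget are all assumptions for illustration.

```python
# Generic illustration of expert pruning in a toy MoE layer (not STUN's method).
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))
        self.top_k = top_k

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.router(x)                # (tokens, n_experts)
        topv, topi = scores.topk(self.top_k, dim=-1)
        gates = torch.softmax(topv, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = topi[:, k] == e
                if mask.any():
                    out[mask] += gates[mask, k:k + 1] * self.experts[e](x[mask])
        return out, topi

def prune_experts(moe, calib_tokens, keep=4):
    """Keep the `keep` experts that receive the most top-k routing assignments."""
    with torch.no_grad():
        _, topi = moe(calib_tokens)
        counts = torch.bincount(topi.flatten(), minlength=len(moe.experts))
    keep_idx = counts.topk(keep).indices.sort().values
    moe.experts = nn.ModuleList(moe.experts[i] for i in keep_idx.tolist())
    # Shrink the router so it only scores the surviving experts.
    old = moe.router
    moe.router = nn.Linear(old.in_features, keep)
    with torch.no_grad():
        moe.router.weight.copy_(old.weight[keep_idx])
        moe.router.bias.copy_(old.bias[keep_idx])
    return keep_idx

moe = TinyMoE()
calib = torch.randn(512, 64)   # stand-in for real calibration tokens
print("kept experts:", prune_experts(moe, calib, keep=4).tolist())
```

A real MoE pruner would also have to handle routing shared across layers and re-calibrate or fine-tune afterwards; the sketch only shows the counting-and-slicing step.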
A new spin on chemotaxonomy: Using non‐proteogenic amino acids as a test case
Premise: Specialized metabolites serve various roles for plants and humans. Unlike core metabolites, specialized metabolites are restricted to certain plant lineages; thus, in addition to their ecological functions, they can serve as diagnostic markers of plant lineages.
Makenzie Gibson+4 more
wiley +1 more source
Constructive negation by pruning
We show that a simple concurrent pruning mechanism over standard SLD derivation trees, called constructive negation by pruning, provides a complete operational semantics for normal constraint logic programs (CLP) w.r.t. Fitting-Kunen's three-valued logic semantics. The principle of concurrent pruning is the only extra machinery needed to handle ...
openaire +2 more sources
ICE-Pruning: An Iterative Cost-Efficient Pruning Pipeline for Deep Neural Networks [PDF]
Pruning is a widely used method for compressing Deep Neural Networks (DNNs), where less relevant parameters are removed from a DNN model to reduce its size. However, removing parameters reduces model accuracy, so pruning is typically combined with fine-tuning, and sometimes other operations such as rewinding weights, to recover accuracy.
arxiv
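As a rough illustration of the prune-then-fine-tune pattern the abstract describes (not ICE-Pruning's actual pipeline), a loop like the following alternates global magnitude pruning with a short recovery phase; `model`, `train_loader`, and `loss_fn` are assumed to come from the surrounding training code.

```python
# Generic prune-then-fine-tune loop (an illustration, not ICE-Pruning itself).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def iterative_prune_finetune(model, train_loader, loss_fn,
                             rounds=3, amount_per_round=0.2,
                             finetune_epochs=1, lr=1e-3, device="cpu"):
    model.to(device)
    prunable = [(m, "weight") for m in model.modules()
                if isinstance(m, (nn.Conv2d, nn.Linear))]
    for _ in range(rounds):
        # 1) Remove another fraction of the smallest-magnitude weights, globally.
        prune.global_unstructured(prunable, pruning_method=prune.L1Unstructured,
                                  amount=amount_per_round)
        # 2) Fine-tune to recover the accuracy lost in this round.
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        model.train()
        for _ in range(finetune_epochs):
            for x, y in train_loader:
                x, y = x.to(device), y.to(device)
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
    # Bake the masks into the weights so the model can be saved/exported normally.
    for m, name in prunable:
        prune.remove(m, name)
    return model
```

Weight rewinding, which the abstract also mentions, would instead reset the surviving weights to an earlier checkpoint before retraining rather than fine-tuning from the pruned state.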
Premise: Recently, plant science has seen transformative advances in scalable data collection for sequence and chemical data. These large datasets, combined with machine learning, have demonstrated that conducting plant metabolic research on large scales yields remarkable insights.
Rachel Knapp+2 more
wiley +1 more source
Two cases of prune belly syndrome in Black infants are presented. The prune belly syndrome, or congenital absence of abdominal muscles, is accompanied by hydro-ureter, hydronephrosis, megalocystis and usually undescended testes. Other associated congenital defects occur, of which orthopaedic defects appear to be the most prevalent.
Hammond, J.A.+3 more
openaire +3 more sources
FlexiPrune: A Pytorch tool for flexible CNN pruning policy selection
The application of pruning techniques to convolutional neural networks has made it possible to reduce the size of the model and the time required for inference. However, determining the best pruning policy, i.e.
Cesar G. Pachon+2 more
doaj
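FlexiPrune's own interface is not shown in the snippet, so the sketch below only illustrates what selecting a pruning policy amounts to in plain `torch.nn.utils.prune` terms: apply each candidate policy to a copy of the model and compare the results. The `POLICIES` table and the `evaluate` callback are placeholders, not FlexiPrune's API.

```python
# Illustration of pruning-policy comparison (not FlexiPrune's actual API).
import copy
import torch.nn as nn
import torch.nn.utils.prune as prune

POLICIES = {
    "l1_unstructured_30": dict(fn=prune.l1_unstructured, amount=0.3),
    "l1_unstructured_50": dict(fn=prune.l1_unstructured, amount=0.5),
    "l2_structured_25":   dict(fn=prune.ln_structured, amount=0.25, n=2, dim=0),
}

def compare_policies(base_model, evaluate):
    """Apply each candidate policy to a fresh copy of the model and score it."""
    results = {}
    for policy_name, cfg in POLICIES.items():
        model = copy.deepcopy(base_model)
        fn = cfg["fn"]
        kwargs = {k: v for k, v in cfg.items() if k != "fn"}
        for m in model.modules():
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                fn(m, name="weight", **kwargs)
        results[policy_name] = evaluate(model)  # e.g. validation accuracy
    return results

# Example usage (validate and val_loader are placeholders from the host project):
# scores = compare_policies(my_cnn, evaluate=lambda m: validate(m, val_loader))
```

In practice the comparison would also weigh achieved sparsity, FLOPs, and post-fine-tuning accuracy rather than a single score.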
Disparity of turbinal bones in placental mammals
Abstract Turbinals are key bony elements of the mammalian nasal cavity, involved in heat and moisture conservation as well as olfaction. While turbinals are well known in some groups, their diversity is poorly understood at the scale of placental mammals, which span 21 orders.
Quentin Martinez+11 more
wiley +1 more source