Results 11 to 20 of about 2,238,851

Evolutional Deep Neural Network [PDF]

open access: yesPhysical Review E, 2021
The notion of an evolutional deep neural network (EDNN) is introduced for the solution of partial differential equations (PDEs). The parameters of the network are trained to represent the initial state of the system only and are subsequently updated ...
Yifan Du, T. Zaki
semanticscholar   +6 more sources
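The evolutional idea in this abstract can be illustrated with a minimal sketch. Assumptions not in the source: a linear-in-parameters ansatz instead of a deep network, the 1-D advection equation u_t = -u_x as the PDE, and explicit Euler time stepping. The parameters are fit to the initial condition only, then evolved by solving a least-squares system for their time derivative.

```python
import numpy as np

# Hypothetical minimal sketch of the evolutional approach: represent
# u(x, t) = theta(t) . phi(x) with a fixed basis phi (a stand-in for a
# deep network), and evolve theta so that u satisfies u_t = -u_x.
x = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
Phi = np.stack([np.sin(x), np.cos(x)], axis=1)    # basis phi(x)
dPhi = np.stack([np.cos(x), -np.sin(x)], axis=1)  # d(phi)/dx

theta = np.array([1.0, 0.0])  # fits the initial condition u0 = sin(x)
dt, steps = 1e-3, 1571        # integrate to t ~ pi/2
for _ in range(steps):
    # Solve Phi @ theta_dot = -dPhi @ theta in least squares,
    # then step the parameters forward (explicit Euler).
    theta_dot, *_ = np.linalg.lstsq(Phi, -dPhi @ theta, rcond=None)
    theta = theta + dt * theta_dot

u = Phi @ theta
exact = np.sin(x - steps * dt)  # advected initial condition
print(float(np.abs(u - exact).max()))
```

Only the parameters evolve; the spatial representation is never retrained, which is the distinguishing feature the abstract describes.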

Orthogonal Deep Neural Networks [PDF]

open access: yesIEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
In this paper, we introduce the algorithms of Orthogonal Deep Neural Networks (OrthDNNs) and connect them with the recent interest in spectrally regularized deep learning methods. OrthDNNs are theoretically motivated by generalization analysis of modern DNNs, with the aim of finding solution properties of network weights that guarantee better generalization.
Shuai Li   +4 more
openaire   +4 more sources
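A common way to encourage near-orthogonal weights, in the spirit of the spectral regularization this abstract refers to, is a soft penalty on the Gram matrix of each weight matrix. The penalty below is a generic sketch of that idea, not the paper's exact algorithm.

```python
import numpy as np

# Soft orthonormality penalty (a generic sketch, not OrthDNNs' exact
# scheme): ||W^T W - I||_F^2 pushes the singular values of W toward 1,
# which is what keeps layer-wise spectra well behaved.
def orth_penalty(W):
    k = W.shape[1]
    G = W.T @ W                  # Gram matrix of the columns
    return float(np.sum((G - np.eye(k)) ** 2))

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(8, 4)))  # orthonormal columns
print(orth_penalty(Q))          # near zero for orthonormal weights
print(orth_penalty(2.0 * Q))    # grows as singular values drift from 1
```

In training, this term would be added to the task loss for each layer's weight matrix, with a small coefficient.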

Deep Polynomial Neural Networks [PDF]

open access: yesIEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
Deep Convolutional Neural Networks (DCNNs) are currently the method of choice both for generative, as well as for discriminative learning in computer vision and machine learning. The success of DCNNs can be attributed to the careful selection of their building blocks (e.g., residual blocks, rectifiers, sophisticated normalization schemes, to mention ...
Yannis Panagakis   +5 more
openaire   +3 more sources

Approximation Spaces of Deep Neural Networks [PDF]

open access: yesConstructive Approximation, 2021
We study the expressivity of deep neural networks. Measuring a network's complexity by its number of connections or by its number of neurons, we consider the class of functions for which the error of best approximation with networks of a given complexity decays at a certain rate when increasing the complexity budget.
Gribonval, Rémi   +3 more
openaire   +9 more sources

Tweaking Deep Neural Networks [PDF]

open access: yesIEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
Deep neural networks are trained to achieve the maximum overall accuracy through a learning process on given training data. It is therefore difficult to adjust them to improve the accuracy of specific problematic classes, or of classes of interest that may be valuable to some users or applications.
Kim, Jinwook   +2 more
openaire   +4 more sources

Probabilistic Models with Deep Neural Networks [PDF]

open access: yesEntropy, 2021
Recent advances in statistical inference have significantly expanded the toolbox of probabilistic modeling. Historically, probabilistic modeling has been constrained to very restricted model classes, where exact or approximate probabilistic inference is feasible.
Andrés R. Masegosa   +4 more
openaire   +6 more sources

Deep Randomized Neural Networks [PDF]

open access: yes, 2020
Randomized Neural Networks explore the behavior of neural systems where the majority of connections are fixed, either in a stochastic or a deterministic fashion. Typical examples of such systems consist of multi-layered neural network architectures where the connections to the hidden layer(s) are left untrained after initialization.
Gallicchio C., Scardapane S.
openaire   +4 more sources
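The setup this abstract describes, hidden connections left at their random initialization and only a linear readout trained, can be sketched in a few lines. This is an ELM-style example of the family the survey covers, with an assumed toy regression target; the hidden weights are never updated.

```python
import numpy as np

# Randomized-network sketch: a fixed random hidden layer produces
# features, and only the linear readout is trained, here by ridge
# regression in a single closed-form solve.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])                # toy target (an assumption)

W = 4.0 * rng.normal(size=(1, 100))    # hidden weights: random, untrained
b = rng.normal(size=100)
H = np.tanh(X @ W + b)                 # random features

lam = 1e-6                             # small ridge term for stability
beta = np.linalg.solve(H.T @ H + lam * np.eye(100), H.T @ y)
err = float(np.abs(H @ beta - y).max())
print(err)
```

Because the hidden layer is fixed, training reduces to linear regression, which is the efficiency argument usually made for these architectures.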

A survey of uncertainty in deep neural networks [PDF]

open access: yesArtificial Intelligence Review, 2023
Over the last decade, neural networks have reached almost every field of science and have become a crucial part of various real-world applications. Due to their increasing spread, confidence in neural network predictions has become more and more important.
Gawlikowski, Jakob   +13 more
openaire   +3 more sources

Multi-Parameter Inversion of AIEM by Using Bi-Directional Deep Neural Network

open access: yesRemote Sensing, 2022
A novel multi-parameter inversion method is proposed for the Advanced Integral Equation Model (AIEM) by using a bi-directional deep neural network. There is a very complex nonlinear relationship between the surface parameters (dielectric constant and ...
Yu Wang   +5 more
doaj   +1 more source

ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression [PDF]

open access: yesIEEE International Conference on Computer Vision, 2017
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages.
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
semanticscholar   +1 more source
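Filter-level pruning of the kind ThiNet proposes selects the channels whose removal least perturbs the next layer's output. The sketch below is a simplification under stated assumptions: a linear map stands in for the next convolutional layer, and channels are chosen greedily by reconstruction error.

```python
import numpy as np

# Greedy channel selection in the spirit of ThiNet (a toy simplification:
# one linear "next layer" instead of a convolution). Keep the subset of
# channels that best reconstructs the next layer's output.
rng = np.random.default_rng(1)
n, C = 256, 8
scales = np.array([5.0, 0.1, 3.0, 0.05, 4.0, 0.2, 2.0, 0.01])
A = rng.normal(size=(n, C)) * scales   # activations with varied importance
w = np.ones(C)                         # toy next-layer weights (assumption)
target = A @ w                         # output before pruning

def greedy_keep(A, w, target, keep):
    kept = []
    for _ in range(keep):
        errs = {}
        for c in range(A.shape[1]):
            if c not in kept:
                idx = kept + [c]
                errs[c] = np.sum((A[:, idx] @ w[idx] - target) ** 2)
        kept.append(min(errs, key=errs.get))  # channel that helps most
    return sorted(kept)

print(greedy_keep(A, w, target, keep=4))  # high-magnitude channels survive
```

The pruned channels can then be deleted from both the current layer's filters and the next layer's input, which is what makes this a structured (filter-level) compression rather than unstructured weight sparsity.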