Results 31 to 40 of about 1,637,948
Some Upper Bounds for RKHS Approximation by Bessel Functions
A reproducing kernel Hilbert space (RKHS) approximation problem arising from learning theory is investigated. Some K-functionals and moduli of smoothness with respect to RKHSs are defined with Fourier–Bessel series and Fourier–Bessel transforms ...
Mingdang Tian +2 more
doaj +1 more source
Nonlinear tensor product approximation of functions [PDF]
We are interested in approximation of a multivariate function $f(x_1,\dots,x_d)$ by linear combinations of products $u^1(x_1)\cdots u^d(x_d)$ of univariate functions $u^i(x_i)$, $i=1,\dots,d$.
Bazarkhanov, D., Temlyakov, V.
core +1 more source
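As a concrete illustration of the separable products in the entry above: for d = 2, the best sum of r products u_i(x) v_i(y) of a sampled function (in the discrete least-squares sense) is given by the truncated SVD of its sample matrix. A minimal sketch, assuming a hypothetical smooth target and grid chosen here for illustration:

```python
import numpy as np

def separable_errors(f, n=200, ranks=(1, 2, 4, 8)):
    """Relative Frobenius error of the best rank-r separable approximation
    (truncated SVD) of f sampled on an n x n grid over [-1, 1]^2."""
    x = np.linspace(-1.0, 1.0, n)
    F = f(x[:, None], x[None, :])
    U, s, Vt = np.linalg.svd(F)
    out = {}
    for r in ranks:
        Fr = (U[:, :r] * s[:r]) @ Vt[:r]   # sum of r products u_i(x) * v_i(y)
        out[r] = np.linalg.norm(F - Fr) / np.linalg.norm(F)
    return out

# hypothetical smooth target; error drops sharply with the separation rank r
errs = separable_errors(lambda x, y: np.exp(-(x - y) ** 2))
print(errs)
```

For smooth targets like this one the singular values decay rapidly, which is the discrete analogue of the fast nonlinear tensor-product approximation rates the paper studies.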
Rates of approximation by ReLU shallow neural networks
Neural networks activated by the rectified linear unit (ReLU) play a central role in the recent development of deep learning. The topic of approximating functions from Hölder spaces by these networks is crucial for understanding the efficiency of the induced learning algorithms.
Mao, Tong, Zhou, Ding-Xuan
openaire +3 more sources
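A one-hidden-layer (shallow) ReLU network with units at chosen breakpoints can reproduce any continuous piecewise-linear function exactly, which is the mechanism behind approximation rates for Hölder targets. A minimal sketch (knot placement and target are illustrative choices, not the paper's construction):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def shallow_relu_interpolant(f, a, b, m):
    """One-hidden-layer ReLU net g(x) = bias + sum_i w_i * relu(x - t_i)
    that equals the piecewise-linear interpolant of f on m+1 uniform knots."""
    t = np.linspace(a, b, m + 1)
    y = f(t)
    slopes = np.diff(y) / np.diff(t)      # slope on each knot interval
    w = np.diff(slopes, prepend=0.0)      # slope changes become ReLU weights
    def g(x):
        x = np.asarray(x, dtype=float)
        return y[0] + relu(x[..., None] - t[:-1]) @ w
    return g

g = shallow_relu_interpolant(np.sin, 0.0, np.pi, 64)
xs = np.linspace(0.0, np.pi, 1000)
err = np.max(np.abs(g(xs) - np.sin(xs)))
print(err)   # small for this C^2 target (O(m^-2) for the interpolant)
```

The network has m hidden units; for merely Hölder-continuous targets the attainable rate in m degrades accordingly, which is the regime the entry above analyzes.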
Approximation in variation by the Kantorovich operators; pp. 201–209 [PDF]
We discuss the rate of approximation of the Kantorovich operators, measured in the variation seminorm.
Andi Kivinukk, Tarmo Metsmägi
doaj +1 more source
Multipoint Padé-Type Approximants. Exact Rate of Convergence [PDF]
14 pages, no figures. MSC1991 codes: Primary 41A21, 41A25; Secondary 30E10, 42C05. MR1606915 (99i:41016); Zbl 0896.41010. We study the rate at which sequences of interpolating rational functions, whose poles are partially fixed, approximate Markov-type analytic functions. Applications to interpolating quadratures are given.
Cala Rodríguez, F. +1 more
openaire +3 more sources
Neural network approximation [PDF]
Neural networks (NNs) are the method of choice for building learning algorithms. They are now being investigated for other numerical tasks such as solving high-dimensional partial differential equations.
R. DeVore, Boris Hanin, G. Petrova
semanticscholar +1 more source
We extend the results of Xh. Z. Krasniqi (Acta Comment. Univ. Tartu Math. 17:89–101, 2013) and the authors (Acta Comment. Univ. Tartu Math. 13:11–24, 2009; Proc. Est. Acad. Sci. ...
Mateusz Kubiak +2 more
doaj +1 more source
Exact estimates of the rate of approximation of convolution operators
The paper presents a method for establishing direct and strong converse inequalities in terms of K-functionals for convolution operators acting in homogeneous Banach spaces of multivariate functions.
B. Draganov
semanticscholar +1 more source
Deep Network Approximation for Smooth Functions [PDF]
This paper establishes an optimal characterization of the approximation error of deep ReLU networks for smooth functions, in terms of both width and depth simultaneously.
Jianfeng Lu +3 more
semanticscholar +1 more source
On the convergence rates of Gauss and Clenshaw-Curtis quadrature for functions of limited regularity [PDF]
We study the optimal general rate of convergence of the n-point quadrature rules of Gauss and Clenshaw-Curtis when applied to functions of limited regularity: if the Chebyshev coefficients decay at a rate O(n^{-s-1}) for some s > 0, Clenshaw-Curtis and ...
Bornemann, Folkmar, Xiang, Shuhuang
core +2 more sources
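The entry above concerns convergence rates of these rules; for context, an (n+1)-point Clenshaw–Curtis rule on [-1, 1] uses the Chebyshev extrema as nodes with weights given by a classical closed-form sum. A self-contained sketch (the test integrands are arbitrary choices):

```python
import numpy as np

def clenshaw_curtis(f, n):
    """(n+1)-point Clenshaw-Curtis quadrature on [-1, 1].

    Nodes are the Chebyshev extrema x_k = cos(k*pi/n); weights come from
    the classical formula
    w_k = (c_k/n) * [1 - sum_j b_j / (4j^2 - 1) * cos(2jk*pi/n)],
    with b_j = 1 if j = n/2 else 2, and c_k = 1 at the endpoints else 2.
    """
    k = np.arange(n + 1)
    x = np.cos(k * np.pi / n)
    j = np.arange(1, n // 2 + 1)
    b = np.where(j == n / 2, 1.0, 2.0)
    c = np.where((k == 0) | (k == n), 1.0, 2.0)
    s = np.cos(2.0 * np.outer(k, j) * np.pi / n) @ (b / (4.0 * j**2 - 1.0))
    w = (c / n) * (1.0 - s)
    return float(w @ f(x))

print(clenshaw_curtis(lambda x: x**2, 2))   # recovers Simpson's rule: ~2/3
print(clenshaw_curtis(np.exp, 16))          # ~e - 1/e for a smooth integrand
```

For integrands of limited regularity, the paper's point is that both this rule and Gauss quadrature converge at the rate dictated by the decay of the Chebyshev coefficients.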

