Results 91 to 100 of about 142,362

Beyond Convexity: Stochastic Quasi-Convex Optimization

open access: yes, 2015
Stochastic convex optimization is a basic and well-studied primitive in machine learning. It is well known that convex and Lipschitz functions can be minimized efficiently using Stochastic Gradient Descent (SGD).
Hazan, Elad   +2 more
core  
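
A minimal sketch of the SGD primitive this abstract refers to, assuming projected SGD with iterate averaging on a convex, Lipschitz objective; the step-size schedule, projection radius, and least-squares oracle below are illustrative assumptions, not details from the paper:

    # Projected SGD with iterate averaging for a convex, Lipschitz objective.
    import numpy as np

    def sgd(stoch_grad, x0, steps=1000, radius=1.0):
        x = np.asarray(x0, dtype=float)
        avg = np.zeros_like(x)
        for t in range(1, steps + 1):
            eta = 1.0 / np.sqrt(t)           # classic O(1/sqrt(t)) step size
            x = x - eta * stoch_grad(x)      # noisy gradient step
            norm = np.linalg.norm(x)
            if norm > radius:                # project back onto the feasible ball
                x = x * (radius / norm)
            avg += (x - avg) / t             # running average of iterates
        return avg

    # Usage with a hypothetical noisy least-squares gradient oracle:
    rng = np.random.default_rng(0)
    a = rng.normal(size=5)
    noisy_grad = lambda x: 2 * (a @ x - 1.0) * a + 0.1 * rng.normal(size=5)
    x_hat = sgd(noisy_grad, np.zeros(5))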

Stochastic Modified Flows for Riemannian Stochastic Gradient Descent

open access: yesSIAM Journal on Control and Optimization
We give quantitative estimates for the rate of convergence of Riemannian stochastic gradient descent (RSGD) to Riemannian gradient flow and to a diffusion process, the so-called Riemannian stochastic modified flow (RSMF). Using tools from stochastic differential geometry, we show that, in the small learning rate regime, RSGD can be approximated by the ...
Benjamin Gess   +2 more
openaire   +4 more sources
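
As a rough illustration of the RSGD iteration the abstract analyzes, here is a hedged sketch on the unit sphere: project the Euclidean gradient onto the tangent space, step, then retract by normalization. The manifold, objective, and learning rate are assumptions for illustration only, not the paper's setting:

    # Riemannian SGD on the unit sphere with a normalization retraction.
    import numpy as np

    def rsgd_sphere(egrad, x0, lr=0.05, steps=500):
        x = np.asarray(x0, dtype=float)
        x /= np.linalg.norm(x)
        for _ in range(steps):
            g = egrad(x)
            rgrad = g - (x @ g) * x     # project Euclidean grad onto tangent space
            x = x - lr * rgrad          # step in the tangent direction
            x /= np.linalg.norm(x)      # retract back onto the sphere
        return x

    # Usage: leading eigenvector of A via noisy Rayleigh-quotient descent.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(4, 4))
    A = A + A.T
    egrad = lambda x: -2 * (A @ x) + 0.05 * rng.normal(size=4)
    v = rsgd_sphere(egrad, rng.normal(size=4))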

In Materia Shaping of Randomness with a Standard Complementary Metal‐Oxide‐Semiconductor Transistor for Task‐Adaptive Entropy Generation

open access: yesAdvanced Functional Materials, EarlyView.
This study establishes a materials‐driven framework for entropy generation within standard CMOS technology. By electrically rebalancing gate‐oxide traps and Si‐channel defects in foundry‐fabricated FDSOI transistors, the work realizes in‐materia control of temporal correlation, achieving task‐adaptive entropy optimization for reinforcement learning ...
Been Kwak   +14 more
wiley   +1 more source

Parle: parallelizing stochastic gradient descent

open access: yes, 2017
We propose a new algorithm called Parle for parallel training of deep networks that converges 2-4x faster than a data-parallel implementation of SGD, while achieving significantly improved error rates that are nearly state-of-the-art on several benchmarks including CIFAR-10 and CIFAR-100, without introducing any additional hyper-parameters.
Chaudhari, Pratik   +5 more
openaire   +2 more sources
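
Parle's exact algorithm is not reproduced in this snippet, but a hedged sketch of the replica-coupling idea that such parallel schemes build on (several workers running SGD on parameter copies, each pulled toward the replica average by a proximal term) might look like this; the coupling strength rho and all other details are illustrative assumptions, not the authors' method:

    # Replica-coupled parallel SGD: workers' copies are elastically pulled
    # toward their running average while each takes local SGD steps.
    import numpy as np

    def coupled_parallel_sgd(stoch_grad, x0, workers=4, lr=0.1, rho=0.5, rounds=200):
        xs = [np.array(x0, dtype=float) for _ in range(workers)]
        for _ in range(rounds):
            center = np.mean(xs, axis=0)                     # replica average
            for i in range(workers):
                g = stoch_grad(xs[i])                        # worker-local stochastic gradient
                xs[i] -= lr * (g + rho * (xs[i] - center))   # gradient step + elastic pull
        return np.mean(xs, axis=0)

    # Usage: pass the same noisy gradient oracle as for plain SGD.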

Lithium Intercalation in the Anisotropic Van Der Waals Semiconductor CrSBr

open access: yesAdvanced Functional Materials, EarlyView.
We report lithium intercalation in the layered van der Waals crystal CrSBr, revealing strongly anisotropic ion‐migration dynamics. Optical and electrical characterization of exfoliated CrSBr shows lithium diffusion coefficients that differ by more than an order of magnitude along the a‐ and b‐directions, consistent with molecular dynamics simulations ...
Kseniia Mosina   +13 more
wiley   +1 more source

Adaptive Natural Gradient Method for Learning of Stochastic Neural Networks in Mini-Batch Mode

open access: yesApplied Sciences, 2019
The gradient descent method is an essential algorithm for training neural networks. Among the diverse variants of gradient descent developed to accelerate learning, natural gradient learning is based on the theory of ...
Hyeyoung Park, Kwanyong Lee
doaj   +1 more source
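
As a rough sketch of what a mini-batch natural-gradient update involves (not the authors' specific method): precondition the mini-batch gradient by an adaptively estimated Fisher matrix. The moving-average Fisher estimate, damping term, and interface below are illustrative assumptions:

    # One natural-gradient step with a Fisher matrix estimated as an
    # exponential moving average of per-example gradient outer products.
    import numpy as np

    def natural_gradient_step(x, batch_grads, F, lr=0.1, beta=0.95, damping=1e-3):
        g = batch_grads.mean(axis=0)                          # mini-batch gradient
        F_batch = batch_grads.T @ batch_grads / len(batch_grads)
        F = beta * F + (1 - beta) * F_batch                   # adaptive Fisher estimate
        precond = np.linalg.solve(F + damping * np.eye(len(x)), g)  # F^{-1} g
        return x - lr * precond, F

    # Usage: initialize F = np.eye(d), then call repeatedly with the
    # (batch_size, d) array of per-example gradients for each mini-batch.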

Federated Stochastic Gradient Descent Begets Self-Induced Momentum [PDF]

open access: green, 2022
Howard H. Yang   +4 more
openalex   +1 more source

High‐Entropy Wide‐Bandgap Borates with Broadband Luminescence and Large Nonlinear Optical Properties

open access: yesAdvanced Functional Materials, EarlyView.
High‐entropy rare‐earth borates exhibit excellent nonlinear optical and broadband luminescence properties arising from multi‐component doping, chemical disorder, increased configurational entropy, and increased lattice and electronic anharmonicity. This formulation enabled us to obtain a large, environmentally stable single crystal with 3X higher laser‐
Saugata Sarker   +14 more
wiley   +1 more source

Intermediate Resistive State in Wafer‐Scale Vertical MoS2 Memristors Through Lateral Silver Filament Growth for Artificial Synapse Applications

open access: yesAdvanced Functional Materials, EarlyView.
In MOCVD MoS2 memristors, a current compliance‐regulated Ag filament mechanism is revealed. The filament ruptures spontaneously during volatile switching, while subsequent growth proceeds vertically through the MoS2 layers and then laterally along the van der Waals gaps during nonvolatile switching.
Yuan Fa   +19 more
wiley   +1 more source

Variance Reduced Stochastic Gradient Descent with Neighbors

open access: yes, 2015
Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its slow convergence can be a computational bottleneck. Variance reduction techniques such as SAG, SVRG and SAGA have been proposed to overcome this weakness, achieving linear ...
Hofmann, Thomas   +3 more
core   +1 more source
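
Of the variance-reduction methods this abstract names, SVRG is the simplest to sketch: anchor each epoch at a snapshot, then replace the plain stochastic gradient with a control-variate estimate whose variance shrinks near the optimum. The component-gradient oracle grads(i, x), epoch length m, and step size below are illustrative assumptions:

    # SVRG: epoch snapshots plus control-variate gradient estimates.
    import numpy as np

    def svrg(grads, n, x0, lr=0.1, epochs=20, m=100, rng=None):
        rng = rng or np.random.default_rng(0)
        x = np.array(x0, dtype=float)
        for _ in range(epochs):
            snapshot = x.copy()
            full_grad = np.mean([grads(i, snapshot) for i in range(n)], axis=0)
            for _ in range(m):
                i = rng.integers(n)
                # control variate: g_i(x) - g_i(snapshot) + full gradient
                v = grads(i, x) - grads(i, snapshot) + full_grad
                x -= lr * v
        return x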
