Results 71 to 80 of about 142,362
Stochastic Gradient Descent Tricks [PDF]
Chapter 1 strongly advocates the stochastic back-propagation method to train neural networks. This is in fact an instance of a more general technique called stochastic gradient descent (SGD). This chapter provides background material, explains why SGD is a good learning algorithm when the training set is large, and provides useful recommendations.
openaire +1 more source
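The recipe the chapter advocates fits in a few lines. Below is a minimal sketch of plain SGD on a least-squares objective; the quadratic loss and the 1/t learning-rate decay are illustrative choices, not prescriptions taken from the chapter:

    import numpy as np

    def sgd_least_squares(X, y, lr0=0.1, epochs=10, seed=0):
        # Plain SGD: one training example per update, shuffled each epoch.
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        t = 0
        for _ in range(epochs):
            for i in rng.permutation(len(y)):
                t += 1
                lr = lr0 / (1 + lr0 * t)          # 1/t decay (assumed schedule)
                grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5*(x_i.w - y_i)^2
                w -= lr * grad
        return w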
Stochastic Compositional Gradient Descent Under Compositional Constraints
Part of this work was submitted to the Asilomar Conference on Signals, Systems, and ...
Srujan Teja Thomdapu +2 more
openaire +2 more sources
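The abstract is truncated, but the title points at the compositional setting min_x f(E[g(x; xi)]). Below is a minimal sketch of the classic two-timescale update that such methods build on; the update rule and step sizes are textbook SCGD assumptions, and the constraint handling the paper adds (e.g., projections) is omitted:

    import numpy as np

    def scgd(grad_f, g_sample, jac_g_sample, x0, steps=1000,
             alpha=0.01, beta=0.1, seed=0):
        # Two-timescale SCGD for min_x f(E[g(x; xi)]): keep a fast-moving
        # estimate y of the inner expectation E[g(x)], and descend x along
        # jac_g(x)^T grad_f(y) on the slow timescale.
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        y = g_sample(x, rng)                               # initial inner estimate
        for _ in range(steps):
            y = (1 - beta) * y + beta * g_sample(x, rng)       # fast average
            x -= alpha * jac_g_sample(x, rng).T @ grad_f(y)    # slow descent
        return x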
Relationship between Batch Size and Number of Steps Needed for Nonconvex Optimization of Stochastic Gradient Descent using Armijo Line Search [PDF]
Yuki Tsukada, Hideaki Iiduka
openalex +1 more source
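The title names the ingredients precisely. Below is a sketch of one SGD step with Armijo backtracking on the current mini-batch loss; the constants c and rho and the reset to lr_max at every step are conventional assumptions, not the paper's schedule:

    import numpy as np

    def sgd_armijo_step(batch_loss, batch_grad, w, lr_max=1.0,
                        c=0.5, rho=0.5, max_backtracks=20):
        # Shrink the step until the Armijo sufficient-decrease condition
        # holds on this mini-batch: f(w - lr*g) <= f(w) - c*lr*||g||^2.
        g = batch_grad(w)
        f0 = batch_loss(w)
        lr = lr_max
        for _ in range(max_backtracks):
            if batch_loss(w - lr * g) <= f0 - c * lr * (g @ g):
                break
            lr *= rho                               # backtrack
        return w - lr * g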
Controlled laser‐drilling of embedded HfO2 membranes creates three‐layer nanopores with Gaussian‐shaped cavities sculpted in the supporting layers. These embedded solid‐state nanopores slow DNA translocation by 12‐fold compared to SiNx pores, enabling high‐resolution, label‐free detection of short DNAs, RNAs, and proteins.
Jostine Joby +4 more
wiley +1 more source
Machine Learning Empowered Brain Tumor Segmentation and Grading Model for Lifetime Prediction
An uncontrolled growth of brain cells is known as a brain tumor. When brain tumors are accurately and promptly diagnosed using magnetic resonance imaging scans, it is easier to start the right treatment, track the tumor’s development over time ...
M. Renugadevi +9 more
doaj +1 more source
Attentional-Biased Stochastic Gradient Descent
In this paper, we present a simple yet effective provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning. Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
Qi Qi +4 more
openaire +2 more sources
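The abstract pins down the structure: momentum SGD plus an individual importance weight for each sample in the mini-batch. A minimal sketch follows; weighting by a softmax of scaled per-sample losses is one natural reading and an assumption here, since the abstract does not spell out the weighting rule:

    import numpy as np

    def absgd_update(per_sample_grads, per_sample_losses, w, v,
                     lr=0.01, momentum=0.9, lam=1.0):
        # Per-sample weights: softmax(loss / lam) over the mini-batch
        # (assumed rule; the abstract only says each sample gets an
        # individual importance weight).
        z = per_sample_losses / lam
        p = np.exp(z - z.max())
        p /= p.sum()
        g = (p[:, None] * per_sample_grads).sum(axis=0)  # weighted gradient
        v = momentum * v + g                             # momentum buffer
        return w - lr * v, v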
Why random reshuffling beats stochastic gradient descent [PDF]
We analyze the convergence rate of the random reshuffling (RR) method, which is a randomized first-order incremental algorithm for minimizing a finite sum of convex component functions. RR proceeds in cycles, picking a uniformly random order (permutation) and processing the component functions one at a time according to this order, i.e., at each cycle ...
M. Gürbüzbalaban +2 more
openaire +4 more sources
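The abstract describes the algorithm completely, so a faithful sketch is short: each cycle draws a fresh uniform permutation and takes one incremental step per component function (the constant step size is an assumption):

    import numpy as np

    def random_reshuffling(grad_i, n, w0, epochs=10, lr=0.01, seed=0):
        # RR for min_w (1/n) * sum_i f_i(w): process components one at a
        # time in a fresh uniformly random order each cycle.
        rng = np.random.default_rng(seed)
        w = np.array(w0, dtype=float)
        for _ in range(epochs):
            for i in rng.permutation(n):            # new permutation per cycle
                w -= lr * grad_i(i, w)
        return w

Vanilla SGD would instead draw i uniformly with replacement at every step; the without-replacement permutation is the only difference.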
A roll‐to‐roll exfoliation method is demonstrated that preserves the crystallographic alignment of anisotropic 2D materials over large areas, enabling scalable fabrication of directional electronic and optoelectronic devices. Anisotropic 2D materials such as black phosphorus (BP), GeS, or CrSBr exhibit direction‐dependent optical and ...
Esteban Zamora‐Amo +14 more
wiley +1 more source
Single Solid‐State Ion Channels as Potentiometric Nanosensors
Single gold nanopores functionalized with mixed self‐assembled monolayers act as solid‐state ion channels for direct, selective potentiometric sensing of inorganic ions (Ag⁺). The design overcomes key miniaturization barriers of conventional ion‐selective electrodes by combining low resistivity with suppressed loss of active components, enabling robust ...
Gergely T. Solymosi +4 more
wiley +1 more source
Wide-Field Telescope Alignment Using the Model-Based Method Combined with the Stochastic Parallel Gradient Descent Algorithm [PDF]
Min Li, Ang Zhang, Junbo Zhang, Hao Xian
openalex +1 more source
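The stochastic parallel gradient descent half of the title is a standard derivative-free loop. A minimal sketch, assuming Bernoulli ±delta perturbations, a two-sided metric difference, and a metric to be minimized; the model-based initialization the paper combines it with is not shown:

    import numpy as np

    def spgd(metric, u0, steps=200, delta=0.01, gain=0.5, seed=0):
        # Perturb all control parameters simultaneously, estimate a descent
        # direction from the two-sided metric difference, and update.
        rng = np.random.default_rng(seed)
        u = np.array(u0, dtype=float)
        for _ in range(steps):
            d = delta * rng.choice([-1.0, 1.0], size=u.shape)  # random +/-delta
            dj = metric(u + d) - metric(u - d)                 # two-sided difference
            u -= gain * dj * d                                 # descend the metric
        return u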

