
Stochastic Gradient Descent Tricks [PDF]

open access: yes, 2012
Chapter 1 strongly advocates the stochastic back-propagation method to train neural networks. This is in fact an instance of a more general technique called stochastic gradient descent (SGD). This chapter provides background material, explains why SGD is a good learning algorithm when the training set is large, and provides useful recommendations.
openaire   +1 more source
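
As a pointer to what the chapter covers: SGD updates the parameters using the gradient of one randomly drawn training example at a time. A minimal NumPy sketch on a least-squares loss (the function name, learning rate, and epoch count are illustrative assumptions, not the chapter's recommendations):

    import numpy as np

    def sgd_least_squares(X, y, lr=0.01, epochs=10, seed=0):
        # Plain SGD on the finite sum of per-example losses
        # 0.5 * (X[i] @ w - y[i])**2, sampling one example uniformly
        # with replacement at each step.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            for _ in range(n):
                i = rng.integers(n)              # draw one example uniformly
                grad = (X[i] @ w - y[i]) * X[i]  # single-example gradient
                w -= lr * grad                   # descent step
        return w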

Stochastic Compositional Gradient Descent Under Compositional Constraints

open access: yes, IEEE Transactions on Signal Processing, 2023
Part of this work was submitted to the Asilomar Conference on Signals, Systems, and ...
Srujan Teja Thomdapu   +2 more
openaire   +2 more sources

Laser‐Based Sculpturing of Embedded Ultrathin Metal‐Oxide Nanopores for Enhanced Biomolecular Sensing

open access: yes, Advanced Functional Materials, EarlyView.
Controlled laser‐drilling of embedded HfO2 membranes creates three‐layer nanopores with Gaussian‐shaped cavities sculptured in the supporting layers. These embedded solid‐state nanopores slow DNA translocation by 12‐fold compared to SiNx pores, enabling high‐resolution, label‐free detection of short DNAs, RNAs, and proteins.
Jostine Joby   +4 more
wiley   +1 more source

Machine Learning Empowered Brain Tumor Segmentation and Grading Model for Lifetime Prediction

open access: yes, IEEE Access, 2023
A brain tumor is an uncontrolled growth of brain cells. When brain tumors are diagnosed accurately and promptly using magnetic resonance imaging scans, it is easier to start the right treatment and to track the tumor’s development over time ...
M. Renugadevi   +9 more
doaj   +1 more source

Attentional-Biased Stochastic Gradient Descent

open access: yes, 2020
In this paper, we present a simple yet effective and provable method (named ABSGD) for addressing data imbalance or label noise in deep learning. Our method is a simple modification to momentum SGD in which we assign an individual importance weight to each sample in the mini-batch.
Qi, Qi   +4 more
openaire   +2 more sources
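
To make the description concrete, a hedged NumPy sketch of one such step: momentum SGD where each mini-batch sample gets an individual importance weight. The softmax-over-losses weighting and all hyperparameters below are illustrative stand-ins, not the paper's exact ABSGD rule:

    import numpy as np

    def weighted_momentum_step(w, velocity, losses, grads,
                               lr=0.1, momentum=0.9, tau=1.0):
        # Illustrative stand-in for attentional weighting: a softmax
        # over per-sample losses (temperature tau), so high-loss
        # samples contribute more to the mini-batch gradient.
        z = losses / tau
        weights = np.exp(z - z.max())
        weights /= weights.sum()                 # normalize within the batch
        grad = weights @ grads                   # weighted average gradient
        velocity = momentum * velocity + grad    # momentum buffer
        return w - lr * velocity, velocity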

Why random reshuffling beats stochastic gradient descent [PDF]

open access: yes, Mathematical Programming, 2019
We analyze the convergence rate of the random reshuffling (RR) method, a randomized first-order incremental algorithm for minimizing a finite sum of convex component functions. RR proceeds in cycles: it picks a uniformly random order (permutation) and processes the component functions one at a time according to this order, i.e., each cycle takes one incremental gradient step per component function.
M. Gürbüzbalaban   +2 more
openaire   +4 more sources
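
The sampling scheme is the only difference from iid SGD: one fresh permutation per cycle instead of drawing indices with replacement. A minimal NumPy sketch on the same finite-sum least-squares setup (names and step size are illustrative):

    import numpy as np

    def random_reshuffling(X, y, lr=0.01, cycles=10, seed=0):
        # RR: each cycle visits every component function exactly once,
        # in a fresh uniformly random order (permutation).
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(cycles):
            for i in rng.permutation(n):         # one permutation per cycle
                grad = (X[i] @ w - y[i]) * X[i]  # component-function gradient
                w -= lr * grad
        return w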

Roll‐to‐Roll Mechanical Exfoliation for Large‐Area van der Waals Films with Preserved Crystallographic Alignment

open access: yes, Advanced Functional Materials, EarlyView.
A roll‐to‐roll exfoliation method is demonstrated that preserves the crystallographic alignment of anisotropic 2D materials over large areas, enabling scalable fabrication of directional electronic and optoelectronic devices. Anisotropic 2D materials such as black phosphorus (BP), GeS, or CrSBr exhibit direction‐dependent optical and ...
Esteban Zamora‐Amo   +14 more
wiley   +1 more source

Single Solid‐State Ion Channels as Potentiometric Nanosensors

open access: yes, Advanced Functional Materials, EarlyView.
Single gold nanopores functionalized with mixed self‐assembled monolayers act as solid‐state ion channels for direct, selective potentiometric sensing of inorganic ions (Ag⁺). The design overcomes key miniaturization barriers of conventional ion‐selective electrodes by combining low resistivity with suppressed loss of active components, enabling robust ...
Gergely T. Solymosi   +4 more
wiley   +1 more source
