Results 31 to 40 of about 295,233
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global ...
Chen, Yutian +6 more
openaire +2 more sources
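The training loop this abstract describes can be pictured concretely. The sketch below is a minimal, hypothetical rendering in PyTorch, a small LSTM meta-trained on random 1-D quadratics, not the authors' architecture or hyperparameters. The point it illustrates is that the learned optimizer never uses gradients of the black-box objective, while its own weights are trained by gradient descent through the unrolled optimization loop.

```python
# Minimal sketch of "learning to learn" a derivative-free optimizer.
# Architecture, objectives, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

class RNNOptimizer(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.hidden = hidden
        self.cell = nn.LSTMCell(2, hidden)  # input: (x_t, f(x_t))
        self.head = nn.Linear(hidden, 1)    # output: next query point x_{t+1}

    def forward(self, f, steps=20):
        h = torch.zeros(1, self.hidden)
        c = torch.zeros(1, self.hidden)
        x = torch.zeros(1, 1)
        total = torch.zeros(())
        for _ in range(steps):
            y = f(x)  # black-box evaluation; no gradients of f are given to the optimizer
            h, c = self.cell(torch.cat([x, y], dim=1), (h, c))
            x = self.head(h)
            total = total + y.squeeze()  # meta-loss: sum of observed function values
        return total

opt_net = RNNOptimizer()
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)

# Meta-train on random 1-D quadratics f(x) = (x - a)^2 (simple synthetic functions).
for step in range(2000):
    a = torch.randn(1, 1)
    meta_loss = opt_net(lambda x: (x - a) ** 2)
    meta_opt.zero_grad()
    meta_loss.backward()  # gradient descent on the optimizer's own parameters
    meta_opt.step()
```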
ADADELTA: An Adaptive Learning Rate Method [PDF]
We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent.
Zeiler, Matthew D.
core
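The per-dimension rule this abstract summarizes keeps decaying averages of the squared gradients and of the squared updates, and sizes each coordinate's step from their ratio. A NumPy sketch of that update follows; the decay rate and epsilon are illustrative defaults, and the toy quadratic is an assumption for demonstration.

```python
import numpy as np

def adadelta_step(x, grad, state, rho=0.95, eps=1e-6):
    """One ADADELTA-style update using only first-order information.

    `rho` and `eps` are illustrative defaults, not prescriptions.
    """
    Eg2, Edx2 = state
    Eg2 = rho * Eg2 + (1 - rho) * grad**2                   # decaying avg of squared gradients
    dx = -np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps) * grad   # unit-correcting per-dimension step
    Edx2 = rho * Edx2 + (1 - rho) * dx**2                   # decaying avg of squared updates
    return x + dx, (Eg2, Edx2)

# Toy demo: minimize f(x) = x^T A x / 2 on a badly conditioned quadratic.
A = np.diag([1.0, 100.0])
x = np.array([1.0, 1.0])
state = (np.zeros(2), np.zeros(2))
for _ in range(500):
    x, state = adadelta_step(x, A @ x, state)   # A @ x is the gradient of f
```

Note the minimal state: two running averages per parameter, which is the "minimal computational overhead beyond vanilla stochastic gradient descent" the abstract refers to.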
This study reveals how the mitochondrial protein Slm35 is regulated in Saccharomyces cerevisiae. The authors identify stress‐responsive DNA elements and two upstream open reading frames (uORFs) in the 5′ untranslated region of SLM35. One uORF restricts translation, and its mutation increases Slm35 protein levels and mitophagy.
Hernán Romo‐Casanueva +5 more
wiley +1 more source
The Ile181Asn variant of human UDP‐xylose synthase (hUXS1), associated with a short‐stature genetic syndrome, has previously been reported as inactive. Our findings demonstrate that Ile181Asn‐hUXS1 retains catalytic activity similar to the wild‐type but exhibits reduced stability, a looser oligomeric state, and an increased tendency to precipitate ...
Tuo Li +2 more
wiley +1 more source
Exponentiated Gradient Meets Gradient Descent
(Stochastic) gradient descent and the multiplicative update method are probably the most popular algorithms in machine learning. We introduce and study a new regularization that provides a unification of the additive and multiplicative updates.
Ghai, Udaya, Hazan, Elad, Singer, Yoram
openaire +2 more sources
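As a point of reference for the two update families this abstract mentions, the sketch below contrasts a plain additive (gradient descent) step with a multiplicative (exponentiated gradient) step on the probability simplex. It shows only the classical endpoints, not the paper's interpolating regularization; the learning rate and toy gradient are assumptions.

```python
import numpy as np

def gd_update(w, grad, lr=0.1):
    """Additive update: gradient descent, the update arising from a squared-norm regularizer."""
    return w - lr * grad

def eg_update(w, grad, lr=0.1):
    """Multiplicative update: exponentiated gradient, arising from an entropic regularizer."""
    w = w * np.exp(-lr * grad)
    return w / w.sum()  # renormalize onto the probability simplex

w = np.ones(4) / 4                      # start at the uniform distribution
grad = np.array([1.0, 0.0, 0.0, 0.0])   # toy gradient penalizing the first coordinate
print(gd_update(w, grad))  # additive step: components can leave the simplex
print(eg_update(w, grad))  # multiplicative step: stays a probability vector
```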
Bone metastasis in prostate cancer (PCa) patients is a clinical hurdle due to the poor understanding of the supportive bone microenvironment. Here, we identify stearoyl‐CoA desaturase (SCD) as a tumor‐promoting enzyme and potential therapeutic target in bone metastatic PCa.
Alexis Wilson +7 more
wiley +1 more source
Reparameterizing Mirror Descent as Gradient Descent
Most of the recent successful applications of neural networks have been based on training with gradient descent updates. However, for some small networks, certain mirror descent updates provably learn more efficiently when the target is sparse. We present a general framework for casting a mirror descent update as a gradient descent update on a different ...
Amid, Ehsan, Warmuth, Manfred K.
openaire +2 more sources
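One concrete instance of this kind of correspondence: gradient descent on parameters u, with the weights reparameterized as w = u²/4, has the same continuous-time flow as the unnormalized exponentiated gradient update. The numerical check below is an illustrative sketch under that known example (the toy loss, learning rate, and target are assumptions), not the paper's general framework.

```python
import numpy as np

def egu_step(w, grad_fn, lr):
    """Unnormalized exponentiated gradient: a multiplicative mirror descent update."""
    return w * np.exp(-lr * grad_fn(w))

def reparam_gd_step(u, grad_fn, lr):
    """Gradient descent on u with w = u**2 / 4; its continuous-time flow matches EGU's."""
    w = u**2 / 4
    return u - lr * (u / 2) * grad_fn(w)   # chain rule: d/du L(u^2/4) = (u/2) * dL/dw

# Toy loss 0.5 * ||w - target||^2 with a positive target (assumed for illustration).
grad_fn = lambda w: w - np.array([0.2, 0.7])
lr = 0.01
w = np.array([1.0, 1.0])
u = 2 * np.sqrt(w)                          # chosen so that u**2 / 4 == w initially
for _ in range(1000):
    w = egu_step(w, grad_fn, lr)
    u = reparam_gd_step(u, grad_fn, lr)
print(w, u**2 / 4)  # the two trajectories stay close for small learning rates
```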
We reconstituted Synechocystis glycogen synthesis in vitro from purified enzymes and showed that two GlgA isoenzymes produce glycogen with different architectures: GlgA1 yields denser, highly branched glycogen, whereas GlgA2 synthesizes longer, less‐branched chains.
Kenric Lee +3 more
wiley +1 more source
PARP inhibitors are used to treat a small subset of prostate cancer patients. These studies reveal that PARP1 activity and expression differ between European American and African American prostate cancer tissue samples. Additionally, different PARP inhibitors cause unique and overlapping transcriptional changes, notably upregulation of the p53 pathway.
Moriah L. Cunningham +21 more
wiley +1 more source
A‐to‐I editing of miRNAs, particularly miR‐200b‐3p, contributes to HGSOC progression by enhancing cancer cell proliferation, migration, and 3D growth. The edited form is linked to poorer patient survival and points to novel molecular targets.
Magdalena Niemira +14 more
wiley +1 more source