Aims: Mesonephric-like adenocarcinoma (MLA) is a recently described histologic subtype of Müllerian tract tumor. MLA can arise in association with Müllerian lesions that share common mutations.
Elizabeth Arslanian+3 more
doaj
Epidermal cell monolayers prepared from partially dissected barley (Hordeum vulgare) coleoptiles were used for in vivo analysis of race-specific resistance to powdery mildew (Erysiphe graminis f. sp. hordei) specified by host genes Mla-1, Mla-12, and Mlg.
Ruth Schiffer+6 more
doaj +1 more source
Magnifying Lens Abstraction for Stochastic Games with Discounted and Long-run Average Objectives
Turn-based stochastic games and their important subclass, Markov decision processes (MDPs), provide models for systems with both probabilistic and nondeterministic behaviors. We consider turn-based stochastic games with two classical quantitative objectives: discounted-sum and long-run average objectives (a toy value-iteration sketch for the discounted case follows this entry).
Chatterjee, Krishnendu+2 more
core
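Following up on the entry above: as a rough illustration of the discounted-sum objective only, here is a minimal value-iteration sketch on a toy MDP. The states, actions, transition probabilities, rewards, and discount factor below are invented for illustration and are not taken from the paper; turn-based stochastic games would additionally split states between a maximizing and a minimizing player.

```python
import numpy as np

# Toy MDP (invented for illustration, not from the paper):
# P[s][a] = list of (next_state, probability), R[s][a] = immediate reward.
P = {
    0: {0: [(0, 0.9), (1, 0.1)], 1: [(1, 1.0)]},
    1: {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]},
}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 2.0, 1: 0.5}}
gamma = 0.9  # discount factor for the discounted-sum objective

def value_iteration(P, R, gamma, tol=1e-8):
    """Optimal discounted-sum value of each state, by value iteration."""
    V = np.zeros(len(P))
    while True:
        V_new = np.array([
            max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a]) for a in P[s])
            for s in P
        ])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

print(value_iteration(P, R, gamma))
```

In a full turn-based stochastic game, `min` would replace `max` at the minimizer's states; the sketch covers only the one-player (MDP) case and says nothing about the paper's magnifying-lens abstraction or the long-run average objective.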
Design tradeoffs for a Multispectral Linear Array (MLA) instrument [PDF]
The heart of the multispectral linear array (MLA) design problem is to develop an instrument concept which concurrently provides a wide field-of-view with high resolution, spectral separation with precise band-to-band registration, and excellent ...
Mika, A. M.
core +1 more source
The Morphology of Craters on Mercury: Results from MESSENGER Flybys [PDF]
Topographic data measured from the Mercury Laser Altimeter (MLA) and the Mercury Dual Imaging System (MDIS) aboard the MESSENGER spacecraft were used for investigations of the relationship between depth and diameter for impact craters on Mercury. Results ...
Barnouin, Oliver S.+7 more
core +1 more source
The Mla locus in barley (Hordeum vulgare) conditions isolate-specific immunity to the powdery mildew fungus (Blumeria graminis f. sp. hordei) and encodes intracellular coiled-coil (CC) domain, nucleotide-binding (NB) site, and leucine-rich repeat (LRR ...
Sabine Seeholzer+9 more
doaj +1 more source
Latent Multi-Head Attention for Small Language Models [PDF]
We present the first comprehensive study of latent multi-head attention (MLA) for small language models, revealing interesting efficiency-quality trade-offs (a rough latent-attention sketch follows this entry). Training 30M-parameter GPT models on 100,000 synthetic stories, we benchmark three architectural variants: standard multi-head attention (MHA), MLA, and MLA with rotary positional embeddings (MLA ...
arxiv
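As context for the entry above: the rough idea behind latent (low-rank) attention is to route keys and values through a small shared per-token latent vector, so the cache stores the latent instead of full keys and values. The dimensions, weight names, and single-head setup below are invented for illustration; this is a minimal numpy sketch of that idea, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_latent, seq = 64, 8, 10   # illustrative sizes, not from the paper

# Random projections standing in for learned parameters.
W_q  = rng.normal(size=(d_model, d_model))
W_dl = rng.normal(size=(d_model, d_latent))   # down-projection to the shared latent
W_uk = rng.normal(size=(d_latent, d_model))   # up-projection: latent -> keys
W_uv = rng.normal(size=(d_latent, d_model))   # up-projection: latent -> values

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def latent_attention(x):
    """Single-head latent attention: only the (seq, d_latent) matrix would be cached."""
    q = x @ W_q                      # queries at full width
    latent = x @ W_dl                # compressed per-token latent (the cacheable part)
    k = latent @ W_uk                # keys reconstructed from the latent
    v = latent @ W_uv                # values reconstructed from the latent
    scores = softmax(q @ k.T / np.sqrt(d_model))
    return scores @ v, latent

x = rng.normal(size=(seq, d_model))
out, cache = latent_attention(x)
print(out.shape, cache.shape)        # (10, 64) output; (10, 8) cache instead of full K and V
```

A real model would use multiple heads, handle rotary positional embeddings separately (the snippet's third variant), and train the projections; the only point of the sketch is that the per-token cache shrinks from two d_model-sized vectors to one d_latent-sized vector.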