Results 11 to 20 of about 9,149,613

Scaling Instruction-Finetuned Language Models [PDF]

open access: yes · Journal of Machine Learning Research, 2022
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks.
Hyung Won Chung   +31 more
semanticscholar   +1 more source

Swin Transformer V2: Scaling Up Capacity and Resolution [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2021
We present techniques for scaling Swin Transformer [35] up to 3 billion parameters and making it capable of training with images of up to 1,536×1,536 resolution.
Ze Liu   +11 more
semanticscholar   +1 more source

Scaling Autoregressive Models for Content-Rich Text-to-Image Generation [PDF]

open access: yes · Trans. Mach. Learn. Res., 2022
We present the Pathways Autoregressive Text-to-Image (Parti) model, which generates high-fidelity photorealistic images and supports content-rich synthesis involving complex compositions and world knowledge.
Jiahui Yu   +16 more
semanticscholar   +1 more source

Scaling Vision Transformers to 22 Billion Parameters [PDF]

open access: yes · International Conference on Machine Learning, 2023
The scaling of Transformers has driven breakthrough capabilities for language models. At present, the largest large language models (LLMs) contain upwards of 100B parameters.
Mostafa Dehghani   +41 more
semanticscholar   +1 more source

Reproducible Scaling Laws for Contrastive Language-Image Learning [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2022
Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guidance as large ...
Mehdi Cherti   +8 more
semanticscholar   +1 more source

Emergence of Scaling in Random Networks [PDF]

open access: yes · Science, 1999
Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution.
A.-L. Barabási, R. Albert
semanticscholar   +2 more sources
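
A note on the snippet above: a scale-free degree distribution is conventionally written as

    P(k) \propto k^{-\gamma}

where P(k) is the fraction of vertices with degree k; for the preferential-attachment model introduced in this paper the exponent comes out near \gamma \approx 3. This notation is standard convention for illustration, not quoted from the result snippet itself.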

Beyond neural scaling laws: beating power law scaling via data pruning [PDF]

open access: yes · Neural Information Processing Systems, 2022
Widely observed neural scaling laws, in which error falls off as a power of the training set size, model size, or both, have driven substantial performance improvements in deep learning.
Ben Sorscher   +4 more
semanticscholar   +1 more source
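
For reference, the neural scaling laws this abstract invokes are usually stated as a power law of the form

    E(N) \propto N^{-\alpha}

with E the test error, N the training set size (or model size), and \alpha > 0 an empirically fitted exponent; the paper's titular claim is that careful data pruning can beat this power-law rate. The symbols here are illustrative conventions, not taken from the paper itself.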

Scaling Back on Scales with a Scale of Scales [PDF]

open access: yes · American Journal of Neuroradiology, 2010
An ever-increasing number of articles are published introducing clinical scales to describe neurovascular diseases. Unfortunately, unless you are some kind of idiot savant, there are now too many scales to remember.
H.J. Cloft, D.F. Kallmes
openaire   +3 more sources

The politics of scaling [PDF]

open access: yes · Social Studies of Science, 2021
A fixation on ‘scaling up’ has captured current innovation discourses and, with it, political and economic life at large. Perhaps most visible in the rise of platform technologies, big data and concerns about a new era of monopolies, scalability thinking has also permeated public policy in the search for solutions to ‘grand societal challenges ...
Sebastian Pfotenhauer   +3 more
openaire   +6 more sources

Temperature-Dependent Crystallization Mechanisms of Methylammonium Lead Iodide Perovskite From Different Solvents

open access: yes · Frontiers in Energy Research, 2021
Hybrid perovskites are a novel type of semiconductors that show great potential for solution-processed optoelectronic devices. For all applications, the device performance is determined by the quality of the solution-processed perovskite thin films ...
Oleksandra Shargaieva   +6 more
doaj   +1 more source
