Results 1 to 10 of about 9,304,881

Scaling Instruction-Finetuned Language Models [PDF]

open access: yes · Journal of Machine Learning Research, 2022
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks (a minimal sketch of the instruction phrasing appears after this entry).
Hyung Won Chung   +31 more
semanticscholar   +1 more source
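
The entry above describes instruction finetuning: supervised examples are rephrased as natural-language instructions, so a model finetuned on many such tasks generalizes better to unseen ones. A minimal sketch of the phrasing step in Python; the template wording and field names are hypothetical illustrations, not the paper's actual Flan templates:

```python
# Minimal sketch: phrase a supervised NLI example as an instruction.
# Template wording and field names are hypothetical, not Flan's own.

def to_instruction(example: dict) -> dict:
    """Turn a (premise, hypothesis, label) example into a (prompt, target) pair."""
    prompt = (
        "Does the premise entail the hypothesis? Answer yes, maybe, or no.\n"
        f"Premise: {example['premise']}\n"
        f"Hypothesis: {example['hypothesis']}"
    )
    target = {0: "yes", 1: "maybe", 2: "no"}[example["label"]]
    return {"prompt": prompt, "target": target}

pair = to_instruction({
    "premise": "A dog is running in the park.",
    "hypothesis": "An animal is outdoors.",
    "label": 0,
})
print(pair["prompt"])
print(pair["target"])  # -> yes
```

Finetuning then proceeds as ordinary supervised learning on the (prompt, target) pairs, typically mixed across many tasks.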

Swin Transformer V2: Scaling Up Capacity and Resolution [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2021
We present techniques for scaling Swin Transformer [35] up to 3 billion parameters and making it capable of training with images of up to 1,536×1,536 resolution.
Ze Liu   +11 more
semanticscholar   +1 more source

Scaling Autoregressive Models for Content-Rich Text-to-Image Generation [PDF]

open access: yes · Transactions on Machine Learning Research, 2022
We present the Pathways Autoregressive Text-to-Image (Parti) model, which generates high-fidelity photorealistic images and supports content-rich synthesis involving complex compositions and world knowledge.
Jiahui Yu   +16 more
semanticscholar   +1 more source

Scaling Vision Transformers to 22 Billion Parameters [PDF]

open access: yes · International Conference on Machine Learning, 2023
The scaling of Transformers has driven breakthrough capabilities for language models. At present, the largest language models (LLMs) contain upwards of 100B parameters.
Mostafa Dehghani   +41 more
semanticscholar   +1 more source

Reproducible Scaling Laws for Contrastive Language-Image Learning [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2022
Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guidance as large ... (a curve-fitting sketch appears after this entry)
Mehdi Cherti   +8 more
semanticscholar   +1 more source
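
The scaling laws the entry above refers to are typically power laws, e.g. error ≈ a · C^(-alpha) for compute C, which appear as straight lines in log-log space. A minimal sketch, using numpy, of recovering the exponent by linear regression; the (compute, error) pairs are made-up numbers for illustration only:

```python
# Minimal sketch: fit a power law  error = a * compute**(-alpha)
# by linear regression in log-log space.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # training FLOPs (made up)
error = np.array([0.52, 0.41, 0.33, 0.26, 0.21])    # task error (made up)

# log(error) = log(a) - alpha * log(compute): a line in log-log space.
slope, intercept = np.polyfit(np.log(compute), np.log(error), deg=1)
alpha, a = -slope, np.exp(intercept)
print(f"fitted: error ~ {a:.3g} * compute^(-{alpha:.3f})")

# Such a fit is what makes extrapolation to larger budgets possible:
print(f"predicted error at 1e23 FLOPs: {a * 1e23 ** (-alpha):.3f}")
```

The "valuable guidance" in the abstract is exactly this extrapolation step: fitting the law at small scale to predict performance at scales not yet trained.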

Scaling Vision Transformers [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2021
Attention-based neural networks such as the Vision Transformer (ViT) have recently attained state-of-the-art results on many computer vision benchmarks.
Xiaohua Zhai   +3 more
semanticscholar   +1 more source

Emergence of Scaling in Random Networks [PDF]

open access: yes · Science, 1999
Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution (a generative sketch appears after this entry).
A.-L. Barabási   +1 more
semanticscholar   +2 more sources
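
The entry above is the classic scale-free network result: the degree distribution follows a power law, P(k) ~ k^(-gamma), with gamma ≈ 3 for the preferential-attachment model the paper introduces. A minimal sketch using networkx's standard barabasi_albert_graph generator to grow such a graph and inspect its heavy-tailed degrees:

```python
# Minimal sketch: grow a graph by preferential attachment (the
# Barabasi-Albert model) and print its empirical degree distribution,
# whose log-log plot is roughly a straight line with slope near -3.
from collections import Counter

import networkx as nx

G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)

degree_counts = Counter(d for _, d in G.degree())
n = G.number_of_nodes()
for k in sorted(degree_counts)[:8]:
    print(f"P(k={k}) = {degree_counts[k] / n:.4f}")
# Most nodes have small degree, while a few hubs are very highly
# connected: the scale-free signature.
```

Preferential attachment (new nodes link to existing nodes with probability proportional to their degree) is the growth mechanism the paper proposes to explain the power law.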

Temperature-Dependent Crystallization Mechanisms of Methylammonium Lead Iodide Perovskite From Different Solvents

open access: yes · Frontiers in Energy Research, 2021
Hybrid perovskites are a novel type of semiconductors that show great potential for solution-processed optoelectronic devices. For all applications, the device performance is determined by the quality of the solution-processed perovskite thin films ...
Oleksandra Shargaieva   +6 more
doaj   +1 more source

Scaling up nutrition through multisectoral planning: An exploratory review of 26 national nutrition plans

open access: yes · Maternal and Child Nutrition, 2021
With a growing consensus on the need to address malnutrition in a comprehensive and multisectoral way, there has been increased attention on the processes and factors for multisectoral nutrition planning to be successful.
Amanda Coile   +5 more
doaj   +1 more source

Mastering stakeholders’ engagement to reach national scale, sustainability and wide adoption of digital health initiatives: lessons learnt from Burkina Faso

open access: yes · Family Medicine and Community Health, 2021
Although low-income countries have recently seen an exponential flourishing of digital health initiatives, the landscape is characterised by a myriad of small pilots that rarely reach scaling, sustainability and wide adoption.
Riccardo Lampariello   +1 more
doaj   +1 more source
