Results 271 to 280 of about 32,887,941 (331)
Some of the following articles may not be open access.
When to use and how to report the results of PLS-SEM
European Business Review, 2019
Purpose: The purpose of this paper is to provide a comprehensive, yet concise, overview of the considerations and metrics required for partial least squares structural equation modeling (PLS-SEM) analysis and result reporting.
Joseph F. Hair +3 more
semanticscholar +1 more source
2023
Reports from the Leicestershire Local History Council One Day Conference, Standing Conference for Local History and the Blake Report on Local ...
openaire +1 more source
arXiv.org
We introduce Qwen2.5-VL, the latest flagship model of Qwen vision-language series, which demonstrates significant advancements in both foundational capabilities and innovative functionalities. Qwen2.5-VL achieves a major leap forward in understanding and
Shuai Bai +26 more
semanticscholar +1 more source
arXiv.org
We introduce Gemma 3, a multimodal addition to the Gemma family of lightweight open models, ranging in scale from 1 to 27 billion parameters. This version introduces vision understanding abilities, a wider coverage of languages and longer context - at ...
Gemma Team Aishwarya Kamath +209 more
semanticscholar +1 more source
arXiv.org
In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen 2.5 has been significantly improved during both the pre-training and post-training stages ...
Qwen An Yang +43 more
semanticscholar +1 more source
arXiv.org
This report introduces the Qwen2 series, the latest addition to our large language models and large multimodal models. We release a comprehensive suite of foundational and instruction-tuned language models, encompassing a parameter range from 0.5 to 72 ...
An Yang +57 more
semanticscholar +1 more source
arXiv.org
We introduce Qwen3-VL, the most capable vision-language model in the Qwen series to date, achieving superior performance across a broad range of multimodal benchmarks. It natively supports interleaved contexts of up to 256K tokens, seamlessly integrating
Shuai Bai +64 more
semanticscholar +1 more source