Results 261 to 270 of about 14,757,385
Some of the following articles may not be open access.

Wide Area Technical Report Service

Communications of the ACM, 1994
Wide Area Technical Report Service (WATERS) is a distributed database of computer science technical reports. Contributors are departments of computer science that make their reports, stored locally at their sites, available through World-Wide Web and a WAIS search engine.
Jim French   +3 more
openaire   +1 more source

Qwen2.5-VL Technical Report

arXiv.org
We introduce Qwen2.5-VL, the latest flagship model of Qwen vision-language series, which demonstrates significant advancements in both foundational capabilities and innovative functionalities. Qwen2.5-VL achieves a major leap forward in understanding and
Shuai Bai   +26 more
semanticscholar   +1 more source

Qwen3 Technical Report

arXiv.org
In this work, we present Qwen3, the latest version of the Qwen model family. Qwen3 comprises a series of large language models (LLMs) designed to advance performance, efficiency, and multilingual capabilities.
An Yang   +59 more
semanticscholar   +1 more source

Qwen2.5 Technical Report

arXiv.org
In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen2.5 has been significantly improved during both the pre-training and post-training stages ...
Qwen, An Yang   +43 more
semanticscholar   +1 more source

Gemma 3 Technical Report

arXiv.org
We introduce Gemma 3, a multimodal addition to the Gemma family of lightweight open models, ranging in scale from 1 to 27 billion parameters. This version introduces vision understanding abilities, a wider coverage of languages and longer context - at ...
Gemma Team, Aishwarya Kamath   +209 more
semanticscholar   +1 more source

Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone

arXiv.org
We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3 ...
Marah Abdin   +88 more
semanticscholar   +1 more source

Qwen2 Technical Report

arXiv.org
This report introduces the Qwen2 series, the latest addition to our large language models and large multimodal models. We release a comprehensive suite of foundational and instruction-tuned language models, encompassing a parameter range from 0.5 to 72 ...
An Yang   +57 more
semanticscholar   +1 more source

Qwen2.5-Omni Technical Report

arXiv.org
In this report, we present Qwen2.5-Omni, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner.
Jin Xu   +13 more
semanticscholar   +1 more source
