Results 1 to 10 of about 3,151,766

Automatic classification of autism spectrum disorder in children using cortical thickness and support vector machine. [PDF]

open access: yesBrain Behav, 2021
Objective: Autism spectrum disorder (ASD) is a neurodevelopmental condition with a heterogeneous phenotype. The role of biomarkers in ASD diagnosis has been highlighted; cortical thickness has proved to be involved in the etiopathogenesis of ASD core ...
Squarcina L   +8 more
europepmc   +5 more sources

The question of the Nogai language and translation from Nogai [PDF]

open access: yesUluslararası Türk Lehçe Araştırmaları Dergisi
The Nogais are a Turkic people who have experienced the largest number of exiles and genocides in history. As a result, their language, cultural heritage, history, and ethnography have not been widely studied. Frequent changes of script have led to serious errors in transliteration and translation.
Gumru
core   +6 more sources

Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding [PDF]

open access: yesConference on Empirical Methods in Natural Language Processing, 2023
We present Video-LLaMA, a multi-modal framework that empowers Large Language Models (LLMs) with the capability of understanding both visual and auditory content in video.
Hang Zhang, Xin Li, Lidong Bing
semanticscholar   +1 more source

Is ChatGPT a General-Purpose Natural Language Processing Task Solver? [PDF]

open access: yesConference on Empirical Methods in Natural Language Processing, 2023
Spurred by advancements in scale, large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing (NLP) tasks zero-shot -- i.e., without adaptation on downstream data.
Chengwei Qin   +5 more
semanticscholar   +1 more source

CodeT5+: Open Code Large Language Models for Code Understanding and Generation [PDF]

open access: yesConference on Empirical Methods in Natural Language Processing, 2023
Large language models (LLMs) pretrained on vast source code have achieved prominent progress in code intelligence. However, existing code LLMs have two main limitations in terms of architecture and pretraining tasks.
Yue Wang   +5 more
semanticscholar   +1 more source

SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities [PDF]

open access: yesConference on Empirical Methods in Natural Language Processing, 2023
Multi-modal large language models are regarded as a crucial step towards Artificial General Intelligence (AGI) and have garnered significant interest with the emergence of ChatGPT.
Dong Zhang   +6 more
semanticscholar   +1 more source

A Survey of GPT-3 Family Large Language Models Including ChatGPT and GPT-4 [PDF]

open access: yesNatural Language Processing Journal, 2023
Large language models (LLMs) are a special class of pretrained language models obtained by scaling model size, pretraining corpus and computation. LLMs, because of their large size and pretraining on large volumes of text data, exhibit special abilities ...
Katikapalli Subramanyam Kalyan
semanticscholar   +1 more source

Red Teaming Language Models with Language Models [PDF]

open access: yesConference on Empirical Methods in Natural Language Processing, 2022
Language Models (LMs) often cannot be deployed because of their potential to harm users in hard-to-predict ways. Prior work identifies harmful behaviors before deployment by using human annotators to hand-write test cases.
Ethan Perez   +8 more
semanticscholar   +1 more source

The Sırderya Oghuz and their place in the ethnic composition of the Uzbek Turks

open access: yesBulletin of the L.N. Gumilyov Eurasian National University. Political Science. Regional Studies. Oriental Studies. Turkology Series, 2023
Many Turkish tribes and non-Turkic (Mongol et al., especially East Iranian) groups contributed to the ethnic formation of the Uzbeks, one of the most populous Turkish communities.
G. Babayar, F. Dzhumaniyazova
semanticscholar   +1 more source

AudioLM: A Language Modeling Approach to Audio Generation [PDF]

open access: yesIEEE/ACM Transactions on Audio Speech and Language Processing, 2022
We introduce AudioLM, a framework for high-quality audio generation with long-term consistency. AudioLM maps the input audio to a sequence of discrete tokens and casts audio generation as a language modeling task in this representation space. We show how ...
Zalán Borsos   +10 more
semanticscholar   +1 more source