Results 1 to 10 of about 23,732,778
Neurocognitive Models of Aging: Can What Is Lost Be Compensated?
A decline is observed in many cognitive functions during the aging process. Neurocognitive models, which make it possible to examine the relationship between these age-related changes in cognitive functions and neural processes, [relate] activation in the brain ...
Handan Can, Elif Güldemir
doaj +3 more sources
The effect of TSH, fT3, and fT4 levels on neurocognitive symptoms in patients with schizophrenia
Aim: In this study, we aimed to examine the relationship between thyroid hormone levels and positive, negative, general, and cognitive symptoms in euthyroid psychosis patients. Materials and Methods: 33 patients with schizophrenia were included in the study.
Hatice Kaya, Batuhan Ayik
semanticscholar +1 more source
Visual Instruction Tuning [PDF]
Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use
Haotian Liu +3 more
semanticscholar +1 more source
DINOv2: Learning Robust Visual Features without Supervision [PDF]
The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision.
M. Oquab +25 more
semanticscholar +1 more source
Improved Baselines with Visual Instruction Tuning [PDF]
Large multimodal models (LMM) have recently shown encouraging progress with visual instruction tuning. In this paper, we present the first systematic study to investigate the design choices of LMMs in a controlled setting under the LLaVA framework.
Haotian Liu +3 more
semanticscholar +1 more source
BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models [PDF]
The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from
Junnan Li +3 more
semanticscholar +1 more source
This article examines the spatial and temporal processing of the language system within the framework of the Dynamic Bidirectional Processing Model of Auditory Sentence Comprehension (Friederici, 2002, 2011, 2012), [focusing on] the neurolinguist Angela D.
İpek Pınar Uzun
doaj +1 more source
Adding Conditional Control to Text-to-Image Diffusion Models [PDF]
We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers ...
Lvmin Zhang, Anyi Rao, Maneesh Agrawala
semanticscholar +1 more source
MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models [PDF]
The recent GPT-4 has demonstrated extraordinary multi-modal abilities, such as directly generating websites from handwritten text and identifying humorous elements within images.
Deyao Zhu +4 more
semanticscholar +1 more source
Classifier-Free Diffusion Guidance [PDF]
Classifier guidance is a recently introduced method to trade off mode coverage and sample fidelity in conditional diffusion models post training, in the same spirit as low temperature sampling or truncation in other types of generative models. Classifier
Jonathan Ho
semanticscholar +1 more source

