Results 51 to 60 of about 2,420,708
Dual Adapter Tuning of Vision–Language Models Using Large Language Models
Vision–language models (VLMs) pre-trained on large-scale image–text pairs have shown impressive results on zero-shot vision tasks. The transferability of their knowledge can be further improved with only a limited number of samples.
Mohammad Reza Zarei +2 more
doaj +1 more source
Treatment Decision‐Making Roles and Preferences Among Adolescents and Young Adults With Cancer
Background: Decision-making (DM) dynamics between adolescents and young adults (AYAs) with cancer, their parents, and oncologists remain underexplored in diverse populations. We examined cancer treatment DM preferences among an ethnically and socioeconomically diverse group of AYAs and their parents.
Amanda M. Gutierrez +14 more
wiley +1 more source
Generating Synthetic Data for Neural Keyword-to-Question Models
Search typically relies on keyword queries, but these are often semantically ambiguous. We propose to overcome this by offering users natural language questions, based on their keyword queries, to disambiguate their intent.
Bogdanova Dasha +4 more
core +1 more source
Autoformalization with Large Language Models
Autoformalization is the process of automatically translating from natural language mathematics to formal specifications and proofs. A successful autoformalization system could advance the fields of formal verification, program synthesis, and artificial intelligence.
Wu, Yuhuai +6 more
openaire +3 more sources
DISCO: Distilling Counterfactuals with Large Language Models [PDF]
Zeming Chen +4 more
openalex +1 more source
Background: Parents of children treated for acute lymphoblastic leukemia (ALL) often experience significant caregiver burden and disruption to their well-being. While parent quality of life (QoL) during treatment is well characterized, little is known about outcomes during early survivorship.
Sara Dal Pra +3 more
wiley +1 more source
Scaling Recurrent Neural Network Language Models
This paper investigates the scaling properties of Recurrent Neural Network Language Models (RNNLMs). We discuss how to train very large RNNs on GPUs and address the questions of how RNNLMs scale with respect to model size, training-set size ...
Ash, Tom +4 more
core +1 more source
This perspective highlights emerging insights into how the circadian transcription factor CLOCK:BMAL1 regulates chromatin architecture, cooperates with other transcription factors, and coordinates enhancer dynamics. We propose an updated framework for how circadian transcription factors operate within dynamic and multifactorial chromatin landscapes ...
Xinyu Y. Nie, Jerome S. Menet
wiley +1 more source
Personality Emulation Utilizing Large Language Models
Fake identities have proven to be an effective method for conducting privacy and cybersecurity research; however, existing models are limited in their ability to interact with and respond to received communications.
Jack Kolenbrander, Alan J. Michaels
doaj +1 more source
Large language models and the emergence phenomena
This perspective explores the potential of emergence phenomena in large language models (LLMs) to transform data management and analysis in radiology. We provide a concise explanation of LLMs, define the concept of emergence in machine learning, offer ...
Vera Sorin, Eyal Klang
doaj +1 more source