Results 31 to 40 of about 1,137,884
Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models [PDF]
Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot generalization in many downstream tasks with properly designed text prompts.
Manli Shu +6 more
semanticscholar +1 more source
Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity [PDF]
When primed with only a handful of training samples, very large, pretrained language models such as GPT-3 have shown competitive results when compared to fully-supervised, fine-tuned, large, pretrained language models.
Yao Lu +4 more
semanticscholar +1 more source
P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks
Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage at training. However, in the context of NLU, prior work reveals that prompt tuning does not perform well for normal ...
Xiao Liu +6 more
semanticscholar +1 more source
Current large language model (LLM) applications often employ multi-component prompts, comprising both system and user prompts, to guide model behaviors. While recent advancements have demonstrated the efficacy of automatically optimizing either the system or user prompt to boost performance, such unilateral approaches often yield suboptimal outcomes ...
Zhang, Xinyu +3 more
openaire +2 more sources
Artificial Intelligence (AI) plays an increasingly prominent role in various spheres of life in today’s world, including generation of a variety of visual content from selfie stream processing to creating works of digital art.
Ruslan Khandogin, Nina S. Proner
doaj +1 more source
I consider a serious objection to the knowledge account of assertion and develop a response. In the process I introduce important new data on prompting assertion, which all theorists working in the area should take note of.
openaire +2 more sources
Insights Gained from Using AI to Produce Cases for Problem-Based Learning
Ulster University’s School of Medicine embraces a problem-based learning (PBL) approach, yet crafting scenarios for this method poses challenges, requiring collaboration among medical and academic experts who are often difficult to convene. This obstacle
Enjy Abouzeid, Patricia Harris
doaj +1 more source
FI-NL2PY2SQL: Financial Industry NL2SQL Innovation Model Based on Python and Large Language Model
With the rapid development of prominent models, NL2SQL has made many breakthroughs, but customers still hope that the accuracy of NL2SQL can be continuously improved through optimization. The method based on large models has brought revolutionary changes
Xiaozheng Du +4 more
doaj +1 more source
Empowering Local Image Generation: Harnessing Stable Diffusion for Machine Learning and AI [PDF]
This paper examines the ability to use Stable Diffusion's diffusion models to get state-of-the-art synthesis results on image data and other types of data.
Ahmed Imran KABIR +3 more
doaj +1 more source
ABSTRACT Introduction Characterizing stressful events reported by childhood cancer survivors experienced throughout the lifespan may help improve trauma‐informed care relevant to the survivor experience. Methods Participants included 2552 survivors (54% female; 34 years of age) and 469 community controls (62% female; 33 years of age) from the St.
Megan E. Ware +13 more
wiley +1 more source