Results 11 to 20 of about 2,331,313

The Flan Collection: Designing Data and Methods for Effective Instruction Tuning [PDF]

open access: yes · International Conference on Machine Learning, 2023
We study the design decisions of publicly available instruction tuning methods, and break down the development of Flan 2022 (Chung et al., 2022). Through careful ablation studies on the Flan Collection of tasks and methods, we tease apart the effect of ...
S. Longpre   +10 more
semanticscholar   +1 more source

LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention [PDF]

open access: yes · arXiv.org, 2023
We present LLaMA-Adapter, a lightweight adaption method to efficiently fine-tune LLaMA into an instruction-following model. Using 52K self-instruct demonstrations, LLaMA-Adapter only introduces 1.2M learnable parameters upon the frozen LLaMA 7B model ...
Renrui Zhang   +8 more
semanticscholar   +1 more source

Tuning in Higher Education: Ten years on

open access: yes · Tuning Journal for Higher Education, 2023
When the first issue of the Tuning Journal for Higher Education was published in November 2013, the Tuning initiative had become of global significance, running projects in all continents.
Julia María González Ferreras   +1 more
doaj   +1 more source

Retrograde Tuning of Tuning [PDF]

open access: yes · Neuron, 2008
One way to localize sounds is to measure differences in sound intensity at the two ears. This comparison is made in the lateral superior olive, where signals from both ears converge. Magnusson et al. in this issue of Neuron show that dendritic GABA release can regulate this comparison, which may allow animals localizing sounds to adapt to listening ...
Matthew A. Xu-Friedman, Wade G. Regehr
openaire   +2 more sources

TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation [PDF]

open access: yes · ACM Conference on Recommender Systems, 2023
Large Language Models (LLMs) have demonstrated remarkable performance across diverse domains, thereby prompting researchers to explore their potential for use in recommendation systems. Initial attempts have leveraged the exceptional capabilities of LLMs,
Keqin Bao   +5 more
semanticscholar   +1 more source

Fine-Tuning Language Models with Just Forward Passes [PDF]

open access: yes · Neural Information Processing Systems, 2023
Fine-tuning language models (LMs) has yielded success on diverse downstream tasks, but as LMs grow in size, backpropagation requires a prohibitively large amount of memory.
Sadhika Malladi   +6 more
semanticscholar   +1 more source

Mathématiques de la musique idéale : Harmonie cosmique et proportions architecturales en Europe et aux États-Unis dans les années 1950-1960 [Mathematics of Ideal Music: Cosmic Harmony and Architectural Proportions in Europe and the United States in the 1950s and 1960s]

open access: yes · Études Épistémè, 2023
The idea that the proportions of a building are to be set on mathematical ratios that arise from musical harmony – itself being the image of a cosmic harmony inaudible to our mortal ears – was brought back to the forefront of the architectural scene in ...
Jules-Valentin Boucher
doaj   +1 more source

MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning [PDF]

open access: yes · International Conference on Learning Representations, 2023
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset.
Xiang Yue   +7 more
semanticscholar   +1 more source

Fine-Tuning LLaMA for Multi-Stage Text Retrieval [PDF]

open access: yes · Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2023
While large language models (LLMs) have shown impressive NLP capabilities, existing IR applications have mainly focused on prompting LLMs to generate query expansions or permutations for listwise reranking. In this study, we leverage LLMs directly to
Xueguang Ma   +4 more
semanticscholar   +1 more source

SVDiff: Compact Parameter Space for Diffusion Fine-Tuning [PDF]

open access: yes · IEEE International Conference on Computer Vision, 2023
Diffusion models have achieved remarkable success in text-to-image generation, enabling the creation of high-quality images from text prompts or other modalities.
Ligong Han   +5 more
semanticscholar   +1 more source
