Results 291 to 300 of about 2,331,313
Some of the following articles may not be open access.

ReFT: Reasoning with Reinforced Fine-Tuning

Annual Meeting of the Association for Computational Linguistics
One way to enhance the reasoning capability of Large Language Models (LLMs) is to conduct Supervised Fine-Tuning (SFT) using Chain-of-Thought (CoT) annotations.
Trung Quoc Luong   +5 more
semanticscholar   +1 more source

RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture

arXiv.org
There are two common ways in which developers are incorporating proprietary and domain-specific data when building applications of Large Language Models (LLMs): Retrieval-Augmented Generation (RAG) and Fine-Tuning.
M. A. D. L. Balaguer   +15 more
semanticscholar   +1 more source

Performance Tuning: Tuning the Instance

2003
In the previous chapter, you learned how to tune an application by writing efficient SQL in order to maximize its performance. The use of optimal SQL, efficient design of the layout of the database objects, and so on are all part of a planned or proactive tuning effort.
openaire   +1 more source

Mucin tuning

Nature Chemical Biology, 2021
openaire   +2 more sources

VAGAL TUNING

ASAIO Journal, 1965
A. M. Bilgutay   +4 more
openaire   +2 more sources

Tuned out

Nature Chemical Biology, 2022
openaire   +2 more sources

Index Tuning

2003
Philippe Bonnet, Dennis Shasha
openaire   +1 more source

Tuning out the "tuning forks"

American Journal of Physics, 1982
T. H. Ragsdale, W. L. Davis
openaire   +1 more source
