Results 21 to 30 of about 529,949
N1 tuning to words, a neural marker of visual word recognition, develops through an interaction between age and ability. How N1 tuning to a second learnt print develops remains unclear.
Shuting Huo +4 more
doaj +1 more source
The Two-Higgs Doublet Model (2HDM) is one of the most popular and natural extensions of the Higgs sector, but it has two potential fine-tuning problems, related to electroweak (EW) breaking and to the requirement of alignment with the SM Higgs boson. We ...
A. Bernal, J. A. Casas, J. M. Moreno
doaj +1 more source
Analysis of fine-tuning measures in models with extended Higgs sectors
In the literature, measures of fine-tuning have been discussed as one of the tools for assessing the feasibility of beyond-the-Standard-Model theories. In this paper we focus on two specific measures and investigate what kind of fine-tuning they actually ...
Daniël Boer +2 more
doaj +1 more source
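For orientation, the best-known such measure (a standard reference point, not necessarily one of the two measures this paper analyses) is the Barbieri-Giudice sensitivity of the Z-boson mass to the model's input parameters $p_i$:

    $\Delta \equiv \max_i \left| \frac{\partial \ln m_Z^2}{\partial \ln p_i} \right|$

where a larger $\Delta$ signals a more finely tuned electroweak scale.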
Numerous studies have demonstrated that Convolutional Neural Network (CNN) models are capable of classifying visual field (VF) defects with great accuracy.
Masyitah Abu +6 more
doaj +1 more source
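To make the setting concrete, here is a minimal PyTorch sketch of a small CNN of the kind used for such VF-defect classification; it is entirely illustrative (not the models benchmarked by the authors), and the 32x32 input size and five-class output are assumptions:

    import torch
    import torch.nn as nn

    class VFDefectCNN(nn.Module):
        # Hypothetical classifier: 1-channel visual-field maps -> n_classes defect patterns.
        def __init__(self, n_classes: int = 5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # assumes 32x32 inputs

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h = self.features(x)            # (N, 32, 8, 8) feature maps
            return self.classifier(h.flatten(1))  # (N, n_classes) logits

    logits = VFDefectCNN()(torch.randn(4, 1, 32, 32))  # batch of 4 dummy VF maps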
Naturalness and Fine Tuning in the NMSSM: Implications of Early LHC Results [PDF]
We study the fine tuning in the parameter space of the semi-constrained NMSSM, where most soft SUSY-breaking parameters are universal at the GUT scale. We discuss the dependence of the fine tuning on the soft SUSY-breaking parameters $M_{1/2}$ and $m_0$, and on ...
A Strumia +52 more
core +3 more sources
The fine art of fine-tuning: A structured review of advanced LLM fine-tuning techniques
Transformer-based models have consistently demonstrated superior accuracy compared to various traditional models across a range of downstream tasks.
Samar Pratap +5 more
doaj +1 more source
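As one concrete example of the techniques such a review typically covers (an illustration, not a summary of this paper's contents): low-rank adaptation (LoRA) freezes the pretrained weights $W_0 \in \mathbb{R}^{d \times k}$ and learns only a low-rank update,

    $W = W_0 + \frac{\alpha}{r}\,BA, \qquad B \in \mathbb{R}^{d \times r}, \; A \in \mathbb{R}^{r \times k}, \; r \ll \min(d, k),$

cutting the trainable parameter count from $dk$ to $r(d+k)$.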
Exploring fine-tuning of the Next-to-Minimal Composite Higgs Model
We perform a detailed study of the fine-tuning of the two-site, 4D, Next-to-Minimal Composite Higgs Model (NMCHM), based on the global symmetry breaking pattern SO(6) → SO(5).
Daniel Murnane +2 more
doaj +1 more source
Repeatability of Fine-Tuning Large Language Models Illustrated Using QLoRA
Large language models (LLMs) have shown progress and promise in diverse applications ranging from the medical field to chatbots. Developing LLMs requires a large corpus of data and significant computational resources to achieve efficient learning ...
Saeed S. Alahmari +3 more
doaj +1 more source
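A minimal sketch of a QLoRA-style setup using the Hugging Face transformers/peft/bitsandbytes stack follows; this is not necessarily the authors' exact configuration, and the model name and hyperparameters are assumptions:

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    torch.manual_seed(0)  # fix the seed when probing run-to-run repeatability

    bnb = BitsAndBytesConfig(
        load_in_4bit=True,                      # 4-bit base weights (the "Q" in QLoRA)
        bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    # Requires a CUDA GPU with bitsandbytes installed; "gpt2" is a stand-in model.
    base = AutoModelForCausalLM.from_pretrained("gpt2", quantization_config=bnb)

    lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
    model = get_peft_model(base, lora)          # only the small LoRA adapters train
    model.print_trainable_parameters()

The frozen 4-bit base plus tiny trainable adapters is what makes QLoRA cheap enough to rerun many times, which is what a repeatability study of this kind exercises.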
Fine Tuning in General Gauge Mediation [PDF]
We study the fine-tuning problem in the context of general gauge mediation. Numerical analyses aimed at relaxing fine-tuning are presented. We analyse the problem for three typical choices of the messenger scale, that is, the GUT scale ($2\times10^{16}$ GeV ...
A Birkedal +74 more
core +2 more sources
Remote sensing tuning: A survey
Large models have accelerated the development of intelligent interpretation in remote sensing. Many remote sensing foundation models (RSFMs) have emerged in recent years, sparking a new wave of deep learning in this field.
Dongshuo Yin +6 more
doaj +1 more source

