Results 231 to 240 of about 5,805,357
Some of the following articles may not be open access.
Domain-Adversarial Training of Neural Networks
Journal of Machine Learning Research, 2015
We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions.
Yaroslav Ganin +7 more
semanticscholar +1 more source
Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks
European Conference on Computer Vision, 2020
Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre- ...
Xiujun Li +11 more
semanticscholar +1 more source
“Training, Training, Training”
2017
This chapter deals with the training system of the Iraqi armed forces. It describes the Training Division of the General Staff, its missions, responsibilities and development, especially during the Iran-Iraq war, the Iraqi military doctrine and training methods, and the staff directorates subordinated to it.
Pesach Malovany +3 more
openaire +1 more source
Emergency Nurse, 2008
The National Patient Safety Agency (NPSA) has launched a Foresight Training Resource Pack to improve the safety of patients treated in the NHS.
openaire +2 more sources
InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models
arXiv.org
We introduce InternVL3, a significant advancement in the InternVL series featuring a native multimodal pre-training paradigm. Rather than adapting a text-only large language model (LLM) into a multimodal large language model (MLLM) that supports visual ...
Jinguo Zhu +47 more
semanticscholar +1 more source
Understanding R1-Zero-Like Training: A Critical Perspective
arXiv.org
DeepSeek-R1-Zero has shown that reinforcement learning (RL) at scale can directly enhance the reasoning capabilities of LLMs without supervised fine-tuning.
Zi-Yan Liu +7 more
semanticscholar +1 more source
Nursing Older People, 2009
The Royal Society for Public Health is holding a one-day course providing vital guidance on the nutritional needs of older people. The course includes advice on nutritional assessment, catering, practical cooking and menu planning.
openaire +2 more sources
TÜLU 3: Pushing Frontiers in Open Language Model Post-Training
arXiv.org
Language model post-training is applied to refine behaviors and unlock new skills across a wide range of recent language models, but open recipes for applying these techniques lag behind proprietary ones. The underlying training data and recipes for post- ...
Nathan Lambert +22 more
semanticscholar +1 more source
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
arXiv.org
Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity.
Evan Hubinger +38 more
semanticscholar +1 more source
Nursing Standard, 1987
Training district nurses in health education benefits the nurses themselves, users of the service and management, according to a health promotion officer from Nottinghamshire.
openaire +2 more sources

