Results 21 to 30 of about 2,331,313

DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models [PDF]

open access: yes · Neural Information Processing Systems, 2023
Learning from human feedback has been shown to improve text-to-image models. These techniques first learn a reward function that captures what humans care about in the task and then improve the models based on the learned reward function. (A hedged sketch of this two-stage recipe follows below.)
Ying Fan   +9 more
semanticscholar   +1 more source
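The two-stage recipe in the abstract above (learn a reward from human preferences, then optimize the generator against it) can be illustrated with a toy PyTorch sketch. Everything below is an assumption for illustration: tiny MLPs stand in for the reward model and the diffusion model, random vectors stand in for images, and a plain REINFORCE update stands in for DPOK's actual policy-gradient algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stage 1: fit a reward model on human preference pairs (Bradley-Terry loss).
reward_model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt_r = torch.optim.Adam(reward_model.parameters(), lr=1e-3)
for _ in range(200):
    preferred, rejected = torch.randn(32, 16), torch.randn(32, 16)  # toy features
    margin = reward_model(preferred) - reward_model(rejected)
    opt_r.zero_grad()
    (-F.logsigmoid(margin).mean()).backward()
    opt_r.step()

# Stage 2: REINFORCE-style fine-tuning of a toy Gaussian "policy" against the
# frozen reward; in DPOK this role is played by the diffusion denoising chain.
policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
opt_p = torch.optim.Adam(policy.parameters(), lr=1e-4)
for _ in range(200):
    ctx = torch.randn(32, 16)                                # stands in for the prompt
    mean = policy(ctx)
    action = (mean + 0.1 * torch.randn_like(mean)).detach()  # sampled "image"
    with torch.no_grad():
        reward = reward_model(action).squeeze(-1)
    log_prob = -((action - mean) ** 2).sum(-1) / (2 * 0.1 ** 2)
    opt_p.zero_grad()
    (-(reward * log_prob).mean()).backward()                 # policy-gradient step
    opt_p.step()
```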

Directly Fine-Tuning Diffusion Models on Differentiable Rewards [PDF]

open access: yes · International Conference on Learning Representations, 2023
We present Direct Reward Fine-Tuning (DRaFT), a simple and effective method for fine-tuning diffusion models to maximize differentiable reward functions, such as scores from human preference models. (See the sketch after this entry.)
Kevin Clark   +3 more
semanticscholar   +1 more source
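The core idea named in this abstract is that when the reward is differentiable, policy gradients can be skipped entirely and the reward backpropagated through the sampling chain. A hedged toy sketch, with small MLPs standing in for the frozen reward model and the final denoising steps (not the paper's code):

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
reward = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
for p in reward.parameters():
    p.requires_grad_(False)      # frozen, differentiable preference/reward model

opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
for _ in range(200):
    x = torch.randn(32, 16)      # initial noise
    for _ in range(4):           # stands in for the last few denoising steps
        x = x + generator(x)     # kept on the autograd graph end to end
    loss = -reward(x).mean()     # directly maximize the differentiable reward
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design point the sketch shows: because `x` stays differentiable through every "denoising" step, the reward gradient reaches the generator's weights directly, with no reward-weighted log-probability term needed.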

New Programme Profiles for a New Society: An Introduction

open access: yes · Tuning Journal for Higher Education, 2014
Higher education is fundamental to both national and global contemporary knowledge economies. It is also a driver for social change (see, for example, …), which crucially includes making higher education available and relevant to a wider section of society ...
Julia González   +2 more
doaj   +1 more source

Building Degree Profiles. The Tuning Approach

open access: yes · Tuning Journal for Higher Education, 2014
The development of degree profiles is an important art which has become quite specialized in recent years. This article concentrates on the analysis of the importance of the role of degree profiles in the design of degrees and, as a consequence, in ...
Julia González, Maria Yarosh
doaj   +1 more source

STUN: Reinforcement-Learning-Based Optimization of Kernel Scheduler Parameters for Static Workload Performance

open access: yes · Applied Sciences, 2022
Modern Linux operating systems are being used in a wide range of fields, from small IoT embedded devices to supercomputers. However, most machines use the default Linux scheduler parameters implemented for general-purpose environments. (An illustrative tuning loop is sketched below.)
Hyeonmyeong Lee   +2 more
doaj   +1 more source
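The setup this abstract describes, searching for kernel scheduler parameter values that speed up a fixed workload, can be illustrated with a simple measure-and-keep loop. The knob path, the benchmark command, and the hill-climbing rule below are all illustrative assumptions: STUN itself uses reinforcement learning, writing the sysctl requires root, and newer kernels expose these knobs under debugfs instead.

```python
import random
import subprocess
import time

KNOB = "/proc/sys/kernel/sched_min_granularity_ns"  # assumed tunable parameter
BENCH = ["sysbench", "cpu", "run"]                   # assumed static workload

def measure(value: int) -> float:
    """Set the scheduler knob, run the workload once, return wall-clock time."""
    with open(KNOB, "w") as f:
        f.write(str(value))
    start = time.monotonic()
    subprocess.run(BENCH, check=True, capture_output=True)
    return time.monotonic() - start

best_val = int(open(KNOB).read())
best_time = measure(best_val)
for _ in range(20):
    candidate = max(100_000, int(best_val * random.uniform(0.5, 2.0)))
    t = measure(candidate)
    if t < best_time:            # "reward" here is simply a shorter runtime
        best_val, best_time = candidate, t
print(best_val, best_time)
```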

Silicon-based distributed voltage-controlled oscillators [PDF]

open access: yes, 2001
Distributed voltage-controlled oscillators (DVCOs) are presented as a new approach to the design of silicon VCOs at microwave frequencies. In this paper, the operation of distributed oscillators is analyzed and the general oscillation condition is ...
Ali Hajimiri, Hui Wu
core   +1 more source

Competences and learning outcomes: a panacea for understanding the (new) role of Higher Education?

open access: yes · Tuning Journal for Higher Education, 2014
The competence and learning outcomes approach, which intends to improve effective performance of academic staff and students, is becoming dominant in today’s higher education. This was quite different 15 years ago. This contribution aims to offer insight ...
Robert Wagenaar
doaj   +1 more source

Mathematical Expressions Useful for Tunable Properties of Simple Square Wave Generators

open access: yes · Mathematics, 2022
This paper compares two electronically controllable solutions of triangular and square wave generators benefiting from a single IC package including all necessary active elements (modular cells fabricated in I3T 0.35 µm ON Semiconductor process operating ... (A textbook frequency relation for this class of generator is sketched below.)
Roman Sotner, Jan Jerabek
doaj   +1 more source
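For context on what a tunable-property expression for such a generator looks like, here is the standard textbook relation for a two-op-amp triangle/square wave generator (an integrator driving a Schmitt trigger). The component names are assumptions for illustration, not the paper's notation, and the paper's electronically controllable circuits will differ.

```python
# Classic relaxation oscillator: integrator (R, C) drives a Schmitt trigger
# whose thresholds are set by the divider R1, R2. The triangle swings between
# +/- Vsat * R1 / R2, giving the well-known frequency f = R2 / (4 * R1 * R * C).
R, C = 10e3, 1e-9        # integrator resistor [ohm] and capacitor [F]
R1, R2 = 10e3, 20e3      # Schmitt-trigger divider resistors [ohm]

f = R2 / (4 * R1 * R * C)
print(f"oscillation frequency: {f:.0f} Hz")  # 50000 Hz for these example values
```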

Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning

open access: yes · Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022
Prefix-tuning, or more generally continuous prompt tuning, has become an essential paradigm of parameter-efficient transfer learning. Using a large pre-trained language model (PLM), prefix-tuning can obtain strong performance by training only a small portion of parameters. (A minimal sketch follows this entry.)
Yifan Chen   +5 more
openaire   +2 more sources
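The parameter-efficient recipe named in this abstract, freezing the PLM and training only a short continuous prefix, can be sketched in a few lines of PyTorch. The toy encoder, the prefix length, and the classification head below are illustrative assumptions, not the paper's inducer-tuning construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model, prefix_len, batch = 32, 4, 8
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
for p in encoder.parameters():
    p.requires_grad_(False)                 # the pre-trained model stays frozen

prefix = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)  # trainable prefix
head = nn.Linear(d_model, 2)                # small task head (assumed)
opt = torch.optim.Adam([prefix, *head.parameters()], lr=1e-3)

tokens = torch.randn(batch, 10, d_model)    # stand-in for embedded input text
labels = torch.randint(0, 2, (batch,))
x = torch.cat([prefix.expand(batch, -1, -1), tokens], dim=1)  # prepend the prefix
logits = head(encoder(x).mean(dim=1))
loss = F.cross_entropy(logits, labels)
opt.zero_grad()
loss.backward()                             # gradients reach only prefix and head
opt.step()
```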

Universal Language Model Fine-tuning for Text Classification

open access: yes · Annual Meeting of the Association for Computational Linguistics, 2018
Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning ... (Two of its fine-tuning tricks are sketched below.)
Jeremy Howard, Sebastian Ruder
semanticscholar   +1 more source
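Two fine-tuning techniques the ULMFiT paper is known for, discriminative learning rates (each earlier layer gets a lower learning rate, divided by 2.6 per layer) and gradual unfreezing, can be sketched as follows. The layer stack and schedule are a minimal illustration, not the paper's AWD-LSTM setup.

```python
import torch
import torch.nn as nn

layers = nn.ModuleList([nn.Linear(32, 32) for _ in range(3)])
head = nn.Linear(32, 2)

# Discriminative learning rates: lr of layer l-1 is lr of layer l divided by 2.6.
base_lr = 1e-3
groups = [{"params": l.parameters(), "lr": base_lr / (2.6 ** (len(layers) - i))}
          for i, l in enumerate(layers)]
groups.append({"params": head.parameters(), "lr": base_lr})
opt = torch.optim.Adam(groups)

# Gradual unfreezing: start with only the head trainable, then unfreeze one
# layer per epoch from the top of the stack down.
for l in layers:
    for p in l.parameters():
        p.requires_grad_(False)
for epoch in range(len(layers) + 1):
    if epoch > 0:
        for p in layers[-epoch].parameters():
            p.requires_grad_(True)
    # ... one epoch of task training would run here ...
```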
