Results 61 to 70 of about 731,801
Should You Mask 15% in Masked Language Modeling? [PDF]
Masked language models (MLMs) conventionally mask 15% of tokens due to the belief that more masking would leave insufficient context to learn good representations; this masking rate has been widely used, regardless of model sizes or masking strategies. In this work, we revisit this important choice of MLM pre-training.
arxiv
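The conventional 15% random-token masking discussed in the abstract above can be illustrated with a minimal sketch. This is a simplified illustration, not code from the paper; the function and token names are invented for the example, and real MLM pipelines also mix in kept/replaced tokens (the 80/10/10 rule), which is omitted here:

```python
import random

MASK_RATE = 0.15   # the conventional 15% masking rate
MASK_TOKEN = "[MASK]"

def mask_tokens(tokens, rate=MASK_RATE, seed=0):
    """Randomly replace a fraction of tokens with a mask symbol.

    Returns the corrupted sequence and the masked positions, which
    serve as the prediction targets for the MLM objective.
    """
    rng = random.Random(seed)
    n_mask = max(1, round(len(tokens) * rate))
    positions = sorted(rng.sample(range(len(tokens)), n_mask))
    masked = list(tokens)
    for p in positions:
        masked[p] = MASK_TOKEN
    return masked, positions

tokens = "the quick brown fox jumps over the lazy dog again".split()
masked, targets = mask_tokens(tokens)
```

Revisiting the masking rate, as the paper does, amounts to treating `MASK_RATE` as a tunable hyperparameter rather than a fixed convention.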
In nanoelectronic circuit synthesis, the majority gate and the inverter form the basic combinational logic primitives. This paper deduces the mathematical formulae to estimate the logical masking capability of majority gates, which are used extensively ...
Balasubramanian, P, Naayagi, R T
core +1 more source
The effects of the global structure of the mask in visual backward masking [PDF]
The visibility of a target can be strongly affected by a trailing mask. Research on visual backward masking has typically focused on the temporal characteristics of masking, whereas non-basic spatial aspects have received much less attention.
Hermens, Frouke, Herzog, Michael H.
core +1 more source
Facial mask masking tardive dyskinesia [PDF]
Introduction: Facial covering and mask use are generally considered a preventive measure in reducing the spread of infectious respiratory illnesses. With the COVID-19 pandemic, covering of the face, except the eyes, has become the norm for the first time for most people.
openaire +2 more sources
Learning Better Masking for Better Language Model Pre-training [PDF]
Masked Language Modeling (MLM) has been widely used as the denoising objective in pre-training language models (PrLMs). Existing PrLMs commonly adopt a Random-Token Masking strategy where a fixed masking ratio is applied and different contents are masked by an equal probability throughout the entire training.
arxiv
Deterministic versus probabilistic quantum information masking
We investigate quantum information masking for arbitrary dimensional quantum states. We show that mutually orthogonal quantum states can always serve for deterministic masking of quantum information.
Fan, Heng+5 more
core +1 more source
Mask to reconstruct: Cooperative Semantics Completion for Video-text Retrieval [PDF]
Recently, masked video modeling has been widely explored and significantly improved the model's understanding ability of visual regions at a local level. However, existing methods usually adopt random masking and follow the same reconstruction paradigm to complete the masked regions, which do not leverage the correlations between cross-modal content ...
arxiv
Pixelating Familiar People in the Media: Should Masking Be Taken at Face Value? [PDF]
This study questions the effectiveness of masking faces by means of pixelation on television or in newspapers. Previous studies have shown that masking just the face leads to unacceptably high recognition levels, making it likely that participants also ...
Demanet, Jelle+4 more
core +2 more sources
On the Masking-Friendly Designs for Post-Quantum Cryptography [PDF]
Masking is a well-known and provably secure countermeasure against side-channel attacks. However, due to additional redundant computations, integrating masking schemes is expensive in terms of performance. The performance overhead of integrating masking countermeasures is heavily influenced by the design choices of a cryptographic algorithm and is ...
arxiv
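The side-channel masking countermeasure described above can be sketched in its simplest first-order Boolean form: a secret value is split into two random shares so that neither share alone reveals anything, at the cost of the redundant computation the abstract mentions. This is a generic illustration of the technique, not the paper's design; the function names are invented for the example:

```python
import secrets

def mask_byte(x):
    """Split a secret byte into two Boolean shares.

    The first share is uniformly random, so each share on its own is
    statistically independent of the secret; their XOR recovers it.
    """
    r = secrets.randbits(8)
    return r, x ^ r

def unmask(r, m):
    """Recombine the two shares to recover the secret byte."""
    return r ^ m

shares = mask_byte(0x5A)
recovered = unmask(*shares)
```

Higher-order schemes generalize this to more shares, and the cost of recomputing every intermediate value on shares is the performance overhead the paper studies.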
Large multidimensional digital images of cancer tissue are becoming prolific, but many challenges exist to automatically extract relevant information from them using computational tools. We describe publicly available resources that have been developed jointly by expert and non-expert computational biologists working together during a virtual hackathon.
Sandhya Prabhakaran+16 more
wiley +1 more source