Results 81 to 90 of about 16,825
Catch Me If You Can: The Dynamic Nature of Bias in Machine Learning Applications
ABSTRACT Bias in machine learning (ML) applications represents systematic differences between expected and actual values of the predicted outputs, such that certain individuals or groups are systematically and disproportionately (dis)advantaged. This paper investigates the dynamic nature of bias in ML applications.
Monideepa Tarafdar, Irina Rets, Yang Hu
wiley +1 more source
A hard to read font reduces the causality bias
Previous studies have demonstrated that fluency affects judgment and decision-making. The purpose of the present research was to investigate the effect of perceptual fluency in a causal learning task that usually induces an illusion of causality in non ...
Marcos Díaz-Lago, Helena Matute
doaj +1 more source
Towards Debiasing Code Review Support [PDF]
Cognitive biases appear during code review. They significantly affect both how feedback is created and how developers interpret it. These biases can lead to illogical reasoning and decision-making, undermining one of the core assumptions behind code review: that developers evaluate code accurately and objectively.
Tobias Jetzen +3 more
openaire +2 more sources
ABSTRACT Aim The aim of this research was to describe factors that influence Intensive Care Unit liaison nurses' decision to stand down a medical emergency team call response. The decision to end a medical emergency team response for a deteriorating patient is referred to as the medical emergency team call stand‐down decision.
Natalie A. Kondos +3 more
wiley +1 more source
Causal illusions consist of believing that there is a causal relationship between events that are actually unrelated. This bias is associated with pseudoscience, stereotypes and other unjustified beliefs.
Naroa Martínez +3 more
doaj +1 more source
Modeling and debiasing resource saving judgments
Svenson (2011) showed that choices of one of two alternative productivity increases to save production resources (e.g., man-months) were biased. Judgments of resource savings following a speed increase from a low production speed line were underestimated
Ola Svenson +2 more
doaj +1 more source
Quantifying and Reducing Stereotypes in Word Embeddings [PDF]
Machine learning algorithms are optimized to model statistical properties of the training data. If the input data reflects stereotypes and biases of the broader society, then the output of the learning algorithm also captures these stereotypes.
Tolga Bolukbasi +4 more
core
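The stereotype effect described above can be made concrete with a toy sketch: in Bolukbasi et al.'s framing, a word's gender association is measured by projecting its embedding onto a gender direction (e.g., the normalized difference of the "he" and "she" vectors). The vectors below are hypothetical illustrative values, not real embeddings.

```python
import numpy as np

# Toy 4-d word vectors (hypothetical values for illustration only).
vectors = {
    "he":         np.array([ 1.0, 0.2, 0.0, 0.1]),
    "she":        np.array([-1.0, 0.2, 0.0, 0.1]),
    "programmer": np.array([ 0.6, 0.5, 0.3, 0.0]),
    "nurse":      np.array([-0.7, 0.4, 0.2, 0.1]),
}

def direct_bias(word, gender_direction):
    """Cosine similarity between a word vector and the gender direction."""
    v = vectors[word]
    return float(np.dot(v, gender_direction) /
                 (np.linalg.norm(v) * np.linalg.norm(gender_direction)))

# Gender direction: difference of the "he" and "she" vectors, normalized.
g = vectors["he"] - vectors["she"]
g = g / np.linalg.norm(g)

bias = {w: round(direct_bias(w, g), 3) for w in ("programmer", "nurse")}
print(bias)  # occupation words project onto the gender axis with opposite signs
```

With these toy vectors, "programmer" projects positively (toward "he") and "nurse" negatively (toward "she"), mirroring how embeddings trained on biased corpora absorb occupational stereotypes.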
Econometrics at the Extreme: From Quantile Regression to QFAVAR
ABSTRACT This paper surveys quantile modelling from its theoretical origins to current advances. We organize the literature and present core econometric formulations and estimation methods for: (i) cross‐sectional quantile regression; (ii) quantile time series models and their time series properties; (iii) quantile vector autoregressions for ...
Stéphane Goutte +4 more
wiley +1 more source
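For context on item (i), cross-sectional quantile regression in the sense surveyed here estimates the conditional $\tau$-quantile of $y$ given $x$ by minimizing the check (pinball) loss:

```latex
\hat{\beta}(\tau) \;=\; \arg\min_{\beta} \sum_{i=1}^{n} \rho_\tau\!\left(y_i - x_i^{\top}\beta\right),
\qquad
\rho_\tau(u) \;=\; u\left(\tau - \mathbf{1}\{u < 0\}\right),
\quad \tau \in (0,1).
```

Setting $\tau = 0.5$ recovers median regression; varying $\tau$ traces out the full conditional distribution, which is the starting point for the time-series and vector-autoregressive extensions the survey covers.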
Fair Classification Without Sensitive Attribute Labels via Dynamic Reweighting
Fairness-aware classification with respect to sensitive attributes, such as gender and race, is one of the most important topics in machine learning. Although numerous studies have made outstanding progress through various approaches, one key limitation ...
Pilhyeon Lee, Sungho Park
doaj +1 more source
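The paper's specific reweighting scheme is not given in this snippet, but the general idea of dynamic, loss-based reweighting can be sketched as follows: examples the current model fits poorly receive larger sample weights at each step, emphasizing under-served groups without ever observing sensitive attribute labels. The softmax mapping and `temperature` parameter here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def reweight(losses, temperature=1.0):
    """Map per-example losses to normalized sample weights via a softmax.

    Higher-loss examples get proportionally larger weights, so groups the
    model currently fits poorly are upweighted in the next training round.
    """
    z = np.asarray(losses, dtype=float) / temperature
    z = z - z.max()          # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()       # weights sum to 1

losses = [0.2, 0.2, 1.5, 0.3]   # example at index 2 is poorly fit
weights = reweight(losses)
print(weights.round(3))          # the high-loss example gets the largest weight
```

The resulting weights would typically be passed as `sample_weight` to the next fitting step, closing the loop between model error and data emphasis.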
Abstract Purpose Although interviews are individual in nature, and suggestiveness is a major pitfall when questioning children, individual differences in interviewer bias and suggestiveness remain understudied. We assessed relationships between Cognitions and Emotions about Child Sexual Abuse (CECSA) with suggestive questioning and bias across three ...
Elsa Gewehr +4 more
wiley +1 more source