Abstract
The field of psychology has rapidly transformed its open science practices in recent years. Yet there has been limited progress in integrating principles of diversity, equity and inclusion. In this Perspective, we raise the spectre of Questionable Generalisability Practices and the issue of MASKing (Making Assumptions based on Skewed Knowledge), calling for more responsible practices in generalising study findings and in co-authorship to promote global equity in knowledge production. To drive change, researchers must target all four key components of the research process: design, reporting, generalisation, and evaluation. Additionally, macro-level geopolitical factors must be considered to move towards a robust behavioural science that is truly inclusive, representing the voices and experiences of the majority world (i.e., low- and middle-income countries).
Introduction
Research in the science of human behaviour has come under unprecedented scrutiny over the last two decades1,2,3,4. The emergence of twin crises – the replicability and generalisability crises – has raised foundational questions about the credibility of psychological research, reducing public trust in psychological science5,6,7. Concerns around replicability have ushered in a new era of scientific reform, marked by tremendous interest in formulating best practices in metascience in the service of greater transparency and openness1,8,9,10. The scientific community has developed a series of innovative practices, which include transparency of design (e.g., preregistration), collaboration in data collection (e.g., big team science), open materials, and broad accessibility of research output (e.g., preprint servers)11,12,13. Increasingly, journals and funding agencies have encouraged and sought to normalise these research practices12, introducing a fundamental cultural shift in how psychological scientists conduct research.
In contrast, the generalisability crisis, created largely by sampling and (over)generalising from thin slices of the global population14,15,16, has seen comparatively modest progress. This neglect persists despite concerns that the generalisability crisis poses real and perceived threats to psychology equivalent in magnitude to those posed by replicability. Psychological research remains startlingly non-diverse at a global scale and continues to draw on narrow subsets of Western countries17,18,19,20 (e.g., highly educated, young, digitally connected, convenience samples), and unwarranted generalisation from these samples remains common in psychological science17. The lack of population-level diversity in behavioural science extends beyond participants; there is cultural homogeneity in our communities of practice (researchers; reviewers; professional societies). These factors narrow psychological science’s scope in ways that limit generalisability.
The impact of limited generalisability extends beyond basic science. Psychologists routinely advise policymakers on some of the most pressing challenges of our time: the COVID-19 pandemic21, political polarisation22, climate change23, and digitalisation24. If psychological researchers draw from a small subset of the global population, the resultant policies will necessarily be limited in their impact and relevance7. This skew can also mask topics of relevance to highly populous but under-represented regions of the world. For example, low- and middle-income countries (LMICs), which are far more vulnerable to the effects of climate change25, often neither contribute to nor benefit from political agendas for mitigating climate change designed for High-Income Countries (HICs). In this way, fundamental questions about who shapes and participates in science are key determinants of how science is scaffolded and developed.
Although a need for greater diversity in the service of generalisability has been centred in discussions about open science and replicability26,27,28,29,30, current practices do not adequately reflect this. This may be because these goals are sometimes viewed as orthogonal rather than complementary to one another31. In part, this reflects the origins of the open science movement, which emerged as a community-driven, grassroots initiative, initially spearheaded by a group of scholars based in HICs32. As a result, considerations around diversity were not as directly integrated into early open science practices, which may have contributed to asynchronous progress in open science practices relative to diversification.
In this manifesto, we propose an intentional and careful integration of scientific goals and practices to increase replicability, diversity and representation. This involves constructively evaluating how open science promotes or hinders diversity and prioritising ways in which diversity in authorship can contribute to achieving these goals. A scientific agenda that interlocks these priorities is essential to a robust, replicable, and culturally and contextually relevant understanding of human behaviour. Typically, generalisability implies that the same theory or findings apply across all contexts. However, this perspective risks framing the discovery of cultural differences, or of other boundary conditions (e.g., social and historical contexts), as a failure to achieve generalisability. Instead, the notion of generalisability needs to be expanded to include a meta-theory that specifies context-specific theories (i.e., how behavioural phenomena occur in specific contexts or societies). Developing a scientific understanding of how different models apply as a function of cultural and social contexts is in itself a powerful positive step towards achieving this goal. In this way, efforts to increase diversity in science must be accompanied by diversity in metascience.
Inspired by the manifesto on reproducible science by Munafò and colleagues1, we propose a similar conceptualisation of threats to a diverse open science in Fig. 1 and introduce the new concept of making assumptions based on skewed knowledge (MASKing). MASKing occurs when researchers inadvertently base their research process on unrepresentative or biased knowledge, often owing to a lack of diversity in their underlying theories, samples or methods. To address this, we articulate a vision for structural change at each stage of the research process – research design, reporting, generalisation, and evaluation – advocating for a braided approach to diversity and openness in science. Finally, we broaden the lens on diversity and openness to consider macro-level geopolitical factors that shape these priorities on a global scale.
Fig. 1: Potential threats include not specifying target populations, reliance on skewed theoretical frameworks, selection bias, poor study design, lack of culturally adapted methods, failure to account for within-country diversity, not accounting for sampling biases and adjustments (e.g., weighting or post-stratification), unwarranted claims of generalisability, and citation bias. These cumulative issues lead to making assumptions based on skewed knowledge (MASKing), distorting research conclusions and undermining the validity of research. This figure is inspired by Munafò et al.1, which addressed the issue of HARKing.
Designing open and inclusive research
We outline how to integrate diversity and openness into research design, highlighting specific areas where these dual priorities can augment our practices.
Methodological considerations when working with diverse populations
Working with under-represented populations requires careful consideration of theories and methodologies. For instance, what can we say about prosocial behaviour if studies largely rely on proxies of it, using measures that have been validated primarily in American or European samples33? To investigate prosocial behaviour experimentally, researchers typically rely on hypothetical scenarios in which participants play variations of a game, such as the public goods game, where participants donate to a common pot34. All too often, tasks and procedures that were developed by and for culturally majoritised groups are uncritically transferred to under-studied settings35. Adapting the public goods game for a rural, semi-literate population in an LMIC would require careful consideration of several factors, including participants’ educational backgrounds and the community’s historical context (e.g., colonial legacy), which shape local social norms around cooperation36. While scholars have long advocated for diversifying theories37,38 and methods39 to ensure that open scientific inquiry is culturally relevant40, achieving methodological diversity remains a key challenge.
To address these issues, open sharing of under-utilised measures and techniques is critical to the diversification of methods. Open science in LMICs can increase public access to materials and data and build up digital infrastructure. It also has the potential to widely disseminate methodological tools specifically designed for under-represented populations, which may otherwise be less accessible41. Openly sharing the rationale for specific methodological choices (e.g., culturally adapted and linguistically diverse vignettes) counters a common reliance on existing methodological tools, which are often designed for and adapted to Western populations39. There is a pressing need to build methods to investigate topics relevant to under-studied world regions42 and to broadly disseminate the research products that emerge from this process. Openness and collaboration in methodological innovation for diverse populations are therefore critical to a more global and inclusive open science.
Such considerations can be articulated via preregistration, which provides an open and transparent record of methodological approaches and can include justifications for the decisions made43. Preregistration also provides an opportunity to surface frequent challenges in diversification and to collectively generate solutions to them. Just as preregistration requires a sample-size rationale, scholars can add a sample rationale specifying how their sampling strategy resembles or differs from the target population they wish to generalise to (see the illustrative sketch below). This is a fundamental step for avoiding the retrofitting of results from whatever populations happen to be available and for being transparent about the scope of a study.
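To make the idea of a sample rationale concrete, the following sketch records, in Python, the kinds of fields such an entry might contain. The field names and values are our own illustration and do not correspond to a template mandated by any preregistration platform; the Kenyan setting simply echoes the worked example used later in this piece.

# Illustrative only: one possible structure for a preregistered "sample rationale".
# Field names are our own; adapt to the preregistration platform actually used.
sample_rationale = {
    "target_population": "Adults (18+) living in rural western Kenya",
    "sampling_frame": "Household listing from two sub-counties (placeholder)",
    "recruitment": "Door-to-door recruitment with local research assistants",
    "known_divergence_from_target": [
        "Households without a listed address are excluded",
        "Data collection limited to dry-season months",
    ],
    "intended_scope_of_generalisation": (
        "Rural households in the sampled sub-counties; no claim is made "
        "about urban or national populations"
    ),
    "planned_adjustments": "Post-stratification by age, gender and sub-county",
}

for field, value in sample_rationale.items():
    print(f"{field}: {value}")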
Preregistrations can also present an important avenue for highlighting researcher positionality and for more considered decisions about study design and the interpretation of results44. For any study involving marginalised or under-represented groups, preregistrations can outline detailed inclusion and exclusion criteria and document analytical decisions regarding potential subgroup analyses. Preregistration also presents an opportunity to share decision-making as research plans change. Scholars often face practical challenges and constraints that require deviation from their preregistered plans: even the best-laid plans to collect data in hard-to-reach regions do not always work out. Arguably, such instances may be more likely when working with under-represented populations or when devising novel methodologies. In such cases, open discussion of how to reconcile the ideals of preregistered plans with the realities of data collection is important.
Research training
As the replication crisis has animated considerable discourse in psychology, the ethos of training has shifted markedly, with more institutions introducing required coursework on open science practices44,45,46. In an ideal scenario, this much-needed educational transformation would also incorporate critical considerations around diversification in relation to open science practices47,48,49. Both discipline-agnostic education (e.g., understanding the impact of lived experiences, diverse identities, and intersectionality on scientific practices) and discipline-specific education (e.g., rethinking sample diversity and learning about shifting methodological norms in psychology) are urgently needed in undergraduate and postgraduate training12,50,51. Moreover, psychology would benefit from closer contact with, and transfer of learning from, cognate fields (e.g., anthropology, geography, global health), which have more readily integrated global diversity and socio-cultural factors into mainstream scientific practice52. This is especially important in regions of the world where psychological science (as viewed in HICs) is often embedded within other disciplines (e.g., economics, public health, anthropology)53. Researchers studying human behaviour in Africa or Asia who are not situated in a psychology department may be more likely to participate in interdisciplinary projects because of how the discipline is viewed in their local context.
Research partnerships
In designing inclusive and open research, it is critical to ensure that researchers are not ‘parachuting’ into diverse regions solely for data collection, but are instead including local partners in low-resource settings as co-leaders47 and co-authors in these collaborations54. Because of less-generous policies on funding for research time, establishing these collaborations may be difficult, particularly if researchers in LMICs lack the financial resources to cover research time at their institutions. Helicopter research, in which researchers from more privileged settings conduct research in lesser-resourced settings without involving local communities or researchers, is questionable on ethical and epistemic grounds55, violating basic principles of beneficence, autonomy, and social justice. It also leads to an overemphasis on topics that are in vogue in HICs and a reliance on Western-validated measures that are ill-suited to these settings. These practices falsely presume that Western theories and methods are value- and bias-free56, such that they can be imported without concern to culturally distal settings56,57. Of equal importance are considerations around data sovereignty: co-leadership from data collection sites can ensure just use of openly shared data and metadata.
In a similar vein, when working with diverse samples or multinational datasets, it becomes ever more important to consider the power structures of academia – who holds the most influential authorship positions (e.g., first and last author), and who secures grant funding based on work in LMICs? Ensuring that leadership roles are held by researchers from LMICs and marginalised communities – who often bridge academic and cultural divides by contextualising methods to local settings and acting as liminal researchers47 – helps avoid perpetuating power hierarchies58. This is particularly salient in big-team science collaborations, in which researchers in LMICs may contribute infrastructure yet be passed over for leadership roles on papers. For example, big-team science studies are typically funded by HICs, focus predominantly on effects discovered by Western scholars, and are rarely led by researchers based in LMICs18,59. Acknowledging these contributions through lead or senior authorship positions can help dismantle the persistent disparities in international collaborations.
To increase efforts to equitably engage with open and big team science47, there is a crucial need to support new grassroots organisations and the LMIC research ecosystem. Initiatives such as the Chinese Open Science Network (COSN; https://open-sci.cn/), the Framework for Open and Reproducible Research Training (FORRT; https://forrt.org), and Advancing Open & Big-team Reproducible Science through Increased Representation (ABRIR; https://abrirpsy.org) are a few examples of such efforts to reimagine open and big team science research32,60.
Reporting inclusive open research
In this section, we describe ways to evaluate the different aspects of diversity when reporting research.
Contextual accuracy in reporting
Critical to accurate reporting of research is the careful treatment of generalisability61. Articles should be written in a way that articulates potential constraints on the generality of the findings across different dimensions – populations, time, mechanisms, treatments, outcomes, and settings62. This defines the boundary conditions of reported effects in ways necessary for accurate interpretation. We advocate for careful examination of the anatomy of a scientific article in relation to context by representing contextual information within the title, abstract, introduction, methods, discussion, analysis, and limitations. In addition to situating a study in context, explicitly linking findings to context is an equally important step63,64,65. Moreover, within the methods section, a detailed description of the research site and data collection process will increase the depth of contextual understanding. In Table 1, we outline an inventory of practices that place both openness and diversity in focus when reporting research as well as the potential barriers to implementing the recommended practices.
While we encourage authors to situate studies in context and to be open about contextual factors, there is a danger that such information can peripheralise research from under-represented regions66. Such research is often more likely to be viewed as niche or marginal on the basis of localising information16. The impact of increased reporting of context therefore needs to be monitored. To a large extent, psychology’s highest-impact outlets tend to incentivise “newsworthy” findings, which can disincentivise localised research by favouring studies that present strong universal claims. Ironically, strongly localised research is often highly valuable to policymakers, who need to understand how research findings apply to the constituencies they serve67. We return to this point when discussing research evaluation.
Citations and visibility
Biases are also evident in how research is documented and cited. Researchers from under-represented communities face the additional barrier of not being appropriately cited68. The scientific community needs to correct for the epistemological exclusion of diverse research and voices. Initiatives such as the CiteHER Bibliography and Cite Black Women are resources that help redress some of these biases, although such efforts should not be limited to these groups69. Additionally, researchers can include citation diversity statements characterising the diversity of their reference lists. Drawing inspiration from initiatives in other fields, such as philosophy70, a psychology diversity reading list representing diverse authors could assist researchers in integrating under-represented voices into their work. However, researchers need to resist the availability heuristic and go beyond familiar names to educate themselves on findings from diverse populations. In general, a move towards live, open repositories should assist authors in this goal.
Responsible authorship practices
A significant factor in reporting research is the determination of authorship. A large proportion of first authors in psychology and the behavioural sciences are originally from, and based in, the Global North – mostly in the United States71,72,73. Researchers from populous world regions, notably Africa, Asia, and Latin America, are thus vastly under-represented72. Here we expand on responsible authorship practices through the lens of equitable epistemic contributions. Contributor Roles Taxonomy (CRediT) statements provide one means to qualify authorship positions with contributor statements74, while others recommend authorship agreements, which offer a more dynamic record of contributions (https://authorshipproject.org/). However, CRediT statements are not without limitations and provide little safeguard against gift and ghost authorship or misuse of the intended framework. To ensure that these statements reflect the true contributions of the team, some scholars have proposed an additional Method Reporting with Initials for Transparency (MeRIT) system for specificity and accountability75. This involves linking contributions to each author’s initials and clearly identifying who performed each task (e.g., data analysis or writing) to accurately recognise roles within the team.
Monitoring authorship representation data
In monitoring progress towards an open and inclusive science, it can be helpful to routinely track the representation of authors by global region. One tool in this regard is the “Missing Majority in Behavioural Science” dashboard76. This online dashboard aggregates first-author data obtained from the OpenAlex database and offers interactive visualisations, such as year of publication by continent of author affiliation (Fig. 2), or continent of author affiliation by journal (Fig. 3). These open data from the Missing Majority Dashboard highlight the need for monitoring and accountability and, ultimately, shine a spotlight on the need for greater researcher diversity.
Fig. 2: Scatter plot of the percentage of first authors by continent, over time, for a list of popular academic psychology journals. Journals were selected based on Thalmayer et al.73, by soliciting input from colleagues, and from the authors’ own knowledge base. Data from the Missing Majority in Behavioural Science Dashboard (psychology tab) on October 19, 2024, available at https://remi-theriault.com/dashboards/missing_majority.
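To illustrate the kind of aggregation underlying such a dashboard, the sketch below tallies first-author affiliation countries for a single journal using the public OpenAlex API. This is a minimal sketch rather than the dashboard’s actual pipeline: the ISSN is a placeholder, and the filter and field names reflect our reading of the OpenAlex documentation and should be verified against the current API before use.

# Sketch: tally first-author affiliation countries for one journal via OpenAlex.
# Illustrative only -- the Missing Majority dashboard's own pipeline may differ.
from collections import Counter
import requests

BASE = "https://api.openalex.org/works"
ISSN = "0956-7976"  # placeholder ISSN; replace with the journal of interest
FILTERS = f"primary_location.source.issn:{ISSN},from_publication_date:2015-01-01"

def first_author_countries(filters: str, mailto: str = "you@example.org") -> Counter:
    """Count first-author affiliation country codes for works matching `filters`."""
    counts: Counter = Counter()
    cursor = "*"  # OpenAlex cursor pagination, as we understand it
    while cursor:
        resp = requests.get(
            BASE,
            params={"filter": filters, "per-page": 200,
                    "cursor": cursor, "mailto": mailto},
            timeout=30,
        )
        resp.raise_for_status()
        payload = resp.json()
        for work in payload["results"]:
            first = [a for a in work.get("authorships", [])
                     if a.get("author_position") == "first"]
            if not first:
                continue
            institutions = first[0].get("institutions", [])
            country = institutions[0].get("country_code") if institutions else None
            counts[country or "unknown"] += 1
        cursor = payload.get("meta", {}).get("next_cursor")
    return counts

if __name__ == "__main__":
    tally = first_author_countries(FILTERS)
    total = sum(tally.values())
    for country, n in tally.most_common(10):
        print(f"{country}: {n} ({100 * n / total:.1f}%)")

Country codes could then be mapped to continents to reproduce the kind of breakdown shown in Fig. 2.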
Generalisation, openness, and inclusiveness in research
A robust and inclusive open science requires careful attention to the generalisability of findings17. However, principles for defining and operationalising generalisability have remained elusive. The normative assumption is that representative samples of participants imply generalisability77. However, who counts as representative in relation to the underlying population? In the presence of large and meaningful heterogeneity in psychological phenomena, are findings about the ‘behaviour of the average person’ in a highly representative sample necessarily more generalisable? Do different psychology subfields need to focus on different aspects of sample diversity in the service of disciplinary generalisability? Although the last few years have seen rapid improvements in large-scale collaborative studies that have diversified psychological science78, samples from around the world may appear more diverse than they truly are. For example, an overreliance on socioeconomically advantaged, educated, urban and digitally connected populations across different contexts may narrow representation even across a large number of countries and settings78,79. This can lead to questionable generalisability practices (QGPs), which may be considered an extension of questionable research practices (QRPs).
QGPs range from binary labelling of study populations80 (e.g., referring to samples as WEIRD/non-WEIRD81) to drawing strong and unwarranted conclusions about generalisability in the absence of available data82. QGPs also extend to not reporting relevant sample details, failing to address selection bias, neglecting to calibrate claims of generality, or failing to transparently acknowledge study limitations. A reorientation towards epistemic humility is needed to modify these practices and restore credibility in scientific reporting83. In addition, the view that widely studied populations are normative or reflect baseline behaviour against which under-represented populations should be compared (i.e., the Western Centrality Assumption84) presents an additional threat to how we view and interpret samples in psychological research.
Although recent investigations show that generalisability is the limitation authors most often mention85, such statements rarely go beyond handwaving and, unless mandated, are often absent altogether. Over time, there is evidence that scientific articles are using more positive words in their abstracts86, implying a trend towards more optimistic and less calibrated scientific language83. Practical solutions, such as adding Constraints on Generality (COG) statements, aim to increase transparency regarding such limitations87. However, even COGs have not been widely adopted yet. In addition, and perhaps more importantly, a statement offered via addendum is a poor substitute for cautious interpretation throughout the manuscript. While it may not be feasible or even optimal for all researchers to access and sample under-represented populations in their own work, all researchers can commit to a more careful evaluation of their own study’s generalisability19,88,89.
Over the last decade, the open science movement has raised awareness about the prevalence of false positives in psychological research. One key issue highlighted was HARKing90, or “hypothesising after results are known”, which leads researchers to present findings as the direct outcome of a priori hypotheses. While HARKing distorts the meaning of statistical significance (i.e., p-values) by presenting post-hoc tests as if they were planned a priori, MASKing presents a companion threat. MASKing introduces bias at multiple stages of the research process, from hypothesis generation and study design to analysis and interpretation. These biases stem from overgeneralising based on skewed theoretical assumptions (e.g., drawing on theories developed in HICs), employing narrow methods (e.g., measures that are not culturally adapted), or collecting unrepresentative samples (e.g., urban, educated samples from LMICs).
For instance, suppose researchers conduct a cross-sectional study on well-being in Kenya that relies on a small sample of college students drawn from Nairobi. Although Nairobi is a densely populated city spanning varied socio-economic backgrounds, generalising findings from such accessible samples to the whole of Kenya is not appropriate91. A small and unrepresentative sample of this kind reduces variability in the dataset and excludes rural populations, who make up most of the Kenyan population. This can introduce uncontrolled sources of variation, narrow the pool of possible effects, and increase the likelihood of both Type I errors (false positives) and Type II errors (false negatives) in statistical interpretation. Furthermore, well-being itself may not have equivalent construct validity across regions, which raises the question of measurement equivalence both within and across cultures. MASKing occurs when researchers extrapolate findings from an unrepresentative sample to the entire population, obscuring the true nature of relationships within distinct subgroups of the population. Even well-designed observational studies, which have relatively higher external validity than experimental studies, are not robust to the effects of population-level variation. This variation is important to capture and define in the service of faithful data interpretation.
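As a concrete illustration of the sampling adjustments mentioned in Fig. 1 (weighting or post-stratification), the sketch below reweights a hypothetical, urban-skewed sample towards assumed population strata shares. All numbers are invented for illustration; a real analysis would require strata shares from a credible census or survey frame, and post-stratification cannot fix constructs that lack measurement equivalence across strata.

# Sketch: post-stratification reweighting of a hypothetical, urban-skewed sample.
# All figures are invented; the strata shares are placeholders, not real census data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical convenience sample: mostly urban respondents, as in the scenario above.
sample = pd.DataFrame({
    "stratum": rng.choice(["urban", "rural"], size=500, p=[0.9, 0.1]),
})
# Invented outcome: suppose rural respondents report somewhat lower well-being.
sample["wellbeing"] = np.where(
    sample["stratum"] == "urban",
    rng.normal(7.0, 1.5, len(sample)),
    rng.normal(5.5, 1.5, len(sample)),
)

# Assumed population strata shares (placeholder values only).
population_share = {"urban": 0.30, "rural": 0.70}

# Weight each respondent by (population share of stratum) / (sample share of stratum).
sample_share = sample["stratum"].value_counts(normalize=True)
sample["weight"] = sample["stratum"].map(
    lambda s: population_share[s] / sample_share[s]
)

naive = sample["wellbeing"].mean()
weighted = np.average(sample["wellbeing"], weights=sample["weight"])
print(f"Unweighted mean: {naive:.2f}  Post-stratified mean: {weighted:.2f}")

The gap between the unweighted and post-stratified means illustrates how an urban-skewed sample can distort a national estimate; the adjustment only works to the extent that the strata and their population shares are credibly defined.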
We posit that MASKing, like HARKing, lies at the heart of the credibility crisis in psychology, and that reforming both practices will be integral to the future success of the science. Situated within a broader matrix of QRPs, MASKing fails to acknowledge that extant knowledge is heavily circumscribed by the narrow research practices used thus far17. Raising consciousness about, and disincentivising, MASKing is therefore an important charge for psychological researchers. This is integral to ensuring that the theoretical and empirical foundations of the field are both defined by and reflective of the representation and diversity therein.
Evaluating inclusive open research
Here we describe methods to evaluate and disseminate inclusive open research, including increasing diversity in peer and editorial review and reducing barriers to dissemination.
To open or not to open
Transparency in research design, materials and data is a pillar of open science. However, this requires resources, infrastructure and careful attention to sensitive research topics and marginalised groups. The structural challenges faced by researchers in LMICs – including limited access to high-speed networks, limited computational and technological resources, and a lack of capacity or training – can restrict participation in open science. These factors limit the ability to engage with open practices and to innovate in these spaces, and they are major equity issues for the Open Science movement to address. Additionally, when evaluating research with marginalised groups, open materials and data can be at high risk of misuse92. Hard-to-reach and marginalised populations93, such as ethnically diverse populations and Indigenous Peoples, have suffered greatly in the name of science.
Informed consent for open sharing must be accurately conveyed, and safeguarding the interests of these communities is critical. The CARE principles (Collective benefit; Authority to control; Responsibility; Ethics) provide one important initiative in this direction94, but there is no one-size-fits-all formula and no ultimate safeguard against data misuse. Decisions around making sensitive data publicly accessible can be complex, particularly for critical research topics (e.g., mental health) and vulnerable groups (e.g., rural populations in LMICs). This warrants an appreciation on the part of editors and reviewers of the various factors that constrain and guide data sharing. On the part of researchers, statements demonstrating thoughtful consideration of these issues, and how they were resolved, could be submitted for editor and reviewer consideration.
Diversity in peer and editorial review
To counter decades of publication bias, we urgently need to identify and address structural barriers to publication95. This involves diversifying research evaluators to ensure cultural literacy and familiarity on the part of reviewers and editors as science expands and diversifies. Editorial boards of mainstream psychology journals remain alarmingly narrow in terms of race and country of affiliation, in spite of considerable awareness of these representational gaps96. Further, the lack of global representation in the professional editorial workforce perpetuates gatekeeping and systemic biases97. By diversifying editorial representation and broadening cultural awareness among reviewers and editors, barriers to publication arising from a narrow base of evaluators can be addressed. Diversifying editorial boards is not a simple matter: it requires careful consideration of the incentive systems that support the retention and promotion of diverse researchers, and a broadening of these incentives to encourage wider participation. Scholars from lesser-resourced environments are often taxed with multiple obligations and carry heavier teaching and service commitments. Consideration of the unique barriers and incentives that would support such researchers, and enable their participation, is essential.
New initiatives that proactively address inequities in access to journals are thus needed. One positive example is Reviewer Zero, an initiative that aims to promote equity-centred practices in research evaluation. Reviewer Zero advocates for training on how to review diverse research and invites an expansion of the definition of ‘good science’98. These types of initiatives can support authors from under-represented backgrounds in navigating the publication process. An additional barrier that needs to be addressed is language. Speakers of languages other than English99 and non-native writers of English face considerable challenges, which can limit scientific communication and exchange with Anglophone countries. Accordingly, some authors have turned to large language models to help with academic English writing100. However, AI-detector tools used to assess the likelihood of AI-generated text can falsely flag non-native English writing as AI-generated, creating further barriers for non-native English scholars101. Copyediting services, meanwhile, are often prohibitively costly for authors. Publishing houses and professional societies could commit resources to support authors for whom language barriers limit access to publication.
Dissemination
Research dissemination and publication have seen significant changes over recent years. In particular, the movement towards openness and diversification in psychological science is at odds with emergent practices in academic publishing. This is perhaps most clearly reflected in the deeply troubling trend towards exorbitant article processing charges (APCs), which only stands to increase global inequities in open science. As a result of such changes, the core goals of equity and inclusion – so pivotal to the open science movement – can be refracted through the structures within which they are implemented. With high APCs being one of the biggest barriers in developing countries, waiving journal APCs for specific world regions is a critical mitigating step. For instance, authors from LMICs can now publish Gold open access in Springer Nature journals at no cost, while other publishers, such as Wiley and Sage, offer waivers and discounted pricing options. Such initiatives remove some barriers to inclusion. However, arguably, exorbitant costs for open dissemination should not be an organising factor in science at all. To truly level the playing field, we need a scholarly world in which diamond open access, where neither researchers nor readers pay any fees, becomes the de facto choice92.
Building an infrastructure for an open and inclusive science: The role of funders and professional organisations
In order to diversify open science, we also need to mobilise influential stakeholders within the scientific community, notably funders and professional organisations. For funding agencies, offering incentives to diversify samples in proposals, examining the key issue of researcher diversity, and encouraging equitable collaboration structures are just a few of the steps that could precipitate reform. For example, most psychological research in Africa is donor-funded, with donors concentrated in high-income countries. Ensuring that residents and researchers of African nations are afforded full participation and joint leadership in such projects is critical102.
Professional organisations host major conferences in many sub-fields of psychology. Such conferences should carefully consider visible representation at these events and direct resources and funding to initiatives that promote scientific activities in LMICs. This approach to promoting global diversity should be concrete and sincere, focusing on developing, retaining, and promoting early-career research talent. Examining past trends in speaker visibility and in the samples included in research presented at major conferences can reveal important patterns. For example, an audit of the population demographics and open science practices of all posters showcased at the 2021 Society for Research in Child Development meeting revealed an alarming lack of sample diversity103. This type of active monitoring, like the Missing Majority author dashboard described earlier, can help identify future targets and evaluate progress towards them. Most recently, the Society for the Improvement of Psychological Science held its first conference in Kenya in 2024, setting a precedent that should inspire more societies to diversify their annual meetings.
Challenges for global equity in open science practices
Although intended to diversify and democratise science, open science practices have historically originated and evolved in resource-rich environments, most notably the European Union and United States32. To a large extent, the inequities in global representation in the scientific record are now mirrored in the creation and global distribution of Open Science initiatives and resources. Without broader global representation in Open Science practices and their evolution, the movement may continue to reflect the interests, goals, and priorities of specific world regions. In addition, there may be assumptions about ease of implementation that do not apply to many areas: in many populous world regions, everyday barriers to Open Science implementation (e.g., proxy servers, digital connectivity, necessary hardware and software) present challenges that need to be addressed. In this sense, expanding Open Science principles at a global scale involves both increasing digital capacity for data storage and retrieval and reconciling the differing pressures and challenges that world regions face in implementing Open Science practices.
To engage fuller participation in Open Science principles and practices, core principles of equity, inclusivity, and democratisation that require global cooperation must be carefully reconciled with competitive bids for global scientific leadership104. To this end, some researchers have questioned the ‘one-size-fits-all’ assumption that adopting existing open science practices is universally beneficial, in favour of a more critical view that recognises the possibility of exacerbating inequity in some parts of the world and guards against this outcome105.
Outlook
Scientists are considered the most trusted producers and purveyors of knowledge, and that knowledge is often deemed generalisable and valid. However, some current practices are at odds with basic principles of generalisability and scientific validity. In medical science, it is considered unethical to draw a sample of tissue and blood from white individuals to design therapies for ethnically and racially diverse populations to which these therapies may not apply. As susceptibility to diseases varies across populations, it is crucial to design therapies that are effective for diverse groups and do not lead to adverse, unintended, and harmful consequences. However, as evidenced by the disparities observed during the COVID-19 pandemic, public health strategies have not been adequately tailored to the needs of different demographic groups106. Moreover, from a policymaker’s perspective, contextually specific research, or situated science, is often more useful than generalised studies that address broad issues affecting abstract or different populations that do not reflect the specific realities of their context. The stakes of narrow sampling are high not only in medical research but also in psychological research107.
Open science (and mainstream) psychological research must account for the ethical implications of sampling from a thin slice of the world’s population and perform a more comprehensive re-evaluation of open science practices to address the challenges associated with generalisability. In this manifesto, we made the case for an integrated approach to diversity and inclusiveness in open science and outlined specific initiatives that individual researchers can take, as well as institutional changes that could be made at journals, societies, and funders. These changes will not happen without systematic identification of the visible and invisible barriers to diversification and the intentional revision of open science practices that dismantle these barriers.
References
Munafò, M. R. et al. A manifesto for reproducible science. Nat. Hum. Behav. 1, 1–9 (2017).
Ioannidis, J. P. A. Why Most Published Research Findings Are False. PLOS Med. 2, e124 (2005).
John, L. K., Loewenstein, G. & Prelec, D. Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychol. Sci. 23, 524–532 (2012).
Simmons, J. P., Nelson, L. D. & Simonsohn, U. False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychol. Sci. 22, 1359–1366 (2011).
Baker, M. 1500 scientists lift the lid on reproducibility. Nature 533, 452–454 (2016).
Henrich, J., Heine, S. J. & Norenzayan, A. The weirdest people in the world? Behav. Brain Sci. 33, 61–83 (2010). discussion 83-135.
Hruschka, D. J., Medin, D. L., Rogoff, B. & Henrich, J. Pressing questions in the study of psychological and behavioral diversity. Proc. Natl Acad. Sci. 115, 11366–11368 (2018).
Schooler, J. W. Turning the Lens of Science on Itself: Verbal Overshadowing, Replication, and Metascience. Perspect. Psychol. Sci. 9, 579–584 (2014).
IJzerman, H. et al. Use caution when applying behavioural science to policy. Nat. Hum. Behav. 4, 1092–1094 (2020).
Klein, R. A. et al. Many Labs 2: Investigating Variation in Replicability Across Samples and Settings. Adv. Methods Pract. Psychological Sci. 1, 443–490 (2018).
Nosek, B. A. et al. Replicability, Robustness, and Reproducibility in Psychological Science. Annu. Rev. Psychol. 73, 719–748 (2022).
Korbmacher, M. et al. The replication crisis has led to positive structural, procedural, and community changes. Commun. Psychol. 1, 1–13 (2023).
Forscher, P. S. et al. The Benefits, Barriers, and Risks of Big-Team Science. Perspect. Psychol. Sci. https://doi.org/10.1177/17456916221082970 (2022).
Muthukrishna, M. et al. Beyond Western, Educated, Industrial, Rich, and Democratic (WEIRD) Psychology: Measuring and Mapping Scales of Cultural and Psychological Distance. Psychol. Sci. 31, 678–701 (2020).
Tiokhin, L., Hackman, J., Munira, S., Jesmin, K. & Hruschka, D. Generalizability is not optional: insights from a cross-cultural study of social discounting. R. Soc. Open Sci. 6, 181386 (2019).
Cheon, B. K., Melani, I. & Hong, Y. How USA-Centric Is Psychology? An Archival Study of Implicit Assumptions of Generalizability of Findings to Human Nature Based on Origins of Study Samples. Soc. Psychol. Personal. Sci. 11, 928–937 (2020).
Yarkoni, T. The generalizability crisis. Behav. Brain Sci. 45, https://doi.org/10.1017/S0140525X20001685 (2022).
Ghai, S., Forscher, P. S. & Chuan-Peng, H. Big-team science does not guarantee generalizability. Nat. Hum. Behav. 8, 1053–1056 (2024).
Rad, M. S., Martingano, A. J. & Ginges, J. Toward a psychology of Homo sapiens: Making psychological science more representative of the human population. Proc. Natl Acad. Sci. 115, 11401–11405 (2018).
Arnett, J. The Neglected 95%, a Challenge to Psychology’s Philosophy of Science. Am. Psychol. 64, 571–574 (2009).
Bavel, J. J. V. et al. Using social and behavioural science to support COVID-19 pandemic response. Nat. Hum. Behav. 4, 460–471 (2020).
Jost, J. T., Baldassarri, D. S. & Druckman, J. N. Cognitive–motivational mechanisms of political polarization in social-communicative contexts. Nat. Rev. Psychol. 1, 560–576 (2022).
Vlasceanu, M. et al. Addressing climate change with behavioral science: A global intervention tournament in 63 countries. Sci. Adv. 10, eadj5778 (2024).
Orben, A. & Przybylski, A. K. The association between adolescent well-being and digital technology use. Nat. Hum. Behav. 3, 173–182 (2019).
Mishra, S. K. et al. A need for actionable climate projections across the Global South. Nat. Clim. Change 13, 883–886 (2023).
Grahe, J. E., Cuccolo, K., Leighton, D. C. & Cramblet Alvarez, L. D. Open Science Promotes Diverse, Just, and Sustainable Research and Educational Outcomes. Psychol. Learn. Teach. 19, 5–20 (2020).
Leonelli, S. Open Science and Epistemic Diversity: Friends or Foes? Philos. Sci. 89, 991–1001 (2022).
Stracke, C. M. Open Science and Radical Solutions for Diversity, Equity and Quality in Research: A Literature Review of Different Research Schools, Philosophies and Frameworks and Their Potential Impact on Science and Education. In Radical Solutions and Open Science: An Open Approach to Boost Higher Education (ed. Burgos, D.) 17–37. (Springer, Singapore, 2020).
Bahlai, C. et al. Open science isn’t always open to all scientists. Am. Scientist 107, 78–82 (2019).
Lui, P. P. Integrating open science and multiculturalism to restore trust in psychology. Nat. Rev. Psychol. 1, 555–556 (2022).
Syed, M. Reproducibility, Diversity, and the Crisis of Inference in Psychology. Preprint at https://doi.org/10.31234/osf.io/89buj (2021).
Jin, H. et al. The Chinese Open Science Network (COSN): Building an Open Science Community From Scratch. Adv. Methods Pract. Psychol. Sci. 6, 25152459221144986 (2023).
Adetula, A., Forscher, P. S., Basnight-Brown, D., Azouaghe, S. & IJzerman, H. Psychology should generalize from — not just to — Africa. Nat. Rev. Psychol. 1, 370–371 (2022).
Yang, Y. & Konrath, S. A systematic review and meta-analysis of the relationship between economic inequality and prosocial behaviour. Nat. Hum. Behav. 7, 1899–1916 (2023).
Speed, L. J., Wnuk, E. & Majid, A. Studying psycholinguistics out of the lab. in Research methods in psycholinguistics and the neurobiology of language: A practical guide 190–207 (Wiley Blackwell, Hoboken, NJ, US, 2018). https://pure.mpg.de/rest/items/item_2339692_9/component/file_2355162/content
Chaudhary, L., Rubin, J., Iyer, S. & Shrivastava, A. Culture and colonial legacy: Evidence from public goods games. J. Econ. Behav. Organ. 173, 107–129 (2020).
Medin, D., Bennis, W. & Chandler, M. Culture and the Home-Field Disadvantage. Perspect. Psychol. Sci. 5, 708–713 (2010).
Muthukrishna, M. & Henrich, J. A problem in theory. Nat. Hum. Behav. 3, 221–229 (2019).
Broesch, T. et al. Navigating cross-cultural research: methodological and ethical considerations. Proc. Biol. Sci. 287, 20201245 (2020).
Puthillam, A. et al. Guidelines to improve internationalization in the psychological sciences. Soc. Personal. Psychol. Compass 18, e12847 (2024).
Deffner, D., Rohrer, J. M. & McElreath, R. A Causal Framework for Cross-Cultural Generalizability. Adv. Methods Pract. Psychol. Sci. 5, 25152459221106366 (2022).
Ledgerwood, A. et al. Disrupting racism and global exclusion in academic publishing: Recommendations and resources for authors, reviewers, and editors. Collabra: Psychol. 10, 121394 (2024).
Nosek, B. A., Ebersole, C. R., DeHaven, A. C. & Mellor, D. T. The preregistration revolution. Proc. Natl Acad. Sci. 115, 2600–2606 (2018).
Manago, B. Preregistration and Registered Reports in Sociology: Strengths, Weaknesses, and Other Considerations. Am. Sociol. 54, 193–210 (2023).
Wagenmakers, E.-J. et al. Seven steps toward more transparency in statistical practice. Nat. Hum. Behav. 5, 1473–1480 (2021).
Pownall, M. et al. Teaching open and reproducible scholarship: a critical review of the evidence base for current pedagogical methods and their outcomes. R. Soc. Open Sci. 10, 221255 (2023).
Jeftic, A., Lucas, M. Y., Corral-Frias, N.,& Azevedo, F. Bridging the majority and minority worlds: Liminal researchers as catalysts for inclusive open and big-team science. In P. Forscher & M. Schmidt (Eds). A better how: Notes on developmental meta-research. https://doi.org/10.62372/ISCI6112 (2024)
Azevedo, F. et al. Towards a culture of open scholarship: the role of pedagogical communities. BMC Res. Notes 15, 75 (2022).
Corral-Frías, N. S. et al. Latin American Psychological Science: Will the Global North Make Room? APS Obs. 36, https://www.psychologicalscience.org/observer/gs-latin-american-psychological-science (2023).
Elsherif, M. et al. Bridging Neurodiversity and Open Scholarship: How Shared Values Can Guide Best Practices for Research Integrity, Social Justice, and Principled Education. Preprint at https://doi.org/10.31222/osf.io/k7a9p (2022).
Pownall, M. et al. Embedding open and reproducible science into teaching: A bank of lesson plans and resources. Scholarsh. Teach. Learn. Psychol. 10, 342–349 (2024).
Fish, J. M. What anthropology can do for psychology: Facing physics envy, ethnocentrism, and a belief in ‘race.’. Am. Anthropol. 102, 552–563 (2000).
Nwoye, A. African psychology: from acquiescence to dissent. South Afr. J. Psychol. 51, 464–473 (2021).
Adame, F. Meaningful collaborations can end ‘helicopter research’. Nature https://doi.org/10.1038/d41586-021-01795-1 (2021).
Singh, L. Navigating equity and justice in international collaborations. Nat. Rev. Psychol. 1, 372–373 (2022).
Abo-Zena, M. M., Jones, K. & Mattis, J. Dismantling the master’s house: Decolonizing “Rigor” in psychological scholarship. J. Soc. Issues 78, 298–319 (2022).
Castro Torres, A. F. & Alburez-Gutierrez, D. North and South: Naming practices and the hidden dimension of global disparities in knowledge production. Proc. Natl Acad. Sci. 119, e2119373119 (2022).
Kumar, R., Khosla, R. & McCoy, D. Decolonising global health research: Shifting power for transformative change. PLOS Glob. Public Health 4, e0003141 (2024).
Schimmelpfennig, R. et al. The Moderating Role of Culture in the Generalizability of Psychological Phenomena. Adv. Methods Pract. Psychol. Sci. 7, 25152459231225163 (2024).
Azevedo, F. et al. Introducing a Framework for Open and Reproducible Research Training (FORRT). Preprint at https://doi.org/10.31219/osf.io/bnh7p (2019).
Clarke, B., Schiavone, S. & Vazire, S. What limitations are reported in short articles in social and personality psychology? J. Personal. Soc. Psychol. 125, 874–901 (2023).
Findley, M. G., Kikuta, K. & Denly, M. External Validity. Annu. Rev. Polit. Sci. 24, 365–393 (2021).
Hoekstra, R. & Vazire, S. Aspiring to greater intellectual humility in science. Nat. Hum. Behav. 5, 1602–1607 (2021).
Ijzerman, H., et al. Psychological science needs the entire globe, Part 1 (Vol. 34). APS Observer (2021).
Hruschka, D. J., Medin, D. L., Rogoff, B. & Henrich, J. Pressing questions in the study of psychological and behavioral diversity. Proc. Natl Acad. Sci. USA 115, 11366–11368 (2018).
Draper, C. E. et al. Publishing child development research from around the world: An unfair playing field resulting in most of the world’s child population under-represented in research. Infant Child Dev. n/a, e2375.
Kaufman, J., Glassman, A., Levine, R. & Keller, J. M. Breakthrough to Policy Use: Reinvigorating Impact Evaluation for Global Development. Available at https://www.cgdev.org/sites/default/files/reinvigorating-impact-evaluation-for-global-development.pdf. (Center for Global Development, 2022)
Heidt, A. Racial inequalities in journals highlighted in giant study. Nature https://doi.org/10.1038/d41586-023-01457-4. (2023)
Zurn, P., Teich, E. G., Simon, S. C., Kim, J. Z. & Bassett, D. S. Supporting academic equity in physics through citation diversity. Commun. Phys. 5, 1–5 (2022).
Fokt, S., Pharr, Q. & Torregrossa, C. Indexing Philosophy in a Fair and Inclusive Key. J. Am. Philos. Assoc. 10, 387–408 (2024).
Urbina-Blanco, C. A. et al. A diverse view of science to catalyse change. Nat. Chem. 12, 773–776 (2020).
Thalmayer, A. G., Toscanelli, C. & Arnett, J. J. The neglected 95% revisited: Is American psychology becoming less American? Am. Psychol. 76, 116–129 (2021).
Roberts, S. O., Bareket-Shavit, C., Dollins, F. A., Goldie, P. D. & Mortenson, E. Racial Inequality in Psychological Research: Trends of the Past and Recommendations for the Future. Perspect. Psychol. Sci. 15, 1295–1309 (2020).
Allen, L., O’Connell, A. & Kiermer, V. How can we ensure visibility and diversity in research contributions? How the Contributor Role Taxonomy (CRediT) is helping the shift from authorship to contributorship. Learn. Publ. 32, 71–74 (2019).
Nakagawa, S. et al. Method Reporting with Initials for Transparency (MeRIT) promotes more granularity and accountability for author contributions. Nat. Commun. 14, 1788 (2023).
Thériault, R., & Forscher, P. The Missing Majority in Behavioral Science Dashboard. https://remi-theriault.com/dashboards/missing_majority (2024).
Degtiar, I. & Rose, S. A Review of Generalizability and Transportability. Annu. Rev. Stat. Its Appl. 10, 501–524 (2023).
Coles, N. A., Hamlin, J. K., Sullivan, L. L., Parker, T. H. & Altschul, D. Build up big-team science. Nature 601, 505–507 (2022).
Majid, A. Establishing psychological universals. Nat. Rev. Psychol. 2, 199–200 (2023).
Ghai, S. It’s time to reimagine sample diversity and retire the WEIRD dichotomy. Nat. Hum. Behav. 5, 971–972 (2021).
Henrich, J. WEIRD. In M. C. Frank & A. Majid (Eds.), Open Encyclopedia of Cognitive Science. MIT Press. https://doi.org/10.21428/e2759450.8e9a83b0 (2024).
Ruggeri, K. et al. The globalizability of temporal discounting. Nat. Hum. Behav. 6, 1386–1397 (2022).
Riddle, T. Linguistic overfitting in empirical psychology. Preprint at https://doi.org/10.31234/osf.io/qasde (2018).
Kline, M. A., Shamsudheen, R. & Broesch, T. Variation is the universal: making cultural evolution work in developmental psychology. Philos. Trans. R. Soc. B Biol. Sci. 373, 20170059 (2018).
Clarke, B. et al. Looking our limitations in the eye: A call for more thorough and honest reporting of study limitations. Soc. Personal. Psychol. Compass 18, e12979 (2024).
Vinkers, C. H., Tijdink, J. K. & Otte, W. M. Use of positive and negative words in scientific PubMed abstracts between 1974 and 2014: retrospective analysis. BMJ 351, h6467 (2015).
Simons, D. J., Shoda, Y. & Lindsay, D. S. Constraints on Generality (COG): A Proposed Addition to All Empirical Papers. Perspect. Psychol. Sci. 12, 1123–1128 (2017).
Bryan, C. J., Tipton, E. & Yeager, D. S. Behavioural science is unlikely to change the world without a heterogeneity revolution. Nat. Hum. Behav. 5, 980–989 (2021).
Vazire, S. Implications of the Credibility Revolution for Productivity, Creativity, and Progress. Perspect. Psychol. Sci. 13, 411–417 (2018).
Parsons, S. et al. A community-sourced glossary of open scholarship terms. Nat. Hum. Behav. 6, 312–318 (2022).
Ma, V. & Schoeneman, T. J. Individualism Versus Collectivism: A Comparison of Kenyan and American Self-Concepts. Basic Appl. Soc. Psychol. https://doi.org/10.1207/s15324834basp1902_7. (1997).
Ahmed, A. et al. The future of academic publishing. Nat. Hum. Behav. 7, 1021–1026 (2023).
Ellard-Gray, A., Jeffrey, N. K., Choubak, M. & Crann, S. E. Finding the Hidden Participant: Solutions for Recruiting Hidden, Hard-to-Reach, and Vulnerable Populations. Int. J. Qual. Methods 14, 1609406915621420 (2015).
Carroll, S. R. et al. Using Indigenous Standards to Implement the CARE Principles: Setting Expectations through Tribal Research Codes. Front. Genet. 13, 823309 (2022).
Settles, I. H., Jones, M. K., Buchanan, N. T. & Dotson, K. Epistemic exclusion: Scholar(ly) devaluation that marginalizes faculty of color. J. Divers. High. Educ. 14, 493–507 (2021).
Liu, F., Rahwan, T. & AlShebli, B. Non-White scientists appear on fewer editorial boards, spend more time under review, and receive fewer citations. Proc. Natl Acad. Sci. 120, e2215324120 (2023).
Lin, Z. & Li, N. Global diversity of authors, editors, and journal ownership across subdisciplines of psychology: Current state and policy implications. Perspect. Psychol. Sci. 18, 358–377 (2023).
Aly, M. et al. Changing the culture of peer review for a more inclusive and equitable psychological science. J. Exp. Psychol.: Gen. 152, 3546–3565 (2023).
Hwang, S. I. et al. Is ChatGPT a “Fire of Prometheus” for Non-Native English-Speaking Researchers in Academic Writing? Korean J. Radiol. 24, 952–959 (2023).
Ingley, S. J. & Pack, A. Leveraging AI tools to develop the writer rather than the writing. Trends Ecol. Evol. 38, 785–787 (2023).
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E. & Zou, J. GPT detectors are biased against non-native English writers. Patterns 4, 100779 (2023).
Mughogho, W., Adhiambo, J. & Forscher, P. S. African researchers must be full participants in behavioural science research. Nat. Hum. Behav. 1–3 https://doi.org/10.1038/s41562-023-01536-6 (2023).
Kim, M. H. et al. A metascience investigation of inclusive, open, and reproducible science practices in research posters at the 2021 SRCD biennial meeting. Child Dev. https://doi.org/10.1111/cdev.14059. (2023).
Singh, L. A vision for a diverse, inclusive, equitable, and representative developmental science. Developmental Sci. 27, e13548 (2024).
Czerniewicz, L. Confronting inequitable power dynamics of global knowledge production and exchange: feature-opinion. Water Wheel 14, 26–28 (2015).
Fawzy, A. et al. Racial and Ethnic Discrepancy in Pulse Oximetry and Delayed Identification of Treatment Eligibility Among Patients With COVID-19. JAMA Intern. Med. 182, 730–738 (2022).
Medin, D. L. Psychological Science as a Complex System: Report Card. Perspect. Psychol. Sci. 12, 669–674 (2017).
Acknowledgements
We thank Sanderson Onie and Amy Orben for early discussions which helped improve the scope of this manuscript. The funders had no role in the preparation of the manuscript or decision to publish.
Author information
Contributions
S.G.: Conceptualisation, Project Admin, Resources, Visualisation, Writing – Original draft, Writing -Review & Editing. R.T.: Visualisation, Writing - Reviewing and Editing. P.F: Writing - Reviewing & Editing. Y.S. Writing - Reviewing & Editing. M.S.: Writing - Reviewing & Editing. A.P. Writing - Reviewing & Editing. H.C.-P.: Writing - Reviewing and Editing. D.B.B.: Writing - Reviewing and Editing. A.M.: Writing - Reviewing & Editing. F. A.: Writing - Reviewing and Editing. L.S.: Writing – Original draft, Writing - Review & Editing, Supervision
Ethics declarations
Competing interests
R.T. received financial support from Busara to develop the Missing Majority Dashboard. P.F. works as a Director for Busara, a non-profit that does behavioral science in service of alleviating poverty, with a specific focus on benefiting the Global South. H.C.P. is also part of the executive committee of Chinese Open Science Network and an Editorial Board Member for Communications Psychology but was not involved in the editorial review of, nor the decision to publish this article. D.B.B. was a founding board member of the Psychological Science Accelerator. F.A. serves as the Director of Framework for Open and Reproducible Research Training (FORRT) which aims to promote transparency and open science practices through education and training and doesn’t receive financial compensation in this role. All other authors declare no competing interest.
Peer review
Peer review information
Communications Psychology thanks Fabiano Couto Corrêa da Silva and Merryn McKinnon for their contribution to the peer review of this work. Primary Handling Editor: Marike Schiffer. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Ghai, S., Thériault, R., Forscher, P. et al. A manifesto for a globally diverse, equitable, and inclusive open science. Commun Psychol 3, 16 (2025). https://doi.org/10.1038/s44271-024-00179-1