Security paradigms in digital medicine

The digitization of healthcare introduces new vulnerabilities, such as the risk of data breaches and cyber attacks1, which can have serious consequences for healthcare infrastructure and for patient safety and privacy2,3. The response has often been to maximize security from a technical standpoint4, focusing on how data are collected, how robust systems are to downtime, and how cybersecurity can be assured. Recent information security paradigms such as security-by-design aim at “building security into” digital health technologies5. In practice, this can mean that security measures such as strict user authentication are difficult and time-consuming for end users, making security a barrier to healthcare efficiency and quality5. This Perspective calls attention to how an emphasis on technical security risks, without concern for socio-medical practices, introduces a chasm between care practices and the digital technologies used to support them. We argue that the technical and social elements of security and care need to be reconciled. For instance, getting healthcare systems to perform as needed often requires end users to circumvent security measures such as authentication6. This leads to a paradoxical situation in which concerns and practices aimed at achieving effective patient care are in turn treated as barriers to security-compliant behavior7.

When end users are factored into discussions of digital health security, they are often framed as vulnerabilities8 or as the weakest link9 in the system. This reflects cybersecurity discourse in general, which frames individual human actors as a “problem” in the technical system10. For example, end users are framed as systemic weaknesses that might expose networks to malicious attacks11, and their interactions with systems are framed as antagonistic to infrastructure and data security12,13. This framing leads to information-security approaches based on control-based compliance models, which seek to regulate human behavior through bureaucratic security rules10,14. Yet non-compliance does not necessarily stem from a lack of knowledge; it may result from informed decisions that privilege different rationalities, values, and aims15. Failing to acknowledge this can lead to less effective solutions: for instance, further security training is often prescribed for the healthcare workforce9 instead of exploring the root causes of non-compliance. This approach, in which end users are framed as a problem and security processes are designed to minimize their interaction, “does not seem to work”10, given the ongoing escalation of cyber attacks, for which digital healthcare infrastructure is a prime target16.

While we question the effectiveness of framing end users as vulnerabilities, we do not deny that many data and cybersecurity breaches can be traced back to end users. However, the increasing emphasis on technical security in digital care systems invites reflection on what is to become of care values that are cast as antithetical to secure systems. Indeed, research in the social sciences has previously argued that these two logics are irreconcilable. The UK’s National Health Service (NHS) Federated Data Platform, built on Palantir’s data analysis platform Foundry, has recently been criticized for importing into the healthcare domain values that are extrinsic to care, such as technical efficiency17. Philosopher Marjolein Lanzing highlights the risk that the “sphere of health will increasingly be structured according to the industrial logic and values of the sphere of security, crowding out those of care, changing its meaning”17. This underscores that technologies are not neutral: they often encode social, political, and economic values.

Responding to this situation, this Perspective explores how, and under which conditions, security might be reconciled with, and indeed made an integral part of, care in digital health. It proposes that a way forward might lie in building security frameworks and practices that are shaped by care, rather than set in opposition to it. As political scientists Berenice Fisher and Joan Tronto suggest, care includes “everything that we do to maintain, continue and repair ‘our world’ so that we can live in it as well as possible”18. This concept of care refers to practices that are “ambivalent, contextual, and relational”19, emphasizing that maintaining the world is a matter of fostering relationships and navigating clashing commitments (i.e., different ideas about what is good and necessary), rather than of formulating rules and principles20,21. Scholarship in the social sciences has likewise argued that technologies should not be conceived of as opposed to care, but rather as an integral part of it22,23.

This Perspective transposes this argument about the relationship between care and technology to the relationship between care and security. It argues that reconciling security and care in digital health entails at least three connected steps: first, looking at care practices as sources of inspiration for innovating security approaches and frameworks; second, appraising end users as potential contributors to the security of connected medical devices; and third, engaging end users in security processes for digital health through participatory design.

Methodologically, this Perspective substantiates these points by drawing on two qualitative case studies, selected from the secondary literature for their ability to capture the tensions and overlaps between logics of security and logics of care. They foreground, respectively, the organizational and professional perspective, and the perspective of older patients, who are often framed as a vulnerability in security discourse due to their (assumed) limited digital literacy. Although we provide more context for each case below, both mobilize observational qualitative research designs. The first aims to understand “the rationalities drawn upon by healthcare professionals in their information security practice”14, hinging on a comparison between information security actions (ISAs) as they are prescribed in information security documents and as they are implemented by professionals in their daily work. The authors map prescribed and actual ISAs, respectively, by analyzing information security policy documents and by observing professionals in their daily work, especially in their interactions with electronic health records and electronic communication platforms. Interviews with high-level information security managers (n = 3) and with clinicians (n = 24) were used to illuminate the value-based rationales behind, respectively, prescribed and actual ISAs, especially where the two diverged. The second case harnesses four focus groups with older adults (n = 13) who had been involved in the 12-month pilot of a smart-home system. The authors sought to investigate “older adults’ perceptions and expectations of smart home technology as well as their beliefs about the ways in which technology could help to improve their daily lives”24. Holding the focus groups in a lab where the system’s prototype had been installed enabled the researchers both to “foster participants’ confidence in their ability to be creative and useful when talking about technology” and to focus the discussion “on real-world examples”24.

Case study: Information security in Swedish hospitals

The first case study focuses on the gaps between security protocols and care practices in healthcare organizations14. Access to digital healthcare systems such as medical records is governed through a range of information security measures, such as password creation and protection guidelines, and authorized access protocols. Hedström et al. illustrate how everyday care practices in two small Swedish county hospitals diverge from prescribed security practices.

For example, information security protocols in the hospital state that passwords should not be shared. In reality, to make it easier to access the passwords required for different electronic systems, caregivers wrote passwords on pieces of paper stuck above their computers. In another case, the password for an electronic system was written on a consultation room’s wall. Similarly, an information security protocol stated that only authorized people should be able to access a system. In reality, the patient registration system was kept logged on and accessed by all members of staff, rather than each staff member logging off and re-authenticating each time. In these examples, healthcare professionals’ need for quick access to patient information led them to circumvent the information security protocols.
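To make this tension concrete, consider a purely illustrative sketch of a security mechanism designed around the observed ward workflow rather than against it: a shared workstation that stays available for care work but attributes every action to the last badge that tapped in, replacing the full logoff/logon cycle with a near-instant user switch. We stress that this is a hypothetical design sketch, not a mechanism described by Hedström et al.; all names and parameters (e.g., SharedWorkstationSession, the five-minute soft lock) are our own assumptions.

```python
# Hypothetical sketch, not from the case study: a shared-workstation
# session policy that replaces full re-authentication with a fast
# badge-tap user switch, while keeping a per-user audit trail.
import time
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class SharedWorkstationSession:
    """Keeps the clinical system available for care flow, but attributes
    every action to the last badge that tapped in."""
    idle_lock_seconds: int = 300                      # soft lock, not logout
    current_user: Optional[str] = None
    last_activity: float = field(default_factory=time.time)
    audit_log: List[Tuple[float, str, str]] = field(default_factory=list)

    def badge_tap(self, badge_id: str) -> None:
        """Near-instant user switch instead of a full logoff/logon cycle."""
        self.current_user = badge_id
        self.last_activity = time.time()
        self.audit_log.append((self.last_activity, badge_id, "switch-user"))

    def record_action(self, action: str) -> None:
        """Attribute an action to the current user, enforcing the soft lock."""
        if self.current_user is None:
            raise PermissionError("tap badge before acting")
        if time.time() - self.last_activity > self.idle_lock_seconds:
            self.current_user = None                  # require a fresh tap
            raise PermissionError("session locked; tap badge again")
        self.last_activity = time.time()
        self.audit_log.append((self.last_activity, self.current_user, action))


# Usage: ward staff share one terminal without sharing one identity.
session = SharedWorkstationSession()
session.badge_tap("nurse-0042")
session.record_action("register-patient")
```

The design choice in this sketch anticipates the exnovation logic discussed below: rather than sanctioning the shared login, it formalizes the competence behind it (quick, interruptible access) while restoring per-user accountability.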

In contrast, an information integrity and availability protocol states that patient information must be documented immediately, while the caregiver is with the patient. In reality, a physician may wait to document a patient’s information if he judges that it is not needed immediately. A similar assessment is made by a counselor, who sometimes decides not to enter particularly sensitive information in a patient’s medical record out of respect for the patient’s privacy. The counselor explains that patients communicate sensitive information to her in a trusted situation, while the medical record can be read by other groups of users who do not necessarily have the same trusting relationship with the patient.

The researchers observed how healthcare staff made informed decisions that negotiated both security and care practices in ways that required circumventing the information security and integrity protocols:

“More sensitive information (i.e., patient information in the medical records) was well protected by the actions of the health care staff while less sensitive information (i.e., lists showing patients coming to the clinic that day) was made available to all”14.

The hospital staff members demonstrate a nuanced, contextual understanding of what constitutes ‘sensitive’ data (omitting very private information from the medical record while making other information easily available) that contrasts with the documented security protocols. Hedström et al.’s observations show that there is no way to take actual care practices into account without changing ideas of what information can be securely shared, since there is no such thing as care without information sharing. Acknowledging the expertise and sensitivity hospital staff demonstrate in their information security practices is therefore critical to improving security protocols. The authors suggest that protocols should be designed and implemented with hospital staff, rather than through a top-down process in which security teams create protocols that care teams are expected to comply with.
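The authors’ suggestion can be read as a call for policy that encodes staff’s contextual sensitivity judgments rather than overriding them. As a purely illustrative sketch (the tiers, roles, and rules below are our assumptions, not taken from the study), a co-designed policy might distinguish information that circulates freely on the ward from role-gated records and from disclosures that, as the counselor’s practice shows, stay with the treating clinician:

```python
# Hypothetical sketch: a tiered access policy encoding the contextual
# sensitivity judgments observed on the ward. Tier and role names are
# illustrative assumptions, not taken from the study.
from enum import Enum


class Sensitivity(Enum):
    WARD_OPEN = 1      # e.g. today's clinic list: available to all staff
    RECORD = 2         # medical record entries: role-gated access
    TRUSTED_ONLY = 3   # disclosed in a trusted encounter: not recorded


POLICY = {
    Sensitivity.WARD_OPEN: {"nurse", "physician", "counselor", "clerk"},
    Sensitivity.RECORD: {"nurse", "physician", "counselor"},
    Sensitivity.TRUSTED_ONLY: set(),  # stays with the treating clinician
}


def may_read(role: str, tier: Sensitivity) -> bool:
    """Return whether a given staff role may read data of a given tier."""
    return role in POLICY[tier]


# The ward's lived practice, expressed as checks against the policy.
assert may_read("clerk", Sensitivity.WARD_OPEN)
assert not may_read("clerk", Sensitivity.RECORD)
assert not may_read("physician", Sensitivity.TRUSTED_ONLY)
```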

Case study: Security and care in the SPHERE smart home

The second case study illustrates how patients’ experiences of care and security can come into conflict. The SPHERE (Sensor Platform for Healthcare in a Residential Environment) smart-home system comprises home-based sensors designed to detect changes in the health or well-being of older people living at home24. It includes sensors that monitor humidity, temperature, air quality, noise, light, occupancy, door contacts, and water and electricity consumption24. In Ghorayeb et al.’s qualitative study of older adults’ perspectives on the technology, most participants appear to have an overall positive relationship with the SPHERE system. However, the researchers surfaced an awareness of cybersecurity risks that threaten to interfere with the system’s care function. An interview with one potential user revealed the concern that the SPHERE system could make them a target for hackers:

“It’s all intelligent data which they collect, and all this data it is so easy to identify the vulnerable people and I think that’s where my problems would be… I mean if it got in the wrong hands then what would happen is [a] lot of people would realize that this person is vulnerable and lives on his own and you know what happens with vulnerable people!”24.

This potential user’s repeated description of themselves as ‘vulnerable’ highlights the complex relationship between security and care in the smart home. Even in the absence of a cyberattack, the system introduced other forms of insecurity that might shape users’ engagement with the technology. Participants described receiving marketing materials, such as holiday brochures and funeral plans, that they believed targeted them based on their age, and expressed concern that the SPHERE system might further mark their vulnerability to malicious actors. This reveals a tradeoff between the added sense of care participants can feel due to the SPHERE system and a new sense of insecurity rooted in fears of data privacy breaches and the sale of patient data to third parties. Although the researchers register users’ concerns, they do not explore how these concerns might be reflected in how users engage with the system. As we discuss below, harnessing the security practices users develop in interaction with systems they perceive as insecure might provide a path towards reconciling security and care logics.
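One hypothetical way designers might respond to such fears (we stress this is our illustration, not a SPHERE feature) is data minimization at the edge: raw sensor traces that could mark a resident as ‘vulnerable’ never leave the home, and only coarse daily indicators are transmitted. A minimal sketch, assuming hourly per-room occupancy readings:

```python
# Hypothetical sketch (not a SPHERE feature): on-device aggregation that
# keeps raw occupancy traces in the home and transmits only coarse,
# less identifying daily indicators. All names are illustrative.
from datetime import date
from statistics import mean


def daily_summary(day: date, occupancy_minutes: list) -> dict:
    """Reduce a day's occupancy trace (minutes active per hour) to a
    coarse well-being indicator before anything leaves the home."""
    active_hours = sum(1 for m in occupancy_minutes if m > 0)
    return {
        "date": day.isoformat(),
        "active_hours": active_hours,                 # coarse activity level
        "mean_occupancy": round(mean(occupancy_minutes), 1),
    }


# The raw trace (24 hourly values) stays local; only the summary is sent.
trace = [0, 0, 0, 0, 0, 5, 40, 55, 30, 20, 10, 45,
         50, 20, 15, 10, 30, 60, 55, 40, 20, 10, 0, 0]
print(daily_summary(date(2024, 1, 15), trace))
```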

End users as a solution in secure care systems

This Perspective has built on recent calls for human-centric approaches to cybersecurity in computer science25,26, as well as on studies that have pointed out the tensions between the values driving the logic of security and those driving the logic of care14,15. As we have argued, reconciling security with care requires three connected steps.

First, care practices can serve as sources of inspiration for innovating security approaches and frameworks. In this sense, the concept of exnovation might be useful. Exnovation proposes an alternative approach to innovation that, rather than simply replacing existing practices, focuses on “the hidden competences of practices and practitioners”27. Importantly, the related realm of patient safety has recently advocated, and largely implemented, a similar transition from so-called Safety-I to Safety-II approaches. Whereas Safety-I treats humans as liabilities and emphasizes strict compliance with safety protocols, Safety-II frameworks emphasize the complexity of healthcare environments, in which humans’ ability to “adjust what they do to match the conditions of work” represents an asset rather than a hazard28. Likewise, in a security context, an exnovation approach suggests focusing not so much on what goes wrong in practice as on what people do manage to achieve even in difficult or insecure situations27. This involves acknowledging that security practices, like care practices, are “ambivalent, contextual, and relational”19. Concretely, it requires ongoing attention to how digital health technologies are used within everyday care practices, to understand how the competences users develop in practice might be harnessed to build security frameworks that are more compatible with care. For example, despite circumventing information security protocols, the Swedish hospital staff in the first case study nevertheless developed security practices that were both compatible with their care practices and protective of what they considered sensitive information. As a model for reconciling care and security practices, Hedström et al. suggest moving away from the current focus on security compliance, which materializes in “sanctions, controls and regulation of users”14. Rather, they suggest acknowledging that “behavioral information security problems” often stem from a conflict between the values informing mandated security protocols and those informing care practices. Acknowledging these value tensions might open up avenues to put care and security into conversation.

Second, and relatedly, reconciling security and care involves appraising end users, such as the healthcare professionals and patients in the two case studies, as potential contributors to the security of connected medical devices. The SPHERE smart home, for example, showed how a technical system intended to enhance users’ sense of security can risk doing the opposite when the technical and social aspects of security are not reconciled: the participant felt the system marked them as vulnerable, and feared that a data privacy breach could make them or their home a target for burglars. Re-framing the effectiveness of digital health technologies as always entangled with the input of patients, human caregivers, and other actors is necessary both to understand how these technologies affect care experiences and to craft policy that anticipates security concerns in connected healthcare systems. By treating end users as contributors to system security, these technologies can more effectively harness human behavior to enhance both security and care outcomes.

Third, engaging end users to help shape security processes from the beginning is possible through participatory design approaches. This could help capture the “invisible work”29 required of end users and their contributions to making these systems effective and secure. Sociological perspectives use the concept of end users’ invisible work to show how digital health technologies redistribute, rather than reduce, work29 and ultimately transform care practices30. Participatory design approaches that collaborate with end users could identify value conflicts and points of tension between care and security mandates, producing security processes that work well in everyday care practices, not just in theory. Notably, participatory design diverges from approaches such as “usable security”31 or FDA-prescribed usability testing32 in that it takes a deeper and more constructive focus on user practices, goals, and needs, rather than zeroing in on errors as threats to the integrity and effectiveness of the system. Indeed, participatory design has often been advocated to improve the usefulness, ease of use, and acceptability of digital health interventions33,34, as well as to navigate the conflicting needs of different stakeholders35. It is predicated on reaching a deep understanding of end users’ goals, values, and needs through iterative approaches that involve them “as expert on their own experience” in an ongoing manner36. Not incidentally, the FDA has recently acknowledged the need for similar approaches in launching its Health Care at Home Initiative37. Although we are not aware of examples in the digital medicine realm, researchers have identified the potential of participatory design approaches to cybersecurity processes in non-healthcare contexts. For example, this approach has helped challenge “top-down and siloed thinking” among security experts in the UK’s Digital Security by Design program38, and has highlighted alternatives to Western-centric cybersecurity paradigms in rural Africa39. We identify an opportunity for future research to explore participatory design approaches to cybersecurity processes in digital health contexts. More broadly, future research on the entanglement of security and care practices in digital health can address how and why these systems are digitized, how they are affected by the scale and economy of data, which incentives and disincentives exist to deter cyber criminals, and how security can be understood in terms of care and health outcomes.

Underpinning these three steps is a recognition of the ultimate impossibility of building perfectly secure systems for highly volatile, messy, and unpredictable environments such as healthcare settings40. Resilient healthcare systems must empower end users to anticipate, detect, and respond to potential security incidents41. By keeping humans ‘in the loop’10 and enabling them to adapt to emerging situations, digital health systems can better balance care and security, allowing for more effective responses to anomalies and greater system stability.

Given the decades-long systematic dismantling of public health services in Europe42,43 and across OECD countries more generally44, the strategies for reconciling security and care articulated in this Perspective are likely to encounter practical hindrances. Admittedly, constrained budgets, staff shortages, and other socio-economic pressures on care provision might hamper the implementation of time- and resource-intensive interventions such as participatory design. Nonetheless, it is crucial that the current emphasis on securing digital health does not translate into a further dismantling of care infrastructures and practices. To this end, this Perspective has sought to emphasize the necessity of reappraising attitudes and assumptions concerning end users in security frameworks, practices, and protocols in healthcare. It behooves us, as we have argued, to shift the conversation on security and digital health so as to fully appreciate and work with the tensions in values and the risk-allocation negotiations that shape end users’ practices and experiences of care and security. Ultimately, there can be no security without care.