The recent news that Facebook had been sloppy in the way it handled users’ personal information illustrates the urgent need for fresh perspectives and approaches to data security, given the enormous amount of information being mined and used by organisations. Dr Owen Schaefer, Mr Markus Labude and Dr Vicki Xafis explain.


Background

The past decade has seen a rapid increase in the amount and variety of data, including personal information, being gathered and linked together. This Big Data, as it is sometimes called, has a variety of potential applications in commerce, policy, education – and particularly in healthcare. Researchers can use Big Data to develop improved diagnostic techniques, screening programmes, personalised therapies and preventative measures. Clinicians can also leverage rich patient data to further tailor treatment plans and recommendations.


But the volume, variety, and linkability of Big Data also exacerbate key ethical challenges in privacy and patient protection. Informed consent has traditionally been the mechanism to ensure individuals have adequate understanding of, and control over, how their data is being used. However, the sheer amount of Big Data being gathered increasingly makes obtaining consent for each and every use of personal data impracticable.


At the same time, the protective measure of anonymisation is increasingly under strain in the Big Data era. Sensitive personal information, if obtained by the wrong individuals, could lead to discrimination, embarrassment, stigma or other deleterious effects. Anonymisation of personal data is meant to prevent data holders from being able to ascertain the identities of individuals in a given dataset. However, even if direct identifiers such as names and NRIC numbers are stripped out, the richness of Big Data (particularly as diverse datasets are linked together) makes re-identification increasingly possible.
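The linkage risk described above can be made concrete with a small sketch. The code below uses entirely hypothetical data and field names: an "anonymised" health dataset has had names and NRIC numbers stripped, but the quasi-identifiers that remain (postal code, birth year, sex) can be matched against a separate, openly available dataset to recover identities.

```python
# Hypothetical illustration of a linkage attack: direct identifiers are gone,
# but combinations of quasi-identifiers can still single people out.

# "Anonymised" health records: names and NRIC numbers have been removed.
health_records = [
    {"postal": "139951", "birth_year": 1984, "sex": "F", "diagnosis": "diabetes"},
    {"postal": "560123", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

# A separate public dataset (e.g. a registry) linking the same
# quasi-identifiers to names.
public_registry = [
    {"name": "Alice Tan", "postal": "139951", "birth_year": 1984, "sex": "F"},
    {"name": "Ben Lim", "postal": "560123", "birth_year": 1990, "sex": "M"},
]

def reidentify(records, registry):
    """Match 'anonymised' records to named individuals via quasi-identifiers."""
    quasi = ("postal", "birth_year", "sex")
    # Index the registry by its quasi-identifier tuple for direct lookup.
    index = {tuple(p[k] for k in quasi): p["name"] for p in registry}
    matches = {}
    for rec in records:
        key = tuple(rec[k] for k in quasi)
        if key in index:  # a unique combination re-identifies the person
            matches[index[key]] = rec["diagnosis"]
    return matches

print(reidentify(health_records, public_registry))
```

In this toy example every record is re-identified, which is why governance frameworks increasingly treat anonymisation as one safeguard among many rather than a complete protection.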


While these issues are not entirely new to the field of data ethics, the increasingly sophisticated capabilities of Big Data science make them more pressing and necessitate fresh approaches to patient privacy and data governance. The Science, Health and Policy-relevant Ethics in Singapore (SHAPES) initiative at the Centre for Biomedical Ethics (CBmE) has recently commenced a Big Data ethics project. The project focuses on contexts where Big Data is being used to improve health and healthcare.


Development of an ethics framework

The SHAPES Team has convened a Working Group comprising local and international experts and has tasked it with producing an ethical decision-making framework. An ethical decision-making framework is a tool to help us think through complex issues and arrive at ethically acceptable decisions. Such frameworks are not guidelines and do not provide answers per se, but they highlight important considerations and ways of thinking about ethical issues.

The Working Group identified substantive and procedural values relevant to Big Data. Substantive values are those used to justify decisions, while procedural values are those that govern the decision-making process. Substantive values include, but are not limited to: integrity, privacy, and stewardship. Procedural values include engagement, accountability, and transparency. Identifying these underlying values helps us think through the ethical issues that arise in a variety of Big Data activities. The framework being developed by the Working Group discusses how these and other relevant values relate to a number of Big Data domains.


One such domain relates to open science and large data repositories. For example, ethical and governance challenges arise in relation to sharing and exploiting data in generalist or community-specific scientific repositories. Such repositories house research products and datasets from a variety of fields in addition to biomedical data and could theoretically be used to generate knowledge in areas previously unimagined.


In considering issues that are prominent in open science, we highlight the interests of the various stakeholders. For example, there is a tension between the need to protect privacy and requirements to openly share data. There are also considerations relating to the ownership and access control of data and the fair attribution of intellectual contributions. The fair distribution of benefits and burdens arising from the use of openly available data also deserves special consideration.


Other domains that the framework will examine relate to precision medicine; Big Data as a source of real-world evidence; AI-assisted clinical decision-making; governance of cross-sectoral Big Data; public-private partnerships; and vulnerabilities and power.


Plans going forward

The SHAPES Big Data project explores these and related issues not as a mere academic exercise: a key goal is to assist stakeholders such as clinicians, researchers and data governance personnel in making ethically sound decisions about the use of Big Data. Co-chaired by Associate Professor Tai E Shyong (NUHS) and Professor Graeme Laurie (University of Edinburgh), the Working Group will present a draft framework for public consultation by March 2019. During the consultation process, the NUS medical community may have an opportunity to provide feedback on the Big Data Ethics Framework.


The final version of the Ethics Framework will be shared with stakeholders by the end of 2019.


SHAPES is supported by the Singapore Ministry of Health’s National Medical Research Council.