AI and the Future of Healthcare

AI and the future of healthcare – how will this impact our human rights?

Introduction

This month, the Universal Declaration of Human Rights, adopted by the United Nations, turns 75 years old, and the challenges it faces now are arguably the greatest in its long history. Conflicts between countries and cultures thought settled are re-emerging, other conflicts are developing, and climate change is real and present. But there is another perceived threat to our existence, one steadily growing in prominence - Artificial Intelligence. Many believe that, unless it is properly overseen, it could take control of our daily lives and deeply affect our relationship with the fundamental human rights we currently enjoy.

As things stand, this seems unlikely. AI-generated research is often flawed. Chatbots can only hold very stilted conversations because they lack the emotional intelligence of human beings and, for all the impressive AI-generated art we are seeing, machine intelligence still cannot draw hands.

Anyway, isn’t the future that we all stay home and make use of our vastly extended leisure time while AI goes out to work?

AI and Healthcare - Benefits

Imagine it is 2035. AI finally learnt how to draw hands properly back in 2028 and, more importantly, it is now assuming a pivotal role in our daily lives.

In healthcare, AI is:

  • Using powerful machine-written algorithms to access multiple sources of data and reveal subtle patterns in disease, allowing AI to direct resources to aid care and treatment.
  • Enabling networked healthcare systems to predict an individual’s risk of disease and suggest preventative measures (a rough sketch of this kind of risk prediction follows this list).
  • Helping to reduce waiting times and improve efficiency, both by assuming the administrative burden and by acting as a tool that augments the skills of clinicians.

The benefits of this are obvious, but the risks to the fundamental human rights of individuals are considerable if AI systems are not implemented properly.
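To make the risk-prediction bullet above a little more concrete, here is a minimal sketch of how such a model might be trained on historic health records. It is purely illustrative: the data file, the column names and the choice of a simple logistic regression model are all assumptions, not a description of any real healthcare system.

```python
# Illustrative sketch only: a toy diabetes-risk model trained on
# hypothetical anonymised records. All file and column names are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical records: age, BMI, blood pressure, family history, and whether
# the person later developed diabetes.
records = pd.read_csv("anonymised_health_records.csv")
features = records[["age", "bmi", "blood_pressure", "family_history"]]
outcome = records["developed_diabetes"]

# Hold back some records so the model can be checked on data it has not seen.
X_train, X_test, y_train, y_test = train_test_split(
    features, outcome, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"Accuracy on held-back records: {model.score(X_test, y_test):.0%}")

# The trained model can then estimate an individual's risk of developing diabetes.
new_patient = pd.DataFrame(
    [{"age": 52, "bmi": 31.0, "blood_pressure": 140, "family_history": 1}]
)
risk = model.predict_proba(new_patient)[0][1]
print(f"Estimated diabetes risk: {risk:.0%}")
```

Even in this toy form, the central tension of this article is visible: the model is only as useful, and only as dangerous, as the data it is given.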

AI and Healthcare - Risks

The key to all AI working in an ethical and competent way is data - lots and lots of data. But data is also a massive instrument of power, and in AI’s possession it poses a significant risk of being manipulated in increasingly subtle ways, threatening the fundamental human rights of everyone on the planet.

Consider an AI healthcare system that analyses not only your family’s health records but also those of others in your demographic and geographical area. It determines that you have an elevated risk of developing diabetes and predicts that, unless you change your lifestyle, you will be pre-diabetic in five years’ time. This is, undoubtedly, extremely helpful to know, and all the data outlined above can be provided anonymously. But what if that information were sent to your health insurance company, prompting them to threaten to raise your premiums unless you follow their diet and exercise regime? What if social media were used to alert your family and friends to the need for you to eat and act a little more healthily? What if, thanks to facial recognition software, AI sees you entering your favourite café and sends your phone a reminder about your pre-pre-diabetic status, suggesting you have the salad? And when you opt for the burger and chips instead, AI knows this too, thanks to your bank card transaction, and notifies all your friends through social media, suggesting they can earn credits towards their own health assessments by sending you messages telling you, essentially, to put the burger down. Perhaps you should pick your friends more carefully!
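To see how little machinery that scenario would actually need, here is a deliberately crude sketch of the kind of rules that could wire those data feeds together. Every data source, threshold and function name below is hypothetical; the point is simply that, once the feeds exist, the nudging logic is trivial to write.

```python
# Purely hypothetical sketch of the café scenario above. None of these feeds
# or services exist; they stand in for the kinds of data an AI system could
# plausibly be given access to.
from dataclasses import dataclass


@dataclass
class Person:
    name: str
    prediabetes_risk: float  # e.g. output of a risk model like the one sketched earlier
    phone: str
    friends: list


def on_location_event(person: Person, venue: str, send_notification) -> None:
    """React to a facial-recognition sighting at a known venue."""
    if person.prediabetes_risk > 0.5 and venue == "favourite_cafe":
        send_notification(person.phone, "Reminder: consider the salad today.")


def on_card_transaction(person: Person, item: str, notify_friend) -> None:
    """React to a bank-card purchase that contradicts the suggested regime."""
    if person.prediabetes_risk > 0.5 and item == "burger_and_chips":
        for friend in person.friends:
            notify_friend(
                friend,
                f"{person.name} could use some encouragement to eat more "
                "healthily. Earn wellness credits by sending a supportive message.",
            )
```

Nothing in that sketch is technically difficult. The hard part, as the next section argues, is deciding what data such a system should ever be allowed to see.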

How Can We Continue to Protect Our Human Rights?

The difficulty with legislation is that, right now, it is all written to tell human beings how to behave. AI works to a different set of rules, written in algorithms by highly intelligent people. Any, as yet unwritten, AI-specific legislation must tell those people how to write algorithms in a way that protects everyone’s individual human rights. We must also hope that those people are both competent 100% of the time and possessed of a sound moral compass. But even then, where is the cut-off point? What data should the AI have, and what should it not have? In safeguarding, for example, knowing that someone will be facing a safeguarding risk in the next few days would be extremely helpful for public authorities. But what is that data based on, and should a public authority be granted access to it? And where does it come from? The manipulation and measurement of data by AI could be so subtle that we would struggle to understand its workings, even if it explained them to us. Think Minority Report without the psychics, but with a scarily omniscient machine intelligence.

The United Nations, the intergovernmental organisation set up to promote peace, security and co-operation, was founded just after the Second World War, when society and the challenges it faced looked very different from today’s. In another 75 years, society will look very different again. It will have to face challenges that, right now, we cannot envisage. To preserve individual autonomy and freedom of choice through all those potential changes, it is essential that the Universal Declaration of Human Rights and, for that matter, the European Convention on Human Rights, be both guarded and thoroughly enforced.