Ethical Guidelines for Health AI Released

As artificial intelligence continues its march forward, questions of ethics inevitably follow. In the past, AI in healthcare has played fast and loose with ethics because there were no overarching guidelines. That is about to change: the World Health Organization (WHO) has released guidance that addresses the use of AI in healthcare and its ethical boundaries. Healthcare AI is already showing its usefulness in underprivileged parts of the world, where communities without access to robust health infrastructure can benefit from AI dedicated to helping locals overcome disease. However, technology is not a magic bullet, and it can only be as valuable as those who design and implement it allow it to be.

Approaching Technology From the Design Perspective

In the document, the WHO states that six principles should form the core of how governments and private companies approach the evolution of AI in healthcare. Recent AI applications in healthcare show how much promise the technology holds, but the hype is a double-edged sword. With so much at stake, including patients’ lives, AI needs to live up to its promise; if it doesn’t, it risks losing the trust of the same people who would otherwise sing its praises. The recent pandemic saw a surge of AI systems developed to support healthcare, but not all of them were as useful as they set out to be. Nature, for example, has highlighted the common pitfalls of machine learning algorithms in diagnosing COVID-19 from chest scans. AI in healthcare is still largely unproven, but it offers many opportunities if implemented smartly.

The Six Ethical Principles

The WHO’s recommendations for ethical principles in designing AI systems for healthcare are as follows:

  • Protect autonomy: A human physician should have oversight of all decisions. No decision regarding the health of a human being should be made entirely by an AI, nor should AI be used to guide a person’s healthcare without their consent.
  • Promote human safety: Developers need to monitor their tools continuously to ensure that no harm comes to patients.
  • Develop transparency: The design of AI healthcare systems should be open to the public. A common criticism of these systems is that they are “black boxes”: researchers and doctors cannot tell how they reach their decisions. The WHO expects these agents to be vetted thoroughly and all information about them to be made transparent.
  • Ensure accountability: If an AI makes decisions that harm humans, then someone needs to be accountable. Ideally, accountability mechanisms should be built into the system itself (a sketch of what that might look like follows this list).
  • Develop with equity in mind: Tools should be available in multiple languages and be trained on diverse data to ensure equitable performance across populations.
  • Promote sustainable AI: Tools should never be left to become outdated, and developers should keep up with the latest advances in the technology to ensure that AI remains sustainable.
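To make “built into the system” concrete, here is a minimal sketch, assuming a Python codebase, of how the autonomy and accountability principles might be wired in: the model can only suggest, a named clinician must sign off before anything takes effect, and both steps are written to an audit log. Every name here (OversightGate, Recommendation, the triage model) is hypothetical and not drawn from the WHO document.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Optional

@dataclass
class Recommendation:
    """An AI suggestion that stays inert until a clinician signs off."""
    patient_id: str
    suggestion: str
    model_version: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    approved_by: Optional[str] = None  # remains None until a named human approves

class OversightGate:
    """Routes every model output through human review and an audit log."""

    def __init__(self, model_version: str, audit_log: list):
        self.model_version = model_version
        self.audit_log = audit_log  # in practice, append-only and tamper-evident

    def recommend(self, patient_id: str, predict: Callable[[str], str]) -> Recommendation:
        # The model only *suggests*; nothing downstream acts on an unapproved record.
        rec = Recommendation(patient_id, predict(patient_id), self.model_version)
        self.audit_log.append({"event": "suggested", "patient": patient_id,
                               "model": self.model_version,
                               "at": rec.created_at.isoformat()})
        return rec

    def approve(self, rec: Recommendation, clinician_id: str) -> Recommendation:
        # Sign-off is the accountability anchor: a named person owns the decision.
        rec.approved_by = clinician_id
        self.audit_log.append({"event": "approved", "patient": rec.patient_id,
                               "clinician": clinician_id,
                               "at": datetime.now(timezone.utc).isoformat()})
        return rec

# Usage: the lambda is a stand-in for a real (hypothetical) triage model.
log = []
gate = OversightGate(model_version="triage-model-0.3", audit_log=log)
rec = gate.recommend("patient-42", predict=lambda pid: "flag for radiologist review")
rec = gate.approve(rec, clinician_id="dr-jones")
```

The design choice that matters here is that approval is a separate, logged step attributed to a person; the model’s output has no effect on its own, which is one way a system can protect autonomy and make accountability traceable.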

While these read as sensible ethical guidelines, it remains to be seen how the next generation of applications leveraging AI in healthcare will put them into practice. For now, we can only hope that developers take these guidelines under advisement. Humans can rely on professionals trained by Ethics Demystified, but AI is a whole new frontier.