Why the use of artificial intelligence in healthcare should be regulated

Artificial intelligence (AI) could greatly advance healthcare in South Africa but, at the same time, can also pose serious risks. Government therefore has a duty to regulate the use of AI in the provision of healthcare services to ensure accountability among producers and end-users without stifling innovation.

This is one of the main conclusions drawn by Unisa PhD graduate and staff member, Dr Anisha Jogi, who recently completed one of the first legal theses in South Africa on the regulation of AI in the healthcare sector.

Her PhD research, which one of her overseas examiners has praised as “world class”, entailed an in-depth exploration of the ethical and legal challenges of AI in healthcare, including a comparative analysis of liability for injury or damage arising from AI systems in the United Kingdom and United States.

Dr Anisha Jogi

Jogi, who works at Unisa Legal Services, says the expectation is that AI will have a greater positive than negative impact on society, and that drawing attention to the possible risks of AI in healthcare should not be misconstrued as “anti-innovative”.

“Rather, it is essential to have due regard to risks so as to ensure that novel technologies like AI are developed and operated based on an appropriate AI governance regime that secures the trust of individual users and serves the public interest, with emphasis on the safeguarding of a human-centric culture.”

AI should not cross ethical or legal boundaries

While the ultimate objective of AI is to benefit humanity, she says, it must be designed so that it does not work against the primary goals of humans.

“AI should not become so proficient at what it was developed to undertake, that it results in crossing over ethical or legal boundaries. Because of these new ‘intelligent’ technologies, the evolutionary impact of AI on our society will have far-reaching legal and ethical consequences, which we need to examine and be prepared for.”

A “huge issue” in South Africa is that the current liability frameworks for negligence seem to hold the human actors responsible, Jogi says. “However, as the technology becomes more sophisticated and can be classified as strong AI (capable of exceeding humans at almost every intellectual task), the autonomy of the technology will become more prevalent.”

The more autonomous technology becomes, the more challenging it becomes to decide who is legally accountable if something goes wrong.

Blurring of the lines makes it harder to decide accountability

“As technology becomes more developed in performing human ‘intelligent’ tasks, the line between operator and instrument begins to blur,” says Jogi. “Who or what should be held accountable when injury or harm arises due to AI? Should the blame lie with the developer, producer, data supplier, the health institution that procured and implemented the AI system, or the healthcare professional who used and relied on it?”

The country’s current legal regime may not be adequate to deal with medical claims for negligence arising from the use of AI, she says. South Africa also appears to be lagging behind in developing a proper framework for approving and regulating autonomous and adaptive AI-based medical devices.

She goes on to make a range of recommendations to address these and other legal and ethical issues concerning AI in healthcare, including to:

  • Create a legal basis for conferring “legal personhood” on AI systems, making these systems capable of being sued in court. In this way, the burden of liability would lie with the AI system, and those involved in the supply value chain would not be subjected to lawsuits for negligence (unless they had failed to make provision for AI insurance or other financial mechanisms, or if the court decided to “pierce the corporate AI veil”).
  • Establish an AI insurance regime as a mandatory requirement for specific AI offerings, covering losses arising from unintentional errors.
  • Develop specialised AI courts comprising judges who are knowledgeable about and conversant with the AI field.
  • Appoint an AI ombudsperson to investigate complaints on AI matters.
  • Consider developing auditing functions for AI systems to detect and recognise undesirable outcomes.
  • Through new legislation, establish an independent agency to safeguard the public interest by implementing a certification process for AI systems commercially traded on the open market. In the case of AI medical devices, such an agency would make recommendations on safety to the South African Health Products Regulatory Authority (SAHPRA).
  • Develop an AI Corporate Governance Code of Best Practices focusing on accountability and ethical leadership in AI.
  • Develop an African-focused observatory initiative to monitor and report on advances in AI across various sectors, including healthcare.
  • Create financial incentives to encourage the design and deployment of AI systems that are aligned with set ethical and regulatory objectives.

“AI has the ability to revolutionise medical science and offers many prospects for healthcare, from limiting onerous functions in administration to enhancing the precision of diagnosis and improving patient care management to accelerating drug production and distribution,” Jogi says.

That said, those who deploy and stand to profit from AI in the provision of healthcare services must be held accountable for its adverse risks and consequences.

Says Jogi: “With the recommendations offered here, a case is made for an inclusive and harmonised approach of imperative policy adoption and technological transformation, which, it is believed, will underwrite the benefits and mitigate risks resulting from AI deployed in the healthcare industry, for the benefit of all people, and more particularly for the most vulnerable populations in South Africa.”

To view the full text of Jogi’s PhD thesis (Jogi, Anisha Amarat, 2021), which she wrote under the supervision of Prof M Labuschaigne and Prof S Mahomed at the College of Law, visit the Unisa institutional repository at https://hdl.handle.net/10500/28134.

* By Clairwyn van der Merwe

** Teaser image via www.vpnsrus.com

Publish date: 2022/01/31