
What Are the Challenges Facing Healthcare Under the European AI Regulation?

The AI Act, effective August 1st, classifies AI applications by risk. Minimal risk includes video games, while chatbots require transparency. High-risk categories, such as AI diagnostic tools, have strict safety rules, adding to health regulations. Researchers highlight challenges in oversight and accountability for low-risk AI, warning of potential harm and trust erosion in healthcare systems.


How should Artificial Intelligence (AI) be governed in the healthcare sector? This question drives the debate on the challenges inherent in the European Union (EU) regulation establishing harmonised rules on AI (the AI Act), now in its development and implementation phase in the member states.

Broadly speaking, the discussion revolves around the lack of oversight and accountability mechanisms, the erosion of trust in the health system, and how to reconcile fundamental rights with the exemptions the law provides for medical research and for threats to national security, such as epidemiological control. This is reflected in the latest volume of the academic journal Health Policy.

The AI Act came into force on August 1st, after years of deliberations dating back to April 2021

It is expected to be fully implemented across the EU member states by August 2027. Beyond the health sector, the regulation classifies the applications it covers according to their risk.

Thus, most AI-based video games and image-editing applications fall into the minimal-risk category, while chatbots fall into the specific transparency-risk category, where their operators are obliged to inform users when they are interacting with a machine. Then there are the high-risk levels, which include part of the health sector, and the unacceptable-risk levels, which are outright prohibited by the regulation.

The AI Act adds a layer of regulation to existing health regulations

The high-risk category includes, for example, AI diagnostic tools that store data on their users. In this case, the AI Act complements the 2017 Medical Devices Regulation with an additional layer of specific rules: mandatory safety and quality requirements for operators. The occasional use of AI to handle emergency calls also falls within this group.

Then there are the low-risk cases in the health field, which include mobile apps for monitoring vital signs during sports or sensors used in life-support machines. In general, these are digital tools not used in medical care, or medical devices not required to undergo a third-party conformity assessment.

For this classification, which also covers the aforementioned chatbots, it is only necessary to guarantee a certain degree of transparency, in accordance with the General Data Protection Regulation. Beyond that, there are no specific rules for their use.

AI Challenges

It is in this latter category that some of the challenges of the AI Act lie, according to researchers Hannah van Kolfschooten and Janneke van Oirschot, of the University of Amsterdam and Health Action International (HAI), respectively. The aforementioned lack of rules results in the absence of comprehensive accountability and oversight mechanisms for minimal-risk devices. This, according to the authors of the Health Policy article, can translate into the proliferation of ineffective or potentially harmful systems.

The harm arising from the lack of specific regulations for this type of application could undermine AI users’ trust in the health system, a situation that could be compounded by the absence of mandatory evaluation instruments for private health providers. “It can lead to disparities in the protection of patients’ rights between providers and between Member States,” Van Kolfschooten and Van Oirschot point out.

Another outstanding issue in the implementation process is clarifying when the use of AI would be justified without any regulatory control, which under the AI Act occurs in contexts where national security purposes prevail (such as a pandemic) or in scientific research.

For all these reasons, both authors conclude, it is necessary to define certain concepts contained in the regulation, to specify in national legislation mechanisms for evaluating the impact of AI on fundamental rights, and to promote participation mechanisms for the different actors in the health system, including the developers of this technology.


(Featured image by Igor Omilaev via Unsplash)

DISCLAIMER: This article was written by a third party contributor and does not reflect the opinion of Born2Invest, its management, staff or its associates. Please review our disclaimer for more information.

This article may include forward-looking statements. These forward-looking statements generally are identified by the words “believe,” “project,” “estimate,” “become,” “plan,” “will,” and similar expressions. These forward-looking statements involve known and unknown risks as well as uncertainties, including those discussed in the following cautionary statements and elsewhere in this article and on this site. Although the Company may believe that its expectations are based on reasonable assumptions, the actual results that the Company may achieve may differ materially from any forward-looking statements, which reflect the opinions of the management of the Company only as of the date hereof. Additionally, please make sure to read these important disclosures.

First published in PlantaDoce. A third-party contributor translated and adapted the article from the original. In case of discrepancy, the original will prevail.

Although we made reasonable efforts to provide accurate translations, some parts may be incorrect. Born2Invest assumes no responsibility for errors, omissions or ambiguities in the translations provided on this website. Any person or entity relying on translated content does so at their own risk. Born2Invest is not responsible for losses caused by such reliance on the accuracy or reliability of translated information. If you wish to report an error or inaccuracy in the translation, we encourage you to contact us.

Eva Wesley is an experienced journalist, market trader, and financial executive. Driven by excellence and a passion to connect with people, she takes pride in writing think pieces that help people decide what to do with their investments. A blockchain enthusiast, she also engages in cryptocurrency trading. Her latest travels have also opened her eyes to other exciting markets, such as aerospace, cannabis, healthcare, and telcos.
