
Adapting to the EU AI Act: Ensuring Compliance in AI-Powered Healthcare Solutions

By: Magdalena Kogut-Czarkowska, Timelex


Timelex is the legal partner in the AI4LUNGS project, leading tasks related to legally compliant health data processing, providing legal assistance on the AI4LUNGS business model, and offering guidance for the validation pilots. Timelex works with project partners to ensure that all data processing complies with the provisions of the GDPR and applicable national data protection laws.



What is the AI Act?


The Artificial Intelligence (AI) Act is an EU regulation (2024/1689) that entered into force on 1 August 2024. It establishes rules to ensure AI systems are developed and used in a way that is safe and beneficial for society. Legislators recognize that human-centric and trustworthy AI has many potential benefits, such as improving healthcare, making transport safer and cleaner, increasing manufacturing efficiency, and providing cheaper, more sustainable energy. However, some AI systems also pose risks to individuals' health, safety, democracy and other fundamental rights. Depending on the level of these risks, different AI systems are subject to more or less regulation.


Although the AI Act is now EU law, its rules will be applied gradually, allowing AI developers and users time to adapt. While the AI Act provides a solid framework, further details on specific requirements will come through the development of European harmonised standards.


What does the AI Act regulate?


The new rules in the AI Act set obligations for public and private entities: developers, sellers and users (also called "deployers") of AI systems that are placed on the EU market or that have an impact on people located in the EU. Natural persons who use AI systems for personal, non-professional purposes are not subject to its rules.


Moreover, under the AI Act, an AI system is defined as a machine-based system that (a) operates with varying levels of autonomy, (b) may adapt after deployment, and (c) generates outputs, such as predictions, written content, recommendations, or decisions, based on the input it receives.


As of 2 February 2025, certain AI systems considered a threat to people are prohibited, including, among others, AI systems that:


  • Manipulate people or use subliminal techniques to exploit vulnerabilities, materially distorting the behaviour of a person

  • Use social scoring, which classifies people based on behaviour or personal characteristics, which leads to unjustified or unfavourable treatment

  • Predict the risk of a natural person committing a crime, based solely on the profiling of this person or on assessing their personality

  • Recognize emotions in the workplace and in education institutions, unless for medical or safety reasons


Some exceptions may be allowed for law enforcement purposes.


A limited number of AI systems that pose risks to safety or fundamental rights are classified as "high-risk". They fall into two categories:


  • AI systems used in products covered by EU product safety laws, such as toys, cars, medical devices, or lifts.

  • AI systems used in specific areas outlined in the AI Act, which must be registered in an EU database. Such high-risk AI systems include, for example, systems that assess whether somebody is eligible for a certain medical treatment, a job promotion, or admission to a university.


All high-risk AI systems must comply with strict development rules and undergo assessment before being marketed, as well as throughout their lifecycle. Individuals will have the right to file complaints about AI systems with designated national authorities.


Certain AI systems, even if not classified as high-risk, may still need to meet transparency requirements and comply with EU copyright law. For instance, content generated by AI should be clearly labelled as such.


Why is the AI Act relevant for AI4LUNGS?


AI4LUNGS’s innovative system will integrate decision-support functionalities, AI models, and a seamless web interface to enhance the patient stratification process. The project aims to go further by using individual patient data to offer personalized diagnosis and treatment planning. Given the system's AI components and healthcare focus, careful attention to the AI Act's requirements is needed.


The AI Act provides specific exemptions for AI systems developed and used exclusively for scientific research. Thus, R&D activities and prototyping that take place before an AI system is placed on the market are not subject to its rules. Still, while researchers are allowed to innovate during the project without the immediate burden of full regulatory compliance, the potential for future commercial or real-world use of AI4LUNGS’s results must still be considered. Through its legal and ethical work package, AI4LUNGS is proactively mapping and addressing the requirements of the AI Act, as well as other relevant legal frameworks, such as the GDPR and medical device regulations.


 

Stay updated on the AI4LUNGS research development and progress by following the project on X and LinkedIn, and subscribing to our newsletter for the latest insights and advancements.
