What is the EU AI Act?
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is now in force, and it redefines how AI-powered medical devices, particularly Software as a Medical Device (SaMD), are regulated across the European Union. As the world’s first legally binding framework for AI, the Act introduces a risk-tiered structure, mandating strict requirements for high-risk systems, including those used in diagnostics, clinical decision support, and patient monitoring.
At Freyr Solutions, we integrate our deep expertise in MDR, IVDR, and SaMD regulation with the evolving demands of the AI Act to help MedTech manufacturers achieve seamless, end-to-end compliance while continuing to innovate with confidence.
The AI Act + EU MDR: A Dual Regulatory Framework
For AI-driven medical devices, compliance is no longer governed by MDR alone. The AI Act adds an additional regulatory layer, especially when the AI system influences patient outcomes or clinician decision-making.
This dual oversight requires a new approach: a harmonized compliance strategy that integrates both MDR technical documentation and AI-specific requirements under a single regulatory umbrella. Freyr specializes in delivering this integrated pathway, streamlining submissions, ensuring audit readiness, and enabling faster time to market.
Risk Classification Under the EU AI Act
The AI Act establishes four levels of AI risk, each with distinct compliance obligations. Most SaMDs fall under the high-risk category, requiring robust documentation, testing, transparency, and post-market monitoring.
Unacceptable Risk (Prohibited AI)
These are banned across the EU:
- Emotion recognition in workplace or education settings (with narrow exceptions for medical or safety purposes)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions)
- Social scoring in patient triage or care prioritization
- Predictive AI used for individual criminal or behavioral risk assessment
- Exploitation of vulnerable patient groups via AI
Freyr conducts early-phase design audits to ensure prohibited functionalities are eliminated before development or notified body engagement.
High-Risk (Majority of AI-Based Medical Devices)
High-risk classification includes:
- Diagnostic AI (e.g., radiology, pathology)
- Predictive analytics in treatment decision-making
- AI-enabled surgical planning tools
- Physiological data interpretation platforms
- AI modules in wearable and implantable devices
Freyr enables full lifecycle compliance, from classification and CE marking to algorithmic transparency and cybersecurity.
Transparency Risk
This applies to:
- AI-generated clinical summaries, recommendations, or alerts
- Chatbot interfaces in patient communication
- Any AI-generated output influencing healthcare decisions
We establish clear documentation and user interface labeling aligned with EU transparency obligations.
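To make the labeling obligation concrete, here is a minimal sketch of a helper that prepends a user-facing AI disclosure to a generated clinical summary. The function name and disclosure wording are our own illustrative assumptions; the Act does not prescribe specific text, and actual disclosures should be drafted against the transparency articles and reviewed by regulatory counsel.

```python
def label_ai_output(summary: str, model_name: str) -> str:
    """Prepend a user-facing disclosure to an AI-generated clinical summary.

    Placeholder wording only; real disclosure text must be validated against
    the AI Act's transparency obligations and your own labeling strategy.
    """
    disclosure = (
        f"[AI-generated content - produced by {model_name}; "
        "review by a qualified clinician is required before clinical use]"
    )
    return f"{disclosure}\n{summary}"

print(label_ai_output("Findings suggest mild cardiomegaly.", "CardioSummarizer v2"))
```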
Minimal or No Risk
Basic tools (e.g., non-clinical automation) fall into this category, yet Freyr ensures a future-proof compliance foundation, particularly for evolving systems.
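As a first-pass illustration of how an AI portfolio might be triaged against the four tiers above, the sketch below matches a use-case description against hypothetical keyword sets distilled from these lists. It is a screening aid only, assuming our own keyword choices and tier names; an actual classification must be grounded in Annex III and qualified regulatory review, not keyword matching.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable"
    HIGH = "high"
    TRANSPARENCY = "transparency"
    MINIMAL = "minimal"

# Illustrative keyword sets distilled from the tiers described above.
PROHIBITED_USES = {"emotion recognition", "social scoring", "biometric surveillance"}
HIGH_RISK_USES = {"diagnosis", "treatment decision", "surgical planning",
                  "physiological interpretation"}
TRANSPARENCY_USES = {"clinical summary", "chatbot", "patient communication"}

def triage_use_case(description: str) -> RiskTier:
    """Rough first-pass triage of an AI use case into an AI Act risk tier."""
    text = description.lower()
    if any(term in text for term in PROHIBITED_USES):
        return RiskTier.PROHIBITED
    if any(term in text for term in HIGH_RISK_USES):
        return RiskTier.HIGH
    if any(term in text for term in TRANSPARENCY_USES):
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

print(triage_use_case("AI module for radiology diagnosis support"))  # RiskTier.HIGH
```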
General Purpose AI Models (GPAI) in Medical Contexts
The AI Act introduces new obligations for medical devices leveraging general-purpose AI models, such as foundation models, LLMs, or pre-trained algorithms. These provisions apply from 2 August 2025 and are critical for:
- SaMD platforms built on GPT-class systems
- Medical chatbots trained on extensive clinical corpora
- AI summarization tools in electronic health records (EHR)
How Freyr Solutions helps clients:
- Assess systemic risk
- Align with the Code of Practice for GPAI
- Disclose training datasets
- Prevent bias, hallucination, and performance drift
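As an illustration of the kind of information a training-data disclosure might capture, here is a minimal sketch of a disclosure record. The field names are assumptions made for this example only; the authoritative reference is the European Commission's template for the public summary of training content and the GPAI Code of Practice.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingDataDisclosure:
    """Hypothetical record of fields a GPAI training-data summary might cover."""
    data_sources: list = field(default_factory=list)       # e.g. licensed corpora, public web data
    clinical_corpora_used: bool = False                     # relevant for medical chatbots
    personal_data_processing: str = "none"                  # basis and safeguards, if any
    known_limitations: list = field(default_factory=list)   # e.g. under-represented populations
    last_updated: str = ""                                   # ISO date of the summary

disclosure = TrainingDataDisclosure(
    data_sources=["licensed clinical guidelines", "de-identified EHR notes"],
    clinical_corpora_used=True,
    known_limitations=["under-representation of paediatric cases"],
    last_updated="2025-08-02",
)
print(disclosure)
```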
Compliance Timeline: Milestones for Medical Device AI
According to the European Commission’s AI Act implementation timeline, the key milestones are as follows:
| Key Milestone | Date | Impact for Medical Devices |
|---|---|---|
| Regulation Enters into Force | Aug 1, 2024 | Start readiness and classification activities |
| Ban on Prohibited AI | Feb 2, 2025 | Must cease any disallowed functionalities |
| GPAI Transparency Rules | Aug 2, 2025 | Required for embedded foundation models |
| Full AI Act Application | Aug 2, 2026 | AI-based devices must meet all requirements |
| High-Risk Medical Device Compliance | Aug 2, 2027 | Final date for full MDR + AI Act integration |
Freyr supports regulatory planning aligned with these phases, minimizing operational disruptions.
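For planning purposes, the milestone dates above can be encoded directly into a readiness tracker. The snippet below is a simple sketch using our own dictionary and function names; it only reports how many days remain until each phase.

```python
from datetime import date

# Milestone dates taken from the table above.
AI_ACT_MILESTONES = {
    "Regulation enters into force": date(2024, 8, 1),
    "Ban on prohibited AI": date(2025, 2, 2),
    "GPAI transparency rules": date(2025, 8, 2),
    "Full AI Act application": date(2026, 8, 2),
    "High-risk medical device compliance": date(2027, 8, 2),
}

def days_remaining(milestone, today=None):
    """Days until a milestone (negative once it has passed)."""
    today = today or date.today()
    return (AI_ACT_MILESTONES[milestone] - today).days

for name in AI_ACT_MILESTONES:
    print(f"{name}: {days_remaining(name)} days")
```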
What Complete EU AI Act Compliance Requires
To comply fully with the EU AI Act, manufacturers of AI-powered medical devices must address a wide spectrum of technical, regulatory, and operational requirements. These include:
- AI System Risk Classification and Justification
- Data Governance (representativeness, quality, bias control; see the parity-check sketch after this list)
- Algorithm Validation & Clinical Evaluation
- Technical Documentation (aligned to MDR + AI Act)
- Human Oversight Mechanism Design
- Cybersecurity & Robustness Protocols
- Conformity Assessment Coordination with Notified Bodies
- AI Transparency & Disclosure Requirements
- CE Marking for AI-Based Devices
- Post-Market Surveillance for AI-Specific Risks
- Incident Reporting and Lifecycle Monitoring
- GPAI Risk Analysis and Code of Practice Implementation
- Audit Preparation and Regulatory Liaison
- Labeling and UI Adjustments for AI Outputs
- Training for Internal Regulatory, QA, and Development Teams
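As one concrete example of the data-governance item above, the sketch below checks accuracy parity across subgroups on a labelled evaluation set. The metric, the grouping attribute, and any acceptance threshold are assumptions for illustration; your own data-governance plan defines the actual criteria.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, y_true, y_pred) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

def parity_gap(records):
    """Largest accuracy gap between the best- and worst-served subgroups."""
    acc = subgroup_accuracy(records)
    return max(acc.values()) - min(acc.values())

data = [("female", 1, 1), ("female", 0, 0), ("male", 1, 0), ("male", 0, 0)]
print(subgroup_accuracy(data))   # {'female': 1.0, 'male': 0.5}
print(parity_gap(data))          # 0.5 -> investigate if above your agreed threshold
```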

Freyr Solutions Regulatory Enablement Across the AI Act Lifecycle
Freyr Solutions offers comprehensive regulatory support tailored to the full scope of the EU AI Act, integrated seamlessly with MDR and IVDR frameworks. Our approach is built on aligning compliance with innovation, ensuring AI-powered medical devices reach the market faster, safer, and fully prepared for regulatory scrutiny.
We support organizations at every stage of the product lifecycle, from early-stage planning to post-market surveillance, with services including but not limited to:
AI Risk Classification & Use Case Mapping
Determining system risk level under the AI Act and aligning with MDR classification pathways for medical devices and SaMD.
Regulatory Strategy and Roadmap Design
End-to-end planning that bridges AI Act, MDR, and IVDR obligations, delivering a unified compliance model.
Technical Documentation & Algorithmic Transparency
Preparation of AI-specific documentation, including performance metrics, explainability protocols, and data traceability, aligned to both MDR Annex II and AI Act requirements.
Data Governance and Bias Mitigation
Ensuring dataset integrity, fairness, and representativeness, crucial for AI safety, clinical validity, and post-market acceptability.
CE Marking and Conformity Assessment Support
Integration of AI Act deliverables with existing MDR conformity assessments and notified body interactions.
Design of Human Oversight Mechanisms
Implementing operational frameworks to ensure clinicians remain in control of AI-driven outputs, meeting human-centric governance standards.
Post-Market Surveillance for AI-Enabled Devices
Establishing monitoring frameworks to detect algorithmic drift, performance degradation, and incident triggers specific to adaptive or learning systems (see the drift-monitoring sketch at the end of this section).
Cybersecurity, Robustness, and Model Resilience
Implementation of controls to mitigate adversarial risk, ensure model robustness, and support safety across clinical environments.
AI Transparency and User Communication Strategy
Labeling, disclosures, and patient-facing information tailored to comply with AI Act transparency obligations.
Governance for General Purpose AI Integration
Risk assessments and documentation for SaMD integrating foundation models, including Code of Practice alignment and systemic risk mitigation.
Audit Readiness and EU AI Office Engagement
Advisory and documentation support for inspections, reviews, and formal interactions with regulatory authorities and the AI Office.
Cross-Functional Enablement and Internal Training
Enabling RA, QA, R&D, and leadership teams through tailored learning modules and regulatory intelligence briefings.
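As referenced in the post-market surveillance item above, one common way to watch for algorithmic drift is to compare a model's production score distribution against its validation baseline. The sketch below uses the Population Stability Index (PSI); the metric choice, binning, and the rule-of-thumb threshold are assumptions for illustration, not requirements drawn from the AI Act.

```python
import math

def psi(reference, current, bins=10):
    """Population Stability Index between two score samples; larger = more drift."""
    lo, hi = min(reference), max(reference)
    span = (hi - lo) or 1.0                      # avoid division by zero
    def fractions(values):
        counts = [0] * bins
        for v in values:
            idx = int((v - lo) / span * bins)
            counts[min(max(idx, 0), bins - 1)] += 1
        # Small floor avoids log(0) for empty buckets.
        return [max(c / len(values), 1e-6) for c in counts]
    ref, cur = fractions(reference), fractions(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref, cur))

baseline = [0.10, 0.22, 0.31, 0.40, 0.48, 0.55, 0.63, 0.70]   # e.g. validation-time scores
live     = [0.52, 0.58, 0.66, 0.71, 0.77, 0.80, 0.84, 0.90]   # e.g. recent production scores
print(f"PSI = {psi(baseline, live):.2f}")  # a common rule of thumb flags values above ~0.25
```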
Freyr’s services are designed to scale with your innovation, ensuring that whether you’re launching your first AI-enabled device or expanding across global markets, your regulatory infrastructure is robust, future-ready, and aligned with the evolving AI ecosystem in healthcare.
