EU AI Act Guidance: Six Key Points for Identifying a Compliant AI Vendor Under the EU Regulations

Dimitar Dimitrov is the founder and Managing Partner at Accedia, a leading European IT services company.

The EU AI Act is a significant regulation that aims to foster trustworthy AI by setting strict obligations for providers and users of AI systems. Its implementation comes at a time when AI adoption is accelerating across industries.

Recent McKinsey surveys indicate a rise in AI adoption: around 72% of organizations have now incorporated AI, a considerable jump from the roughly 50% where adoption had hovered for the previous six years.

As businesses integrate AI into their operations, choosing an AI development partner that complies with the EU AI Act is crucial. In the following sections, we will focus on specific articles of the Act, offering actionable insights to help you verify your partner's readiness.

1. Insist on Detailed Risk Classification

Chapter 3, Section 1, Article 6: Classification Rules for High-Risk AI Systems

The EU AI Act contains a risk classification framework that sorts AI systems into four tiers: unacceptable, high, limited, and minimal risk. High-risk systems, such as those used in biometric identification and critical infrastructure, are subject to the most stringent requirements.

When considering an AI development partner, request a comprehensive risk classification report to assess how the proposed system fits into this framework.

At Accedia, we found that over 90% of our AI projects fall into the limited-risk category. While less heavily regulated, this classification still demands thorough documentation to meet compliance standards. Ensure that any risk classification report provided by your vendor clearly states the rationale behind the classification. If the system falls into the high-risk category, request a compliance roadmap specifying the steps for conformity assessments, whether conducted internally or by an external body.
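
To make this concrete, a vendor's classification report can be backed by a machine-readable record that captures the tier and the rationale in one place. Here is a minimal sketch in Python; the class and field names are hypothetical, not prescribed by the Act:

    # Hypothetical sketch of a risk classification record a vendor could
    # attach to its report. Tier names follow the Act's four-level
    # framework; everything else here is illustrative.
    from dataclasses import dataclass, field
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    @dataclass
    class RiskClassification:
        system_name: str
        tier: RiskTier
        rationale: str  # why the system falls into this tier
        conformity_steps: list[str] = field(default_factory=list)  # needed for HIGH

        def needs_conformity_assessment(self) -> bool:
            return self.tier is RiskTier.HIGH

    # Example: a chatbot classified as limited risk, rationale recorded.
    record = RiskClassification(
        system_name="customer-support-chatbot",
        tier=RiskTier.LIMITED,
        rationale="Interacts with users but makes no decisions with legal effect.",
    )
    print(record.needs_conformity_assessment())  # False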

2. Demand Conformity Assessment Certification

Chapter 3, Section 5, Article 43: Conformity Assessments

While classification is essential, compliance does not end there. High-risk AI systems must pass rigorous conformity assessments to verify their safety and transparency. A capable software development company should be able to demonstrate that it meets these requirements, ideally through evidence of successful audits conducted by a notified body or documented internal conformity checks.

Specify in your contract that conformity certifications are required prior to deployment. Also, look for a vendor who consistently updates their compliance records; it signals a commitment to long-term collaboration, not just a one-off project.
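
One practical way to enforce the "certification before deployment" clause is a simple release gate in the vendor's pipeline. A minimal sketch, assuming a hypothetical certificate record with a validity date:

    # Hypothetical sketch: a pre-deployment gate that blocks release of a
    # high-risk system unless a current conformity certificate is on file.
    # The certificate fields are illustrative, not an official schema.
    from datetime import date

    def ready_to_deploy(certificate: dict | None, today: date) -> bool:
        """Allow deployment only if a certificate exists and is still valid."""
        if certificate is None:
            return False
        return certificate["valid_until"] >= today

    cert = {"issued_by": "notified-body-0123", "valid_until": date(2026, 12, 31)}
    print(ready_to_deploy(cert, date.today()))  # True until the end of 2026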

3. Implement Post-Market Monitoring

Chapter 9, Section 1, Article 72: Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems

Once a system is operational, the EU AI Act requires providers to establish robust post-market monitoring mechanisms. These mechanisms collect, document, and analyze performance data over the AI system's lifetime to ensure continued compliance and address emerging risks.

When choosing an AI development partner, examine their post-market monitoring strategy and ensure it includes protocols for incident reporting and issue resolution. Serious incidents must be reported to the relevant authorities no later than 15 days after the provider becomes aware of them, a notably strict timeframe. Ask your partner to provide examples of how they've handled incidents, including communication with stakeholders and mitigation efforts. Contractual provisions can also formalize these obligations, specifying penalties for non-compliance.
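
The deadline arithmetic itself is simple, and a monitoring system can track it automatically. A minimal sketch; the 15-day window comes from the Act, while the helper and its names are our own:

    # Hypothetical sketch: computing the reporting deadline for a serious
    # incident from the date the provider became aware of it.
    from datetime import date, timedelta

    REPORTING_WINDOW = timedelta(days=15)

    def report_deadline(became_aware: date) -> date:
        """Latest date by which the relevant authorities must be notified."""
        return became_aware + REPORTING_WINDOW

    print(report_deadline(date(2025, 3, 1)))  # 2025-03-16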

4. Request AI Explainability

Chapter 3, Section 2, Article 12: Record-Keeping and Chapter 3, Section 2, Article 13: Transparency and Provision of Information to Deployers

The EU AI Act emphasizes explainability as a key requirement for high-risk AI systems. These systems must provide clear explanations for their outputs, tailored to the intended audience and use case.

Ask AI providers for documentation explaining how their systems generate decisions, particularly for sensitive applications such as recruitment or credit scoring. Test these explanations with various user groups—technical staff, end-users, and regulators—to ensure clarity. Check that your potential AI development partner provides tools for generating the logs and evidence needed for audits.
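
Record-keeping of this kind implies that every decision and its explanation are captured in a durable log. A minimal sketch, assuming a hypothetical JSON-lines format and field names:

    # Hypothetical sketch: appending each automated decision, together with
    # its explanation, to an audit log for later review.
    import json
    from datetime import datetime, timezone

    def log_decision(path: str, input_summary: str, output: str,
                     explanation: str) -> None:
        """Append one decision record, with its explanation, to the log."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "input": input_summary,
            "output": output,
            "explanation": explanation,
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    log_decision(
        "audit.log",
        input_summary="candidate profile #1042",
        output="shortlisted",
        explanation="Met all three screening criteria; seniority weighted highest.",
    )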

5. Address Risks with General-Purpose AI

Chapter 5, Section 1, Article 52: Procedure

The EU AI Act introduces specific obligations for general-purpose AI systems, which are commonly used across multiple industries. When building such systems, developers must ensure that users understand the systems' limitations and intended use cases.

When working with an AI development company, request clear guidelines tailored to your industry and evaluate their ability to simulate and address potential misuse scenarios. Liability clauses in your agreement can further protect your business, holding the provider accountable if ambiguous guidance results in harmful outcomes.

6. Confirm AI Development Partner's Location and Territorial Compliance

Chapter 1, Article 2: Scope

Consider the vendor's location and its implications for compliance. The EU AI Act applies to companies both within and outside the EU if their systems are placed on the EU market or affect EU users.

Non-EU vendors are required to designate an EU-based authorized representative; during your evaluation, request proof of this. If your potential partner is not EU-based, dig deeper. Do they have a solid understanding of EU regulations? How do they plan to adapt their systems to comply with the EU AI Act? This consideration is especially important for vendors operating across multiple markets, where compliance requirements can differ significantly between EU and non-EU jurisdictions.

As a business working with clients in more than 20 countries across several continents, we've seen firsthand how geographic diversity adds complexity to compliance. It demands a careful understanding of regional regulations and their territorial scope. Contracts should therefore spell out jurisdictional requirements, holding development firms accountable for complying with the Act regardless of where they are based.

Conclusion

Navigating the EU AI Act may seem daunting, but it's a worthwhile effort. This legislation isn't merely about legal compliance; it's about fostering trust in AI and ensuring these powerful technologies are used ethically and responsibly. By asking the right questions, you can identify a partner that is both compliant and committed to ethical, transparent AI development.
