The EU’s ambitious AI Act is set to roll out over the next year, introducing strict regulations around high-risk artificial intelligence systems. However, many organizations may find themselves unprepared to meet the new compliance requirements.
Here are some key things to know about the EU AI Act:
- It is a proposed regulation by the European Commission aimed at regulating artificial intelligence systems within the European Union.
- The goal of the Act is to create a regulatory framework to ensure AI systems used in Europe are safe and respect fundamental rights.
- It proposes that certain AI systems be classified as “high-risk”, including systems used in areas like critical infrastructure, transport, education, employment, and law enforcement.
- Strict requirements would apply to the development, use and monitoring of “high-risk” AI systems, covering aspects like data governance, accuracy, robustness, cybersecurity, and human oversight.
- Developers and operators of high-risk AI would need to conduct risk assessments and implement risk management systems. They must also document key aspects of how the AI was developed and ensure it is transparent to users.
- High-risk systems would need to pass a conformity assessment before being placed on the market. National authorities would enforce compliance and issue fines for violations.
- The regulation bans certain AI practices like “subliminal techniques beyond a person’s consciousness” and systems that manipulate human behaviour.
- It aims to establish Europe as a leader in trustworthy AI and promote transparency and accountability in how AI is used across different sectors.
Key compliance requirements for high-risk AI systems under the proposed EU AI Act:
- Robust risk management system – Developers must implement a risk management system to identify, assess and address risks throughout the AI system’s lifecycle.
- Accuracy and reliability – Systems must be developed and tested to achieve appropriate levels of accuracy, reliability and cybersecurity for their intended purpose.
- Data governance – Strict rules around data quality, collection, annotation, storage and access will apply. Both training and operational data must meet requirements.
- Documentation and record-keeping – Extensive technical documentation is required covering the system’s capabilities, limitations, development/training processes, performance validation procedures and results, incident logs etc.
- Human oversight – High-risk AI must be overseen by natural persons rather than deployed as fully autonomous systems. Oversight mechanisms such as auditing and incident response must be in place.
- Transparency – Users must receive relevant information about the AI system and clear indications when interacting with an algorithm.
- Information to users – Key details about the system’s purpose and limitations, accuracy thresholds, potential risks and harms must be provided to users.
- Training and testing – Systems must be rigorously trained and tested using representative data, and re-tested continuously, to ensure they remain accurate and safe in actual use.
- Post-market monitoring – Once deployed, AI systems must be monitored for any issues like biases, inaccuracies or other forms of unintended and harmful behavior.
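As a rough illustration of the documentation, record-keeping and post-market monitoring requirements above, a machine-readable technical-documentation record could look like the sketch below. Every field name and value here is an illustrative assumption, not terminology from the Act itself:

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    """Minimal sketch of a documentation record for a high-risk system.
    Field names are illustrative assumptions, not terms from the Act."""
    system_name: str
    intended_purpose: str
    known_limitations: list[str]
    training_data_sources: list[str]
    validation_results: dict[str, float]   # metric name -> measured score
    incident_log: list[str] = field(default_factory=list)

    def log_incident(self, description: str) -> None:
        """Append an operational incident for post-market monitoring."""
        self.incident_log.append(description)

# Hypothetical example record for an HR screening model.
doc = TechnicalDocumentation(
    system_name="resume-screening-model",
    intended_purpose="rank job applications for human review",
    known_limitations=["not validated for non-EU labour markets"],
    training_data_sources=["internal-hr-dataset-v3"],
    validation_results={"accuracy": 0.91, "demographic_parity_gap": 0.04},
)
doc.log_incident("2024-03-01: elevated false-negative rate for part-time applicants")
print(len(doc.incident_log))
```

A real compliance record would of course be far richer (versioning, signatures, retention policies), but even a structured stub like this makes audits and incident reviews easier than scattered documents.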
The proposed EU AI Act includes robust enforcement mechanisms and penalties for non-compliance:
- Fines: Companies can be fined up to 6% of global annual turnover (or €30 million, whichever is higher) for serious violations like placing non-compliant high-risk systems on the market, failing to conduct risk assessments, or keeping inadequate documentation.
- Corrective Actions: Regulators can issue orders requiring companies to suspend or limit the use of non-compliant AI systems, revise processes, or implement specific risk management measures.
- Withdrawal of Systems: As an enforcement tool of last resort, authorities can require the full withdrawal of high-risk AI systems from the EU market if serious issues persist after corrective actions.
- National Oversight: EU member states will establish responsible national competent authorities to monitor compliance. They may conduct on-site audits and investigations.
- Whistleblower Protections: Individuals reporting non-compliance are protected from retaliation. Reports will be investigated and agencies will take appropriate supervisory measures.
- Liability: Where damage is caused by a malfunctioning high-risk AI system, providers and users may be held liable under certain conditions, such as negligence.
- Criminal Penalties: In very severe cases involving risks to health and safety, criminal penalties such as imprisonment may apply to directors or representatives under national law.
So in summary, the regulation has bite – aiming to ensure issues are corrected and deter non-compliance through a combination of administrative oversight, corrective actions, financial penalties, market withdrawal powers and in rare cases, criminal sanctions.
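To make the scale of the fines concrete, here is a back-of-the-envelope sketch of the maximum administrative fine under the Commission's proposal, which sets the ceiling at 6% of worldwide annual turnover or €30 million, whichever is higher:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine under the proposed Act:
    6% of worldwide annual turnover, with a EUR 30 million floor
    (whichever amount is higher applies)."""
    return max(0.06 * global_annual_turnover_eur, 30_000_000)

# A company with EUR 2 billion in turnover faces a ceiling of EUR 120 million.
print(max_fine_eur(2_000_000_000))
# A smaller company with EUR 100 million turnover still faces the EUR 30 million cap.
print(max_fine_eur(100_000_000))
```

Note this is only the statutory ceiling; actual fines would be set case by case by national authorities.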
5 Reasons Why Organizations Won’t Be Ready for the EU AI Act
Here are the five top reasons why companies face an uphill battle to get ready in time.
- Lack of Governance Frameworks
One of the main stipulations of the AI Act is that high-risk AI systems must be deployed with robust governance frameworks in place. Yet surveys show that only a small fraction of companies have formal AI governance programs today. Developing comprehensive frameworks to oversee all aspects of AI development, testing, use and monitoring is a major undertaking. It requires establishing cross-functional teams, defining roles and responsibilities, and instituting governance boards. For most organizations without existing structures, rolling out governance in under 12 months will be challenging.
- Insufficient Documentation Processes
The AI Act demands extensive technical documentation be maintained for all high-risk systems. This includes detailed records on data, algorithms, testing procedures and results, model performance, potential biases, decisions made during development and deployment, internal rules and safeguards, authorized use cases, and much more. But the reality is that many companies do not systematically track or store this level of documentation today. Building out documentation capabilities, tools, and processes to satisfy regulators will be time-intensive. It may not be feasible for some to implement robust documentation practices from scratch before the Act fully phases in.
- Lack of Proper Risk Management
The AI Act places strong emphasis on risk management – requiring high-risk systems undergo risk assessments and have appropriate mitigation strategies in place prior to deployment. Yet a 2021 survey revealed that less than 20% of organizations have formal risk management programs for AI. Developing an in-depth understanding of AI risks, assessing every possible failure mode, devising technical and procedural safeguards, determining risk appetite and oversight takes extensive effort. Rushing to build risk management capacities from the ground up may cause companies to miss the mark on compliance.
- Scarce Technical Resources
Ensuring AI systems satisfy the technical requirements detailed in the Act demands specialized skills and resources that are scarce across many companies and industries. This includes requirements for accuracy, robustness, security, data governance, human oversight, and techniques like privacy-preserving AI. However, technical AI expertise remains limited. Those with the necessary skills to implement state-of-the-art techniques to meet all requirements are in high demand but low supply. For many organizations, scaling technical capabilities so rapidly presents a major blocker to compliance.
- Lack of Sector-Specific Guidance
While the AI Act provides a baseline framework, sector-specific regulations are still being drafted to outline compliance requirements for industries like healthcare, transport, and public sector use cases. This lack of clarity presents challenges. Without understanding all guidelines applicable to their domain, it is difficult for organizations to conduct meaningful risk assessments or build out governance based on incomplete information. Relying on sector rules that are still evolving makes compliance planning ambiguous. More definitive regulatory direction is needed for different verticals to properly align operations over the next year.
In summary, becoming AI Act compliant demands new programs, frameworks, documentation processes, technical capabilities, and risk controls that most companies have not budgeted for or developed. As regulators phase in requirements incrementally over the next 12 months, organizations will have little time to stand up these elaborate compliance regimes from scratch. The road ahead will be long and bumpy for the majority struggling to catch up and meet new legal obligations on time. Sectors may require transitional periods, or regulators must provide concise industry-specific guidance, to smooth the path to compliance.
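To ground the risk-management gap in something concrete, a practical first step could be a simple risk register with severity-times-likelihood scoring. This is a hypothetical sketch of one common approach, not a methodology prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in an AI risk register (illustrative, not an official format)."""
    description: str
    severity: int      # 1 (negligible) to 5 (critical)
    likelihood: int    # 1 (rare) to 5 (frequent)
    mitigation: str

    @property
    def score(self) -> int:
        # Classic severity x likelihood prioritization score.
        return self.severity * self.likelihood

# Hypothetical register entries for a high-risk system.
register = [
    Risk("biased outcomes for protected groups", 5, 3,
         "bias audit before each release"),
    Risk("model drift after deployment", 3, 4,
         "monthly performance monitoring"),
]

# Address the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.description} -> {risk.mitigation}")
```

Even a lightweight register like this gives a governance board something auditable to review, which is far easier to mature incrementally than a program built under deadline pressure.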
In my opinion, here are some potential strategies organizations could adopt to quickly scale their technical AI expertise and capabilities to meet the Act’s requirements:
- Partner with specialized AI consulting firms, oversight boards or researchers that have deep technical skills. Leverage their expertise on short-term consulting or services agreements to help stand up initial compliant systems.
- Aggressively hire external talent by recruiting top AI engineers and practitioners, even if only on temporary contracts initially. Offer competitive salaries and flexibility to attract needed skills.
- Invest in extensive upskilling and certification programs to rapidly train existing technical staff on techniques like model monitoring, explainable AI, and privacy engineering. Partner with universities for accelerated courses.
- Adopt AI-as-a-service solutions from specialized tech vendors who have already produced Act-compliant products and can take on much of the implementation work, especially for non-core AI use cases.
- Contribute to open-source initiatives and communities focused on building AI governance toolkits, documentation templates, model cards, and other reusable compliance assets to avoid starting from scratch.
- Consider joint ventures or alliances with other companies in the same sector facing similar skill shortages. Pool technical talent and other resources to satisfy compliance needs together in a collaborative manner.
- Leverage public sector funding and grants available in many jurisdictions for AI safety and accountability projects. Use subsidies to hire extra talent on research contracts working solely on regulatory solutions.
- Emphasize flexibility in interpretations where possible – focus on establishing core risk management foundations before complex technical aspects if regulators will accept staged compliance plans.
The above approaches can help bolster internal AI safety engineering capacity at an accelerated pace by leveraging external assets, expertise and networks during this period of transition to new regulations.
While regulations like the AI Act are necessary to ensure the safe and ethical development of artificial intelligence, we must also recognize the challenges of operationalizing such ambitious compliance frameworks, especially within tight timelines. I believe regulators should take a more pragmatic approach – focusing first on establishing core risk management processes, with a phased rollout that supports companies innovating in good faith.
Prescriptive technical requirements are difficult to satisfy overnight. We would urge allowing for interim solutions and continuous improvement efforts, as long as organizations demonstrate active progress.
Most importantly, close collaboration is needed between the private and public sectors to overcome barriers through open guidance, best practices sharing and alternative paths to assure the spirit of the rules is met.
If our aim is to cement Europe as a global leader in trustworthy and lawful AI, both sides must work as partners to shape an achievable yet impactful implementation of this vital legislation.
