AI Act Readiness Assessment

Answer 8 questions to determine your EU AI Act risk classification

Question 1 of 8

Does your organization develop, deploy, or distribute AI systems in the EU market?

The EU AI Act applies to providers, deployers, importers, and distributors of AI systems placed on the EU market, regardless of where the organization is based.

Question 2 of 8

Do you use AI for biometric identification of natural persons?

Real-time remote biometric identification in publicly accessible spaces is generally prohibited under the AI Act, with narrow exceptions for law enforcement purposes.

Question 3 of 8

Is your AI system used in any of the high-risk areas listed in Annex III?

Annex III of the AI Act lists specific use cases (including employment, education, access to essential services, law enforcement, and critical infrastructure) that are automatically classified as high-risk, each carrying significant compliance obligations.

Question 4 of 8

Do you provide or deploy a General-Purpose AI (GPAI) model?

GPAI models (e.g., large language models) face additional obligations including technical documentation, transparency, and copyright compliance.

Question 5 of 8

Does your AI system interact directly with natural persons (e.g., chatbots, virtual assistants)?

Systems that interact with natural persons must inform users that they are interacting with an AI system, triggering transparency obligations under Article 50.

Question 6 of 8

Does your AI system generate or manipulate content (deepfakes, synthetic media)?

AI-generated or manipulated content must be clearly labelled, and deployers must disclose when content has been artificially generated.

Question 7 of 8

Do you maintain a complete technical inventory of all AI systems in your organization?

A comprehensive AI inventory is a foundational step for compliance — you cannot govern what you cannot see.

Question 8 of 8

Do you have documented AI governance policies and risk management procedures?

Articles 9 and 17 require high-risk AI providers to establish risk management systems and quality management processes.

Your AI Act Classification
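The assessment's mapping from answers to a classification is not shown here. A minimal sketch of plausible decision logic, based on the AI Act's risk tiers as covered by the eight questions above (the function name, answer keys, and output strings are assumptions for illustration, not the assessment's actual implementation):

```python
def classify(answers: dict) -> list[str]:
    """Map yes/no answers (keyed by question number 1-8) to AI Act outcomes.

    Hypothetical logic mirroring the Act's risk tiers; not official guidance.
    """
    outcomes = []

    # Q1: scope. If you don't place AI systems on the EU market, the Act
    # likely does not apply to you.
    if not answers.get(1):
        return ["out of scope: AI Act likely does not apply"]

    # Q2: real-time remote biometric identification is a prohibited practice.
    if answers.get(2):
        outcomes.append("prohibited practice risk: real-time biometric identification")

    # Q3: Annex III use cases are automatically high-risk.
    if answers.get(3):
        outcomes.append("high-risk (Annex III): full compliance obligations")

    # Q4: GPAI models carry their own documentation and transparency duties.
    if answers.get(4):
        outcomes.append("GPAI obligations: documentation, transparency, copyright")

    # Q5/Q6: chatbots and synthetic media trigger Article 50 transparency duties.
    if answers.get(5) or answers.get(6):
        outcomes.append("limited risk: Article 50 transparency duties")

    # No tier triggered: minimal risk, voluntary measures only.
    if not outcomes:
        outcomes.append("minimal risk: voluntary codes of conduct")

    # Q7/Q8: readiness gaps regardless of tier.
    if not answers.get(7) or not answers.get(8):
        outcomes.append("readiness gap: build an AI inventory and governance policies")

    return outcomes
```

Note that the tiers are cumulative rather than exclusive: a single organization can simultaneously face high-risk obligations for one system and Article 50 transparency duties for another, which is why the sketch returns a list of outcomes rather than a single label.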