AICET: The standardised and multidimensional framework for AI competency evaluation

Discover in detail the methodology, levels, and dimensions that make AICET the reference for reliable and objective AI expertise assessment.

AICET Overview

Why AICET?

Artificial Intelligence is a vast, evolving, and multidimensional field (technology, history, ethics, law…). This set of technologies is cross-cutting and affects many aspects of society, notably the job market, art, and public policy.

Using AI appropriately requires mastering specific skills, and teaching and evaluating these skills is crucial.

Existing digital competence frameworks, such as DigComp 2.2, are comprehensive but not sufficiently specific for relevant AI skills assessment.

AICET meets this need by providing a standardised, adapted, and rigorous evaluation framework for precise, reliable, and objective measurement of individuals’ artificial intelligence competencies.

Scope and Objectives of AICET

Scope:

Establish the requirements for defining and applying a test that evaluates people’s competencies and literacy in artificial intelligence.

Objectives of the AICET methodology:

  • Provide a standardised methodology to design tests measuring the public’s AI knowledge
  • Define specific questions and tasks to evaluate competency levels (acculturation, advanced, expert) in AI
  • Allow participants to use the test for training/self-assessment or for official (and subsequently certified) evaluation
  • Enable stakeholders to obtain macro-statistics on tests and results over time

Theoretical Framework

Key Definitions

The definitions and terminology employed in AICET rely on those defined in international standardisation reference frameworks relating to competencies and Artificial Intelligence.

They notably incorporate concepts, vocabulary, and definitions from standards ISO/IEC 22989:2022 (Information technology — Artificial intelligence — Artificial intelligence concepts and terminology) and ISO/IEC 23053:2022 (Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)), which establish the descriptive and functional foundations of AI systems.

These references ensure that AICET terminology is consistent with recognised practices and compliant with international standards, facilitating its interoperability and readability in technical, regulatory, and organisational contexts.

Theoretical Frameworks for Expertise Levels

The AICET methodology combines two reference frameworks to structure the AI domain and evaluate competencies:

Bloom-Krathwohl Taxonomy: Used to describe knowledge progression, transformed into a matrix comprising types of knowledge (factual, conceptual, procedural, metacognitive) and cognitive processes (remember, understand, apply, analyse, evaluate, create).

DigComp Framework: Offers a methodology to describe competency management in the digital domain.

The six cognitive-process levels of the Bloom-Krathwohl taxonomy are grouped into three main AICET levels to simplify question production.

The Three AICET Expertise Levels

Acculturation Level

Target Audience: People with a general grasp of all aspects of AI who are not necessarily familiar with specific details. This level is the entry point for everyone.

Evaluation Objective: Evaluate the ability to explain basic AI concepts, recognise AI’s contribution to everyday tools, form an informed view of its risks and benefits for society, and distinguish what is AI from what is not. The goal is to measure general understanding of AI.

Example of Competence: Know and be able to briefly explain the difference between supervised learning and unsupervised learning.
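As an illustration of this competence (a minimal sketch, not material from the AICET question bank), the contrast can be shown on the same one-dimensional data: supervised learning uses labels to fit a decision rule, while unsupervised learning must discover the groups itself.

```python
def supervised_threshold(points, labels):
    """Supervised: labels guide the model. Learn a decision threshold
    as the midpoint between the means of the two labelled classes."""
    lo = [x for x, y in zip(points, labels) if y == 0]
    hi = [x for x, y in zip(points, labels) if y == 1]
    return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

def unsupervised_two_means(points, iters=10):
    """Unsupervised: no labels. Alternate assigning each point to the
    nearest of two centroids and re-estimating the centroids (1-D k-means)."""
    c0, c1 = min(points), max(points)
    for _ in range(iters):
        g0 = [x for x in points if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in points if abs(x - c0) > abs(x - c1)]
        c0 = sum(g0) / len(g0)
        c1 = sum(g1) / len(g1)
    return c0, c1

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
labels = [0, 0, 0, 1, 1, 1]
threshold = supervised_threshold(data, labels)  # uses the labels
centroids = unsupervised_two_means(data)        # discovers the two groups itself
```

Both approaches separate the data here, but only the supervised one needed the labels — which is exactly the distinction an acculturation-level candidate should be able to articulate.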

Advanced Level

Target Audience: Individuals with a deeper understanding of AI who may be regular users of AI-based applications. Academic knowledge equivalent to a postgraduate diploma, Bachelor’s, or Master’s in computer science, particularly in AI, is recommended.

Evaluation Objective: Evaluate understanding of basic AI principles and methods. This audience is expected to identify the type of AI to use to solve a particular problem. This level concerns mastering the construction of AI systems or putting them into operation.

Example of Competence: Be able to select the appropriate type of learning for a given problem, taking into account the nature of the data and the final objective, and understand the implementation of typical algorithms for each.
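The kind of reasoning this competence targets can be caricatured in a few lines (a hypothetical helper, not an AICET artefact): two coarse properties of a problem — whether labelled examples exist and what the target looks like — already narrow the choice of learning paradigm.

```python
def suggest_paradigm(has_labels, target):
    """Suggest a learning paradigm from two coarse properties of a problem.

    has_labels: whether labelled examples of the desired output exist.
    target: 'category', 'quantity', or None when there is no fixed target
            (e.g. discovering structure in the data).
    """
    if has_labels and target == "category":
        return "supervised classification"
    if has_labels and target == "quantity":
        return "supervised regression"
    if not has_labels and target is None:
        return "unsupervised learning (e.g. clustering)"
    return "consider reinforcement or self-supervised learning"

suggest_paradigm(True, "category")  # labelled spam/not-spam emails -> "supervised classification"
suggest_paradigm(False, None)       # grouping customers without labels -> unsupervised learning
```

A real advanced-level candidate would of course weigh far more than two properties (data volume, noise, cost of labels, deployment constraints), but the mapping from data nature and objective to paradigm is the skill being tested.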

Expert Level

Target Audience: Individuals with deep technical knowledge of AI, such as engineers, researchers, or professionals working in the field. Extensive academic knowledge is recommended (equivalent to a postgraduate diploma, Master’s, or PhD in computer science, particularly in AI).

Evaluation Objective: Test expertise and the ability to explain complex and advanced concepts. Correcting computer code may be part of the test. This level requires extensive knowledge and an ability to innovate.

Example of Competence: Be able to solve complex problems through a deep understanding of all types of machine learning.
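Since code correction may feature in the expert-level test, here is a hypothetical illustration (not taken from the AICET question bank) of the kind of task that could be posed: spot why a naive softmax fails on large inputs and fix it.

```python
import math

def softmax_naive(xs):
    """Buggy candidate: math.exp(1000.0) raises OverflowError,
    so this version fails on large logits."""
    exps = [math.exp(x) for x in xs]
    return [e / sum(exps) for e in exps]

def softmax_stable(xs):
    """Corrected version: subtracting the maximum leaves the result
    mathematically unchanged but keeps every exponent <= 0."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

softmax_stable([1000.0, 1000.0])  # -> [0.5, 0.5]; the naive version overflows here
```

Recognising the overflow, explaining why the shift is sound, and producing the fix are exactly the blend of theory and implementation skill the expert level aims to probe.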

Competency Categories Evaluated (The 5 Dimensions)

The test is structured around five categories (dimensions), each aiming to deepen understanding of AI from a specific angle, addressing both positive and negative aspects.

Theoretical Dimension of AI

This axis concerns fundamental knowledge of artificial intelligence (AI), including its theoretical principles, underlying mathematical models, algorithms, and key concepts. Examples of related domains include neural network structures, mathematical tools, different learning types, and quality metrics.

Applicative Dimension of AI

In this axis, concrete AI application domains in daily life are explored, with real examples. This axis also aims to offer practical advice on AI, data management, associated best practices, risks, and limitations. Examples of related domains include natural language processing, computer vision, and recommendation systems.

Operational Dimension of AI

This axis examines the practical implementation of AI, including data structuring, AI algorithm programming, the AI lifecycle, the use of specific libraries, and the operational skills necessary to develop and deploy AI systems. In accordance with this test’s values, no technology should be disparaged or promoted. Examples of related domains include hardware, the technical stack, and the Human-AI interface.

Legal and Ethical Dimension of AI

This domain addresses the legal, ethical, and ecological aspects surrounding AI. It covers national and international laws, standards, and regulations related to AI, as well as ethical issues such as data privacy, intellectual property, and liability. Examples of related domains include bias and fairness, environmental impact, and regulation.

General Culture Dimension of AI

In this axis, personalities and entities that have made significant contributions to AI are explored, including pioneers, women in the field, innovative companies, and researchers responsible for major advances. Examples of related domains include trends within AI, AI history, and public perception.

AICET’s integrity is guaranteed by its mission and five core values

Independence

Fully vendor-neutral, offering objective competency assessment.

Adaptability

The question database is continually revised by experts to reflect the state of the art.

Comprehensiveness

Covers the full spectrum of AI knowledge, from theory to ethical dimensions.

Reliability

Built on rigorous principles to ensure every test is fair, reproducible, and non-discriminatory.

Inclusivity

Committed to making assessment accessible, with accommodations available for all candidates.

A collaborative initiative supported by leaders in technology, academia, and standardisation

AICET was developed by an open working group of individuals and organisations committed to establishing a trusted benchmark for AI skills. This diverse coalition of experts ensures the standard is comprehensive, practical, and aligned with real industry needs.

Key Contributors

Standardisation and Coordination Partners

The standard is developed under the coordination of AFNOR and aligned with the European CEN-CENELEC framework, guaranteeing a rigorous and internationally recognised process.

Turn your employees’ AI competency into a performance driver

Discover how our micro-learning platform can transform your approach to training and accelerate your regulatory compliance.