Responsible Use of Artificial Intelligence Directive
This directive sets out the requirements for the transparent, responsible and accountable use of artificial intelligence (AI) by the Government of Ontario.
1 Introduction
Artificial intelligence (AI) technologies provide considerable opportunities to advance innovation, improve service delivery, and save time and money for the people of Ontario. When used by the Government of Ontario, these technologies must be used in a way that protects the people, businesses and data of the province, preserves public trust, and responsibly assesses and manages risk.
While risk is inherent to all technology, AI technologies are distinct in their potential to operate autonomously and make decisions. As a result, it can be difficult to determine how certain AI systems arrived at a given output, which reduces transparency and can make it challenging to identify errors or dispute decisions. AI may also exacerbate existing biases and stereotypes.[1]
To ensure responsible, accountable and transparent use of AI in the Government of Ontario, ministries and provincial agencies must take a consistent, centralized approach to AI governance, share a common understanding of the risks posed by AI, and consistently apply rules to manage those risks at every stage of the AI lifecycle, from design through development, procurement, deployment and operation to decommissioning. Managing these risks allows the Government of Ontario to realize the benefits and opportunities offered by AI while promoting a culture of shared responsibility for AI systems.
2 Purpose
The Responsible Use of Artificial Intelligence Directive (the “Directive”) sets out the requirements for the transparent, responsible and accountable use of AI.
3 Application and scope
The Directive applies to all Ontario ministries and provincial agencies. This Directive requires the application of AI risk management by ministries and provincial agencies that are seeking to use AI systems, or use services that include AI functionality (including procured, ministry/provincial agency developed and publicly available tools), as part of the development or delivery of, or decision-making for, a Government of Ontario policy, program, or service (referred to as a “use case”).
Infrastructure Technology Services or the appropriate Information and Information Technology (I&IT) Cluster, as the case may be, is accountable for applying AI risk management to enterprise Information Technology (IT) resources or systems that incorporate AI components and are consumed by ministries (or provincial agencies, as applicable).
Ministry and provincial agency executives are accountable for ensuring AI risk management of any IT resources and systems that may be built or used in the delivery of ministry-specific and provincial agency-specific programs and services, with the support of their I&IT Cluster, as appropriate.
Operational policies and guidance may provide additional support for the implementation of the Directive’s requirements, including how those requirements apply to AI use cases established prior to the Directive’s effective date.
4 Administration
The Directive is a Management Board of Cabinet Directive, issued under the Management Board of Cabinet Act and effective December 1, 2024.
The Secretary of Treasury Board (TB) and Management Board of Cabinet (MBC) has authority to issue operational policies that are consistent with this Directive.
The Ministry of Public and Business Service Delivery and Procurement (MPBSDP) is responsible for the Directive and will generally be responsible for any operational policies. Any other program considering drafting an operational policy under this Directive must consult MPBSDP.
Ministries and provincial agencies must seek MBC approval if, in exceptional circumstances, they require an exemption from all or part of this Directive. Program areas must consult MPBSDP as they prepare an exemption request, and the rationale for the exemption must be documented in a business case submitted for MBC approval.
5 Responsible use of AI principles
The following six principles support the application of this Directive. The principles are meant to be applied in alignment with existing legislation, including the ethical framework established under the Public Service of Ontario Act, 2006, and help inform decision-making when considering the use of AI systems (including procured, ministry- or provincial-agency-developed, and publicly available systems).
5.1 AI is used to benefit the people of Ontario
The people interacting with the AI system, and those affected by its outcomes, are considered when exploring potential AI use. The unique and diverse needs of users of government programs and services that leverage AI, and those affected by the outcomes of AI use, are accounted for in the design, operation and interpretation of outcomes. The tremendous benefits that can be realized by use of AI must be shared with the people of Ontario, while also ensuring that direct and indirect risks to the people of Ontario are mitigated and balanced with the benefits.
5.2 AI use is justified and proportionate, and AI systems used are reliable and valid
AI is only used where it serves a well-defined purpose, and the scope of AI use is proportionate to the problem it is trying to solve. Use follows a problem-first, rather than technology-first, approach. Once deployed, the AI system is reliable and valid – i.e., it works as intended and expected throughout its lifecycle.
5.3 AI is used in a safe, secure and privacy protective way
Data privacy and security are maintained in a way that protects personal and sensitive information and minimizes potential risks and negative impacts, as per Ontario privacy legislation and internal sensitivity policies. Any use or collection of personal or sensitive data is proportionate and reasonable, accounting for the potential benefit to the people of Ontario.
5.4 AI use is human rights affirming and non-discriminatory
AI is used in ways that respect and protect equity, human rights and fundamental freedoms and ensure fairness consistent with applicable legislation, including the Canadian Charter of Rights and Freedoms and the Ontario Human Rights Code. Community-informed context, including an understanding of potential discriminatory outcomes and their mitigations, as well as inclusive design, are the foundations for determining if and how AI is used.
5.5 AI use is transparent and meaningful explanations of decisions are made available
Information is provided to the public and public servants about how AI is being used in a service or process, in a way that facilitates understanding of outcomes, consequences and benefits.
5.6 AI use is accountable and responsible
There is clear ongoing human oversight, accountability for, and maintenance of AI systems with a readily available process for the public and public servants to raise concerns about AI use.
6 Requirements
6.1 AI risk management
- ministries must engage in AI risk management when seeking to use AI systems for a Government of Ontario use case. AI risk management aligns with the Ontario Public Service (OPS) Risk Management Process
- to manage AI risk, ministries must:
- 1. State objectives and establish context.
- a. Document the problem AI use is intended to solve.
- b. Determine whether AI use is justified.
- 2. Identify risks.
- 3. Assess risks.
- a. Determine the risk level of the AI use to inform potential risk-based proportional controls.
- 4. Plan and take action.
- a. Identify and apply risk-based proportional controls.
- 5. Report and monitor.
- a. Monitor and update risk assessment to ensure proportional controls remain current and are re-applied as necessary.
- AI risk management must be validated through IT and digital governance processes and does not duplicate or replace existing assessments (for example, Privacy Impact Assessments and Threat Risk Assessments). Operational policy and guidance may provide further direction on AI risk management governance and approvals
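The five steps above can be sketched as a simple workflow. This is a minimal illustration only: the risk levels, severity scores and controls below are hypothetical assumptions, and the Directive itself does not prescribe a scoring scheme or a specific set of controls.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Directive's five-step AI risk management
# process. Risk levels, severities and controls are assumptions for the
# example, not requirements of the Directive.

@dataclass
class AIUseCase:
    name: str
    problem_statement: str   # Step 1a: the problem AI use is intended to solve
    ai_justified: bool       # Step 1b: whether AI use is justified
    risks: list = field(default_factory=list)  # Step 2: (description, severity 1-3)

def assess_risk_level(use_case: AIUseCase) -> str:
    """Step 3: derive an overall risk level from the highest-severity risk."""
    if not use_case.risks:
        return "low"
    worst = max(severity for _, severity in use_case.risks)
    return {1: "low", 2: "medium", 3: "high"}[worst]

def plan_controls(risk_level: str) -> list:
    """Step 4: identify risk-based proportional controls (illustrative only)."""
    controls = {
        "low": ["document use case"],
        "medium": ["document use case", "human review of outputs"],
        "high": ["document use case", "human review of outputs",
                 "senior approval", "ongoing monitoring plan"],
    }
    return controls[risk_level]

# Example walk-through of Steps 1-4; Step 5 (report and monitor) would
# re-run the assessment whenever identified risks change.
chatbot = AIUseCase(
    name="service chatbot",
    problem_statement="reduce call-centre wait times",
    ai_justified=True,
    risks=[("inaccurate answers", 2), ("privacy exposure", 3)],
)
level = assess_risk_level(chatbot)
controls = plan_controls(level)
```

Because controls are proportional to the assessed level, re-running the assessment in Step 5 naturally re-applies stronger or weaker controls as the risk picture changes.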
6.2 Disclosure and reporting
- ministries must report on AI use cases and AI risk management, including identified risks related to AI use, to MPBSDP annually
- ministries must coordinate with MPBSDP to publish a list of AI use cases
- if the public is interacting directly with a service that leverages AI (for example, a chatbot) or if AI is involved in decision-making directly affecting a member of the public (for example, determining eligibility for a government service or benefit), then ministries must also:
- publicly disclose AI use as part of the process, service, or program
- provide an accessible avenue for the public to seek information about the use of AI in a process, service, or program. This does not create a new avenue for seeking review of decisions – existing legislative avenues to appeal a decision or outcome of a process, service or program continue to apply
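To make the disclosure requirements concrete, a single entry in a published AI use-case list might carry fields like the following. The field names and the program name are hypothetical assumptions for illustration; the Directive does not prescribe a schema.

```python
# Hypothetical shape of one entry in a published AI use-case list.
# All field names and values are illustrative assumptions.
disclosure_entry = {
    "use_case": "eligibility screening assistant",
    "program": "example benefit program",  # hypothetical program name
    "ai_role": "supports decision-making directly affecting the public",
    "public_disclosure": True,  # AI use disclosed as part of the service
    "information_contact": "program enquiry channel",  # avenue to seek information
}
```

An entry like this would satisfy both bullets above: the use is disclosed as part of the program, and the contact field gives the public an avenue to seek information (without creating a new avenue for appealing decisions).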
6.3 Application to Provincial Agencies
Provincial agencies must:
- where using AI as part of the development or delivery of, or decision-making for, an agency-specific policy, program, or service, in alignment with Section 3 (Application and Scope), implement AI risk management in alignment with the principles under section 5 and requirements under section 6.1, including establishing approvals processes for AI use based on risk level
- keep records associated with AI risk management, including records of any risk assessments and risk-based proportional controls applied, in alignment with existing Records and Information Management and archiving requirements
- publish a list of AI use cases and report on AI use cases and AI risk management, including identified risks related to AI use, to their accountable ministry in alignment with requirements in the Agencies and Appointments Directive and Provincial Agencies Memorandum of Understanding, including any necessary supporting information and analysis
- if the public is interacting directly with a service that leverages AI (for example, a chatbot) or if AI is involved in decision-making directly affecting a member of the public (for example, determining eligibility for a government service or benefit), then:
- publicly disclose AI use as part of the process, service, or program
- provide an accessible avenue for the public to seek information about the use of AI in a process, service, or program. This does not create a new avenue for seeking review of decisions – existing legislative avenues to appeal a decision or outcome of a process, service or program continue to apply
7 Roles and responsibilities
The accountabilities detailed below attach to these entities or to successor organizations that inherit their mandates (through organizational restructuring or name changes).
7.1 Treasury Board and Management Board of Cabinet
- approve this Directive and any changes to it
- approve exemptions from this Directive in whole or in part through review of a submitted business case
- receive annual report from MPBSDP about AI risk management, including a list of AI use cases across the Government of Ontario and identified risks related to AI use
7.2 Secretary, Management Board of Cabinet
- approve updates, exemptions and operational policies pursuant to this Directive
- recommend updates to this Directive to Treasury Board/Management Board of Cabinet
7.3 Deputy Ministers and Provincial Agency Heads or equivalent
- ensure the Directive’s principles and requirements are implemented and monitored throughout their ministry and provincial agency, including putting in place processes that support the Directive
- ensure that provincial agencies for which Deputy Ministers are accountable are aware of the requirements of this Directive
- ensure that all persons covered by this Directive are aware of their responsibilities under this Directive
- in the case of provincial agencies, provincial agency heads or equivalent must also establish approvals processes for AI use based on risk level
7.4 Deputy Minister Committee on Service Delivery
- provide advice on strategic direction for AI adoption and risk management across the Government of Ontario
- issue communications and guidance based on emerging trends or issues in AI technology
7.5 Associate Deputy Minister, Policy, Archives, and Data (PAD)
- maintain this Directive and support its implementation
- develop and maintain strategic policy, as well as operational policies, standards, guidelines and best practices governing responsible use of AI and data pursuant to this Directive, as required
- conduct reviews of the Directive and supporting material every two years at a minimum, and recommend any changes
- work with the Corporate Chief Information Officer to raise awareness of and promote compliance with AI and information security policies, standards and guidelines across the Government of Ontario
- ensure alignment between AI use and data privacy protection and freedom of information requirements and policies, including the Freedom of Information and Protection of Privacy Act and the Personal Health Information Protection Act
- support education, training and implementation of Directive requirements
7.6 Office of the Chief Risk Officer
- oversee the OPS Enterprise Risk Management process, including reviewing and advising on ministry risk information and risk management practices
- work in cooperation with ministries and central agencies to ensure that risk information is available to MPBSDP
7.7 Chief Information Security Officer
- provide advice and guidance to ministries, provincial agencies and other partners to ensure that security considerations are incorporated into AI planning, procurement, deployment, use and ongoing management.
- identify appropriate security controls for AI and ensure that they are implemented in accordance with security policy and standards, security testing, and assessment or evaluation recommendations (or that residual risks are documented and accepted).
- ensure cyber risk management policies, standards, requirements and processes are incorporated into AI use and evaluate for continuous compliance.
- work with policy and operational partners (for example, Associate Deputy Minister, PAD and Corporate Chief Information Officer) to provide advice and guidance, create and raise AI security awareness and training to the enterprise.
- take action to block or stop any activities posing imminent security risk of significant impact, implement necessary security protocols, conduct thorough investigations into potential breaches, and collaborate with relevant authorities as needed.
7.8 Corporate Chief Information Officer
- develop and maintain technology-focused and implementation operational policies, standards, guidelines and best practices pursuant to this Directive, as required
- provide advice and guidance to ministries and provincial agencies on potential opportunities to derive value from AI, while abiding by the principles of this Directive
- adhere to the requirements of the AI Directive when operationalizing enterprise technology programs and services, in collaboration with ministry executives and provincial agency heads
- consult with the Deputy Minister Committee on Service Delivery on issues related to AI use, related cyber security concerns and emerging AI trends
- submit reporting on AI use cases, AI risk management and risks related to AI to Treasury Board Secretariat (TBS) annually
- receive and review additional risk information, including from the Office of the Chief Risk Officer, to support the monitoring and tracking of AI related risks
- work with the Associate Deputy Minister, PAD to raise awareness of and promote compliance with AI policies, guidelines and standards across the Government of Ontario
7.9 All ministry and provincial agency employees
Act in accordance with this Directive, as well as any other policies, guidance and standards that further define obligations relating to the responsible use of AI.
8 Definitions
For the purposes of this Directive, these terms have the following meaning:
Artificial intelligence (AI) system: An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (aligned with the Organisation for Economic Co-operation and Development (OECD), 2024).
AI lifecycle: The AI lifecycle encompasses the following phases that are not necessarily sequential: planning and design; collecting and processing data; building and using the model; verifying and validating; deployment; and operating and monitoring (OECD, 2019).
AI use case: A purposeful application of an AI system to a project or initiative (whether it be policy, program or service) to achieve a specific outcome.
Bias: A predisposition, prejudice or generalization about a group of persons based on personal characteristics or stereotypes (Ontario Human Rights Commission Glossary of Human Rights Terms, 2013).
Community: In the context of this Directive, community specifically references affected groups, including traditionally marginalized groups and people who are systemically excluded from decision-making, public institutions, basic services and meaningful participation in economic, political and social activity.
Discrimination: Treating someone unfairly by either imposing a burden on them, or denying them a privilege, benefit or opportunity enjoyed by others, because of their race, disability, sex or other personal characteristics identified under the Ontario Human Rights Code. Systemic discrimination refers to where institutional behaviour, policies, practices and procedures create or perpetuate inequality for groups identified under the Ontario Human Rights Code (Ontario Human Rights Commission, 2024).
Risk: The effect of uncertainty on objectives. It can be characterized as either a potential negative (threat) or positive (opportunity) consequence or event that deviates from an expected outcome (OPS Enterprise Risk Management Directive, 2020).
Risk management: A systematic approach to setting the best course of action under uncertainty by identifying, assessing, understanding, acting on, monitoring, and communicating risk issues (OPS Enterprise Risk Management Directive, 2020).
Validity: Confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled. Deploying AI systems that are inaccurate, unreliable, or poorly generalized to data and settings beyond their training creates and increases negative AI risks and reduces trustworthiness (National Institute of Standards and Technology, U.S. Department of Commerce, 2023).
Footnotes
- footnote[1] Bias and stereotypes in AI are well documented – refer to Bias in data-driven artificial intelligence systems – An introductory survey (Ntoutsi et al., 2020) for an overview of existing research.