Consultation: Ontario’s Trustworthy Artificial Intelligence (AI) Framework
Consultation status: Closed.
Thank you for sharing your ideas to improve the public’s trust in artificial intelligence (AI). Your feedback will be reviewed by the Ontario Digital Service as we work to create a government framework for AI that is accountable, safe and rights-based.
We will report on what we heard during the consultation in July 2021.
Building a digital economy that is powered by trustworthy AI is a key goal of the government’s Digital and Data Strategy. Already, our researchers and entrepreneurs are at the forefront of scientific research, unlocking new economic and societal benefits for the people of Ontario.
However, as seen around the world, the use of AI without proper rules or oversight has at times created division, uncertainty and harm. To protect Ontarians from these risks, Ontario is developing a trustworthy AI framework to support AI use that is accountable, safe and rights-based.
To start, we will create guidelines for the government’s use of AI. This framework, created with your help, will outline how the government will use AI responsibly to minimize misuse and maximize benefits for Ontarians.
Ontario’s AI framework will be developed following Open Government Partnership (OGP) principles to demonstrate the province’s commitment to transparency, accountability and working in the open with the people of Ontario. Ontario has been an active member of the OGP since 2016, and the public development of this AI framework and its resulting action items will form Ontario’s 2021–2022 OGP Action Plan.
We want to hear from you
As part of our initial consultation phase, we are asking you to share your ideas to improve the public’s trust in AI. Your experiences, concerns and insights will help the government make its use of AI accountable, safe and rights-based across Ontario.
To start the conversation, we are asking for feedback on potential actions under the following three commitments:
- No AI in secret
- AI use Ontarians can trust
- AI that serves all Ontarians
Your feedback on these actions is an integral part of co-creating a plan that will benefit people across Ontario. The final plan will include one to three specific actions under each commitment.
AI framework draft commitments and actions
Ontario is committed to government use of algorithms that is accountable, safe and rights-based.
Commitment 1: No AI in secret
Future success statement:
I know how the government is using algorithms to process my application quickly to get me the support I need and that I have a right to contest the decision if an error was made.
AI is used to inform decisions, big and small, using increasingly sophisticated methods and technologies. For people to trust that the use of AI is safe and appropriate, they must first be aware that it exists. As a result, the government needs to be transparent about how, when and why these tools are used, so that people can exercise their right to address potential biases created by AI algorithms.
The use of AI by the government is always transparent, fair, and equitable.
- Be fully transparent when using algorithms to interact with the public (e.g. rules to require that the public be informed if they are interacting with a machine or have decisions made about them by an algorithm)
- Create accountability for the use of AI in the government by giving people rights to address potential biases created by the AI (e.g. right to explainability, right to contest, and right to opt out)
- Provide clarity and transparency to the public on how Ontario collects data for use in algorithms (e.g. explore options to update provincial notices of collection to inform the public if data collected is used to develop algorithms for decision-making)
Commitment 2: AI use Ontarians can trust
Future success statement:
I can promote my traffic-prediction algorithm with confidence, knowing it meets the rules and keeps clients safe.
Protecting individual rights and ensuring safety requires rules and governance for AI. People building, procuring and using AI have a responsibility to the people of Ontario to ensure that AI never puts them at risk, and that proper guardrails are in place before the technology is used by the government. A risk-based approach lets Ontario’s AI leadership continue to advance while ensuring that potentially risky use cases are subject to the rules needed to keep people safe.
Risk-based rules are in place to guide the safe, equitable, and secure use of AI by government.
- Deliver recommendations on ways to update Ontario’s rules, laws and guidance to strengthen the governance of AI, including whether to adopt a risk-based approach to determine which rules apply and when.
- Assess whether to use an algorithmic assessment tool as a way to measure risk, security, and quality.
- Ensure processes are in place so that algorithms are continuously tested and evaluated for bias and risk, and to determine whether audits or human oversight controls are needed.
Commitment 3: AI that serves all Ontarians
Future success statement:
I know how to challenge or learn more about how AI was used to process my form and make a decision about my application.
As AI-powered technologies continue to develop, they offer new ways to deliver better products to Ontarians, including government services and programs. As adoption increases, it is vitally important that the use of AI does not reinforce existing structures of discrimination, expand harmful surveillance, or threaten personal privacy. It is therefore important to require high standards from vendors and to give people clear paths to challenge outcomes and improve the use of AI in decision-making.
Government use of AI reflects and protects the rights and values of Ontarians.
- Embed equity and inclusion in the use of data and digital tools by requiring organizations to take steps to mitigate potential harms (e.g. data set requirements, documentation requirements for traceability, accountability provisions).
- Engage with sector leaders and civil society to develop a standard for “trustworthy AI” and a process to certify that vendors are meeting the government’s standard.
- Assess whether the government should prohibit the use of AI in certain use cases where vulnerable populations are at extremely high risk.