The Artificial Intelligence and Data Act is expected to regulate AI at the federal level in Canada, but provincial legislatures have yet to introduce AI-specific laws.
Laws/Regulations directly regulating AI (the “AI Regulations”)
Canada is seeking to regulate AI at the federal level, through the Artificial Intelligence and Data Act (AIDA), which forms part of Bill C-27, a bill that also includes the Consumer Privacy Protection Act (an update to the current federal privacy law) and the Personal Information and Data Protection Tribunal Act.1
At the provincial level, legislatures have yet to introduce laws that directly regulate AI. However, the Innovation Council of Quebec, a body mandated by the government of Quebec, recently recommended that the provincial legislature adopt a law specifically regulating AI.2
Status of the AI Regulations
AIDA, which was introduced in June 2022, successfully passed its second reading and was referred to the Standing Committee on Industry and Technology ("the Committee") on April 24, 2023.3 The pace of the Bill's review by the Committee has, however, been slow; it remains unclear whether the review will be completed and the Bill passed before the next federal election, which is set to occur no later than October 20, 2025. If the Bill is not adopted by Parliament before its dissolution, it will die on the order paper and will need to be re-introduced (whether in its current form or not) by the next government.
The original version of AIDA contained limited substantive content, as it left most key elements of the legal regime to be set out at a later date in regulations (including the main compliance obligations and the definition of "high-impact systems," which are the primary focus of the Bill). The Minister of Innovation, Science and Industry, François-Philippe Champagne, subsequently presented the Committee with proposed amendments on November 28, 2023 ("the Proposed Amendments"), which are substantial and address some of the concerns regarding the first iteration of AIDA.4 The Proposed Amendments have not yet been officially adopted.
It is unclear when AIDA will come into effect; it has yet to be voted out of committee, and there is some doubt whether it will pass before October 2025 (the deadline to hold a federal election).5 Others have called for AIDA to be stripped out of Bill C-27 and overhauled.6
On November 12, 2024, the Government of Canada announced the launch of the Canadian Artificial Intelligence Safety Institute (CAISI),7 a new initiative funded out of a broader CAD $2.4 billion investment in AI announced as part of the 2024 federal budget. Of this amount, CAD $2 billion is dedicated to launching and supporting the Canadian AI Sovereign Compute Strategy and a new AI Compute Access Fund. These initiatives seek to enhance domestic advanced compute infrastructure in order to support Canada's AI industry and researchers. More specifically, the federal government intends to invest CAD $700 million in the Canadian AI ecosystem to encourage the development of national AI compute capacity, CAD $1 billion to build public supercomputing infrastructure, and CAD $300 million to support the purchase of AI compute resources by companies.
With an initial budget of CAD $50 million over five years, CAISI will collaborate with the National Research Council of Canada to advance research focused on key government concerns (such as cybersecurity and international AI safety), and with CIFAR to advance research on initiatives pertaining to critical AI safety and responsible deployment. As ISED Minister François-Philippe Champagne announced in the wake of the CAISI unveiling, the initiative will "propel Canada to the forefront of global efforts to use AI responsibly, and will be a key player in building public trust in these technologies."8 The launch of CAISI is a key component of Canada's broader AI strategy, which also includes the proposed Artificial Intelligence and Data Act and the Voluntary Code of Conduct on AI, and which aims to foster safe AI development and build public trust in AI technologies.
Other laws affecting AI
There are various laws that do not directly seek to regulate AI, but that may affect the development or use of AI in Canada, whether federal, provincial or territorial. A non-exhaustive list of related laws affecting AI includes:
- At the federal level, the Privacy Act and the Personal Information Protection and Electronic Documents Act (PIPEDA).9
- At the provincial level, the Personal Information Protection Act, SA 2003 (in Alberta); the Personal Information Protection Act (in British Columbia); and the Act Respecting the Protection of Personal Information in the Private Sector (in Quebec). Amendments to the Quebec Act that entered into force in September 2023 now regulate automated decision-making based on the processing of personal information: they require disclosure that such processing has occurred and the provision of information about the reasons and factors that led to a decision. This new obligation is inspired by Article 22 of the GDPR.
- Intellectual property laws may affect several aspects of AI development and use.
- Competition law may influence the structure of the AI market in Canada. In March 2024, the Competition Bureau of Canada launched a call for comments on the intersection of AI and competition law.
- Although no specific changes have been proposed at this time, the Innovation Council of Quebec has recommended amending Quebec labor law to account for the impacts of AI.10
- Existing human rights and anti-discrimination laws (including the Canadian Human Rights Act) regulate discriminatory outcomes and practices, such as those that may result from biased outputs and decisions from AI systems.
Definition of “AI”
Further to the Proposed Amendments, AIDA currently adopts the following definitions:
- "Artificial intelligence system" means a "system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions".11
- "General-purpose system" means an "artificial intelligence system that is designed for use, or that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes and activities not contemplated during the system's development".12
- "Machine-learning model" means a "digital representation of patterns identified in data through the automated processing of the data using an algorithm designed to enable the recognition or replication of those patterns".13
The Proposed Amendments clarify that an AI system may be a general-purpose system and a high-risk system at the same time.14
Territorial scope
AIDA has a wide territorial scope. Per the Proposed Amendments, Part 1 of AIDA applies in respect of: (i) AI systems or machine learning models made available in the course of international or interprovincial trade and commerce (cross-border trade); and (ii) the management of the operations of AI systems used in the course of cross-border trade.15
If a province were to adopt its own AI law, it remains unclear to what extent AIDA would apply to activities inside that province. Under the Canadian Constitution, the federal and provincial governments each have specific fields of competence in which they have authority to legislate. AI regulation may follow the same path as privacy law: where provinces (like Quebec, Alberta and British Columbia) have adopted their own privacy laws, PIPEDA does not apply within their fields of competence (but may still regulate sectors under federal jurisdiction, like banking), whereas in provinces with no general private-sector privacy law (like Ontario), PIPEDA fully applies.
Sectoral scope
While "high-impact systems" (see below) are defined by reference to their potential use in certain classes (some of which relate to sectors), AIDA does not generally adopt a sector-specific approach. Instead, AIDA imposes different obligations on certain persons depending on: (i) the type of AI system they are involved with (i.e., general-purpose, machine-learning models, or high-impact systems); and (ii) their position along the AI value chain.16
Compliance roles
As per the Proposed Amendments, obligations are tailored to each organization's role in the AI value chain. Specifically, the following persons have obligations under AIDA:
- In relation to general-purpose AI systems17 and high-impact systems18 – persons who manage such systems; persons who make such systems available; and persons who make such systems available for the first time in the course of cross-border trade.
- In relation to machine-learning models19 – persons who first make a machine-learning model available, for incorporation into a high-impact system, in the course of cross-border trade.
Core issues that the AI Regulations seek to address
AIDA's stated purposes are: (i) to regulate cross-border trade in AI systems by establishing common requirements applicable across Canada, for the design, development and use of those systems; and (ii) to prohibit certain conduct in relation to AI systems that may result in serious harm to individuals or harm to their interests (in particular, biased outputs).20
Risk categorization
If an AI system's intended use(s) falls within one of seven classes identified in the Proposed Amendments, that AI system will be considered to be a "high-impact system."21 Specifically, an AI system will be considered "high-impact" if it is used:
- To determine employment matters (i.e., to determine employment, recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination).
- In matters relating to service access, specifically, AI systems used: (i) to determine whether to provide services to an individual, or the type or cost of services to be provided to an individual; or (ii) to prioritize services to be provided to individuals.
- To process biometric information in matters relating to: (i) the identification of an individual, except where such information is processed with the individual's consent to authenticate their identity; or (ii) the assessment of an individual's behavior or state of mind.
- In matters relating to content moderation and prioritization, specifically, AI systems used in: (i) the moderation of content that is found on an online communications platform, including a search engine or social media service; or (ii) the prioritization of the presentation of such content.
- In matters relating to healthcare or emergency services, with certain exceptions under the Food and Drugs Act.
- By a court or administrative body in determinations relating to an individual who is a party to proceedings.
- To assist a peace officer, as defined in the Criminal Code, in the exercise and performance of their law enforcement powers, duties, and functions.22
Key compliance requirements
AIDA adopts a detailed approach to compliance requirements.23 Some of these requirements (such as those relating to risk management measures) are specific to particular roles in the AI value chain. For example:
- Persons who first make a machine-learning model available for incorporation into a high-impact system, in the course of cross-border trade (i.e., developers), must establish measures to identify, assess and mitigate against the risks of biased output before the model is made available.24
- In contrast, persons who first make high-impact systems available must ensure that risk mitigation measures are in place and have been tested.25 Those who manage high-impact systems are responsible for actually conducting tests to establish the mitigation measures' effectiveness.26
Other AIDA requirements are tailored to the distinct challenges and objectives at each point in the AI value chain:
- Persons who first make a machine-learning model available for incorporation into a high-impact system, in the course of cross-border trade, must ensure that measures respecting the data used in developing the model have been established in accordance with regulations.27
- In contrast, high-impact system operators must establish measures that allow for users to provide feedback on the system's performance.28
One of the key obligations introduced by the Proposed Amendments is the establishment and maintenance of written accountability frameworks by persons who make a general-purpose system or high-impact system available, or who manage the operations of such systems. Written accountability frameworks shall include (among other things): (i) a description of the roles, responsibilities, and reporting structure for all personnel who contribute to making the system available or managing its operations; (ii) policies and procedures respecting the management of risks relating to the system and the data used by the system; and (iii) anything else required by regulation.29
Regulators
Under the Proposed Amendments, the Artificial Intelligence and Data Commissioner ("the Commissioner"), designated by the minister responsible for the Act, has central responsibility for enforcing AIDA. If the Commissioner is absent or incapacitated, or if no Commissioner is designated, the relevant minister will fulfil the role of the Commissioner.30
Enforcement powers and penalties
The Commissioner has the power to:
- Compel the provision of an accountability framework, and provide guidance or recommendations as to corrective measures that need to be taken in relation to that framework.31
- Compel a person who makes available or manages any AI system to provide an assessment as to whether the AI system is one of those subject to AIDA.32
- Conduct an audit, require any person to conduct an audit, or require any person to engage the services of an independent auditor if the Commissioner has reasonable grounds to believe that a person has contravened or is likely to contravene certain sections of AIDA.33
- Disclose and receive information to and from other regulators, including (among others) the Privacy Commissioner, the Canadian Human Rights Commission, the Commissioner of Competition, and the Financial Consumer Agency of Canada.34
The penalties available under AIDA were unchanged by the Proposed Amendments, although further revisions will likely be necessary.35 Currently, contravention of sections 6 through 12 and certain other offences are punishable as follows:36
- On a conviction on indictment, by: (i) a fine of not more than the greater of CAD 10 million and 3 percent of gross global revenues in the preceding financial year for companies; and (ii) a fine at the discretion of the court, for individuals.
- On a summary conviction, by: (i) a fine of not more than the greater of CAD 5 million and 2 percent of gross global revenues in the preceding financial year for companies; and (ii) a fine of not more than CAD 50,000 for individuals.
General offenses37 are punishable as follows:38
- On conviction on indictment, by: (i) a fine of not more than the greater of CAD 25 million and 5 percent of gross global revenues in the preceding financial year for companies; and (ii) a fine at the discretion of the court, or a term of imprisonment of up to five years less a day, or both, in the case of an individual.
- On summary conviction, by: (i) a fine of not more than the greater of CAD 20 million and 4 percent of gross global revenues in the preceding financial year for companies; and (ii) a fine of not more than CAD 100,000 or to a term of imprisonment of up to two years less a day, or both, in the case of an individual.
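The corporate fine caps above all follow the same "greater of a fixed amount and a percentage of gross global revenues" formula. The following is an illustrative sketch of that arithmetic only (the function name and the revenue figure are hypothetical, and this is not legal guidance):

```python
def max_fine_cap(fixed_cap_cad: float, revenue_pct: float,
                 gross_global_revenue_cad: float) -> float:
    """Return the greater of the fixed cap and the revenue-based cap,
    mirroring the 'greater of' wording in the penalty provisions."""
    return max(fixed_cap_cad, revenue_pct * gross_global_revenue_cad)

# Hypothetical company with CAD 500 million in gross global revenues
# in the preceding financial year.
revenue = 500_000_000

# Conviction on indictment (sections 6-12): greater of CAD 10M and 3%.
indictment_cap = max_fine_cap(10_000_000, 0.03, revenue)  # 3% of 500M = 15M

# Summary conviction (sections 6-12): greater of CAD 5M and 2%.
summary_cap = max_fine_cap(5_000_000, 0.02, revenue)      # 2% of 500M = 10M

print(indictment_cap, summary_cap)
```

For a smaller company, the fixed amount dominates: at CAD 100 million in revenues, 3 percent is CAD 3 million, so the indictable-offence cap remains CAD 10 million.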
1 See here.
2 See here (in French).
3 See here.
4 See here.
5 See here and here.
6 See here.
7 See government press release here.
8 Ibid.
9 Note that if Bill C-27 is adopted, PIPEDA will be replaced by the Consumer Privacy Protection Act and the Electronic Documents Act.
10 See here (in French).
11 See the Proposed Amendments, p.17.
12 See the Proposed Amendments, p.19.
13 See the Proposed Amendments, p.19.
14 See the Proposed Amendments, p.19: "For greater certainty, an artificial intelligence system may be a general-purpose system and a high-impact system at the same time".
15 See the section entitled "Application", on page 19 of the Proposed Amendments.
16 See the Proposed Amendments, pp.7-8. The Minister explains that the concept of "persons responsible" has been replaced with "distinct obligations based on each organization's role with regard to the system."
17 See the Proposed Amendments, p.20, Section 7(1) (persons who make a general-purpose system available in the course of cross-border trade for the first time) and page 21, sections 8(1) (persons who make a general-purpose system available) and 8.2 (persons managing the operations of a general-purpose system).
18 See the Proposed Amendments, p.23, Sections 10(1) (persons who make a high-impact system available in the course of cross-border trade for the first time) and 10.1(1) (persons who make a high-impact system available), and page 24, section 11(1) (persons who manage operations of a high-impact system).
19 See the Proposed Amendments, p. 22, Sections 9(1) and 9(3) (person who makes a machine learning model available for incorporation into high-impact system).
20 See AIDA, Section 4. This remains unchanged by the Proposed Amendments.
21 "High-impact system" is a concept originally introduced in AIDA. The definition was amended by the Proposed Amendments, p.17. The schedule is available at p.38
22 See the Proposed Amendments, p.38.
23 As noted above, obligations vary depending on: (i) the type of AI system that persons are involved with; and (ii) their position along the AI value chain.
24 See the Proposed Amendments, p.22, Sections 9(1), and 9(1)(b).
25 See the Proposed Amendments, p.23, Sections 10(1)(b)-(c).
26 See the Proposed Amendments, p.24, Section 11(1)(c).
27 See the Proposed Amendments, p. 22, Section 9(1)(a).
28 See the Proposed Amendments, p.24, Section 11(1)(e).
29 See the Proposed Amendments, p.25, Sections 12(1)-(5).
30 See the Proposed Amendments, p.32 ("Administration and enforcement" and "Absence, incapacity or no designation").
31 See the Proposed Amendments, p.26, Sections 13(1) and (2).
32 See the Proposed Amendments, p.26, Section 14(1).
33 See the Proposed Amendments, p.26, Section 15(1).
34 See the Proposed Amendments, p.31, paragraphs 26(1)(a)-(h).
35 The Proposed Amendments did not change the sections of AIDA related to fines. Under AIDA, certain fines were issuable for certain acts related to "regulated activities" (a defined term; see Section 30(2) of AIDA). However, the Proposed Amendments have removed the concept of "regulated activities" (see p.10 of the Proposed Amendments). It appears that AIDA's provisions related to fines may need to be revised to account for the fact that the concept of "regulated activities" no longer exists.
36 See AIDA, Sections 30(3)(a) and (b).
37 Under AIDA, Part 2, Sections 38 and 39, general offences are committed if a person:
(i) for the purpose of designing, developing, using or making available for use an AI system, possesses – within the meaning of subsection 4(3) of the Criminal Code – or uses personal information, knowing or believing that the information is obtained or derived, directly or indirectly, as a result of (a) the commission in Canada of an offence under an Act of Parliament or a provincial legislature; or (b) an act or omission anywhere that, if it had occurred in Canada, would have constituted such an offence; or
(ii) (a) without lawful excuse and knowing that or being reckless as to whether the use of an AI system is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual's property, makes the AI system available for use and the use causes such harm or damage; or (b) with intent to defraud the public and to cause substantial economic loss to an individual, makes an AI system available for use and its use causes that loss.
38 See AIDA, Sections 40(a) and (b).
McCarthy Tétrault contributors
Charles Morgan, Christine Ing and Francis Langlois.