Six Ways An Organization Can Benefit from an Internal Generative AI Use Policy

Bennett Jones LLP

Generative Artificial Intelligence (GenAI) tools can be incredibly beneficial for businesses, enhancing productivity by streamlining administrative tasks, reducing redundancy, automating processes and improving data analysis. However, these powerful tools also introduce significant risks, particularly related to how employees use them. While AI-specific legislation may not yet exist in Canada, it is important to recognize that other legal frameworks—such as data protection regulations, intellectual property laws and industry-specific legislation and standards—indirectly govern AI use. Implementing an internal GenAI use policy not only helps businesses manage these risks and comply with current legal obligations, but also prepares them for future AI-specific regulations.

Factors to Consider

AI-related risks for businesses can originate from internal sources, particularly from employees who misuse or misunderstand AI tools. These risks can arise from insufficient training, lack of oversight or the absence of clear internal policies. Below are key factors organizations should address to better manage the risks associated with employee usage of GenAI.

1. Unreliability

One of the most significant internal risks is the potential unreliability of AI-generated outputs. GenAI systems are trained on vast datasets that often contain both accurate and inaccurate information. AI tools are largely unable to differentiate between the two and may therefore generate inaccurate results. For example, GenAI can produce false but convincing information, commonly known as "hallucinations," which can have serious repercussions where the business or its clients rely on them.1 Since businesses can be held liable for the information generated by the AI tools they use to provide services to their customers and the public,2 it is critical that employees verify AI-generated content before acting on it or sharing it with clients. An internal policy should emphasize the need for human oversight and validation to prevent costly errors.

2. Data Privacy & Confidentiality

Even without specific Canadian AI legislation, existing federal and provincial data protection laws may still apply and indirectly govern how AI can be used within organizations. If employees input personal or confidential information into AI tools—especially publicly available models—that data could be used to train the AI and reappear in future outputs, compromising its confidentiality. Unauthorized data usage or disclosure can lead to severe legal consequences and damage the organization's reputation. Your internal policy must define clear guidelines for handling personal and confidential information when using AI, ensuring compliance with existing data protection laws and preventing privacy breaches.

3. Bias

AI systems trained on large datasets can reflect and amplify existing biases, leading to discriminatory outcomes if not carefully managed. Internally, employees using AI-generated insights for decision-making—such as in recruitment, performance evaluations or customer service—may inadvertently perpetuate these biases, resulting in unfair or discriminatory practices. While AI-specific regulations may not yet exist, businesses are still subject to anti-discrimination laws that indirectly govern AI use. An internal AI policy should include training for employees to recognize and mitigate bias in AI-generated content, ensuring that AI tools support fair and equitable decision-making.

4. Intellectual Property

When employees use GenAI to create content, they may unintentionally infringe existing intellectual property rights if the AI-generated outputs are based on third-party data, including publicly available data. This could lead to legal disputes, particularly if the AI-generated content holds commercial value.

The other issue for businesses to consider is that Canadian law does not currently allocate intellectual property rights in works generated by AI. Discourse on the topic is active, however, and more conclusive guidance is likely forthcoming: the Federal Court of Canada is currently being asked to declare that only humans can be considered authors under the Canadian Copyright Act. For now, businesses should not assume they own IP rights in works created solely by GenAI. An internal AI policy should include guidelines for reviewing AI-generated content to ensure it does not violate intellectual property rights and complies with current legal frameworks.

5. Malicious Use / Cybersecurity

Employees can also contribute to cybersecurity risks through the misuse of GenAI. AI tools can be exploited to create "deepfakes" or other forms of synthetic content that mislead or manipulate others, exposing businesses to phishing attacks and data breaches. Malicious use of AI-generated content can also violate existing laws related to fraud and data security, even in the absence of AI-specific regulations. An internal policy should address the potential for misuse, provide clear security protocols, and train employees to recognize and mitigate these risks before they escalate.

6. ESG Considerations

The AI revolution is driving a surge in demand for data centre power,3 and businesses need to consider the environmental impact of the AI tools they use. Employees might inadvertently select AI providers that do not align with the organization's environmental, social and governance (ESG) goals, undermining sustainability efforts. An internal AI policy can guide employees in choosing AI providers with strong ESG targets, such as those operating net-zero data centres or maintaining transparent supply chains. This ensures that AI usage aligns with the company's broader ESG commitments and meets industry-specific sustainability standards.

Creating an AI Policy to Manage Risks from Employee Usage

Establishing a comprehensive workplace policy for AI use is essential to managing the internal risks associated with employee usage and ensuring compliance with the existing legal frameworks discussed above. A well-crafted internal policy will guide employees in the responsible and ethical use of AI, helping protect your organization from potential legal, financial and reputational risks.

We are grateful for the assistance of Alexia Armstrong, student-at-law, in connection with the preparation of this article.

1 See, for example, Zhang v. Chen, 2024 BCSC 285.

2 See Moffatt v. Air Canada, 2024 BCCRT 149 at para 27.

3 https://www.goldmansachs.com/insights/articles/AI-poised-to-drive-160-increase-in-power-demand

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Bennett Jones LLP
