Employers Beware: Managing AI in a Regulatory Desert

Contributed by: Athina Pantazopoulos



With reports that employers across Canada and the US are turning to Artificial Intelligence (AI) to replace significant portions of their workforce, many employees are concerned that no job is safe. However, short-term savings may come back to bite employers as the AI legal landscape continues to develop.


While AI might seem like an easy way to improve workplace efficiency, the vastly underregulated landscape has created significant pitfalls that employers need to be aware of before bringing AI into the workplace.


Replacing Human Labour


“I was replaced by a machine” has been a complaint of labourers since at least the Industrial Revolution. While machines have improved the safety and efficiency of many workplaces, a recent study indicates that to successfully integrate AI and other forms of automation into the workplace, humans and machines will need to work together.


While AI is very successful at identifying patterns in predictable environments, it is not as good at identifying “exceptions” or operating creatively. AI is capable of producing generative content, like visual art and writing, but the US Copyright Office and the US District Court for the District of Columbia have both ruled that content produced by AI is not copyrightable. While Canada has not yet rendered a similar decision, discussion in the legal world suggests that a version of this problem may arise under Canadian law. The Canadian Copyright Act does not currently contain any provisions prohibiting AI from acting as an “author” or “maker”; however, the prevailing sentiment, reflected in the US decisions above, is that copyright law exists to protect human creators and does not contemplate the legal rights of non-humans. It remains to be seen what level of human involvement will be required to render AI-generated work copyrightable.


Additionally, the development and training of AI to perform creative tasks continues to be challenged through legal action. In Canada, artist Amel Chamandy brought a claim for $40,000 in damages for copyright infringement after an AI art installation, trained on a database that included her original pieces, produced a work that replicated one of hers with 85.81% accuracy. While this case has not gone to court as of 2024, similar cases are appearing with alarming frequency in the USA and beyond. In 2023 alone, lawsuits against AI companies, including class actions, were brought by The New York Times, the Authors Guild, and a number of independent authors, alleging that their copyrighted works were scanned and used to train AI programs without permission, in violation of their copyrights.


In a number of environments that have adopted AI successfully, human intuition, interaction, and expertise are still used to account for adaptive and creative problem solving. Dr. Muhammad Mamdani of Unity Health Toronto recently said in an interview with CBC News: "AI is not going to replace clinicians, but clinicians who use AI are going to replace clinicians who don't use AI."


What are the implications of reducing the human element of your workforce?

Be prepared to assess which elements of your workforce are highly “susceptible to automation,” or well suited for AI to perform, and which are not. The long-term cost of reducing human capital where AI is unable to perform may appear as increased rates of error, the cost of maintaining and upgrading AI software and hardware, and the cost of terminating workers only to rehire them when AI systems fail.


Liability and Harm


Just as an employer can be vicariously liable for the conduct of its employees, businesses are likely to be held responsible for harmful conduct by the AI systems they deploy.


There is very little Canadian litigation on when private companies should be held liable for the harm that AI systems cause to consumers. However, employers should be aware of where AI decision making has the potential to affect the legal rights and interests of individuals. This may arise in areas of constitutional importance, such as human rights violations, or in common torts such as negligence.

Additionally, employers should be on the lookout for new standards regulating not only the design, development, and use of AI systems, but also the responsibility to address and mitigate harms. These developments are on the horizon with the proposed federal Artificial Intelligence and Data Act under Bill C-27. While much of the Act is broad, leaving specific standards to be set by regulation, the Government of Canada anticipates that this proposed law would take effect in some form by 2025.


Studies have demonstrated that AI is consistently biased and widely perpetuates existing power structures and systemic discrimination. Because of a lack of regulatory oversight, there are no legal frameworks in Canada that set standards for AI training, leading to the circulation of low-quality or biased training data. Scholars and Canadian courts have challenged the use of AI in the legal profession for exactly this reason (among others). Where the outcome of work performed by an AI has significant impacts on the rights, freedoms, and wellbeing of individuals, companies have a legal and ethical obligation to ensure that bias is not introduced into the results. Private corporations should take care that they are not allowing AI to perpetuate discrimination in violation of human rights legislation.


To avoid discrimination and human rights violations, AI bias should be carefully monitored through human oversight. An example has arisen from criticism of the Government of Canada's use of AI in the screening of immigration applications. So far, allegations of discrimination and lack of procedural fairness have not succeeded in Federal Court because AI is not left with final decision-making power. In 2023, Justice Henry S. Brown ruled in Haghshenas v Canada (Citizenship and Immigration) that while the immigration decision in question was made with input from AI software, the decision was ultimately made by a visa officer, and that decision was reasonable.


One area deserving employers' specific attention is the potential for discrimination in hiring. In 2023, the US Equal Employment Opportunity Commission brought a claim against iTutorGroup Inc. for persistent age discrimination resulting from the use of AI hiring software. Similarly, in Ontario, Bill 149 proposes legislation that would require employers to disclose when they are using AI in their hiring process to screen, assess, or select applicants.


Looking outside the human rights context, corporations should be aware of the potential for AI use to result in negligence or other torts. In the 2024 case of Moffatt v Air Canada, the BC Civil Resolution Tribunal found Air Canada liable for negligent misrepresentations made by its customer service chatbot and ordered it to pay $812.02 in damages resulting from the error. While this is a minor financial burden for the company, it demonstrates that Canadian courts and tribunals are willing to hold corporations accountable for their reliance on AI, which could have major implications for, for example, companies that rely on AI investment-management software or AI to screen insurance applications.


Be prepared to assess how your AI programs are trained, and whether developers took steps to address and mitigate bias and faulty data. Because regulation of AI development and training is in its infancy, companies should expect legal requirements to be unpredictable and to change rapidly in the coming years. In addition to ensuring that AI does not violate any legal rights, companies will have to ensure that any AI systems implemented in their workforce continue to comply with evolving standards.


Privacy and Information Security


Private corporations have a legal obligation under privacy legislation, such as the federal Personal Information Protection and Electronic Documents Act or Alberta's Personal Information Protection Act, to take reasonable measures to protect the personal information of consumers and employees. There may also be proprietary information that such companies want to protect. Employers should be aware of the vulnerabilities that AI may introduce into a company's information security framework before deciding whether or not to take the risk.


Research in the areas of cybersecurity and data protection has revealed that AI, particularly open-source AI (where the code of the AI is made publicly available), presents a number of avenues for private information to be leaked, corrupted, or stolen.

The Canadian Centre for Cyber Security outlines a number of common threats to AI tools, including risks specific to the use of generative AI. These include the faster and more efficient theft of corporate data and the harvesting of personally identifiable information to impersonate employees.


Even before a data breach occurs, companies are responsible for ensuring that private data is managed responsibly by their AI systems. For example, a recent investigation report by the Office of the Information and Privacy Commissioner for British Columbia found Canadian Tire Corporation to be in violation of privacy legislation because its stores scanned shoppers using facial recognition technology without notification, consent, or a demonstrated “reasonable purpose” for collecting or using this personal information.

Be prepared to assess whether your AI systems are collecting or accessing data that is privileged or private. Consider also what security measures are in place to prevent third parties from poisoning data sets, extracting sensitive information from training data, or accessing past queries and prompts. The Canadian Centre for Cyber Security also recommends establishing AI usage policies with oversight and review processes to ensure the technology is used appropriately.


Final Considerations


If you are thinking about incorporating AI into your workplace, a good first step would be to consider the following questions:

  1. What role is this AI performing? 

  2. How was this AI trained?

  3. Who has access to this AI’s output?


Working through the answers to these questions will be crucial in the coming years as more and more machine intelligence becomes incorporated into our workplaces, businesses, and fields.

