Canada: Investing in the Future: Navigating the Risks and Rewards in AI-Related Transactions
Following the release of ChatGPT in late 2022, the significant developments in artificial intelligence (“AI”) over the last several years – and in generative AI in particular – have caught the attention of investors and business leaders throughout Canada. In Ontario alone, approximately $1.16 billion in AI-related venture capital investments were made over the past year, with 312 companies investing in the Ontario AI ecosystem and 27 new AI companies established in the province. In addition, there has been a 23% increase in AI research and development spending or budgets, with 68% of chief executive officers reporting adopting AI solutions and 248 new AI-related patents filed across Canada.1
This increased investment activity in the AI industry has been driven primarily by generative AI; however, the scope of potential AI applications is much broader and, like other general-purpose technologies, AI will be implemented across virtually every industry over time. As with every new technology, AI’s potential benefits come with new and complex risks. Where a potential transaction involves AI assets, organizations should conduct detailed due diligence to ensure they are able to unlock the potential value of these assets and avoid or account for the associated risks.
This article summarizes issues that will be consistent among AI-related transactions. We note, however, that each transaction will have its own unique set of issues to be considered in the due diligence process. Organizations looking to acquire an AI company or a company with AI assets should consider their particular circumstances – for example, their intended use of the AI assets or industry-specific regulatory concerns – as well as the unique characteristics of the AI company or assets.
Overview of AI
AI is a broad term that encompasses efforts to create machines capable of “intelligent” behavior – machines that can “think” and act like humans and make decisions based on data. Early implementations relied on rule-based systems; however, these systems were limited by their creators’ ability to craft a comprehensive set of rules. More recently, machine learning (“ML”) algorithms have advanced to learn from data rather than following explicitly programmed instructions. As computational power and the quantity and quality of datasets increased, researchers were able to implement more complex algorithms, enabling the creation of artificial neural networks (“neural networks”).
Neural networks are composed of layers of interconnected nodes that perform specific computations. Like neurons in a brain, each node receives inputs from other nodes, performs its respective computation and produces an output that is sent on to one or more other nodes. The information passed between nodes is given specific weights or biases (“parameters”), much like the varying strength of synapses in a brain. These parameters are adjusted throughout the ML process to improve the neural network’s performance, with larger models like GPT-4 reportedly having roughly 1.7 trillion parameters.2
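For illustration only, the computation performed by a single node can be sketched in a few lines of Python; the inputs, weights, bias and activation function below are assumptions chosen for brevity and do not correspond to any particular model:

```python
import numpy as np

# Illustrative sketch of a single neural network "node": it combines the
# inputs it receives from other nodes using its weights and bias (the
# "parameters") and passes the result through a simple activation function.
def node_output(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    weighted_sum = float(np.dot(inputs, weights)) + bias  # weigh each incoming signal
    return max(0.0, weighted_sum)                         # ReLU activation before the output is sent onward

# Example: three inputs from upstream nodes, with illustrative parameter values.
print(node_output(np.array([0.2, 0.5, 0.1]), np.array([0.7, -0.3, 1.2]), bias=0.05))
```

Training a model is, in essence, the process of repeatedly adjusting these weights and biases across many such nodes.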
A milestone in the neural network research that provided the foundation for today’s leading models was reached in 2012 by a University of Toronto team focused on improving image recognition. The team used “deep learning” – neural networks with multiple layers3 – to predict what an image was most likely to be across all of the labels in its dataset, rather than relying on a set of assumptions and baked-in knowledge about vision and images, as was the case with classical computer vision methods. Under this formulation, deep learning models train on large datasets by adjusting their parameters to minimize the difference between the model’s predictions and the true labels in the dataset. Subsequent research showed deep learning models could be trained using vast quantities of unlabeled data and then refined using labeled datasets for multiple downstream tasks in a process called transfer learning (such models at scale are called “Foundation Models”).
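As a simplified illustration of this training process (and not a description of any particular production system), the following Python sketch trains a small multi-layer network on randomly generated stand-in data; the dataset, network size, number of training steps and learning rate are all assumptions chosen for brevity:

```python
import torch
from torch import nn

# Stand-in "dataset": random inputs paired with one of ten true labels.
torch.manual_seed(0)
inputs = torch.randn(256, 64)
labels = torch.randint(0, 10, (256,))

# A small multi-layer ("deep") network.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
loss_fn = nn.CrossEntropyLoss()                       # measures the gap between predictions and true labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)             # how far the predictions are from the true labels
    loss.backward()                                   # compute how each parameter should change
    optimizer.step()                                  # adjust the parameters to reduce that difference
```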
Notably, these Foundation Models, on which most of today’s leading AI applications are built, do not necessarily “understand” what a user is asking them to do. Rather, they take the user’s input and, using the techniques above, predict the output that is most likely, among an array of all possible outputs, to be the “correct” answer. Accordingly, today’s leading AI models are fundamentally predictive technologies.
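A simplified sketch of that predictive step is set out below; the “vocabulary” of possible outputs and the raw scores are invented purely for illustration:

```python
import numpy as np

# Illustrative sketch of the predictive step: the model scores every possible
# output, converts the scores into a probability distribution, and returns
# the single most likely output as its "answer".
vocabulary = ["contract", "tort", "statute", "regulation"]      # made-up set of possible outputs
raw_scores = np.array([2.1, 0.3, 1.2, 0.5])                     # what a model might emit for one prompt

probabilities = np.exp(raw_scores) / np.exp(raw_scores).sum()   # softmax: scores -> probabilities
most_likely = vocabulary[int(np.argmax(probabilities))]         # the most probable output is the prediction
print(most_likely, probabilities)
```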
The potential ways to leverage the predictive power of AI extend well beyond generative AI models. Foundation Models can be leveraged in virtually any process where there is data of sufficient quality and quantity, powering innovative solutions in applications such as drug discovery, medical imaging, quality control, automation, data analysis, fraud detection, predictive analytics and more.
While it is virtually impossible to list all potential applications of AI, there are a handful of key factors that will be consistent across use cases. Important focal points for an organization’s due diligence process should include:
- Data Sets: The value of an AI application that uses ML depends largely on the quality of the data used to train and operate it. The data used by the model will have downstream implications, both in terms of its functionality (e.g., biases in the data sets) and its viability (e.g., intellectual property infringements).4
- Transparency: One of the thematic issues in the AI industry is that while researchers and developers generally know the technology works, they do not always know why it works. This is particularly true for Foundation Models, which demonstrate emergent behavior: complex, often unexpected behavior or phenomena that arise from the interaction of components within a model. This “black box” or “explainability” problem will likely lead to complex problems around liability, whether contractual (e.g., performance levels and indemnity obligations in commercial agreements), regulatory or otherwise.
- Applications: The uses or potential uses of an AI application will also be important to consider, as different uses give rise to different legal risks. This is reflected in the various regulatory approaches being considered in Canada (focusing on “high-impact” AI systems) and the European Union (focusing on unacceptable and high-risk applications), where legislators are creating as much flexibility as possible to account for the general-purpose nature of the technology.
Due Diligence Considerations
While the three general factors above should be taken into consideration throughout the due diligence process, set out below are a handful of specific issues organizations should also examine. We note this is neither an exhaustive nor definitive list of issues and, as with any other transaction, each AI-related transaction should be considered in light of its particular circumstances.
Regulatory and Compliance
AI has the potential to be one of the most impactful technological advances in generations. At the same time, it may prove to be one of the most dangerous technological advances since the nuclear bomb. Many prominent individuals and leading figures in the AI industry, including Mustafa Suleyman, co-founder of Google’s DeepMind, believe containing AI is an existential challenge, much as nuclear non-proliferation was during the Cold War era. They are not alone – governments around the world are scrambling to address the potential risks posed by AI. Accordingly, the AI regulatory environment – both existing and proposed – should be reviewed and considered in light of the target company’s use of AI technology.
Existing Regulation
Canada’s existing regulatory framework applies to a number of potential uses of AI. For example, the Personal Information Protection and Electronic Documents Act (“PIPEDA”), Canada’s anti-spam law (“CASL”) and other regulatory frameworks for consumer protection, human rights, and criminal law may apply to the target’s development and use of its AI product, including the Canada Consumer Product Safety Act, the Food and Drugs Act, the Motor Vehicle Safety Act, the Bank Act, the Canadian Human Rights Act (and provincial human rights laws), and the Criminal Code. Which regulatory frameworks may apply depends on the particular use of the target’s AI product.
Bill C-27 – The Artificial Intelligence and Data Act and the Consumer Privacy Protection Act
Although existing laws apply to AI products, they do not comprehensively cover the broad range of novel risks posed by the technology. To address these gaps, countries around the world are working to enact AI-specific legislation. In Canada, Bill C-27, which was tabled in June 2022, includes a number of proposed acts and consequential amendments that would apply to AI companies based or operating in Canada, including the Artificial Intelligence and Data Act (“AIDA”) and the Consumer Privacy Protection Act (“CPPA”).
AIDA is a skeletal piece of legislation that leaves most substantive points to be addressed by regulation. This structure was chosen to allow for flexibility and responsiveness; however, the result is substantial regulatory uncertainty. AIDA will focus on regulating “high-impact AI systems” and prohibiting reckless and malicious uses of AI. The criteria for high-impact systems will be defined in regulation, but the following have been identified as key factors in the determination:
- evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
- the severity of potential harms;
- the scale of use;
- the nature of harms or adverse impacts that have already taken place;
- the extent to which, for practical or legal reasons, it is not reasonably possible to opt out of the system;
- imbalances of economic or social circumstances, or age of impacted persons; and
- the degree to which the risks are adequately regulated under another law.
The target company’s AI application should be reviewed in light of these factors to determine its potential regulatory exposure.
In addition to AIDA, the CPPA brings significant changes to PIPEDA. For AI companies that handle personal information – particularly sensitive personal information, like biometric or health data – these changes may have substantial implications. For example, the CPPA will strengthen consent requirements and impose more onerous financial penalties for contraventions.
Data Rights and Privacy
The diligence review should confirm whether the target has the necessary rights to use the data its AI application leverages – either for training or in operation. This would include reviewing any associated licenses as well as ensuring the target has obtained the necessary consents for the use of any data that constitutes personal information under PIPEDA. Ensuring the target has appropriate data rights applies both to the data used to train its AI model and to the data generated by the use of the model – for example, the prompts entered into the AI model and the associated outputs, which may be valuable for the future refinement of the AI application.
The target’s data security policies and measures should also be considered. For example, the target’s cybersecurity history, including past incidents of hacks, breaches or vulnerabilities, should be confirmed, external testing reports should be reviewed, and any vulnerabilities should be considered in light of the potential liabilities involved.
Reliance on Third Party Supply
Given the substantial costs associated with developing a proprietary AI model, many companies rely on third party components in their AI applications (e.g., the Foundation Models discussed above). This could be through a license or through an open source model. Software licenses, open source licenses, and other matters affecting the target’s access to these components should be reviewed carefully. The risks noted in this article – like the regulatory risks noted above – and their application to any key suppliers of the AI components should similarly be considered in the diligence review.
Where the company uses third party service providers, consideration should also be given to the confidentiality protections afforded to the information that is shared, to ensure both that the company’s confidential information and trade secrets are adequately protected and that the company is in compliance with its own obligations regarding confidential information in its possession.
Intellectual Property Rights
The target’s intellectual property (“IP”) rights should be reviewed to determine: (i) ownership of the IP required to operate the AI application, and (ii) IP rights associated with the application’s output, where applicable. With respect to the former, ownership of the AI algorithms, software and related technologies should be confirmed, including whether all or part of the target’s AI application is developed in-house or utilizes third party software. To the extent the AI solution is developed in-house, the diligence review should include the employment agreements with the individuals who worked on the product to ensure the relevant IP rights were properly assigned to the company. Where part or all of the AI solution is licensed, the terms of the relevant licenses should be reviewed to confirm there are no issues regarding, for example, potential infringements arising from the company’s use of the technology.
With respect to the latter, if the company intends to create value by using the AI application for content creation, complications regarding the IP protection available for that content should be considered. For example, copyright protection may not be available where the subject matter does not contain a sufficient amount of human input. Companies should note that the Government of Canada is closely considering this matter and launched a consultation process on October 12, 2023 to consider, among other things, (i) the use of copyright-protected works in the training of AI systems, (ii) authorship and ownership rights related to AI-generated content, and (iii) liability, especially when AI-generated content could infringe existing copyright-protected works.
These developments should also be considered with respect to the patentability of AI-generated inventions, which may likewise not be patentable without substantive human input.
Material Contracts
AI models may be subject to “drift” – a phenomenon where model performance decreases over time and with use. AI models may also produce “hallucinations”, where they generate confident responses that are not justified by their training data. Accordingly, an AI application’s performance may generally decrease over time or without consistent supervised training. The target’s service agreements with its customers, however, may include warranties or guaranties regarding performance levels. The target may also have indemnity obligations or other liability under its material contracts for damages caused by performance issues in the AI model. A review should therefore be conducted of potential liabilities arising from the AI model’s errors, biases or other malfunctions.
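By way of illustration only, the following Python sketch shows how a model’s declining performance might be tracked against a contractually warranted level; the threshold, monitoring window and monthly accuracy figures are hypothetical:

```python
from statistics import mean

# Hypothetical contractual performance threshold (e.g., a warranted accuracy
# level in a customer service agreement).
CONTRACTUAL_THRESHOLD = 0.95

# Hypothetical observed accuracy over successive months, drifting downward.
monthly_accuracy = [0.97, 0.96, 0.96, 0.94, 0.93, 0.91]

def breaches_threshold(history: list[float], window: int = 3) -> bool:
    """Flag potential drift when recent average performance falls below the warranted level."""
    recent = history[-window:]
    return mean(recent) < CONTRACTUAL_THRESHOLD

print(breaches_threshold(monthly_accuracy))  # True -> potential exposure under performance warranties
```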
Public Grants / Funding
A substantial amount of research and development in the AI industry is funded through public grants or other public sources of financing. Some of this funding may come with stipulations or requirements that may be implicated by the investment in or acquisition of the target. Any arrangements should be reviewed to ensure there are no concerns regarding, e.g., ownership of the target’s IP resulting from any such public funding arrangements.
Looking Forward
The above summary of due diligence considerations in AI-related transactions is not exhaustive. As AI continues to evolve and become more deeply embedded in various industries, the specific issues and risks associated with AI transactions will also continue to evolve. Furthermore, the broad range of AI applications and potential uses make it difficult to establish a “one-size-fits-all” due diligence checklist.
Investing in or acquiring AI assets requires a thorough understanding of both the technical and regulatory landscapes. While AI offers enormous potential for value creation, it also comes with a unique set of risks that can materially impact the value of an AI company or its assets. In light of the growing interest in AI investments and the increasing complexity of the technology, organizations should be proactive in their due diligence efforts. This will involve not only understanding the specifics of AI technology and its applications, but also remaining vigilant about the evolving regulatory landscape and potential risks.
Goodmans’ technology practice offers unparalleled expert advice to allow our clients to capitalize on the benefits of new technologies like AI while remaining abreast of the complex laws and evolving regulations that govern it. Our lawyers deliver innovative and sophisticated solutions to meet the challenges of this new and evolving technology in the areas of corporate and securities laws, venture financing, investment funds, and tax law for clients across a range of industries.
As AI radically changes business as we know it, Goodmans remains ahead of the pack in this new technology space. For additional information on these issues and opportunities, please contact any member of our Technology Group.