As artificial intelligence (AI) regulation takes shape globally, public companies face the challenge of navigating this evolving landscape. In Canada, securities regulators are actively deliberating on how best to regulate AI, while in the US, companies are already making AI-related disclosures. In October 2023, the Ontario Securities Commission (OSC) published a report detailing the use cases of AI and the associated risks within Ontario’s capital markets. In its Statement of Priorities issued in November 2023, the OSC acknowledged the immense potential of AI but also emphasized “the potential for misuse in deceptive practices and fraud.” To thrive in this environment, companies should prioritize AI governance, educate their boards on AI, and thoughtfully consider the impact of AI technologies on their disclosure.

The AI landscape

Given the transformative potential of AI, it is becoming increasingly important that companies engage in robust AI governance. Many companies are already adapting to the rise of AI technologies. For example, in a 2023 study of board practices published by Deloitte, 36 percent of companies reported that they were considering adopting an AI use framework, AI policy or policies, or AI code of conduct. Additionally, 42 percent of companies reported that they were considering revising their corporate policies relating to cyber and privacy risks to address the use of AI.

In addition to companies’ increased use of AI, governments around the world have made advances toward regulating AI and addressing the risks posed by this technology. A notable instance in Canada is Bill C-27, whose proposed Artificial Intelligence and Data Act would, if enacted, require private sector companies to disclose how AI is being used within their organizations. The extent of the required disclosure depends on the AI system’s classification, with “high-impact” AI systems requiring the most detailed disclosure.

Moreover, on March 21, 2024, ISS published a report (the ISS Report) analyzing board oversight and directors’ skills in relation to AI. The ISS Report found that approximately 15 percent of S&P 500 companies include at least some disclosure of board oversight of AI in their proxy statements. This disclosure was concentrated in the Information Technology sector, where 38 percent of companies disclosed a degree of board oversight or expertise, followed by Health Care, where 18 percent of companies provided such disclosure.

AI governance

AI governance refers to the strategies companies can implement to address and reduce the risks posed by AI. This might include a variety of practices, primarily directed toward improving board oversight of operations that utilize AI. Several important forms of AI governance include: (1) establishing an AI governance framework; (2) ensuring sufficient board education on AI; and (3) conducting regular audits of AI usage.

(1) AI governance framework

By implementing an AI governance framework, boards can ensure their organizations take a cohesive approach to adopting AI tools and mitigating risk. Establishing a framework also enables companies to better prepare for new or evolving laws, regulations, and societal norms. This preparedness may reduce the risk of future scrutiny associated with the use of these technologies. The Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology provides an example of what an AI governance framework can look like for an organization developing AI. Part I of the Framework outlines the characteristics of “trustworthy AI systems,” and Part II outlines four specific functions organizations can implement to oversee the development of AI: govern, map, measure, and manage.

(2) Board expertise

Sufficient education for board members will help equip them with the tools needed to make meaningful governance decisions about AI usage. Board education can be achieved in several ways. For example, a board can seek AI training and education by bringing in external presenters to speak generally about AI. Alternatively, a board may wish to consult with subject matter experts on an ad hoc basis regarding specific issues that involve AI. A board’s approach can vary greatly based on the degree to which AI is integrated into the operations of the business.

The ISS Report notes that only 1.6 percent of S&P 500 companies provide explicit disclosure of AI oversight by the full board or a committee. When AI oversight is delegated to a committee, the scope of an existing committee is typically expanded to cover AI risks and opportunities rather than a new committee being created. For example, certain companies have recently added technology-related risks, including cybersecurity and AI-related risks, to their audit committees’ risk oversight responsibilities.

(3) Continuous monitoring

By continuously monitoring internal AI systems, companies can ensure ongoing compliance with AI regulations. Through a combination of an AI governance framework and sufficient board expertise, as well as regular assessments of AI implementation and usage, companies can encourage accountability and ensure that they are putting strategies into practice that minimize the risks posed by AI.

Canadian companies navigating the AI landscape – a warning against AI washing

“AI washing” is a term used to characterize the practice of companies making false, misleading, or exaggerated statements about how they are either using or developing AI systems.

Consequences of AI washing are already playing out in the US. In March 2024, the US Securities and Exchange Commission charged investment advisers Delphia (USA) Inc. (Delphia) and Global Predictions Inc. (Global Predictions) with making deceptive claims about their AI expertise. Specifically, Delphia was charged with exaggerating its use of AI in various public disclosure documents, including press releases, filings, and its website. Global Predictions, for its part, claimed to be the “first regulated AI financial advisor” and advertised that its platform provided “AI-driven forecasts.” Both firms agreed to settle the charges, resulting in combined civil penalties of US$400,000.

With this landscape unfolding in the US, Canadian companies should consider how to protect against claims of AI washing, especially as the prevalence of AI and other related technology continues to grow.

What to keep in mind

Boards that implement strategies to formalize and address the uses of AI will not only benefit by minimizing the risks associated with AI-related technologies, but will also be well prepared to respond to future requirements that might be imposed by securities regulators. The OSC has signalled in its 2023 report and Statement of Priorities that it intends to investigate AI regulation further. Additionally, in international markets, securities regulators have publicly remarked on the possibility of mandated AI-related disclosure in the future. For the 2024 proxy season and beyond, public companies will need to prioritize AI governance in order to craft effective, responsive strategies for engaging with AI technologies.

For additional guidance and best practices on AI in capital markets, visit our publication on this topic here.

The author would like to thank Julia Kafato and Melissa Paglialunga for their significant contribution to this article.