Building Trust in the Age of Algorithms: Transparency and the Future of AI Governance in Canada

Author: Ben Faust
Editor: Samyukta Srinivasan
Publications Lead: Gianluca Mandarino


Introduction


In an effort to transform the Canadian economy, Mark Carney and the Liberal Party placed artificial intelligence (AI) at the centre of their vision for growth. Since taking office, Carney has doubled down on that ambition, creating Canada’s first Ministry of Artificial Intelligence under the leadership of Evan Solomon. The government’s mandate letter outlines a clear goal: to embed AI across federal operations and decision-making. The government has been working to deploy AI at scale through departmental initiatives such as GCtranslate and CANchat, a memorandum of understanding (MOU) with Canadian-based Cohere, and a variety of other programs spanning transportation, forestry, and agriculture. Yet Canadians rank near the bottom globally in their trust of AI. If Canada is to embrace AI on this scale, transparency is essential: the government must show Canadians that the technology is being implemented responsibly. Hiding or obscuring AI use may alleviate concerns in the short run, but it will eventually erode trust in government and in institutions more broadly. This paper begins by discussing Canadians’ mistrust of AI, then offers a global comparison of attitudes toward the technology, then makes the argument for why transparency is necessary, and concludes with policy steps the Canadian government could take to ensure the transition to AI occurs in a transparent and trusted manner.


Drive for Adoption


While there are plans to increase the efficiency of government services and to address Canada’s productivity crisis through the adoption of AI across Canadian society, several challenges remain. Chief among them is Canadians’ low level of trust in the technology. Canadians’ confidence and comfort with AI rank near the bottom internationally. A recent study by KPMG and the University of Melbourne found that 46% of Canadians believe the risks of using AI outweigh the benefits, while 70% want federal-level regulation of AI. This creates a dilemma: the Canadian government and Canadian businesses want to embrace AI, but the public remains cautious about its social implications. Many Canadians fear job losses, misinformation, unreliable outputs, and ethical harms, with particular concern about AI’s use in sensitive areas such as immigration, criminal justice, and job applications. These fears are compounded by the fact that most current AI technology is based on large language models, which are “black boxes” whose answers are difficult to trace, despite efforts to make them interpretable through explainable AI (XAI) approaches such as decision trees. At the same time, headlines are increasingly dominated by stories of misuse and the gloomy future AI may hold for society, from mass unemployment and deepfakes to autonomous weapons and discrimination in AI-powered hiring. Stanford University and IPSOS found that in 2022, only 32% of Canadians believed the benefits of AI in products and services outweighed the drawbacks. While this figure rose to 40% in 2024, widespread adoption of AI within Canada remains lower than in other nations. For example, in the second quarter of 2025, Statistics Canada found that only 12.2% of businesses reported using AI to produce goods or services, although this was double the 6.1% reported in the second quarter of 2024.


Global Views on AI


This level of concern about AI is not consistent across countries. For example, the AI Index Report found that well over 70% of people surveyed in India, Indonesia, and China viewed AI use positively, while nations at a similar level of development to Canada, such as South Korea and Singapore, reported positive sentiment toward AI more than 60% of the time. The gap in perception may stem from several factors. First, higher levels of trust in AI lead to greater comfort with its use, as seen in the United States; however, research in Canada, particularly in high-risk areas like healthcare, suggests this is not always the case, with some studies finding higher usage among individuals with lower levels of AI knowledge. Second, in some countries optimism about AI’s potential benefits, such as economic growth, may outweigh concerns about its risks. Third, some countries tend to be more optimistic about new technologies overall, showing less concern about ethical issues and more confidence that companies will use AI responsibly. The reasons individuals decide to trust artificial intelligence are complex and multifaceted, and each country has its own demographics, economy, and history. Furthermore, other studies found that perceptions of AI’s positive and negative aspects were relatively similar across countries: one survey reported that 20% of respondents in Canada and 23% in Korea viewed AI as mostly good for society, contradicting other reports. Nonetheless, AI adoption continues to march forward, as seen in the national AI strategies being laid out by governments worldwide, backed by billions in subsidies and tax incentives to build the data centres needed to power ever-larger AI models.

 

Canada and other nations are approaching AI as a solution for increasing productivity and governmental efficiency. However, Canada is a democracy, and our approach to this technology should reflect the concerns of Canadians. The private sector and government continue to push for greater AI use and adoption, but this push does not necessarily mean the technology has the requisite social license behind it. Social license, the public’s ongoing acceptance of a practice or technology, is critical for several reasons: at its core, any democratic government needs to maintain the trust of its populace. Greater trust between institutions and citizens makes for a better society, but institutions must earn that trust. For those who want to drive faster AI adoption, social trust is a critical part of the equation. Canadian society has significant concerns about AI that need to be addressed, and both government and industry should be responsive to their employees and to those they serve as they implement AI. One key way to do this is through transparency.


The Case for Transparency


The Canadian government has already taken some positive steps, such as the Directive on Automated Decision-Making, which governs certain high-risk decisions made by government-deployed AI systems. The directive is important, but it does not cover all systems, nor does it ensure citizens know where and when AI is being used. The federal government is also reportedly working on a national registry of all uses of AI within government, which should be a priority so that Canadians can be informed. While it is important to acknowledge the competitive concerns that new government regulation may raise, Canadians have a significant desire for AI regulation and responsible AI; polls found that 92% of business leaders were also in favour of some form of regulation. A core element of any such regulation should be transparency.


Policy Recommendations and Steps Forward for Increasing Transparency and Public Trust in AI


1. Establish a federal requirement for public disclosure of governmental AI use.

a. The Canadian federal government already discloses some systems through its Directive on Automated Decision-Making and its Algorithmic Impact Assessment (AIA) tool. These instruments reveal some governmental use of AI but do not cover all of it.

b. The federal government is reported to be embarking on the development of a wider registry of AI system use within government. Completion of this is key for government transparency to the public.

2. Mandate that the Canadian non-governmental sector, both private companies and NGOs, inform the public when they are using AI systems.

a. Similar disclosure frameworks are already emerging from the directives that govern Canada’s lawyers and from European regulations.

b. Canadians interact with non-governmental entities that are looking to increase the use and adoption of AI. Canadians have a right to know when AI is in use.

3. Fund academic comparative research into why other countries, particularly in Asia, exhibit more positive attitudes toward AI.

a. The federal government should provide this funding.

b. It is unknown whether these positive attitudes are driven by greater public comfort, different legal or regulatory frameworks, cultural factors, or government measures that alleviate public concern, such as education. Once we understand the factors behind other nations’ more positive attitudes toward AI, it may be possible to apply similar measures in Canada. While it may take time and effort to help Canadians develop more positive views of AI, these steps are necessary as government and business continue to push AI deeper into Canadians’ everyday lives.

Given the Canadian public’s high level of concern about AI use, these measures are the least we can do if we are to continue pushing for greater AI adoption. Taking these steps toward transparency may help retain the societal trust in our institutions that will be essential as they look to increased AI use in pursuit of efficiency and productivity.

