Protecting Trust in the Age of AI-Generated Images

Author

Angel Liang

Editor

Keevyn Hirschfield

Publications Lead

Gianluca Mandarino


A convincing photo used to mean something. In public communications, journalism, advertising, and political discourse, images are often treated as evidence: not always definitive proof, but a strong indication that an event – a moment – really happened. AI image generation weakens that signal at scale. When anyone can generate a realistic image in seconds, the policy problem becomes the erosion of public trust in the institutions that rely on visual credibility.

Canada has begun to address this challenge in how it governs generative AI inside the federal public service. The Government of Canada’s Guide on the Use of Generative Artificial Intelligence defines generative AI as a type of AI that produces content, including images, based on a user prompt, and stresses that institutions must evaluate risks, limit use to situations where risks can be managed, and align with responsible-use principles. Yet the broader ecosystem that shapes public trust, including political campaigns, social media platforms, advertisers, and everyday users, still operates with fewer guardrails, leaving significant gaps in how synthetic images are created and shared.

AI image generation has become an ethical governance issue now because it combines low barriers to misuse with regulatory lag, while shifting the costs of harm onto artists, targeted individuals, and the public’s ability to distinguish real from synthetic content. Parliamentary research in Canada, including Deep Fakes: What Can Be Done About Synthetic Audio and Video, has warned that synthetic imagery can undermine democratic integrity and cause reputational harm. Canada should respond with a targeted framework that emphasizes transparency (labeling and watermarking), rights protection (privacy and intellectual property), and risk-based deployment rules, starting with public sector standards that can set a wider norm.

Copyright, Consent, and Data Provenance

One of the most contested ethical issues in AI image generation is how models are trained. Most large image generation systems are trained on massive datasets that include photographs, illustrations, and artwork scraped from the internet. In many cases, the original creators were not asked for permission and were not compensated.

Canada has formally recognized this legal and regulatory uncertainty. Innovation, Science and Economic Development Canada launched a consultation on how copyright law should apply to artificial intelligence systems, including questions about training data and authorship. This consultation highlights a central tension: existing copyright frameworks were designed for human creators, not for systems that learn patterns from billions of images.

The issue is not only legal compliance; it is also economic asymmetry. AI platforms can generate commercial-quality images in seconds, reducing demand for commissioned artists and stock photography, while creators whose work helped train these systems may receive no credit or compensation. The U.S. Copyright Office, in its 2025 report on copyright and artificial intelligence, similarly notes unresolved questions about authorship, ownership, and infringement in generative outputs.

In addition to copyright concerns, data privacy raises further ethical questions. Generative image systems may incorporate photographs or original works of individuals, meaning personal data can be absorbed into models without those individuals’ knowledge or control. The Office of the Privacy Commissioner of Canada has emphasized that generative AI systems must respect transparency, accountability, and meaningful consent principles. Without clearer standards, individuals have little ability to challenge or remove the use of their likeness in commercial AI systems.

Transparency is another challenge. Most image-generation companies do not publicly disclose the full contents of their training datasets. Without clear documentation of what data was used, it is difficult for artists to know whether their work was included, or for institutions to assess infringement risk. This lack of dataset transparency has been at the center of high-profile legal disputes, including litigation initiated by Getty Images over alleged unauthorized use of its image library in model training.

If Canada aims to support both innovation and creative industries, it will need clearer standards for training-data provenance, licensing models, and institutional due diligence. Ongoing legal uncertainty currently places both creators and downstream users at risk. Public sector procurement could require vendors to document data sources at a high level, take contractual responsibility for copyright violations, and clarify ownership of generated outputs. These measures would not eliminate all disputes, but they would reduce uncertainty and help rebalance the relationship between platforms and creators.

Misinformation, Deepfakes, and Democratic Risk

AI image generation tools make it easy to create realistic but false visuals. A fabricated photo of a public figure, a staged event, or a crisis scene can spread quickly online, often before it can be verified or corrected. Canada’s Library of Parliament has warned that synthetic media can be used to mislead voters and distort public debate.

Federal cybersecurity authorities have similarly noted that advances in AI are increasing the sophistication and accessibility of disinformation techniques, including manipulated and synthetic media. The Canadian Centre for Cyber Security warns that rapidly evolving AI tools are lowering the technical barriers to creating false content, particularly during election periods and politically sensitive moments. When realistic but fabricated images circulate online, the risk is not only reputational harm but confusion about what constitutes reliable evidence. In politically sensitive contexts, even brief uncertainty can distort public debate, reinforcing the need for clear labeling in high-risk public communications.

What Canada Should Do: A Targeted Governance Package for AI Image Generation

Canada does not need a sweeping ban on AI image tools. It needs targeted, use-based rules that focus on high-risk contexts and clarify responsibility.

1) Mandatory disclosure in public-interest contexts

Where AI-generated images are used in political advertising, government communications, or other public-facing materials, audiences should be clearly informed.

Policy option:

  • Require clear labeling of AI-generated images in political campaigns and government communications.

  • Standardize disclosure language so notices are visible and understandable, not buried in fine print.

This approach addresses democratic risk without restricting legitimate creative or commercial uses.

2) Clear copyright and data transparency safeguards

Innovation policy should not shift legal and economic risk entirely onto creators or downstream users.

Policy option:

  • Require vendors to provide high-level documentation about where training data came from.

  • Develop licensing frameworks or opt-out mechanisms that allow creators to control whether their work is included in future training datasets.

  • Clarify ownership and permitted use of generated outputs in public sector contracts.

These measures would not halt innovation, but they would signal that creative works have value and that the benefits of AI image generation should not come at the unchecked expense of artists.

3) Clear markers on government-generated images

Policy institutions should set the baseline for responsible deployment.

Policy option:

  • Require any AI-generated image used in government communications to include a visible marker showing it was created using AI.

  • Ensure those markers cannot be easily removed or hidden.

  • Develop tools to identify when an image was generated rather than captured by a camera or created by a human.

When the government uses AI-generated images, the public should not have to guess. Clear markers reduce confusion and help prevent synthetic visuals from being mistaken for real-world evidence.

Conclusion

AI image generation tools are not automatically harmful, and they can support creativity and economic growth. However, when realistic images can be created instantly, it is no longer safe to assume that every photo reflects something that truly happened. Canada does not need to ban these tools, but it does need clearer rules. Clear labeling of AI-generated images in political and government contexts can reduce confusion. This is especially important during federal elections that engage millions of voters and shape national policy direction. Stronger safeguards around training data can better protect artists whose work may be used to build these systems. Canada’s culture sector contributed $63.2 billion to Canada’s GDP in 2023, representing about 2.3% of the total economy’s GDP. Public institutions should model responsible use by making synthetic images easy to identify. This can help prevent disputes and set clearer expectations for the broader market. Together, these steps would support innovation while protecting trust in public institutions and creative work in an environment where images increasingly shape public understanding.
