Bipin Kumar Badri Narayanan

As artificial intelligence (AI) becomes more common in everyday life, from loan approvals and job applications to immigration decisions and healthcare, the risk that these systems cause harm is growing. People have already been unfairly denied benefits, misidentified by facial recognition tools, and targeted by biased algorithms. Yet when things go wrong, affected individuals have few avenues for redress or accountability.

Canada's current laws focus on regulating how AI systems are built, not on what happens after they are deployed. This proposal fills that gap by recommending new rules and institutions to protect people when AI causes harm.

It calls for clear liability rules that make it easier for consumers to file complaints and seek redress, a new AI Ombudsperson Office to investigate those complaints, and a national AI Incident Reporting Platform to track failures and help regulators respond.

These changes would position Canada as a global leader in responsible AI governance. They aim to build public trust by ensuring that when an AI system makes a mistake, there is a clear path to remedy it, protecting individuals, encouraging ethical innovation, and keeping technology working in the public interest.
