AI and digital innovation need science too

Published On: December 2025
Categories: 2025 Editorial Series, Editorials

Author(s):

Monique Crichlow


The introduction of Prime Minister Mark Carney’s ministerial cabinet marks a significant milestone in Canadian policy on artificial intelligence (AI), with the appointment of a new Minister of AI. This should be a moment of cautious optimism—a signal that the federal government recognizes the profound importance of AI to our country’s future. It also reflects a natural progression in Canada’s deep and celebrated history as a pioneer in this transformative field. From being the first country to launch a national AI strategy to, more recently, University Professor Emeritus Geoffrey Hinton’s Nobel Prize, Canada has earned global recognition by fostering fundamental research, attracting top talent, and nurturing a community dedicated to advancing scientific inquiry.

Yet, one detail in the new ministerial lineup stands out: the conspicuous absence of "science" from the description of the ministerial focus. Instead, AI is being framed predominantly through the lens of industry, adoption, and economic growth. This early framing raises a crucial question: Are we at risk of treating AI narrowly as a tool for short-term economic gain while overlooking the scientific foundations that make this technology possible and meaningful?

The value of fundamental science

Canada’s gains in AI have been built on a foundation of fundamental research, the true engine behind today’s breakthroughs. The powerful machine learning models and transformative applications we see today stem directly from decades of patient scientific exploration and experimentation. Institutions like the University of Toronto, CIFAR, and the Vector Institute are central to this story, representing Canada’s long-standing commitment to open inquiry, scientific excellence, and curiosity-driven research.

AI is advancing rapidly and becoming increasingly integrated into the systems that shape society, from healthcare and education to communications infrastructure, cybersecurity, and the global economy. These technologies are powerful, fast-moving, and deeply interconnected. They offer significant opportunities to improve human life, but they also present complex risks, from reinforcing inequality and automating labour to weakening democratic institutions, with cascading effects that are difficult to anticipate.

Science can sometimes be provocative. Recently, the AI research community has been actively debating questions of safety and the future trajectory of AI's capabilities, including the potential for adverse consequences for humanity. These conversations are essential, and engaging with them meaningfully requires more than technical expertise; it requires interdisciplinary perspectives.

Institutes like U of T’s Schwartz Reisman Institute for Technology and Society are focused on building scientific consensus and strengthening society’s ability to assess emerging technologies, identify potential harms early, and support governance approaches that maximize public benefit while minimizing risks.

Perhaps most importantly, debates about AI’s risks and the future are not outliers; they reflect the way science has always grappled with uncertainty, sought consensus, and engaged in scenario planning. Questions about risk, restraint, and guardrails are necessary parts of science if we are to advance innovation and reap its benefits.

And yet, there are signs that science is being asked to play a diminished role in favour of accelerated adoption. The framing at international gatherings like the recent AI Action Summit in Paris, and the evolving mandate of the UK's AI Safety Institute, now renamed the UK AI Security Institute, reflect this shift in emphasis. The focus has increasingly turned toward acceleration, adoption, and advances in productivity. While these are important goals, they cannot be the only priorities.

A moment to lead

The upcoming ministerial mandate letters will provide clarity on the federal government’s priorities in the days ahead. We can harness AI for economic growth—but only if we remember that true innovation stems from curiosity, openness, and thoughtful governance. If we want AI that truly works for people, we must invest not only in technology and industry, but also in the science and societal structures that make AI possible and beneficial.

Let’s not mistake the impressive outputs of AI for the engine that drives it. That engine is science and the people who pursue it, question it, and ensure it serves the public good. That’s where Canada’s future in AI can truly flourish.

More on the Author(s)

Monique Crichlow

Schwartz Reisman Institute for Technology and Society at the University of Toronto

Executive Director