Who is Responsible for Governing AI for Science and Engineering?

Published On: November 2022 | Categories: 2022 Conference editorials, Editorials

Author(s):

Teresa Scassa

University of Ottawa

Canada Research Chair in Information Law and Policy


Scientific inquiry and innovation are rapidly being transformed by artificial intelligence (AI) technologies. AI promises to solve some of the most difficult challenges facing humanity, from discovering new antibiotics that can overcome antimicrobial resistance to addressing critical climate change issues. Yet the impact of AI on science goes beyond using this powerful new technology as a tool; AI is poised to transform many other dimensions of scientific inquiry. This includes decision-making about which research avenues to pursue, which research projects to fund, and which researchers to support or hire. AI may also prove instrumental in the peer review of research and grant applications, thus shaping the career trajectories of scientists. AI technologies will pose new epistemological challenges for scientific research, as these sometimes-inscrutable technologies raise new issues of reproducibility and explainability. These are just some of the challenges identified in Leaps and Boundaries, the Council of Canadian Academies’ 2022 report on the legal, ethical, social and policy issues raised by AI in science and engineering.


Realizing the full potential of AI technologies in science and engineering means anticipating and addressing the challenges these technologies pose for our institutions and our values. How do we ensure research quality and integrity when complex and adaptable AI tools may operate in ways that are less than transparent? How do we reconcile the goals of open science with proprietary data and algorithms in a highly competitive commercialization context? How do we ensure that AI technologies used to assess, rank or evaluate researchers and their work do not replicate past systemic discrimination in a domain that is now striving to achieve equity, diversity and inclusion? And how do we build ethics into technologies with such power to transform how we live and interact with one another?

The risks of bias and discrimination in AI are real. These technologies will be only as good as the data that fuel them, and we have stores of data that have been amassed within systems that have consistently privileged some groups over others, creating lasting and systemic discrimination that we risk embedding within our ‘intelligent’ technologies. Although these biases are most evident with AI systems that are built upon human-derived data (such as personal health information), it should not be overlooked that bias and discrimination have also impacted past decisions about what questions are considered worthy of study, and from what angle or perspective they should be studied. Such practices shape the available data and may embed biases into AI in subtle ways that must be critically considered and addressed.

For the time being, it is easier to raise the questions that must be asked than to answer them, although considerable work is already underway in the policy sphere. Ethical frameworks for AI have proliferated, developed by public, private and not-for-profit actors. We are also beginning to see emerging forms of AI risk regulation, such as the EU’s proposed AI Act and Canada’s Artificial Intelligence and Data Act, part of Bill C-27, currently before Parliament. Yet, while these initiatives are important, they are not sufficient on their own. Addressing the legal, ethical, social and policy issues raised by AI is not a responsibility that can be delegated to a single actor, institution or piece of legislation. The governance of AI will require the engagement of national and regional governments, as well as international norm-setting bodies. It will require the development of ethical standards around which inclusive global consensus can be built. It will also demand engagement by research ethics bodies, granting agencies, professional associations, universities – and ultimately by scientists themselves.