In recent years, the conversation about artificial intelligence (AI) has been dominated by two contrasting narratives. On the one hand, AI represents an exciting world of possibilities; on the other, it poses immense challenges to our institutions and fundamental values.
Both perspectives are equally valid.
It is undeniable that AI, a general-purpose technology (like electricity or the personal computer), will have significant economic and societal impacts. For instance, in the mining industry, AI can expedite the detection of potential mineral deposits (one company used it to predict the location of 86% of Abitibi’s gold deposits by examining data covering just 4% of the region). In agriculture, AI helps greenhouse farmers predict plant growth under varying conditions and optimize available resources (as a result, the Dutch, who have strongly embraced AI, use 15 times less water per kilogram of fruits and vegetables produced than Americans). In healthcare, AI already helps to cheaply screen diabetics for eye disease, to personalize patient treatments based on genomic profiles and to accelerate the discovery of new drugs. In education, AI paves the way to personalized learning, ensuring that each student’s particular needs are better catered to (thanks to AI, users of Duolingo, a language-learning app, learn better and are more engaged and challenged).
As a result of innovations such as these, Goldman Sachs estimates that in just 10 years, AI could add nearly CAD10 trillion to the world’s annual GDP (akin to adding two economies the size of Germany and France to the planet).
However, the rapid development and (so far, not so rapid) adoption of AI also come with concerns.
For example, many worry that generative AI tools like ChatGPT (for language production) or Midjourney (for image creation) could seriously disrupt our democracies by making synthetic propaganda easier and cheaper to produce and by helping malicious actors turbocharge disinformation campaigns. Others report that AI systems often infringe on human rights, for example by making biased decisions that lead to discriminatory practices (a recent study showed that an AI system used by the city of Rotterdam would flag a citizen as being at higher risk of committing welfare fraud simply for “being a parent, a woman, young, not fluent in Dutch, or struggling to find work”). In numerous industrial sectors, workers fear, often with good reason, that their employer might use AI to replace them (in 2022, 19% of American workers held jobs “that are the most exposed to AI,” such as budget analysts, technical writers, tax preparers and Web developers) or to monitor when they log on and off, whom they write to, what they say, even what their mood is (a practice that can harm their well-being: surveilled employees more often report feeling micromanaged or emotionally exhausted at work).
Faced with these challenges, global and local institutions are springing into action to minimize the negative consequences of AI’s development and deployment while maximizing its benefits.
Last summer, the UN Secretary-General emphasized, correctly, that when disruptive technologies emerge, international cooperation is essential. New international rules, treaties and global agencies will have to be created to address three realities: powerful AI systems are already available to the public (and to actors with malicious intentions); “unlike nuclear material and chemical and biological agents, AI tools can be moved around the world leaving very little trace”; and no major technology has ever been as concentrated in the hands of giant firms as AI is.
But there are limits to what international organizations like the UN can do, and individual states must also take charge of developing and deploying AI that is both reliable and safe for all. To support the responsible development and deployment of AI, countries like Canada will have to act on several fronts: (1) develop more robust and adequate legal frameworks (Canada’s yet-to-be-adopted Artificial Intelligence and Data Act, part of Bill C-27, is a step in the right direction); (2) conduct more intersectoral research on the effects of AI and, more importantly, on the major changes needed to turn AI into a powerful development lever; and (3) adopt new policies and programs (e.g. to support the adoption of AI by small and medium-sized enterprises (SMEs), protect workers who might lose their jobs to AI, or help students and teachers learn new ways of doing things). Additionally, Canada and its allies need to equip all segments of the population to grapple with AI issues. The Canadian Institute for Advanced Research recently highlighted this need: after examining millions of AI-related online posts and searches, it found that while Canadians generally view AI positively, they need a stronger critical understanding of AI’s potential influence on their lives and communities.
Ultimately, ensuring the responsible development and use of AI will require more than a single solution. We will need a continually evolving set of tools, both technical and non-technical, to secure the AI landscape for generations to come. It will be especially crucial to actively involve citizens in shaping this future, as highlighted by the Québec Government’s recent decision to commission the Conseil de l’innovation du Québec to lead a widespread public discussion about responsible AI practices. The participation of over 450 individuals, from experts to ordinary citizens, underscores both the significance and the necessity of such endeavors.
Let’s be clear: the journey to develop and use AI responsibly will be filled with hurdles. I work in the health sector, where the opportunities to improve patient care using AI are significant, but numerous challenges lie ahead. The sensitive nature of health data, combined with the profound consequences of healthcare decisions on human lives, means that AI in this sector must be both highly dependable and morally sound. Recognizing these complexities, I joined forces with representatives from 25 health and human relations regulatory colleges in Quebec to create a prototype code of ethics designed specifically for health professionals who use AI. Such guidelines can help healthcare professionals navigate the evolving landscape of AI, harnessing its potential in daily practice while upholding ethical standards. I believe that collaborative efforts like these, bridging academic insights with real-world expertise, hold immense promise and value.
As we stand at the threshold of the AI revolution, this is a clear call to action: we must wisely tap into AI’s potential while diligently mitigating its risks. Striking this balance will be tricky, but I am equally sure the benefits will be substantial.