Bipin Kumar Badri Narayanan

Program Coordinator

Canadian Science Policy Centre

Improving Transparency & Accountability in Deployment of AI Systems

Connected Conference Theme: Science and Society
Proposal Inspiration:

Algorithms and AI systems are increasingly being deployed to improve services across many sectors of the economy, such as health care, financial services, the judicial system, and policing.

While algorithms and AI systems have tremendous advantages, improving access to services and their delivery, they often lack transparency and accountability, and this lack breeds general distrust. These systems can deny access to services such as credit, insurance, health care, and mortgages, yet people cannot challenge those decisions because they do not know how the systems work, or even that a system is being used at all.

Governments must regulate these systems before they are widely used in society, as they can discriminate against already marginalized groups and increase inequality. Avoiding these systems altogether is impossible in the current age of big data. Still, I believe that with sufficient regulatory mechanisms we can ensure that these systems do not discriminate against marginalized groups, while society still enjoys the enormous benefits of AI.

Need/Opportunity for Action:

As more of our world becomes connected and automated, we are entering a new era in which governments and private entities predetermine our access to many services and opportunities by deploying computer and AI algorithms. Such systems are deployed without prior consultation or public knowledge, and they are not held accountable to any standards or regulations. These algorithms are typically trained on data scraped from the internet or obtained from sources such as data brokers, collected without explicit consent. Such data is often biased or reinforces biases that already exist in society. The resulting algorithms disproportionately affect marginalized communities, such as Indigenous people and people of colour. These algorithms are typically deployed on the assumption that computers and AI will remove discriminatory practices, but in reality they often automate inequality and racism.

AI applications influence us in many ways. They shape the information we see online by predicting which content will engage us, drawing on our web activity and other data such as location from our phones' GPS and financial records from banks and credit cards. AI also captures and analyzes facial data to enforce laws or personalize advertisements, and it is used to diagnose and treat cancer. In other words, AI affects many parts of our lives[1]. The lack of transparency in AI can cause individual harms such as financial loss, lost employment opportunities or access to credit, and loss of freedom, as well as collective social damage. Numerous studies show that AI used in hiring decisions can amplify existing discrimination along gender or racial lines[23]. Law enforcement agencies are rapidly adopting predictive policing and risk assessment technologies that reinforce patterns of unjust racial discrimination in the criminal justice system[24].

AI systems also shape the information we see on social media feeds and can perpetuate disinformation when optimized to prioritize attention-grabbing content. This can adversely affect political discourse and significantly harm civil society and democracy: as society becomes increasingly partisan, it becomes extremely difficult to agree even on a basic set of facts, because different groups trust different sources of information. As AI becomes omnipresent in our society, it is increasingly essential that AI deployed in public services be free of bias, to build trust in these systems and decrease inequality.

In June 2022, the Government of Canada tabled Bill C-27, introducing updates to the federal private-sector privacy regime and a new law on artificial intelligence. If passed, the Artificial Intelligence and Data Act (AIDA)[2] would be the first law in Canada regulating the use of artificial intelligence systems. Similar to the proposed Artificial Intelligence Act in the EU[1], the AIDA adopts a risk-based approach: it focuses on the areas with the most significant risk of harm and bias by establishing rules for “high-impact” artificial intelligence systems, a category to be defined by regulation.

Proposed Action:

To increase the transparency and accountability of AI systems, the Government of Canada should establish an algorithmic audit process to ensure that AI systems and algorithms are free of bias and discrimination. In such an audit, an independent party evaluates an algorithmic system for bias, accuracy, robustness, interpretability, and privacy. The audit would identify problems and suggest improvements or alternatives to the developers.
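To make the bias dimension concrete, the sketch below shows one common check an auditor might run: the “four-fifths rule” disparate impact test on a model's decisions. The data, group names, and threshold here are illustrative assumptions, not part of any proposed regulation.

```python
import pandas as pd

# Hypothetical audit sample: one row per applicant, with the model's
# decision and a protected attribute (e.g., a demographic group).
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0, 0, 1],
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

# Selection rate per group: the fraction of positive decisions.
rates = df.groupby("group")["approved"].mean()

# Disparate impact ratio: worst-off group vs. best-off group.
# The "four-fifths rule" flags ratios below 0.8 as potential adverse impact.
ratio = rates.min() / rates.max()
print(rates)
print(f"disparate impact ratio: {ratio:.2f}", "-> FLAG" if ratio < 0.8 else "-> OK")
```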

The Government of Canada should establish an independent national AI oversight body along the lines of the National Institute of Standards and Technology (NIST) in the US or the European Artificial Intelligence Board proposed by the EU. This body would establish and enforce regulations and conduct regular AI audits or Algorithmic Impact Assessments (AIAs)[25] to ensure that algorithms are not discriminatory.

The oversight body should define what an algorithmic audit is. The concept of an algorithmic audit or algorithmic impact assessment is relatively new; definitions vary by source, and research is ongoing. Defining what an audit for bias must include would encourage more rigorous future audits and provide a benchmark against which completed audits can be compared[3,4].

The oversight body should establish clear guidelines on the audit process, covering auditor independence, representative analysis, and access to data, code, and models. For highly sensitive AI, such as facial recognition and biometric systems, the oversight body itself must conduct the audit.

The audit should also define and consider other dependencies and documentation, such as the data collection process, ensuring the data is representative of all demographics. It should also consider the effects of any dependencies the algorithm builds on, such as libraries, software packages, and pretrained models. The audit should review the documentation for models and ensure that it accurately communicates their functionality.
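As an illustration of the representativeness check, the sketch below compares the demographic composition of a training set against a reference population such as census figures. The group shares and flagging thresholds are hypothetical.

```python
import pandas as pd

# Hypothetical demographic shares: training data vs. a census reference.
training = pd.Series({"group_a": 0.72, "group_b": 0.18, "group_c": 0.10})
census   = pd.Series({"group_a": 0.60, "group_b": 0.25, "group_c": 0.15})

# Representation ratio: 1.0 means a group appears in the training data
# in proportion to the population; values far from 1.0 signal skew.
ratio = training / census
for group, r in ratio.items():
    flag = "under-represented" if r < 0.8 else "over-represented" if r > 1.25 else "ok"
    print(f"{group}: ratio {r:.2f} ({flag})")
```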

The oversight body should also review how the model performs: how the algorithm scores different factors or variables, how it performs on historical data, and whether it performs equally well on different subgroups, ensuring that variables common within a particular group do not lead to discriminatory outcomes for that group. The audit should carefully examine the problem definition: whether the problem statement considers end-user circumstances or only the needs of the clients deploying the system, how the outcome variables were chosen, and how the algorithm might encode biases.
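One way to operationalize the subgroup check is to compute the same performance metric per group and flag large gaps. The sketch below measures the true-positive rate per group on a small, made-up audit sample; a real audit would use a representative holdout set.

```python
import pandas as pd

# Hypothetical audit sample: true outcomes and model predictions per group.
df = pd.DataFrame({
    "group": ["A"] * 6 + ["B"] * 6,
    "label": [1, 1, 0, 0, 1, 0,  1, 1, 0, 0, 1, 0],
    "pred":  [1, 1, 0, 0, 1, 0,  1, 0, 0, 1, 0, 0],
})

# True-positive rate per group: among truly positive cases, the share the
# model got right. A large gap means the model misses qualified members
# of one group more often (an equal-opportunity violation).
tpr = df[df["label"] == 1].groupby("group")["pred"].mean()
print(tpr)
print(f"TPR gap between groups: {tpr.max() - tpr.min():.2f}")
```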

Companies should also disclose information about their training data using data “nutrition labels”[17], a standardized format that makes it easier for the public to understand what data a system uses and where it comes from. The oversight body should make disclosure of data nutrition labels mandatory, along with AIAs.
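A nutrition label could be a simple machine-readable record. The sketch below is a minimal, hypothetical schema loosely inspired by the data nutrition label concept cited above; the exact fields an oversight body would mandate remain to be defined.

```python
from dataclasses import asdict, dataclass, field
import json

# Hypothetical schema for a machine-readable data nutrition label.
@dataclass
class DataNutritionLabel:
    name: str
    source: str                # where the data came from
    collection_method: str     # how it was gathered (consented? scraped?)
    time_range: str
    known_gaps: list = field(default_factory=list)  # known coverage gaps
    intended_use: str = ""
    prohibited_use: str = ""

label = DataNutritionLabel(
    name="Loan application history",
    source="Internal lending records",
    collection_method="Collected with applicant consent",
    time_range="2015-2022",
    known_gaps=["rural applicants under-represented"],
    intended_use="Credit risk scoring",
    prohibited_use="Employment screening",
)
print(json.dumps(asdict(label), indent=2))
```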

The oversight body should also establish a periodic monitoring process to understand how algorithms and models change over time and which variables have caused a model to shift. These findings should inform how often each algorithm or model must be re-audited. High-risk algorithms that use biometric data should be reviewed periodically, and the audit results must be publicly available.
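Such monitoring can be partly automated with standard drift metrics. The sketch below computes the Population Stability Index (PSI) between a model's score distribution at audit time and a later one; the score distributions and the conventional PSI thresholds are illustrative.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and a recent one.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    actual = np.clip(actual, edges[0], edges[-1])  # keep scores inside the bins
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid division by zero in empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.50, 0.10, 10_000)  # model scores at audit time
recent   = rng.normal(0.56, 0.12, 10_000)  # scores six months later
print(f"PSI: {population_stability_index(baseline, recent):.3f}")
```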

As AI adoption increases, the number of systems used in the public domain is bound to grow, and the oversight body cannot review every AI system itself. It should therefore establish a certification process for non-critical AI, along the lines of the framework developed by the Responsible AI Institute (RAII)[5] and based on the OECD's AI principles[6]. The oversight body should also define the process and requirements for the organizations that provide certification. Certification results should be publicly accessible to ensure transparency.

The government should mandate that companies disclose when AI or algorithms are used for decision-making, increasing end users' transparency and awareness when they interact with such a system.

The Government of Canada should follow the EU in banning real-time biometric identification and facial recognition systems in public spaces for law enforcement purposes. The Canadian government should also ban all social scoring systems, which lead to unjustified and disproportionate treatment.

Lay Abstract:

Algorithms and AI systems are increasingly being deployed to improve services across many sectors of the economy, such as health care, financial services, the judicial system, and policing. While these systems have tremendous advantages, improving access to services and their delivery, their lack of transparency and accountability causes general distrust. AI systems can deny access to services such as credit, insurance, health care, and mortgages, and people cannot challenge these decisions because they do not know how the systems work, or even that such systems are in use.

To increase the transparency and accountability of algorithms and AI systems, the Canadian government should establish an independent oversight body to regulate and monitor the AI industry and ensure that these systems are free of bias and discrimination. The oversight body should define the audit process so that all systems are evaluated on an equal footing, and it should make audit results available to the public. This enables a participatory process for all stakeholders and civic organizations, which will improve the systems over time and build consumer trust and confidence.

Novelty:

Under the current AIDA proposal, the government places the responsibility for assessment and risk mitigation on the developers and companies building or deploying AI systems; that is, the organization responsible for creating a system is also responsible for measuring its performance. Moreover, audits under the AIDA are not mandatory, so there is no mechanism to verify developers' claims before public deployment. Finally, the AIDA does not require systems to disclose to users that they are interacting with an AI system.

The proposed approach mandates informing end users when they are interacting with AI, and it calls for the establishment of an independent national AI oversight body along the lines of NIST in the US or the European Artificial Intelligence Board proposed by the EU. The oversight body would establish and enforce regulations and conduct regular AI audits or Algorithmic Impact Assessments (AIAs) to ensure that algorithms are not discriminatory, and it would mandate audits before systems are publicly deployed. Independent third parties would thus assess systems before public deployment, with continuous monitoring afterwards to prevent significant harm to citizens and consumers.