To co-exist with ChatGPT, schools need to teach AI literacy ASAP

Published On: August 2023
Categories: 2023 Editorial Series, Editorials, Theme 3: ChatGPT

Author(s):

Zier Zhou

Queen's University

Incoming Medical Student


Launched by OpenAI in late November 2022, ChatGPT is a natural language processing tool that has become the shiny new toy on the global playground. Smart and well-rounded, it can answer random questions, translate foreign texts, and generate creative lyrics. It can pen essays and pass exams with ease, and far faster than the typical human. Students are bound to see ChatGPT as a helpful resource. However, this chatbot is not as harmless as it may seem at first glance. It therefore makes sense for everyone to learn early on about the broad implications of this AI technology.

One common but limiting concern about ChatGPT is that it facilitates cheating and hinders learning among students. To prevent plagiarism, certain institutions have already banned the use of ChatGPT, including Sciences Po in Paris and some public schools in New York City and Los Angeles. Although institutions have a responsibility to uphold academic integrity, banning ChatGPT outright is a rigid approach that removes the opportunity for students to interact with this technology in ways that could enhance their education. Instead, schools should update their curriculums to incorporate AI literacy, ensuring that students use ChatGPT and other emerging AI tools with caution and critical thinking.

Why should we be careful when extracting answers from generative AI models like ChatGPT? To answer this, it helps to know that ChatGPT is trained on vast amounts of data from the Internet. Its algorithms focus on drawing patterns from this data rather than scrutinizing what is true. So when the model comes across topics it has not previously encountered, it can “hallucinate” and fabricate unreliable information. By providing information without citing original sources, ChatGPT can also create copyright issues and misattribution errors. Aviv Ovadya, AI Researcher at Harvard’s Berkman Klein Center, even worries that this widespread problem of inaccuracy will snowball into what he calls “reality apathy.” In his February interview with Forbes, he warns, “There is real risk that we are moving toward a world where people just don’t have any sense of what is real.”

Regulatory oversight in response to the risks surrounding AI is still in its early stages. It was only June 2022 when our federal government proposed the Digital Charter Implementation Act, which strives to maintain digital trust by promoting the ethical use of data. While teaching AI literacy is not explicitly outlined in the Charter, the Digital Literacy Exchange Program is included as a relevant action item “to equip Canadians with the necessary skills to use computers, mobile devices and the Internet safely, securely and effectively.” Established under Canada’s Advisory Council on Artificial Intelligence, the Public Awareness Working Group also advocates for AI literacy. One key recommendation from last year’s report titled “Learning Together for Responsible Artificial Intelligence” is to design and deliver a free online AI literacy course accessible to all Canadians. Another initiative mentioned in the report is AI for All, a nationwide AI literacy project that intends to empower communities with a good understanding of AI by collaborating with Canada’s 3,350 public libraries.

AI literacy skills play a key role in countering misinformation, which has surfaced as a clear threat to society in recent years. Rumours about COVID-19 origins and interventions continue to create division, respectively fueling anti-Asian racism and the anti-vaccine movement. A 2021 Stanford University study found that high school students, on the cusp of voting age, largely cannot detect fake news on the Internet. How can we count on people to make rational choices without the right facts? To reduce our susceptibility to misinformation, we must educate communities, starting with schools. Policymakers in New Jersey have already mandated media literacy courses in all K–12 public schools to address concerns about the spread of misinformation and online extremism. These courses aim to teach students research and critical thinking skills so they get better at distinguishing facts from falsehoods.

Currently in Canada, digital literacy is not a separate subject in the curriculum but can be integrated into other subjects such as science. Several provinces, including Ontario and British Columbia, have outlined their own goals to build digital literacy. MediaSmarts, Canada’s Centre for Digital and Media Literacy, has also begun advancing algorithmic awareness by conducting conversations with young people about the impacts of AI on data sharing and privacy. The next logical step is for schools to teach AI literacy, so students can leverage this technology and optimize their education. No matter which industry they eventually enter, AI will likely be there as well. AI literacy involves understanding the basics of how AI works, its benefits for problem-solving in various settings, and the potential consequences of this technology. Whereas existing digital literacy curriculums may teach learners how to use search engines effectively or select credible sources, lessons in AI literacy would help students learn how to detect AI-generated content and validate answers from ChatGPT. 

Ensuring the safe and responsible use of AI is an immediate responsibility that falls on all of us as learners and leaders in education and innovation. Users of ChatGPT and similar information tools should regularly weigh the risks and benefits in their own situations. Teachers would also benefit from AI literacy training as they update their lessons and pass on knowledge to their students. Moreover, in a time when content is constantly shared online, plagiarism may not always be obvious or intentional. Educators should thus clarify assignment expectations and anti-plagiarism policies at the beginning of each term. Increasing transparency and awareness about AI further requires cooperation with the engineers and mastermind corporations behind this technology (which is, of course, easier said than done).

In summary, nobody is exempt from adapting to our ever-evolving technological landscape. Being AI-literate means being equipped to make informed decisions about when it is appropriate to take advantage of AI, and there is no time like the present to find our middle ground between caution and optimism. To co-exist harmoniously with new AI-based tools like ChatGPT, schools need to teach AI literacy ASAP.