
How can we develop students’ critical AI literacy?

In this article, Dr Anna Verges Bausili and Maria O’Hara explain the importance of critical AI literacy and demonstrate how to develop it.
In the previous step, you learned about opportunities to re-balance your assessment in the age of GenAI and create authentic tasks for your students. In this step, you will review the notion of critical AI literacy and its particular relevance in the context of higher education. You will also learn how this notion not only aligns with core academic principles and values but also supports the development of academic skills, and you will be asked to consider how critical AI literacy can be developed both directly and indirectly.

Critical AI literacy

Critical thinking is seen as one of the most desirable outcomes of higher education. While there is an increasing recognition that we need critical AI literacy, there is not yet a consistent definition of AI literacy.
Broadly speaking, critical AI literacy can be defined as an active awareness of the affordances and limitations of AI technologies.
We would suggest that critical AI literacy is an extension of existing critical thinking and digital literacies. It seeks to help students develop a critical awareness of generative AI models: how they work, why their content should not be treated as a single source of truth, and what their social, intellectual and environmental implications might be. We should also consider which essential skills and capabilities AI may circumvent and put at risk, such as the processes of learning and writing, or information literacy.

Developing critical AI literacy

Critical thinking

Developing critical thinking is intrinsic to higher education. As a key graduate attribute, it is both taught and assessed. Similar to developing critical thinking, developing critical AI literacy can be supported directly with activities modelling the acts of critiquing, rewriting or discussing AI outputs.
It can also be supported indirectly, by highlighting how automatic text generators can circumvent and challenge the development of essential academic skills and capabilities that are integral to higher education (learning, writing and research skills, information literacy), as well as cognitive processes such as elaboration, which underpin deep learning.
Over-reliance on AI tools in the early stages of learning is a risk, as students may inadvertently circumvent the development of core skills. Developing ideas, organising thinking, supporting propositions and conducting research are vital parts of the curriculum, and students need to be able to do these independently.

Reading, research and writing

Reading, research and writing are entwined activities that help students form judgments and encourage them to believe more firmly in their own voices. In an academic environment, elaboration and argumentation are intrinsic to critical thinking; in turn, argumentation is facilitated by the process of writing. The writing process helps students elaborate and learn deeply:
“Rather than writing simply being a matter of presenting existing information or furnishing products for the purpose of testing or grading, writing is a fundamental means to create deep learning and foster cognitive development.” [1]

Information literacy

Critical AI literacy also relies on sound information literacy: the ability to critically evaluate sources and AI output. This literacy is about finding, interpreting, evaluating, managing and sharing information. Generative AI can answer complex enquiries with relatively detailed responses and is useful for exploring theories and ideas, but it is not a substitute for engaging widely with the scholarship on a topic to build a deep and nuanced understanding.

An information-literate student can critically evaluate the strengths and weaknesses of the information they read and support their arguments by citing a wide range of credible sources as evidence. When using tools like ChatGPT, students need to critically evaluate the quality of the output and corroborate it with other, credible sources. This needs to be emphasised to students using these tools.

Plus ça change, plus c’est la même chose – the more things change, the more they stay the same

Technology moves fast. Since OpenAI’s release of GPT-3.5, industry-leading large language models (LLMs) and their competitors have continued to improve their capabilities rapidly. Such a rapidly changing technology environment puts educational institutions and educators on a treadmill of constant catch-up. Amidst this pace of change, a return to core academic practice and skills is worthwhile.

Long-standing guidance around evaluating the reliability of sources holds true for AI as well, as does the guidance to adopt a robust method for evaluating strengths and weaknesses when using AI as a source of information for scholarly output.

Treating AI-based assistance the same way one would treat collaboration with other people, and treating AI output as one would any other source, is a simple and powerful message. Activities that show students examples of inaccuracy, bias, and logical and stylistic problems in automated outputs can be instrumental to this end.

Implications, including for assessment

Developing critical AI literacy requires staff – both in academic and learning support roles – to create and facilitate the conditions that support its development. Both direct and indirect approaches to developing critical AI literacy should be employed, and both should be a vital part of the curriculum.

Assessment can also play a role in developing critical AI literacy. It is well known that assessment focuses student effort. Assessment that promotes and targets the development of good academic practice, rather than the assessment of outcomes alone, can be a mechanism to build critical AI literacy.

Providing students with guidance on how to develop specific skills (eg how to read or write critically), and designing activities in which students assess their own performance or that of their peers against discipline-based criteria and standards, are ways to help them internalise and apply evaluative skills that are transferable across units and to AI-generated content.

Now that you have completed this step, you have seen how, particularly in the context of higher education, the notion of critical AI literacy not only aligns with core academic principles and values such as critical and independent thinking, but also supports the development of academic skills. In the next step, you will learn about how AI can be used in marking and feedback.

References

  1. Association for Writing Across the Curriculum Executive Committee. Statement on Artificial Intelligence Writing Tools in Writing Across the Curriculum Settings. 2023 Jan 30.
  2. Webb ME, Fluck A, Magenheim J, Malyn-Smith J, Waters J, Deschênes M, Zagami J. Machine learning for human learners: opportunities, issues, tensions and threats. Educational Technology Research and Development, 2021;69(4):2109–2130.

Additional resources

If you want to explore this topic further, here is an additional resource:

Critical AI literacy resources from Anna Mills

Join the conversation

How can the well-established principles (and practice) around information literacy and academic skills be extended and applied to AI and its outputs?

What role can or should critical AI literacy play in institutional-level responses to AI and in programme-level curricula, and what is the place of direct critical AI literacy activities?

© King’s College London