Navigating AI in Open and Higher Education: Critical Guidelines and Practical Applications

Maha Bali
Equity Unbound / American University Cairo

Before we discuss how educational institutions need to respond to the presence of generative AI in our lives, and assess its impact on teaching and learning, we need to take a step back and remind ourselves what good learning is.

One of the main challenges teachers face when new technologies become widespread is that they themselves need to learn new literacies in order to use these technologies critically, and they also need to discover how their learners might be using them, whether they as teachers choose to embrace or reject those technologies. We cannot choose to ignore AI, because it now exists in the world, and students have access to many of the free tools.

Developing teachers’ critical AI literacies is a priority, and it needs to be institutionalized and incentivized, not something we expect teachers to do in their own time. Because teachers are already overworked and burnt out, institutions need to give them resources and peer support so they can understand how Large Language Models (and any other new technology) work, have opportunities to try them hands-on, and recognize the challenges and risks of using AI tools. As with any technology, inequalities exist from the start, among teachers and among students: who has access to paid versus free tools, who has stronger internet and electricity access, and who already has stronger digital literacy and more time to explore such tools, versus those who have greater demands on their time or need more time to become confident with new tools.

We need to be cautious about adopting tools “just because new things must be good”, or rejecting them outright in a deterministic way. We must remember what we know about good learning: the importance of the human interaction and scaffolding that are central to social constructivism. What could we lose if we replace a human with an AI chatbot? What happens to the elements of social emotional learning when we replace elements of human feedback with chatbot-written feedback? What happens to student motivation if AI gives students feedback on their writing, rather than a human being who has a conversation with them about the meaning and significance of what they write? What happens to power imbalances in the classroom when we shift the locus of power not from teacher to student, but from teacher to AI platform? When we use AI tools to do what companies claim is “adaptive” or “personalized” learning, are we taking away students’ agency to develop metacognition and set their own learning path, and are we taking away teachers’ accountability to care for their students?

The biggest risk of using AI tools is their tendency to produce incorrect information, sometimes called “hallucination”, and to state it in a confident tone. Moreover, these tools are known for reproducing implicit bias, and thus epistemic inequality, and any teacher or student using them may not recognize the kinds of biases inherent in AI output, which they may perceive as “neutral”. This means that teachers considering using AI to give feedback to students may reproduce normative, dominant ways of writing and end up discouraging students whose writing is more creative or original, especially if they belong to a minority group and express their ideas differently. It also means that students prompting AI in order to learn something may get information that is incorrect or biased, and not recognize that AI tools are not fully credible. Moreover, AI tools don’t cite their sources, so we cannot trace information upstream to check whether the sources are good quality.

It is also useful to take a Sustainable Development Goal (SDG) lens to AI: the training and use of AI have a negative impact on climate and water resources; whenever people discuss AI as something that can enhance quality education, they ignore the risks of hallucination and bias; and when we consider the use of AI to support people with disabilities, we are not necessarily considering how the bias and hallucination within AI would affect them. For example, the use of AI in translation can be really helpful; however, if you let AI translate into a target language you don’t understand at all, small mistakes in translation can be disastrous. This is even worse when people allow companies to create “deep fakes” of themselves speaking another language - we then risk giving corporations free rein to use our likeness in audiovisual form in ways we cannot imagine. Until there are sufficient guardrails within AI to protect people’s privacy and counter the exploitation of human labor in the processes of creating AI, educators should remain cautious about inviting students to use AI tools, especially at the K-12 level.

Once the above elements of critical AI literacy have been established, teachers need time to explore appropriate ways of getting the best out of these tools and to reflect on potential opportunities for using them in the classroom (a useful free resource for sample prompts for teachers), while remaining mindful of their own positionality and context and of the intersectional identities of their students - treating AI output with skepticism and not losing their own critical and creative insights in the process of using AI to speed things up.

Institutions cannot ignore the student side of this equation. Schools also need to develop students’ critical AI literacies, and be transparent with them about any AI use in administrative school tools, as well as ensure teachers are transparent about when student AI use is allowed and when it is prohibited.

The key guidelines I have seen, and consider necessary, are the following:

  • Criticality: ensure all users are aware of critical AI literacy before using such tools, in order to alert them to potential biases and hallucinations and other risks
  • Transparency: when someone uses AI, whether student or teacher, they need to be transparent about where and how they’ve used it, for example with a brief AI statement at the end of the piece of work.
  • Accountability and verification: each person is accountable for anything they submit, produce, or decide; relying on AI tools that do not explain their processes is insufficient - one must also try other ways of finding and verifying the information
  • Agency: neither educators nor learners should be forced to cede agency over how they do their work to AI tools that make decisions for them; if we use AI tools to support learning analytics, both learners and teachers need as much access as possible to the data input and output, and companies must be lobbied to make as much of their processes as transparent as possible
  • Privacy: guardrails to protect users’ privacy are essential; this will differ by country, context and age of learners
  • Equity: we need to ensure that the use of AI does not become another way we differentiate between students, such that those who are able to pay get more human contact, and those who cannot end up taught by machines and not humans
  • Minimization: given the harm to the environment, we should avoid overusing AI tools, in order to reduce that harm

Once we have taken the above precautions, we can consider some potential practical uses for AI in education, including open education, uses that help promote the values of open education including equity and accessibility.

Five Potential Uses for AI in Open Ed and Higher Ed (with cautions)

I would never use AI to create open content. Firstly, because it is unlikely to be quality content, given the possibility of hallucinations, and secondly, because of the kind of implicit biases it is likely to perpetuate. But also, I don’t feel it is ethical to offer content whose original source (of an idea, if not exactly verbatim text) I cannot trace - this is important both for attribution and assessing credibility, but also because many whose work was used to train AI tools never gave permission for it to be used this way.

However, whereas I would not use AI to generate content, here are some examples of the potential of AI use in open ed and higher ed:

  1. Reading: Learners are probably already using AI to summarize or rephrase text that may be too difficult or jargon-filled for them. Since open textbook authors can’t know their entire audience, this allows each learner to get some support with readings. It won’t help them develop their reading skills, but it will help them get the gist of what they would otherwise not be able to read at all.
  2. Accessibility: There are AI tools specialized in describing images to those who are visually impaired (e.g. BeMyAI), as can general tools like Google Lens and GPT-4o. When educators develop materials, they can use these tools to kickstart alternative text, as long as a human revises it for accuracy; PowerPoint has a built-in auto ALT text option as well. However, from my testing, these tools are still likely to misunderstand things like handwritten Arabic text and dates written in Arabic - so their accuracy is culturally skewed. I also often hear that people who have ADD/ADHD can benefit from AI support to help them get started on work. I support such uses, but also suggest that we need to support learners in finding their own approaches that would work for them if the technology is not available. I have also heard people talk about how someone on the autism spectrum can use AI to modify their writing tone to sound more “normative”; while I understand that kind of use, I would push back against it and hope that we can all learn that people can be neurodivergent, and accept this diversity in communication styles. We should all adapt to accepting differences, rather than continually asking people on the margins to adapt to us.
  3. Translation: Machine translation has existed for a long time, and most people, even many professional translators, use machine translation along the way to a final translation. However, there always needs to be a human available who understands both languages to check for accuracy. This ends up speeding up the translation process for some languages; other languages are not translated as well, and some topics may be difficult to translate. Even for languages that are well translated, it may be time-consuming for a human to find the few mistakes that might be detrimental to meaning.
  4. Custom GPTs: One way of reducing hallucination in education-focused AI is to create our own custom GPTs (you can watch how to do this without a programming background via tools like poe.com). You upload particular documents for the tool to work with, keep the randomness “cold” to reduce hallucination, and then instruct the tool to do particular tasks you design, such as creating practice MCQs or discussion questions for students, or answering learners’ own questions about the text. Some of these tools (like Google NotebookLM, which already lets you treat it like a custom GPT) show where in the text the AI tool’s responses come from. This can help someone using an OER create assessments from the OER itself that will likely include few hallucinations; students can even do this for themselves for practice. But users will always have to verify the accuracy of responses, because we cannot remove hallucinations altogether.
  5. Feedback: Some who teach writing have discussed using AI for feedback on writing. I think this can be useful for basic writing skills for beginners in a new language that the AI tool knows well, where the focus is more on grammar and form, which AI is likely to get right, but not for more advanced writing that has more voice and personality and purpose (see this research paper on how this can go wrong). As a social constructivist, I would prefer peer feedback to this, but some autonomous learners of open material don’t have a cohort of peers to support them, so AI feedback can stand in for that, sometimes.
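The “cold” randomness and document grounding described in point 4 can be sketched in code. The snippet below is a minimal sketch, not a definitive implementation: it assembles an OpenAI-style chat request that sets the temperature to 0 (the least random setting) and instructs the model to answer only from a supplied document. The model name and the exact request shape are assumptions; platforms like poe.com and NotebookLM handle this configuration for you behind the scenes.

```python
# Sketch: grounding a chatbot in one document to reduce hallucination.
# Assumptions: an OpenAI-style chat API; "gpt-4o" as a placeholder model name.

def build_grounded_request(document_text: str, task: str) -> dict:
    """Assemble a chat request constrained to the uploaded document."""
    system_prompt = (
        "You are a study assistant. Answer ONLY from the document below, "
        "and quote the passage you relied on. If the answer is not in the "
        "document, say 'Not covered in this text.'\n\n"
        "--- DOCUMENT ---\n" + document_text
    )
    return {
        "model": "gpt-4o",   # assumed model name
        "temperature": 0,    # "cold" randomness: most deterministic output
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": task},
        ],
    }

# Example: ask for practice questions generated from an OER excerpt.
request = build_grounded_request(
    "Photosynthesis converts light energy into chemical energy.",
    "Write two practice MCQs about this text.",
)
```

Even with temperature 0 and a grounding instruction, the model can still misread the document, which is why the verification step above remains essential.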

I believe that as educators and learners develop critical AI literacies and keep agency, accountability, and verification at the forefront, they can benefit from AI tools in using open educational material in some of the ways I have outlined above. The field is changing quite rapidly, and more potential uses and cautions may emerge in the coming few months… or days!

Resources for critical AI literacy in education

Articles