With the rise in artificial intelligence (AI), educators are forced to adapt. I’ve been teaching about AI for over a decade, but the last two years have added new pedagogical challenges to my plate: working with students who are increasingly using AI as they complete assignments, and learning how to evaluate and include a growing list of AI tools that promise to do everything from grading papers to writing lesson plans. But even before AI shook up classrooms, education had been rapidly changing.
Students are increasingly demanding more relevant learning experiences. A one-size-fits-all approach doesn’t work when the student body includes traditional and nontraditional students, lifelong learners, and those seeking deeper workforce-aligned skills. We need to create engaging learning experiences that meet the needs of all of these vastly different students via a more personalized model. AI can help us if we build it on a foundation of open, equitable practices.
We can’t hide students from AI. Just as the internet swept through education (and everything else) in the 1990s, AI will soon be everywhere – and AI literacy will be as fundamental as reading, writing, and math literacies. AI literacy means students understand AI’s capabilities and limitations. For example, we must help students understand how to use AI to brainstorm ideas while maintaining academic integrity, how to iterate on prompts to get more useful responses, how to use AI as an agile search tool while verifying answers for accuracy, and a million other uses that we haven’t yet learned, but will likely learn from our students. They won’t just be using AI; they will be shaping it.
Of course, we need to be clear-eyed about the risks of AI in education – biased algorithms, eroding privacy, diminished human interaction, and a widening digital divide, to name a few. The task before us is to create and advocate for Responsible AI that moves education forward. We need to understand AI’s abilities and flaws to help it reach its full potential as an education tool. This requires increasing AI and Responsible AI literacy of educators, students, and even some tech professionals.
As exciting as AI is, we must steer this ship with care. Take the issue of biased training datasets. AI can help create dynamic, individual learning experiences if it is trained on – and restrained to – authentic, vetted sources of truth. However, today’s generative large language models, the brains behind the most popular AI systems, are trained on datasets that often reflect the biases and inequalities of the society that created them. Unsurprisingly, training a model on biased data yields a biased AI. For example, a dataset that underrepresents women in STEM fields could result in an AI tool that suggests fewer educational or career opportunities for women in these areas. This is both a technical problem and an ethical one.
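The dynamic is easy to see even without a neural network. A minimal sketch, using an entirely made-up toy dataset (the field names and records here are illustrative, not drawn from any real system): a model that simply mirrors the frequencies in its training data will reproduce whatever skew that data contains.

```python
# Hypothetical toy "training set" of career-suggestion records.
# The records and field names are invented for illustration.
records = [
    {"gender": "woman", "field": "STEM"},
    {"gender": "man", "field": "STEM"},
    {"gender": "man", "field": "STEM"},
    {"gender": "man", "field": "STEM"},
    {"gender": "woman", "field": "humanities"},
    {"gender": "man", "field": "humanities"},
]

def stem_rate(data, gender):
    """Share of records for `gender` that are STEM-related."""
    group = [r for r in data if r["gender"] == gender]
    stem = [r for r in group if r["field"] == "STEM"]
    return len(stem) / len(group)

# A recommender that merely echoes these frequencies would
# suggest STEM paths to men far more often than to women.
print(stem_rate(records, "man"))    # 0.75
print(stem_rate(records, "woman"))  # 0.5
```

Auditing representation rates like this is a first, crude check; real bias audits go much further, but even this simple count makes the problem concrete.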
The open education ethos, centering equity, democratization, transparency, and deeper learning, can help us navigate the next few years as AI sweeps into education.
The more transparent and collaborative AI development is, the easier it will be to spot and correct bias. By definition, open-source training datasets and AI models can be more easily scrutinized for bias and ethical issues. Open knowledge also equips the public to act as a line of defense. When users are literate in AI design – meaning how a model works and how it has been trained – it becomes much easier for them to sound an alarm when they notice inequities and biases. Open scrutiny would also hold developers accountable for their building and training processes. As they say, the best offense is a good defense.
Here’s another wrench in the works: AI tools are increasingly putting premium versions behind paywalls, leaving those who can’t afford those services with fewer, less powerful options. This divide exacerbates existing inequalities, particularly in education. AI can only close knowledge gaps if the students who would benefit most from these tools can actually use them. This is another reason to push for open AI practices; by prioritizing free and open-source AI resources, we can help ensure more equitable access to AI.
As the growing Responsible AI movement works to address the potentially harmful aspects of AI tools in education, the focus should be twofold. First, we need to open up AI training datasets, algorithms, and practices to make sure that gender, racial, and socioeconomic biases don’t creep into our education tools, and that all students can benefit from them. If we build AI trained on open content, we will truly have models that are by the people, for the people.
Second, we need to build educational tools that are designed to support teachers, not replace them. AI is reshaping how we think about teaching, learning, and the dissemination of knowledge, but a human teacher’s guidance can never be replaced. The way I see it: teachers bring the heart, AI brings the horsepower. If AI can take care of grading and mundane data-crunching, then teachers can lean into their role as learning facilitators and mentors – curating, remixing, and contextualizing knowledge like educational DJs.
We’re on the brink of creating an educational ecosystem that can be more inclusive, innovative, and effective. But to create Responsible AI for education, we must build it on an open, equitable foundation. We’ve seen the global impact of open content and open pedagogy – I believe that the open education movement’s third frontier is open AI tools. Let’s collaborate, innovate, and push the boundaries of what’s possible.