As one of the most ‘technologically transformative’ moments in history unfolds, evangelists tout AI’s potential benefits for learners and educators: improving student assessment and support, expanding access to multilingual and assistive technologies, and solving efficiency challenges like high school scheduling.
However, practitioners, policymakers, families, and students remain cautious. Educators are asking how we ensure new applications and use cases actually prioritize public needs and interests. In the words of Frank van Capelle, the Global Lead for Digital Education at UNICEF: “Can AI transform learning for the most marginalized? The answer is a resounding yes. Time will tell, however, whether AI will reduce or further widen inequalities.”
To ensure AI serves all students, we must proactively shape its development as a public good rather than passively accept a future dictated by private interests.
The current state of AI development and regulation is one of concentrated industry power, with private technology companies holding significant advantages in data, computing power, and geopolitical influence. The K-12 sector remains wary, having been positioned as a ‘downstream’ beneficiary of past technological advancements—from personal computing to the internet—controlled by private developers and commercial interests. The benefits we’ve gained have rarely been transformative, and they have rarely reduced inequities.
If AI is indeed to be a ubiquitous force for change, K-12 leaders must demand more on behalf of equity and the future of human development. To shift ‘upstream’ in AI development, we must stop asking whether AI can be a public good and start managing it as one—an open, non-excludable, non-rivalrous resource that benefits society as a whole. In other words, AI, like clean air or public parks, could be freely used by anyone without reducing its availability to others. Together, we must create AI ecosystems that prioritize public goods, public orientations, and public use cases neglected by private industry and commercial interests.
Is this even possible? Yes.
Other sectors—national defense, space exploration, and transit—have already built public AI infrastructure. The work will be complex, requiring coordinated, collective action across development, governance, and regulatory domains to create and sustain AI infrastructure and tools that are freely accessible, open-source, and tailored to diverse educational needs.
Where should we begin? Here are three places to start:
Develop and sustainably resource open public models as counterpoints to commercial interests.
Open public models are AI systems that are freely accessible for adaptation and use, similar to open-source software. Numerous entities, from Mozilla to the United Nations, are calling for an AI strategy that prioritizes the creation and governance of these models, which developers, educators, and nonprofits can use to create and customize tools for local contexts.
Publicly governed AI reduces reliance on a few dominant companies, providing accessible infrastructure and tools to advance educational equity—much like open-source software and open educational resources (OER). Open, publicly created and governed models enable the creation of equitable educational tools by leveraging diverse cultural, linguistic, and socio-economic datasets. And in the same way open educational practices empower educators to adapt curricula to local needs, open public AI models can empower schools to create contextually relevant tools and use cases that prioritize the needs of marginalized communities, returning the value of the publicly created data now used to train private models to full public benefit.
Skeptics might point to the vast monetary and energy resources needed to develop and train models as a barrier to open approaches. However, public investment in AI is not only feasible but necessary. Governments, already absorbing costs from AI’s environmental impact and inefficient private competition, are well-positioned to lead—investing strategically while balancing public tradeoffs. They already fund AI research through defense and infrastructure projects. Redirecting a portion of this investment toward education AI models would ensure that advancements serve the public interest. Finally, costs will likely fall: recent open-weight releases such as DeepSeek’s large language models show that openly available models can rival commercial AI at a fraction of the expected cost, reducing barriers to access and enabling customized, equity-driven applications.
Beyond direct investment, public leaders must take concrete steps to ensure AI development prioritizes the public interest. This includes enacting policies that strengthen regulation, set performance standards, and reduce harm—for example, by prioritizing ‘green AI’ models and sustainable computing approaches that reduce energy costs and environmental impact.
Invest and commit to infrastructure for transparency, accountability, and learning.
Our public investments, including in models and governance, need to include mechanisms for understanding which models are being used, in which applications, and for whom. They must also include means to gather data on outcomes to validate performance, to share data openly to solve big shared problems, and to improve models for future use. Finally, we need ways for the public to transparently flag and address feedback and harm, fostering more inclusive and equitable educational solutions.
Other fields are further along in this area than the K-12 sector, and we can use exemplars and ideas to inspire our design. For example:
- In the development space, UNESCO has already proposed ethical frameworks for the protection of human rights and dignity that can and should inform field-level decision-making.
- AI regulatory efforts in financial technology stress-test models for bias and reliability before deployment. A similar validation system could be applied to educational AI to ensure it meets equity and performance standards.
- In healthcare, leaders are working out how to adjust clinical data collection to better support local validation and improvement of new AI-supported technologies, whose performance fundamentally depends on the conditions in which they are used and the populations they serve. They are also devising means for aggregating learnings at a higher level to support broader adoption and regulation. Experts have proposed local federated registries designed to record and track all health AI technologies deployed in a given health system’s clinical care and operations, connected to larger national aggregators.
Remove foundational barriers—systemic challenges such as lack of internet connectivity—that prevent equitable participation and access.
The same digital divides that have plagued us in the past will prevent equitable use of and access to AI in the future. If AI will indeed be central to learning and to participation in the workforce and democracy, we cannot afford, ethically or competitively, to leave disconnected communities behind. AI-ready infrastructure must be a public priority, much like electricity and clean water.
For those of us who spend our days online, it can be difficult to fully recognize the depth and breadth of the pervasive and pernicious access inequalities that exist today. On the connectivity front, it’s estimated that 2.6 billion people worldwide remain offline, and 24.2 million people (roughly 7% of the population) here in the United States lack broadband access. These shares dramatically increase in communities that are already furthest from educational opportunity (22% of rural, 18% of tribal, and 18% of low-income populations). While the majority of adults report access to smartphones, discrepancies in mobile access mirror those of home broadband.
Some AI applications, like Kolibri, are designed for offline use via local data storage and intermittent syncing. However, these solutions require robust devices, such as pre-loaded solar-powered tablets, along with significant local storage and computing power. And as backstops, they cannot offer the quality and advanced models available to connected users.
We need a public strategy to fully and finally address community- and individual-level gaps in future-ready ways. This includes protecting and sustaining federal efforts like E-Rate and the Universal Service Fund, as well as devising new approaches (such as using AI itself to identify and dynamically target gaps).
AI can function as a public good—widely accessible, open, and designed for the collective benefit—if we create the conditions that ensure its sustainability and ethical use.
Its future in education is not preordained and is ours to shape. Philanthropists, educators, and policymakers must act now. This means funding open-source AI, advocating for equitable digital infrastructure, and holding companies accountable for ethical development. Our kids and future generations deserve nothing less. With proactive leadership and sustained collaboration, we can ensure AI fulfills its potential to create a truly equitable educational future.