Procurement of AI tools in K-12 and Higher Education

Patti Ruiz and Sierra Noakes
Digital Promise

How do education leaders evaluate AI and emerging technologies to inform procurement decisions?

In communities such as Iowa City School District and Lynwood Unified School District, education leaders led with learning and relationship-building. They established AI Task Forces and conducted empathy interviews with teachers, students, families, and community members to understand their experiences with the technology, their reasons for using or not using certain tools, and how they feel technology will affect the quality of their work. In collaboration with eight other districts, these districts began crafting Responsible Use policies centered on AI literacy, safety, ethics, transparency, implementation, and evaluation and impact. Now these communities are beginning to integrate their vision for AI into their evaluation and procurement processes.

Our work has found time and again that sharing key messages with education leaders through open resources is critical to building thoughtful procurement processes. Historically, edtech procurement across PK-16 education systems has been hindered by inadequate, and in most cases non-existent, evaluation and decision-making processes. We are collaborating with districts across the country to learn about, codify, and broadly share their processes and learnings, supporting districts in building or iterating on their existing systems to inform procurement. By ensuring education leaders have relevant and useful research, resources, and information, we empower them to create rigorous, sustainable procurement processes.

Beyond collaboration across districts, we also believe nonprofits have a vital role to play in lifting some of the evaluation burden off the shoulders of districts. Digital Promise, together with a group of nonprofits called the EdTech Quality Collaborative, provides product certifications that signal to districts when products align with indicators of quality. We collaborate with districts and higher education systems to integrate certifications into their Requests for Proposals, renewal decisions, and beyond, streamlining access to information about a product’s research basis, its alignment to learner variability, and the developer’s partnership with educators in building the tool.

Addressing Barriers to AI Adoption and Procurement

Product certifications and research-based initiatives can help to improve the evaluation and decision-making process for AI systems and tools. For example, qualitative data across various PK-12 districts and the California Community Colleges Chancellor’s Office demonstrates that education leaders are struggling to formalize evaluation processes to inform edtech procurement decisions. This is increasingly and especially challenging with the introduction of AI. Several key factors coalesce to make this work difficult, including (1) decentralized procurement systems that span individual schools within a district, (2) siloed departments that do not collaborate on procurement decisions, and (3) cultures that prioritize purchasing first and (perhaps) asking questions later. These challenges, coupled with inconclusive evidence on the effectiveness of edtech at large and a sense of disempowerment among education leaders, have overwhelmed PK-12 and higher education systems, often stopping them from codifying and improving procurement processes.

Digital Promise is collaborating with PK-12 districts across the League of Innovative Schools and the California Community Colleges Chancellor’s Office to develop strategies that empower intentional procurement and ongoing evaluation. By understanding education leaders’ priorities, such as research and evidence and ethical AI, we co-design standards and create an application through which edtech vendors submit evidence demonstrating that a tool meets the needs of the learning community. Education systems can then use these certifications to inform procurement decisions, whether through Requests for Proposals (RFPs) or program evaluation request forms from educators and schools. Throughout our co-design process with education leaders and their communities, opportunities arise to help solidify what matters most to each community when considering edtech. In collaboration with the EdTech Quality Collaborative, we offer key questions leaders can ask vendors to strengthen their evaluation systems and support them in delving deeper into the products that meet the bare minimum requirements.

Key Factors for Evaluating AI Tools in Education

Through co-design sessions with over 20 districts, we’ve started to identify key considerations that education leaders should prioritize when evaluating AI-enabled technologies. Before adopting these tools, we encourage communities to formalize their “why”: What is the intended outcome, and how will AI-enabled technologies serve as a tool toward meeting those goals? Many subsequent considerations focus on transparency: What student data is being collected, and why? How is it stored and used? Is it sold to or used by third parties? What measures and approaches are in place to continuously identify and address bias in the AI model(s)? What procedures are in place for data breach and incident response plans? Moreover, education leaders should ask about the training datasets: Were those datasets built from learners who reflect the district’s or institution’s community? If not, how does the development of the model account for these gaps in lived experiences and continually train for improvement? Check out this report for sample acceptable use policy language from districts around the U.S.

Collaborative Approaches to AI Adoption and Procurement

In our recent reports, AI-Powered Innovations in Mathematics Teaching & Learning and An Ethical and Equitable Vision of AI in Education: Learning Across 28 Exploratory Projects, one key finding is the importance of human-in-the-loop workflows as an essential strategy for mitigating the multitude of risks that come with AI. Education leaders should require vendors to share their processes for users to report bugs or usage issues and, beyond that, to explain how the vendor intends to work with the learning community to improve the tool to better meet its needs. The same is true for education leaders building out processes to evaluate and purchase AI-powered tools. Educators often raise concerns about AI taking over their jobs, along with concerns about data privacy breaches, inaccurate outputs, and unintentionally violating usage policies (WEF, 2024; Krall et al., 2024). Education leaders have an opportunity to open conversations with educators about their priorities and concerns and to collaboratively develop an evaluation system for procurement that considers the needs of the full community. This not only alleviates major concerns for educators but also builds buy-in and trust.

Streamlining AI Adoption and Procurement in Higher Education: Lessons from PK-12

Digital Promise recently launched the Digital Equity Framework, a hub of principles, guidelines, and policy recommendations to inform edtech decisions. Additionally, over the past year, researchers in our Responsible, Ethical, and Effective Acceptable Use Policies for the Integration of Generative AI in US School Districts and Beyond project have worked with district leaders from across the country representing diverse identities and district demographics to co-design key policies and frameworks necessary for streamlining the procurement and implementation of GenAI. Through this work, we’ve identified five key topics leaders should include in their districts’ responsible use policies and procurement processes:

  • Safety: Prioritize protecting student, teacher, and community data and privacy while managing potential cybersecurity risks.
  • Ethics: Emphasize the need to be responsible, fair, and equitable, and acknowledge the biases in both humans and the synthetic outputs of GenAI.
  • Transparency: Be open about the processes you use when selecting GenAI tools for your schools and about the development of and changes in your guidance.
  • Professional Learning and Implementation Guidance: Include guiding language on ways teachers, students, and others can implement GenAI tools to the best of their ability and generate results that benefit them and their students.
  • Evaluation and Impact: Regularly evaluate AI systems and tools, and the impact of their use: is it beneficial, or is it causing harm?

Overall, the integration of AI systems and tools in PK-12 and higher education settings presents both opportunities and challenges. To harness AI’s potential responsibly, leaders must prioritize informed decision-making: establishing robust evaluation processes, fostering collaboration among community members, and developing clear policies that address equity and ethical considerations such as data privacy and transparency. We encourage those leading this work to share samples and examples broadly and to disseminate stories of obstacles and challenges so the field can work together to identify strategies for AI’s role in education.