Implementing AI With Integrity Requires a Coordinated Policy Approach

By Sharmila Mann, Strong Start to Finish Advisory Board Member 

Last April, Strong Start to Finish (SStF) hosted a pair of virtual workshops for postsecondary faculty — one each for math and English — that focused on the use of artificial intelligence (AI) in corequisite classrooms. Workshop conversations centered on generative AI, a form of artificial intelligence that learns patterns and relationships in existing data and then uses those patterns to generate new content. Workshop leaders encouraged participants to share their own AI experiences and concerns, then led thoughtful conversations around the utility and constraints of AI use in corequisite classrooms. 

The workshops underscored both the transformative potential of generative AI and its critical risks. When used with integrity, generative AI can, among other benefits, personalize learning and supports for students and assist faculty in creating deeper learning prompts and authentic assessments. However, information fed into an AI tool may be retained and used to train future models, raising data privacy concerns. Content generated by an AI tool can reflect biases present in publicly available information, elevating the potential for discrimination. Robust and coordinated AI policies at the national, state and institutional levels can help mitigate these risks. Core areas for AI policy consideration include governance, data privacy, equity, ethical use and support for faculty and student implementation. 

  • Governance – A comprehensive governance framework is critical to harnessing the potential of AI ethically and responsibly in postsecondary education. The National Institute of Standards and Technology (NIST) has taken a leading role in developing voluntary standards and frameworks to govern the trustworthy design, development and use of AI systems. States can create AI governance strategies to address regional educational needs. At the institutional level, AI governance committees can help develop and enforce tailored policies. 
  • Data Privacy – Implementing AI systems in education raises data privacy and security concerns around collecting and storing student data. Updating federal laws like the Family Educational Rights and Privacy Act (FERPA) and establishing national standards for data encryption and access control can help protect sensitive student information. State policy can specify allowable uses of postsecondary system data, while robust data management practices and regular audits of AI systems can help institutions maintain high standards of data security and integrity. 
  • Equity – AI systems can exhibit bias based on the data on which they are trained, and must be used ethically to avoid discrimination. At the national level, regulations can mandate regular bias testing and fairness audits for AI systems. To avoid distribution bias, states can commit to monitoring and addressing disparities in AI resource allocation across institutions. Institutions can help protect students and faculty from AI-based discrimination by ensuring that high-stakes decisions supported by AI tools — such as hiring, promotion, grades and disciplinary action — always include a human touchpoint. 
  • Ethical Use – Ethical use and transparency are fundamental to building trust and accountability in the use of AI in education. Comprehensive ethical guidelines developed at the national level, such as those put forward by the U.S. Agency for International Development (USAID), can provide a framework for transparency in AI algorithms and decision-making processes. States and institutions can adopt clear policies on the ethical use of AI, setting rules for responsible use and requiring transparency in decision-making processes. 
  • Faculty and Student Supports – Preparing faculty and students to use AI-enabled tools appropriately and effectively is critical to postsecondary AI implementation. State funding for educator training programs, including certification programs for AI competency, can be used to ensure that faculty are well-prepared to leverage AI in their classrooms. At the institutional level, introducing AI literacy into the curriculum can equip students and educators with the knowledge and skills needed to understand and utilize AI effectively. 

Ethical use of AI in postsecondary classrooms requires a coordinated policy approach at the national, state and institutional levels. National regulations can provide a robust framework for data security and bias mitigation. State policies can tailor strategies to regional needs and fund professional development. Institutional policies can ensure practical and ethical AI use on the ground. Policymakers at each of these levels should take action to address common areas of concern, creating safe spaces for corequisite educators and students to harness the transformative potential of generative AI.