Opportunities and Challenges: Insights from North Carolina’s AI Guidelines
Early guidance helps all schools seize the technology’s potential and mitigate the risks.
Recognizing generative artificial intelligence (AI) as an “arrival technology”—one that has permeated K-12 classrooms and homes with or without schools’ officially adopting it—the North Carolina Department of Public Instruction (NCDPI) developed guidelines in 2023 for responsible implementation in schools under the direction of Dr. Vanessa Wrenn, chief information officer, and Superintendent Catherine Truitt. These guidelines provide a roadmap for harnessing generative AI’s potential to enhance learning and creativity while mitigating risks related to privacy, data security, academic integrity, and the spread of misinformation.
The November 2022 release of the AI tool ChatGPT caused widespread panic among K-12 and higher education leaders across the country, who responded with blocks and bans of the technology. But as thousands more AI tools flooded the market and most long-standing educational tools began incorporating generative AI, it became apparent that blocking them all was impractical: Doing so would leave schools with few digital tools to use, and students and educators alike could easily access these tools on their personal devices.
North Carolina districts and schools took a variety of approaches during the first months to a year after ChatGPT became publicly available. Some banned it, some left it open for teachers and students to explore on their own, and others adopted policies somewhere in the middle. But few issued clear guidance around its use or provided training to staff or students on how to use the tool effectively and responsibly.
Purpose and Process
Dr. Wrenn convened a steering committee to develop AI guidelines for two reasons: to ensure that the technology was implemented safely and responsibly and to ensure that all North Carolina students have opportunities to develop AI literacy and to benefit from what this powerful new technology offers. The Office of Digital Teaching and Learning partnered with others across the agency to garner advice from those with expertise in career and technical education, computer science, data privacy, and K-12 cybersecurity.
The main goal of the steering committee was to provide guidance on integrating AI responsibly to empower individuals and foster continuous learning. The committee recognized AI literacy as crucial for closing digital equity gaps and preparing students for the future workforce, and members emphasized the importance of teaching students to work alongside AI while honing uniquely human skills.
The committee began by conducting a landscape analysis of existing resources, drawing particular inspiration from model AI resources developed by AI for Education, Teach AI, and the U.S. Department of Education, in addition to resources I created for educator training sessions delivered across the state. The committee adapted these tools to fit the unique needs and priorities articulated in North Carolina’s Digital Learning Plan.
While some of the challenges AI poses resemble those of earlier technologies, the rapid pace of AI advances stands in contrast to the more gradual adoption of the internet. It therefore necessitates a more proactive, agile approach (box 1).
Box 1. Adoptive versus Arrival Technologies
Generative AI can be defined as artificial intelligence that can create text, images, video, and audio based on the data set used to train it and a user’s request. While AI has been around since the 1950s and integrated into our cell phones for a decade, the general public had not experienced generative AI until the advent of ChatGPT in November 2022. Since then, generative AI has caused more disruption to education than any other technology, even the internet of the late 1990s and early 2000s.
But a comparison may be instructive. It took nearly three decades for education institutions to adapt, fully incorporating the internet into their academic integrity policies and teaching, modeling, and reinforcing its appropriate use with students. While some students certainly continue to plagiarize internet sources, wisdom prevailed: Schools came to realize that effective internet use was an important job skill students needed to be competitive in the global economy.
MIT has referred to laptops, PCs, and the internet as "adoption" technologies: An organization had to officially adopt them to effect large-scale change in its operations. Failing to adopt such technologies early may have cost schools a few opportunities, but it did not necessarily cause widespread harm, because adoption proceeded relatively slowly.[1]
In contrast, generative AI is an "arrival technology," which does not require adoption in order to spread. Students with robust home internet and devices already have access at home through hundreds of apps. Those with cell phones have powerful generative AI in their pockets. But lower income students who rely on the school for connectivity, computers, and high-quality training on responsible use will not have the same opportunities. Because of these inequities, the harms from ignoring or mismanaging arrival technologies are much greater. More privileged students will have greater, better-quality access, compounding their advantages when they enter college and the workforce, while less-affluent students will be at an even greater disadvantage.
A recent study by Microsoft found that 75 percent of knowledge workers are using AI for their work, a 46 percent increase over just six months prior. Ninety percent say it saves them time, 85 percent say it helps them be more creative, and 84 percent say it helps them enjoy their work more. The report also found that 66 percent of leaders say they would not hire someone without AI skills, and 71 percent say they would rather hire less experienced candidates with AI skills than more experienced candidates without them.[2] AI is a technology that education leaders cannot afford to ignore; failing to integrate it effectively into teaching practices will be detrimental to students.
Given the fast-moving nature of the technology, NCDPI created a “living document” that will be updated to reflect new developments and lessons learned during implementation. The first version of guidance, “NC Generative AI Implementation Recommendations and Considerations for PK-13 Public Schools,” was published in January 2024.[3]
What Is in the Guidance
The guidelines span five areas: leadership and vision, human capacity, curriculum and instruction, data privacy and security, and technology infrastructure and devices.
The leadership and vision section features a phased implementation roadmap adapted from one created by AI for Education. It is designed to help schools gradually build capacity and comfort with generative AI tools through piloting, professional development, and community outreach. This guidance recommends that all staff receive high-quality, job-embedded professional development and time to increase their own AI literacy, ask questions, express concerns, and share ideas before generative AI tools are made available for students to use. The guidance also includes information on developing AI policy, suggested language for updating acceptable-use policies and academic integrity policies to encompass generative AI, and a framework for evaluating the quality and safety of AI-powered education technology products, adapted from AI for Education’s framework.
The human capacity section of the guidance defines AI literacy and how schools can foster it for all staff and students. AI literacy comprises a fundamental understanding of what AI is and is not, how generative AI works, its potential benefits and limitations, and how to use it responsibly, ethically, and effectively. It also encompasses media literacy, particularly in relation to AI-generated content.
Even younger students who are not old enough to learn with AI can and should learn about it in age-appropriate ways. Due to concerns surrounding data privacy, bias, and potential inaccuracies in AI tools, all users ought to receive basic AI literacy training before working with generative AI.
Schools play an important role in ensuring an AI-literate citizenry. AI literacy can be enhanced in all grades and curriculum areas by infusing it into the durable skills included in the ISTE Standards for Students (which North Carolina has adopted), the North Carolina Portrait of a Graduate, and state computer science standards. AI-generated text, images, videos, and audio have advanced at a remarkable pace. Thus, perhaps the most concerning aspect of generative AI is that today's students will live the remainder of their lives in a world in which they cannot trust that anything they read, see, watch, or hear is human generated. This risks great harm to people's reputations and to our fragile democracy: Bad actors will certainly produce false text, images, video, and audio to manipulate viewers and spread misinformation. An AI-literate population that knows to critically evaluate everything it sees is the best defense against these dangers.
Ethical Uses
Beyond offering tactical advice, the NCDPI guidelines grapple with thorny ethical questions: How can schools cultivate a culture of academic integrity when AI tools can produce human-like text on demand? How should intellectual property and attribution norms evolve in the age of generative AI? The guidelines do not claim to have all the answers but provide frameworks for navigating these dilemmas.
A predominant concern of educators continues to be the ease with which students can have these highly conversational models do their work for them. However, in the not-too-distant future, it will be a common assumption that all writing, from academic papers to news reports and emails, may be generated with AI. Educators will need to rethink their ideas of what constitutes plagiarism and cheating and adapt their teaching, assignments, and expectations to this new reality. It is shortsighted to cling to binary thinking, in which any use of AI is cheating and any work done without it is not. Because AI is ubiquitous in everyday apps and across the digital world, such an approach is not feasible.
In addition, there is no fair, accurate way to determine whether a text has been generated using AI. A March 2024 study estimated an accuracy rate of only 39.5 percent for such detection tools, along with a 33 percent false-positive rate for human-written text. Furthermore, the accuracy rate plummeted to 17.4 percent when users paraphrased AI-generated content or used more advanced prompt engineering to make the output seem more human; some of these techniques reduced accuracy to as low as 4 percent.[4] AI detection tools also falsely identify the work of nonnative English speakers as AI generated at higher rates. Given the fast evolution of the technology, there is very little chance that an accurate AI detection tool will ever exist.
False accusations can ruin a student’s emotional health, academic record, and future prospects. Thus the North Carolina guidance strongly recommends against the use of AI detectors to counteract cheating.
Educators instead should teach and model effective, responsible use. If they have concerns about a student's work, they should have a conversation with the student to gauge the student's understanding of the topic, which will typically clarify how much learning took place. If the educator determines that AI was used in a way that violates the academic integrity policy, this could be a teachable moment to reinforce that policy and the appropriate use of AI as a tool. Just as with the internet, students will need to be taught the appropriate way to work with AI. Before giving assignments, teachers should clearly communicate to students and parents the consequences for violations.
The guidelines recommend that teachers design "AI-resistant" assessments that would be more difficult to complete with AI, even as they acknowledge that there is no AI-proof assignment and that skilled prompters can produce plausible responses to any assignment, just as they can fool AI detectors. But a benefit of AI-resistant assignments is that they require higher levels of critical and creative thinking, which students have long needed in any case.
Generative AI will require a seismic, rapid shift in the collective mind-set of education leaders and teachers: one that focuses on process over product, allows for AI's use, and trains students to use it to enhance, not replace, their creativity and intelligence. To guide this shift, the guidance recommends teaching students about AI at all grade levels and provides a framework for schools to understand the continuum of possible uses (figure 1). This framework was recently simplified, and a new level, AI Empowered, was added to demonstrate the technology's potential to empower students to create new content and solve complex problems that would not have been possible previously.
While the guidelines provide ample flexibility for local customization, they set clear guardrails to protect student data privacy, prevent cheating, and promote responsible use. Schools are advised to designate a person to field concerns, procure tools that are purpose-built for education, and prohibit AI use on high-stakes assessments. At the same time, the guidelines encourage schools to embrace the potential of generative AI to personalize learning, provide timely feedback, and empower students as creators.
Implications for Policy
While the guidelines were designed for voluntary adoption in North Carolina, they hold important lessons for state and local policymakers nationwide. As generative AI reshapes the education landscape, education leaders have a crucial role to play in setting standards, building capacity, and advancing equity. Key priorities include the following:
- Providing guidance and model policies. States should develop clear guidelines for the ethical, effective use of generative AI in schools, as North Carolina has. These guidelines should be regularly updated to keep pace with technological and pedagogical innovations.
- Investing in professional development. To realize the benefits of AI while mitigating risks, teachers will need significant training and support. States should allocate funding for high-quality opportunities focused on AI literacy and integration.
- Investing in and ensuring equitable access to robust AI literacy training for all students. Because the technology will affect them most, students must learn about AI in age-appropriate contexts in all grades and receive high-quality opportunities to learn with it in increasingly complex ways in secondary school.
- Updating academic standards. In a world where factual knowledge has been a Google search away for years and in which concise answers are accessible in many generative search tools such as Perplexity, states must rethink which competencies are essential for students to master. Standards should place greater emphasis on critical thinking, collaborative problem solving, and metacognition—skills that complement AI’s capabilities.
- Procuring quality AI tools. States can support schools by vetting and procuring AI-powered educational software that is effective, ethical, and secure. Establishing criteria and continuously monitoring the evolving AI market will ease burdens on local leaders.
- Advancing research and evaluation. With AI poised to transform K-12 education in ways both profound and unpredictable, states must invest in ongoing research to understand its impacts and identify best practices. Engaging universities and forming research-practice partnerships will accelerate knowledge building.
- Advocating for equitable access. Given the risk that AI could exacerbate existing educational disparities, states must proactively ensure that all students have access to AI literacy curricula and tools, regardless of geography or socioeconomic status. Establishing AI readiness grants for high-needs schools is one promising approach.
Conclusion
As generative AI continues to propel us into uncharted educational territory, North Carolina’s experience developing AI guidelines for schools offers a roadmap for other states to follow. By engaging diverse stakeholders, providing concrete guidance while allowing for flexibility, and committing to continuous learning and iteration, NCDPI has positioned its schools to seize AI’s educational potential while mitigating safety, equity, and ethical risks.
The work in North Carolina is far from finished. With generative AI evolving at breakneck speed, states and education leaders will need to be nimble, proactive, and steadfast in advancing policies that harness these powerful technologies to improve student learning and prepare students to live and work in a world in which they will interact with AI in increasingly more complex ways that we can scarcely imagine. This will require ongoing investment, collaboration, and a willingness to embrace change while holding fast to timeless values of academic integrity, critical inquiry, and human judgment. For state boards of education bold enough to take up this charge, the NCDPI guidelines provide a strong foundation on which to build.
Vera Cubero is lead contributor to the North Carolina AI Guidelines for the North Carolina Department of Public Instruction.
Notes
[1] Eric Klopfer et al., “Generative AI and K-12 Education: An MIT Perspective,” March 27, 2024, https://doi.org/10.21428/e4baedd9.81164b06.
[2] Microsoft and LinkedIn, “AI at Work Is Here. Now Comes the Hard Part,” 2024 Work Trend Index Annual Report (Microsoft, May 8, 2024).
[3] The most recent version of the guidelines can be viewed at North Carolina Department of Public Instruction, The North Carolina Generative AI Implementation Recommendations and Considerations for PK-13 Public Schools, go.ncdpi.gov/AI_Guidelines.
[4] Mike Perkins et al., “GenAI Detection Tools, Adversarial Techniques, and Implications for Inclusivity in Higher Education,” preprint (March 2024).