Today, many priorities for improving teaching and learning remain unmet. Educators seek technology-enhanced approaches to these priorities that are safe, effective, and scalable, and they naturally wonder whether the rapid advances in everyday technology could help. Like all of us, educators use AI-powered services in their daily lives: voice assistants in their homes; tools that correct grammar, complete sentences, and write essays; and automated trip planning on their phones. Many educators are actively exploring AI tools as they are released to the public. They see opportunities to use AI-powered capabilities such as speech recognition to increase the support available to students with disabilities, multilingual learners, and others who could benefit from greater adaptivity and personalization in digital learning tools. They are also exploring how AI can help them write or improve lessons, and how it can streamline their process for finding, choosing, and adapting material for use in their lessons.
Educators are also aware of new risks. Useful, powerful functionality can be accompanied by new data privacy and security risks. Educators recognize that AI can automatically produce output that is inappropriate or wrong. They are wary that the associations or automations created by AI may amplify unwanted biases. They have noted new ways in which students may represent others' work as their own. They are well aware of "teachable moments" and pedagogical strategies that a human teacher can address but that AI models miss or misunderstand. And they worry whether recommendations suggested by an algorithm would be fair.
Educators' concerns are manifold. Everyone in education shares a responsibility to harness the good in AI to serve educational priorities while also protecting against the dangers that may arise as AI is integrated into EdTech.

To develop guidance for EdTech, the Department works closely with educational constituents. These constituents include educational leaders (teachers, faculty, support staff, and other educators); researchers; policymakers; advocates and funders; technology developers; community members and organizations; and, above all, learners and their families and caregivers. Recently, through its activities with constituents, the Department noticed a sharp rise in interest and concern about AI. For example, a 2021 field scan found that developers of all kinds of technology systems (for student information, classroom instruction, school logistics, parent-teacher communication, and more) expect to add AI capabilities to their systems. Through a series of four listening sessions conducted in June and August 2022 and attended by more than 700 people, it became clear that constituents believe action is required now to get ahead of the expected growth of AI in education technology, and that they want to roll up their sleeves and start working together.

In late 2022 and early 2023, the public became aware of new generative AI chatbots and began to explore how they could be used to write essays, create lesson plans, produce images, create personalized assignments for students, and more. From public expression in social media, at conferences, and in the news media, the Department learned more about the risks and benefits of AI-enabled chatbots. Yet this report does not focus on any specific AI tool, service, or announcement, because AI-enabled systems evolve rapidly. Finally, the Department engaged the educational policy expertise available internally and in its relationships with AI policy experts to shape the findings and recommendations in this report.