Malevolent Predictions of the Future of AI. (Part 3 of 3)


In the final blog of this three-part series, Dr. Thomie Timmons, Chief Learning Officer at EnvisionEdPlus, examines the darker side of AI in education, drawing parallels with the infamous HAL from “2001: A Space Odyssey.” This blog speaks to the malevolent predictions of AI in education, such as widening educational gaps, discriminatory outcomes, surveillance and privacy concerns, loss of autonomy, and dehumanization. It wraps up the series with suggested strategies for securing a better outcome for students and ensuring they thrive in school and beyond.

HAL

I remember watching a movie titled “2001: A Space Odyssey” on a late-night movie channel. This 1968 film created an AI called HAL as the antagonist of the story. HAL, almost prophetically, embodies my fears of what can go wrong with AI. I hope that reviewing the negative potential outcomes of AI will emphasize the urgency for educational leaders to engage in the three key components mentioned previously.

If AI were used in schools without addressing equity, bias, and ethical concerns, it could lead to several detrimental consequences. It could reinforce the very negative outcomes in the status quo that we are already working to overcome.

  1. Widening Educational Gaps: AI systems trained on biased data could perpetuate existing inequalities, disadvantaging certain groups of students based on race, gender, socioeconomic status, or other factors. These systems could disproportionately misclassify or underestimate the potential of students from marginalized groups, leading to unfair tracking, limited opportunities, widening achievement gaps, and reinforced systemic inequities.
  2. Discriminatory Outcomes: AI-powered decision-making could lead to discriminatory practices, such as unfair grading, biased student profiling, or unequal access to resources. It could also reinforce harmful stereotypes and biases, particularly if algorithms rely on data that reflects societal prejudices. This would harm students’ academic trajectories and limit their opportunities.
  3. Surveillance and Privacy Concerns: Extensive use of AI could create an environment of surveillance, creating a chilling effect on student behavior and expression. Students may feel constantly monitored and judged, impacting their learning experience and personal development. Unregulated use of AI could also lead to violations of student privacy, as AI systems collect and analyze sensitive data, exposing students to potential harm or misuse of their personal information.
  4. Loss of Autonomy and Trust: AI-based decision-making could reduce student autonomy and limit their ability to make independent choices in their learning journey. This could stifle student agency and hinder their development as self-directed learners. If AI-driven interactions are perceived as unfair or biased, they could erode trust between students, teachers, and educational institutions, undermining the learning environment, hindering student engagement, and impeding social-emotional development.
  5. Over-reliance on AI and Dehumanization: Overdependence on AI could lead to a deskilling of teachers and a loss of human connection in education. Students may feel reduced to data points and lose out on the benefits of personalized interactions and mentorship. AI-driven personalization could also narrow the curriculum, focusing on standardized assessments while neglecting broader educational goals such as critical thinking, creativity, and social-emotional learning. The result could be a dehumanization of education that diminishes the empathy essential for effective teaching and learning.

To prevent these negative outcomes, it is crucial to address Equitable Access, AI Bias, and Ethical Concerns. Make no mistake: this is not a call to bar AI from the classroom, because AI is here and will become embedded in our educational systems. Educational leaders must be willing to use the following strategies.

  • Prioritize equity in AI implementation: Educational leaders should develop and implement AI policies and practices that promote equity for all students. This includes ensuring that AI tools are accessible to all students and that they are used in a way that does not perpetuate existing inequalities.
  • Conduct due diligence on AI tools and systems: Educational leaders should carefully evaluate AI tools and systems before implementing them in schools. This includes assessing the tool’s accuracy, fairness, and potential for bias. 
  • Provide professional development on AI for educators: Educational leaders should provide educators with training on AI, including its potential benefits and risks. Educators should also be trained on how to use AI tools effectively and responsibly, and how to identify and address potential bias.
  • Empower students in the learning community: Ensure teachers are able to support students in developing the critical thinking skills needed to examine their interactions with AI and identify bias in those interactions, empowering all students regardless of the inequities they may face.
  • Engage stakeholders in decision-making: Educational leaders should engage with students, parents, teachers, and other stakeholders in discussions about AI implementation. This helps to ensure that everyone’s voice is heard and that concerns are addressed. Communicating with the school community about how AI is being used in schools will lay a foundation of trust and understanding.
  • Develop and implement AI governance policies: Educational leaders should develop and implement AI governance policies that outline the principles and practices that will guide the use of AI in schools. These policies should address issues such as equity, fairness, privacy, and security.
  • Monitor and evaluate AI implementation: Educational leaders should monitor and evaluate the implementation of AI in schools to identify and address any potential problems. This includes monitoring the impact of AI on student learning, engagement, and well-being.

Whether benevolent or malevolent, we can predict that these outcomes will be mixed and that, for educational leaders, addressing these three critical components will be an ongoing challenge. The history of education in the United States has been marked by both progress and inequity. AI has the potential to amplify both of these trends, depending on how it is implemented. Educational leaders must rise to this challenge in a way that is worthy of all of our students and their future educational experiences. I can only hope that there will be a little less HAL and a lot more Familiar.

Want to join the discussion? Join us on Tuesday, December 19th from 3-4pm for a virtual fireside chat with Dr. Thomie Timmons and explore the world of AI and how it will impact education. 
