Artificial Intelligence (AI) is changing the way we teach and learn. From personalized learning platforms like DreamBox and IXL to AI-powered writing assistants like Grammarly and Quill, these tools are showing up in classrooms across the country, including in the Title I schools I work with. As an educator and developmental geneticist, I’ve seen how AI can offer real benefits: adaptive feedback for students, reduced grading time for teachers, and insights into learning patterns we might otherwise miss. But with all of this innovation, there’s something we don’t talk about enough: the ethics of it all. As we hand over more decision-making power to algorithms, we need to stop and ask: who is making sure these tools are fair, transparent, and safe for our students?
The Problem of Bias in AI
One major concern is bias in AI systems. We often think of machines as neutral, but that’s a myth. AI systems are trained on human data, and that data can carry hidden assumptions and prejudices. For instance, an AI used to predict which students are “at risk” of failing might be trained on data from a school district where students of color were historically over-disciplined or under-supported. If the model picks up on those patterns, it may unfairly flag certain students based on race or zip code.
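To make that mechanism concrete, here is a small, hypothetical sketch in Python using entirely synthetic data (none of the numbers describe a real district or any particular product). Because the historical “at risk” labels in this toy dataset were applied more often to students from one neighborhood, the model learns to treat a zip code indicator as a risk factor, and two students with identical grades end up with different risk scores.

```python
# Hypothetical sketch with synthetic data: a model trained on historically
# skewed "at risk" labels learns to lean on a zip code proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

# Academic signal: lower GPA genuinely raises the risk of failing.
gpa = rng.normal(3.0, 0.6, n).clip(0.0, 4.0)

# Demographic proxy: 1 = neighborhood that was historically over-flagged.
zip_flag = rng.integers(0, 2, n)

# Historical labels driven by GPA *and* by where a student lived,
# reflecting past over-flagging rather than actual performance.
logit = -2.0 * (gpa - 3.0) + 1.5 * zip_flag - 1.0
at_risk_label = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([gpa, zip_flag])
model = LogisticRegression().fit(X, at_risk_label)

print("learned weight on GPA:     ", round(model.coef_[0][0], 2))
print("learned weight on zip code:", round(model.coef_[0][1], 2))

# Two students with identical grades, different neighborhoods:
same_gpa = np.array([[2.5, 0.0], [2.5, 1.0]])
print("predicted 'at risk' probability:",
      model.predict_proba(same_gpa)[:, 1].round(2))
```

The model isn’t doing anything malicious; it is simply reproducing the pattern it was handed, which is exactly how historical over-flagging turns into automated over-flagging.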
This isn’t just hypothetical. In 2020, the New York Civil Liberties Union (NYCLU) raised concerns after Lockport City School District in New York deployed facial recognition technology that misidentified students of color at disproportionately high rates. Research from the MIT Media Lab, led by Joy Buolamwini, found that commercial facial recognition systems from companies like IBM, Microsoft, and Face++ had error rates of up to 34.7% for women with darker skin tones compared to less than 1% for men with lighter skin tones. These findings reveal how racial and gender bias can be baked into the very tools being used to monitor and make decisions about students.
And the most troubling part? These decisions often happen behind the scenes, without teachers or students even knowing it. A student could be denied access to an advanced placement course or placed in a remedial track based on an algorithm’s output that no human ever questioned. In 2020, a UK algorithm used to assign final grades when exams were canceled due to COVID-19 sparked national outrage after it downgraded students from under-resourced schools while favoring those from wealthier institutions. That controversy led to the system being scrapped, but it serves as a warning: biased data leads to biased decisions, even when delivered by high-tech tools.
The Privacy Concerns of Student Data
Then there’s the issue of data privacy. AI tools require massive amounts of data to function effectively: attendance records, test scores, typing speed, voice recordings, and even facial expressions. Consider a platform like ClassDojo, which tracks student behavior and progress in real time. While it can be a useful communication tool between teachers and parents, it also stores sensitive behavioral data that could follow a child for years. Who owns this information? Where is it stored? And what happens if it gets into the wrong hands?
This isn’t a theoretical concern. In 2022, The Markup reported that several popular edtech platforms, including ClassDojo and Edmodo, were found sharing user data with third-party advertisers, despite being used primarily in K–12 classrooms. In some cases, the data isn’t just stored by the school; it’s held and controlled by the company that created the platform. That raises serious questions about whether this data could be sold, used to train unrelated AI algorithms, or shared with external vendors.
A 2020 report from the Center for Democracy and Technology (CDT) found that many edtech companies were collecting more data than necessary and lacked clear policies about how long that data would be retained. One investigation by the Australian Association for Research in Education (AARE) revealed that vast amounts of student data, including behavioral records and digital engagement metrics, are often stored in the U.S. under laws that may not align with local protections, leaving schools and families uncertain about how the data is managed or who has access to it.
Another example comes from a 2024 investigation by the Associated Press, which exposed major privacy concerns around Gaggle, an AI surveillance tool used to monitor students’ activity on school-issued Chromebooks. The report found that nearly 3,500 sensitive student documents, including mental health notes and disciplinary reports, were accessible without proper protections, raising alarms about both overreach and data security.
Families are often left in the dark, with no easy way to opt out or control what’s being collected, and school districts may not always fully understand the implications either. As educators, we are entrusted with our students’ safety and well-being. That responsibility must now extend beyond the classroom, to protecting their digital identities, their privacy, and their long-term futures in an increasingly data-driven world.
The Black Box Problem: Lack of Transparency
Another concern is transparency, or the lack of it. Many AI systems function as “black boxes.” We see what goes in and what comes out, but we don’t know how the system made its decision. This becomes especially troubling in education, where some schools are using AI-powered tools to grade student essays. These systems often assign scores based on measurable criteria like sentence structure, grammar, and vocabulary use. However, they can fall short when it comes to evaluating creativity, nuance, or cultural context.
A notable example occurred in 2020, when the company behind the AI scoring tool used by EdX and other online learning platforms came under fire. In one case, a student submitted an essay with perfect grammar but little insight and received a higher score than another student whose work was deeply thoughtful and original but didn’t align with the algorithm’s preferred structure. Educators and researchers flagged this issue in multiple studies, including a high-profile MIT Technology Review article, which revealed that students were sometimes able to game the system by mimicking formulaic writing without demonstrating deep understanding.
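To see how this kind of gaming is even possible, here is a deliberately oversimplified, hypothetical scorer in Python. It measures only surface features (word count, vocabulary variety, average sentence length) with weights I made up for illustration; it is not the algorithm used by edX or any real platform, but it shows why padded, formulaic writing can outscore a short, genuinely insightful response.

```python
import re

def surface_score(essay: str) -> float:
    """Score an essay on surface features only: length, vocabulary
    variety, and average sentence length. The weights are invented for
    illustration and have no basis in any real scoring engine."""
    words = re.findall(r"[a-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    if not words or not sentences:
        return 0.0
    vocab_variety = len(set(words)) / len(words)      # type-token ratio
    avg_sentence_len = len(words) / len(sentences)
    return 0.5 * len(words) + 20 * vocab_variety + 0.5 * avg_sentence_len

formulaic = ("Education is unquestionably significant. Furthermore, technology "
             "facilitates substantially enhanced educational outcomes. Moreover, "
             "numerous scholarly considerations demonstrate that innovative "
             "methodologies generate considerable advantages. In conclusion, the "
             "aforementioned arguments conclusively establish that technology "
             "remains profoundly significant.")
thoughtful = "AI grades the shape of my sentences, not the weight of my ideas."

# The padded, formulaic passage outscores the shorter, pointed one.
print(round(surface_score(formulaic), 1))   # ~39.5
print(round(surface_score(thoughtful), 1))  # ~28.4
```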
So, how can AI be improved to fairly evaluate complex responses? More advanced systems now use large language models (LLMs), like GPT-4, which can better understand context and nuance. Some tools are even incorporating hybrid models, combining AI analysis with human review, to provide more balanced assessments. While these developments are promising, they’re not yet widespread in K–12 settings, largely due to cost, training barriers, and ethical concerns.
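What “combining AI analysis with human review” can look like in practice is easiest to show with a small routing sketch. Everything below is hypothetical: the AutoResult structure, the thresholds, and the assumption that the automated scorer reports a confidence value are all made up for illustration, not features of any specific product.

```python
from dataclasses import dataclass

@dataclass
class AutoResult:
    score: float        # 0-100 score from an automated scorer (stubbed out here)
    confidence: float   # the scorer's self-reported confidence, 0-1

def route_for_review(result: AutoResult,
                     min_confidence: float = 0.8,
                     borderline: tuple = (55.0, 70.0)) -> str:
    """Accept an automated score only when it is confident and not borderline;
    otherwise send the essay to a teacher."""
    if result.confidence < min_confidence:
        return "human review: low confidence"
    if borderline[0] <= result.score <= borderline[1]:
        return "human review: borderline score"
    return "accept automated score"

print(route_for_review(AutoResult(score=92.0, confidence=0.95)))  # accept automated score
print(route_for_review(AutoResult(score=60.0, confidence=0.95)))  # human review: borderline score
print(route_for_review(AutoResult(score=88.0, confidence=0.40)))  # human review: low confidence
```

The design choice that matters here is simple: the automated score is a suggestion that must earn acceptance, and anything uncertain or borderline lands on a teacher’s desk rather than in a student’s record.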
Ultimately, if teachers can’t explain why a student received a particular grade, or if an algorithm’s decision contradicts their professional judgment, we risk undermining trust in both the technology and the educational process. Ensuring that AI tools support, rather than replace, human insight is key to maintaining fairness, accuracy, and transparency in student evaluation.
What Can We Do to Address These Issues?
So what can we do about all this? First, we need more transparency from edtech companies. They should be required to disclose how their AI systems are trained, what data is being used, and how decisions are made. We also need independent researchers and educators to be able to audit these tools for fairness, accuracy, and bias.

Second, schools and districts need stronger policies around data privacy and governance. Parents should have a clear understanding of what data is being collected, how it’s being used, and how to opt out. Students should have the right to request that their data be deleted or reviewed.

Third, and maybe most importantly, we need to make ethics part of the conversation from the beginning. Decisions about which AI tools to adopt shouldn’t be left to IT departments alone. Teachers, families, and students must be at the table. We need to ask not just whether a tool works, but whether it aligns with our values as educators.
Embracing AI with Caution
AI in education isn’t going away, and it shouldn’t. These tools have the potential to make learning more inclusive, personalized, and responsive. But we can’t let convenience or novelty override our responsibility to the people behind the data. The students. The parents. The teachers. The communities. It’s not just about asking if we can use AI in the classroom. It’s about asking whether we should, and under what conditions.
Because at the end of the day, education is about people. Real, complex, creative people. And no matter how advanced our technology becomes, it should always serve them, not the other way around.