Mapping the Invisible
The Ethics of Generative AI in Education
When one does not see what one does not see, one does not even see that one is blind. — Paul Veyne

Generative AI is reshaping education at every level — from the planetary systems that power it to the intimate spaces where students learn to think for themselves. But conversations about the ethics of this transformation keep fragmenting. One person raises environmental costs; another worries about student identity. A policymaker focuses on governance gaps; an educator grapples with what's happening in their classroom.

These aren't competing concerns. They're different scales of the same complex phenomenon. A policymaker worried about governance and an educator worried about student wellbeing are not discussing unrelated topics — they are observing different points along the same causal chain. Before we can have productive conversations about what to do, we need a shared vocabulary for recognizing how these concerns connect — a shared map of what we're talking about.

This interactive framework provides that map. Drawing on cognitive flexibility theory — the principle that complex, ill-structured domains require multiple traversals from different vantage points — it organizes the ethical landscape of AI in education across four nested scales, from global forces to individual experience, revealing how pressures at one level cascade into conditions at others.

How to explore

Scroll through four scales in the center column — Global, Institutional, Epistemic, and Individual — or use the navigation on the left. Each reveals a different dimension of the ethical landscape.

Open categories within each scale to see specific concerns. Click subcategories for detailed descriptions.

Trace a thread in the right column. Select a cross-cutting theme — Surveillance, Standardization, or Labor — to see how that concern transforms at each scale. Click any card to jump between scales.

I. Global

Planetary Forces & Economic Systems

The broadest forces creating the material and economic conditions within which all educational decisions about AI occur. Carbon emissions, resource extraction, labor exploitation, and digital divides — often invisible to those making classroom-level decisions, yet fundamentally shaping what tools are available, at what cost, and with what hidden consequences.

A. Environmental Injustice

The physical and ecological costs of AI infrastructure, systematically externalized and rendered invisible to end users in educational settings.

Climate Impact & Resource Extraction

Massive computational requirements generate significant carbon emissions and consume enormous energy and water. Data centers disproportionately burden marginalized communities. Rare earth minerals — cobalt, lithium — are extracted under exploitative conditions in the Global South, perpetuating colonial patterns where wealthy nations consume technology while poor nations supply raw materials at tremendous human and ecological cost.

Digital Inequality & Access

AI tools require substantial computational resources, reliable internet, and current hardware — widening educational gaps. The ecosystem increasingly assumes AI literacy and access, creating new forms of exclusion operating between Global North and South, urban and rural, well-funded and under-funded districts.

Temporal Lock-in

Once educational systems reorganize around AI, reversal becomes extremely difficult — creating path dependencies that constrain future possibilities. Current decisions are made without input from future students who will inherit these dependencies, raising questions about intergenerational consent.

B. Economic Extraction

How AI commodifies human creativity, labor, and learning while perpetuating global economic inequalities.

Labor Exploitation & Displacement

AI development relies on poorly compensated workers performing psychologically damaging content moderation — labor essential to making systems appear safe, yet kept invisible. Within education, AI threatens to displace educators, even as educators inadvertently generate the very data that may automate their own work.

Intellectual Property & Cultural Theft

AI models are trained on copyrighted works without compensation or consent. This extraction extends to Indigenous knowledge systems and non-Western epistemologies consumed and repackaged for profit, perpetuating colonial dynamics of cultural appropriation.

Surveillance Capitalism & Data Commodification

At the systemic scale, surveillance operates as global data commodification — the transformation of human experience into extractable value. Educational platforms transform intimate details of student thinking into profitable commodities, extending surveillance capitalism into the most private spaces of human development.

II. Institutional

Organizations & Governance

Where global forces become local realities. Schools, districts, and universities negotiate with vendors, establish policies, and allocate resources — mediating between planetary pressures and classroom experience. This is where surveillance capitalism materializes as specific contracts, where corporate consolidation becomes vendor lock-in.

A. Corporate Control

How commercial interests shape educational priorities and infrastructure for profit rather than pedagogical goals.

Market & Institutional Capture

Corporations prioritize engagement metrics over educational outcomes. Vendor lock-in reduces autonomy. Unlike previous technologies, GenAI embeds into existing infrastructure — Google Workspace, Microsoft Office, LMS platforms — making opting out nearly impossible and bypassing traditional consent mechanisms.

Surveillance & Algorithmic Harm

At the institutional scale, surveillance materializes through collection infrastructure and policy frameworks. Platforms collect vast data about student thinking, creating permanent digital artifacts. AI perpetuates biases with compounding effects across intersecting identities.

B. Failures of Governance

The governance challenges and professional impacts facing institutions and educators adopting AI.

Governance Gaps & Policy Vacuum

Deployment has outpaced regulation, creating legal ambiguity around data ownership, accountability, and rights. Technology companies shape policy through regulatory capture. Crisis-driven adoptions establish precedents that persist long after emergencies end.

Erosion of Professional Integrity

Institutions adopt AI without adequate frameworks, placing educators in ethically complex situations without support. Algorithmic supervision undermines professional autonomy, contributing to burnout and alienation from the creative, relational aspects of teaching.

III. Epistemic

Knowledge & Pedagogy

How AI reshapes the fundamental processes of learning, knowing, and teaching. This is where institutional decisions materialize as changes in practice — where "personalization" meets standardized output, where efficiency gains come at the cost of pedagogical intentionality, and where assessment systems break down.

A. Threats to Knowledge Integrity

Epistemological challenges when AI-mediated systems reshape how knowledge is produced, validated, and circulated.

Crisis of Truth & Verification

AI generates confident but incorrect information, challenging students' epistemological development. Synthetic content floods information ecosystems, complicating authenticity and credibility assessment.

Academic Integrity & Research Automation

AI-generated research may justify AI adoption, creating circular validation. The automation of scholarship raises fundamental questions about what counts as knowledge.

Hidden Cognitive Curricula

AI encodes particular thinking styles — linear, productivity-oriented — while suppressing divergent, contemplative, or non-Western approaches. This hidden curriculum shapes not just what students learn but how they learn to think, narrowing what counts as legitimate thought itself.

B. Pedagogical Disruption

How AI disrupts the actual processes through which learning occurs and teaching is practiced.

Learning Process Disruption

Eliminating productive struggle impedes critical thinking and metacognitive development. AI compresses learning time, collapsing hours of contemplation into seconds — eliminating the pauses, confusion, and consolidation that allow knowledge to become truly owned.

Pedagogical Standardization & Flattening

AI generates standardized responses that fail to account for infinite contextual variability. AI-generated materials appear sophisticated but lack depth — curriculum-shaped objects that mimic the form of good materials without their substance.

Erosion of Educational Purpose

Teaching involves ethical responsibilities beyond content delivery — modeling critical thinking, fostering moral development, nurturing intellectual courage. AI increasingly frames education through efficiency metrics rather than humanistic growth.

IV. Individual

Development & Agency

Where all upstream forces ultimately arrive. Students' psychological development, relationships, sense of agency, and formation of identity. A student experiencing learned helplessness or struggling to develop an authentic intellectual voice is experiencing the downstream effects of decisions made at every other scale.

A. Relational Harm

How AI affects students' capacity for human connection, social-emotional growth, and meaningful relationships.

Artificial Relationships & Social Isolation

Students develop parasocial relationships with AI — one-sided attachments lacking reciprocity, growth, and authenticity. These may interfere with the development of social skills and capacity for genuine intimacy that emerge only through actual human interaction.

Emotional Dependencies

Students may develop dependencies on AI's consistent, non-judgmental responses — without the complications of human relationships. Teachers serve as crucial emotional anchors; technological substitutes create affective mismatches.

Surveillance & Loss of Trust

At the individual scale, surveillance transforms pedagogical relationships themselves. When AI monitors behavior, relationships shift from trust-based connection to algorithmic discipline. Students become less likely to take intellectual risks or engage authentically.

B. Loss of Agency & Identity

How AI shapes students' sense of self, capacity for independent thought, and identity formation during critical developmental periods.

Threats to Identity & Authenticity

The question "Who am I as a thinker?" becomes harder to answer when expressed thought originates from AI rather than from personal struggle. And because AI interactions are private, this intellectual development occurs invisibly to educators and mentors.

Agency & Learned Helplessness

Over-reliance creates dependency undermining confidence. This fosters learned helplessness — a belief that one's own thinking is inadequate. Yet students also actively negotiate, resist, and creatively reappropriate these tools, finding agency in unexpected places.

Developmental Disruption

Instant AI responses reinforce expectations for immediate solutions, undermining patience and persistence. The compression of developmental time prevents gradual maturation — the rhythms that allow confusion to resolve into clarity, struggle into competence.

This framework doesn't prescribe what to do. It offers a shared vocabulary for seeing how ethical concerns manifest differently depending on where we stand — and how they connect.

Credits & Acknowledgments

Work in progress. This interactive framework is under active development. We welcome your feedback — please review with a critical and skeptical eye. Reach us at punya@asu.edu and mkheath@loyola.edu.

Authors

Punya Mishra & Marie Heath

With Lindsey McCaleb and Nicole Oster

Part of a broader research project on the ethics of generative AI in education. Contact us for details.

Design & Development

Interactive framework designed and built in collaboration with Claude AI (Anthropic).

Theoretical Foundations

The interaction design draws on Cognitive Flexibility Theory (Spiro et al.) — the principle that complex, ill-structured domains require criss-crossing the same landscape from multiple vantage points to develop flexible understanding.

License

Content and design are offered for educational use. Citation appreciated.

Mapping the Invisible — 2026