Video Review: Learning in the Age of AI: What Education Is Optimizing For and What Employers Should Be Watching
This week, I took a deliberate trip down education lane to better understand how artificial intelligence is shaping learning, not just in theory, but in practice.
That curiosity is both personal and professional. I previously taught as a New York State–licensed Career and Technical Education (CTE) teacher in a New York City PROSE school, a model built on the premise that students learn differently and that education must make room for real-world, applied skill development. Today, I work in executive support, partnering closely with senior leaders and organizations as they navigate operational considerations.
So when I watched “Learning in the Age of AI: Critical Insights” featuring Stanford Graduate School of Education Dean Dan Schwartz, hosted by Alpha School co-founder MacKenzie Price, I wasn’t watching as a neutral observer. I was watching with a bias, and I think that matters.
Acknowledging My Bias Up Front
Dean Schwartz opens by noting that most people approach education with deeply held preconceived notions. I agree, and I include myself in that assessment.
My bias comes from teaching at the 11th and 12th grade level, the tail end of a student’s formal education. In CTE, there’s an unspoken contract: if I can’t help students leave with tangible, market-relevant skills, I’m not doing my job. While education absolutely exists to expose students to ideas and plant intellectual seeds, my lens is unapologetically workforce-adjacent. It’s no accident that Rockefeller’s General Education Board (GEB) was established to produce good workers, not “good knowingness.” Public education was created because workers were (the first) widgets. That framing shaped how I heard everything that followed in the discussion.
AI as a Mirror: What Learning Science Is Finally Forcing Us to Admit
One of the most compelling points Dean Schwartz made is that AI has become a reflection of learning science itself. We used theories of how humans learn to train AI systems, and now AI is forcing us to confront an uncomfortable truth:
Many educational practices have been repeated for decades without strong empirical evidence that they actually work.
His example of traditional word problems landed for me. As educators, we often assume familiarity equals effectiveness. AI, ironically, is exposing where that assumption breaks down.
He also dismantled the idea that “learning” is a single process. Learning is actually multiple systems operating together: acquiring something new, retrieving known information, practicing fluency. Each has its own cognitive “appetites.”
That insight matters because while learning science increasingly understands these systems, education infrastructure hasn’t caught up. Schools are still built for standardization, not cognitive nuance.
From an executive operations perspective, this gap feels familiar. Organizations often know how work actually happens, but their systems, workflows, and incentives lag behind that knowledge.
Individualized Learning: A Promise With Employer Consequences
One of the most frequently cited benefits of AI in education is its ability to provide individualized learning at scale, something no single teacher managing 20+ students can realistically do.
In theory, this is fabulous news.
But here’s the question that stayed with me, especially given education’s historical role as a labor-force pipeline:
Will hyper-individualized education better prepare students for the workforce, or will it create such nuanced learning paths that employers are forced to fundamentally rethink how they recognize skill and talent?
Dean Schwartz emphasized creativity as a core competency for working effectively with AI. I agree. But if AI-driven education optimizes for highly personalized creativity, employers may soon face early-career candidates whose skills are deep but non-standardized, adaptive but difficult to benchmark.
That raises downstream questions for hiring, assessment, and workforce design, questions most employers are not yet asking loudly.
Automation vs. Transformation in Education Systems
Dean Schwartz voiced a concern that resonated with me: AI could simply automate existing educational systems, including the ones we actually want to change.
Dean Schwartz and Ms. Price didn’t fully unpack it, but I kept asking myself how that outcome could be both good and bad. It’s a question I’ll have to return to.
In rule-based domains like math, AI is naturally well-suited to grading, tutoring, and feedback. That creates efficiency and frees up teacher bandwidth. In flexible school models, like NYC PROSE schools, that bandwidth could be redirected toward individualized, applied learning.
But that assumes school systems can:
Recognize individualized learning as legitimate
Measure it meaningfully
Operationalize it at scale
Large, urban school systems already struggle with data integrity even under standardized testing regimes. AI doesn’t remove that problem; it raises the stakes. We will need to fundamentally redefine what data we care about, why we collect it, and how it informs decisions.
AI, Observation, and the Changing Role of Teachers
Dean Schwartz mentioned emerging tools that can analyze classroom engagement via cameras, identifying disengagement or emotional states at a high level.
This immediately raised another set of questions for me, particularly given current political and demographic realities.
Teachers are increasingly working with students whose parents do not speak English, and many teachers rely on tools like Google Translate to communicate. But could AI evolve into something more powerful: a genuine bridge between parents, students, and teachers?
If AI tightens that feedback loop:
Do parents become more engaged?
Do expectations become clearer?
Does accountability improve?
From an operational standpoint, this would represent not just efficiency, but a redesign of stakeholder communication in education.
Everyone Is Becoming a Creator: Will That Shift Consumption?
Another insight that stood out was Dean Schwartz’s observation that AI is turning everyone into a creator, not just a consumer.
That made me pause.
If AI lowers the barrier to creation across disciplines, does that fundamentally alter American consumer culture? Do students, and eventually employees, approach work less as passive recipients and more as active co-producers?
For employers, this has implications for:
Training models
Performance evaluation
Intellectual ownership
Risk governance
This is not just an education issue; it’s an enterprise design issue.
Experiential Learning, Corporate L&D, and Decision Making
Dean Schwartz highlighted how AI enables experiential learning in fields like architecture and engineering, allowing students to experience their designs in real time.
My immediate question was: Why isn’t this more prevalent in corporate learning environments?
Could AI-enabled simulation help employees:
Understand the second-order effects of decisions
Practice risk-aware judgment
See consequences before they materialize
For organizations grappling with AI governance, compliance, and operational risk, this feels like an underutilized opportunity.
Employers, Skills, and the Case for Self-Learners
Employers often argue that schools, especially higher education, don’t teach job-ready skills. Dean Schwartz pushed back, noting that universities can’t realistically train for every employer’s needs. Their role is to provide a broad foundation that employers then deepen.
I agree, but I think the new signal employers should watch for is something else entirely:
The ability to self-learn, cross-pollinate ideas, and pursue curiosity beyond formal job scope.
That aligns directly with Dean Schwartz’s warning that we’ve underestimated how much knowledge humans still need in order to use AI well, including the ability to evaluate outputs, recognize quality, and iterate intelligently.
AI doesn’t reduce the need for fundamentals. It raises the bar.
Cheating and Workforce Readiness
One moment that unsettled me was the claim that 60% of K–12 students cheat, and that this figure hasn’t risen with AI. I still want to examine the source, but my compliance nose started twitching when I heard it.
This isn’t just a technology issue. It’s a character issue, and one with direct workforce consequences. If primary education tolerates or fails to meaningfully address this behavior, employers (again) inherit the downstream risk.
The question isn’t just how we catch cheating, but how we design systems that reinforce integrity, effort, and adaptive problem-solving.
Adaptability, Privacy, and Humans in the Loop
Dean Schwartz emphasized adaptability as the defining skill of the AI age, not creativity for creativity’s sake, but creation through adaptation. I agree wholeheartedly.
Parents raised valid concerns about privacy. Dean Schwartz suggested a compromise: if data is collected, it should be shared responsibly to improve learning tools, embedding social good into the system.
Another parent raised the idea that the future of work is managerial, humans managing agents at scale. That aligns directly with what many of us see coming: human-in-the-loop systems everywhere.
That, in my view, is the skill set students truly need to leave school with.
Is Four Years Necessary?
Finally, the question many institutions are asking themselves: Is a four-year degree still necessary?
If so:
What makes those four years valuable?
What should they cost?
What should graduates actually produce for employers?
Those questions are not just academic; they’re also operational.
Plain + Simple
AI is not just changing how students learn. It’s also quietly renegotiating the contract between education and employers.