
China Is Practicing. We're Still Planning.

AI leadership isn’t just about building better models; it’s about how quickly a society can learn to use them. After seeing cross-generational AI learning happening in China, this piece examines the White House’s new AI policy framework and asks a deeper question: is the United States treating AI as a policy problem, or as a learning challenge? From education systems to workforce development and governance structures, the real gap may not be regulation, but our ability to normalize AI learning across families and communities.

Two weeks ago, I saw something that stood out like a dandelion in fresh-cut grass: a social media post showing people in Beijing gathered outside the Baidu office, waiting to use agentic AI, specifically OpenClaw, and sitting around teaching each other how. And it wasn’t just young people. It was everybody. Different generations and, I surmise, different backgrounds, all in the same space. Younger folks helping older learners, and everybody learning at the same time. It blew me away, but if I’m being honest, it didn’t surprise me. Because what I was looking at wasn’t just a tech demo. It was a society practicing how to learn together.

As a (currently idle) career and technical educator, I’ve often said that public schools could shift outcomes almost overnight if we created structured opportunities for parents and students to learn together. Not occasionally. Not as an afterthought. But intentionally, once a week or even a couple of times a month, with families engaging in learning side by side. Because the reality is, you’ve got students going home with material their parents don’t understand, and parents who have been out of formal education for years, sometimes decades, trying to support learning in a world that has moved on without them. And we act like that gap doesn’t matter. It does. Now layer AI on top of that, and the gap becomes even more significant.

If we’re serious about AI dominance, then we have to be serious about lifelong learning, not as a catchy slogan, but as a system. Not just for workers. Not just for students. For families. That’s why what’s happening in China matters. Reporting from CNBC highlighted how companies like Baidu and Tencent are pushing public-facing agentic AI adoption through OpenClaw, with people showing up in large numbers to learn how to use it. What stood out wasn’t just the technology, it was the normalization. This wasn’t gated. This was public. In my opinion, AI dominance won't be about which country's companies build the best models. It’ll be about which country builds the society that learns and adapts fastest.

On Friday, the White House released its National Policy Framework for Artificial Intelligence, and I took the time to sit with it overnight. There’s a lot in the document that I agree with, and there are areas I think need tweaking. Starting with protecting children and empowering parents: I’m glad that’s where the framework begins, because we missed this moment with social media. We let things scale before putting real guardrails in place, and we’ve been playing catch-up ever since. This framework is trying to get ahead of that by calling for stronger parental controls, protections against sexual exploitation and harm, and clear limits on how children’s data can be used in model training and advertising. It also makes an important point that Congress should avoid vague standards that lead to excessive litigation and should not prevent states from enforcing their own child protection laws, even when AI is involved. This felt fitting, a win for states right up front.

In the section on safeguarding and strengthening American communities, I think the framework gets some key things right, especially around protecting seniors from AI-enabled scams. That is already happening, and it’s only going to get more advanced. I also appreciate the focus on including small businesses, providing grants, tax incentives, and technical assistance to help them adopt AI tools. But I want to be clear about something: access is not the same as understanding. If we want people and communities to actually benefit from AI, we cannot just hand them tools. We have to create environments where they can learn how to use those tools with depth. That’s what stood out to me when I saw what China is doing with agentic AI. They are not just building technology, they are building users.

When it comes to intellectual property and supporting creators, I was genuinely glad to see the word “creator” centered in the framework, because it acknowledges what our economy has been for a minute and where it's heading. The framework recognizes that there is a real and valid debate around whether training AI models on copyrighted material is lawful and supports allowing the courts to resolve that issue. I agree with that approach. But when it comes to the suggestion that Congress should consider licensing frameworks without addressing when or whether licensing is required, I have questions. If Congress is not going to define that, then who is? Leaving that level of ambiguity in place creates uncertainty and litigation...and litigation and uncertainty slow innovation.

On the issue of free speech, I think the framework is on solid ground. It makes clear that artificial intelligence should not be used to suppress lawful expression or be manipulated based on ideology or partisanship. I think there certainly should be mechanisms for people to seek redress if government overreach occurs. But when we move into the section on enabling innovation and ensuring American AI dominance, that’s where I think we need to take a harder look. The framework states that Congress should not create a new federal AI regulatory body and should instead rely on existing agencies and the subject matter expertise within them. I understand the concern about expanding bureaucracy, but I don’t think that approach is sufficient. Artificial intelligence cuts across too many domains (education, labor, national security, infrastructure) for us to rely solely on existing structures without a centralized layer of coordination. Without that, we risk fragmentation, and fragmentation slows response. We have already seen how a lack of early coordination in emerging technologies (e.g., cryptocurrency) can delay national direction. If we are serious about AI, then we need a hybrid approach, one that leverages existing expertise but also creates alignment across the system.

The section on educating Americans and developing an AI-ready workforce is the reason I wrote this in the first place. I have no objections here. The framework calls for integrating AI into education and workforce programs, expanding research on workforce shifts, and supporting institutions in launching demonstration projects and youth development initiatives. That is exactly the direction we should be moving in. But I would push this even further. Public schools should become hubs for cross-generational learning: parents and students learning together, not in isolation. And if government feels that is too large to take on alone, then employers should step in with structured programs that allow employees and their children to learn AI tools together, supported by (tax) incentives. That’s an effective way to normalize cross-generational learning at the societal level.

Finally, when it comes to establishing a federal framework while preempting burdensome state laws, I understand the intent. Artificial intelligence is inherently interstate, and a fragmented regulatory landscape can slow innovation. But we cannot structure this in a way that sidelines states. States have been at the forefront of some of the most meaningful protections we have, particularly around children, fraud, and consumer safety, and the framework does preserve some of that authority. Still, I believe states should play a more active role in shaping the broader direction of AI policy, particularly as it relates to data centers and their effects on local communities. Their proximity to communities provides insight that a purely federal approach cannot replicate. And one area where I strongly disagree is the idea that states should not be allowed to penalize AI developers for third-party misuse of their models. We already have regulatory parameters in place that require companies to conduct due diligence on third-party risk. That expectation should not disappear simply because AI is involved. If anything, it should become stronger, and states should be allowed to course-correct.

When I step back and look at all of this, the OpenClaw learning environments in China, the White House framework, and the broader direction we are heading, I keep coming back to the same conclusion. We are spending a lot of time thinking about how to regulate/structure artificial intelligence, but not enough time thinking about how to learn it and use it strategically as a nation. And that is a big gap. Because the countries that lead in AI will not just be the ones with the most advanced models. They will be the ones where learning AI feels agile, public, and shared, where parents understand what their children are using, where workers can adapt without starting from zero, and where communities are not left behind as technology moves forward swiftly.

A question I think we should be asking after this framework drop is not just whether we can build AI, but whether we can build a society that knows how to use it independently and interdependently at scale (before another country does). Because right now, other countries are practicing. And we are still planning. And this...THIS is the war I want to win: U.S. AI dominance.

Sources:

The White House. A National Policy Framework for Artificial Intelligence. March 20, 2026.

CNBC. How China Is Getting Everyone on OpenClaw, From Gearheads to Grandmas.
https://www.cnbc.com/2026/03/18/china-openclaw-baidu-tencent-ai.html


The Gender Pay Gap Is Widening. AI Adoption May Be Part of the Story.

Two developments reported this week raise an interesting question about the future of workplace equity.

New data shows the gender pay gap widened again in 2024, with women earning 81 cents for every dollar paid to men, down from 83 cents in 2023 and 84 cents in 2022.

At the same time, survey data suggests men are currently adopting artificial intelligence tools more often than women, and are more likely to see AI as a useful assistant than as something to be skeptical of.

Individually, these trends might seem unrelated.

But together they point to a larger issue: if AI becomes a core productivity tool in knowledge work, uneven adoption could shape who benefits most from the next phase of workplace transformation.

The full post explores why this moment may represent an early inflection point, and why women professionals should start thinking carefully about how they engage with AI now.

Two articles published today caught my attention. With Women’s History Month underway and International Women’s Day approaching on Sunday, the timing feels particularly relevant.

One article reports that the gender pay gap is widening again, reversing progress that had been made in recent years. The other highlights that men are using artificial intelligence more frequently than women. The first point is troubling. The second, unfortunately, is not surprising.

According to a Glassdoor analysis cited by CNBC, progress toward closing the gender pay gap has been slow and inconsistent. In fact, the gap widened for the second consecutive year in 2024. Women earned 81 cents for every dollar paid to a man, down from 83 cents in 2023 and 84 cents in 2022.

At the same time, another CNBC report highlights a separate but related dynamic: men are currently using artificial intelligence tools more often than women. In the CNBC article “AI’s Got a Gender Gap: Women Are More Skeptical,” survey data suggests that men are more likely to view AI as a valuable assistant, while women tend to approach the technology with greater skepticism.

That perception gap may help explain the difference in adoption.

For decades, many administrative and operational roles have been disproportionately held by women. Today, approximately 80% of administrative professionals are women, according to workforce data from the International Association of Administrative Professionals.

The challenge is that many of the tasks historically associated with administrative work (scheduling, information gathering, documentation, coordination) are precisely the types of activities that AI systems increasingly assist with.

When a new technology enters a domain that has historically been “owned” by a professional group, it can easily be perceived as a competitor rather than a collaborator.

That reaction is understandable.  But it may also be strategically risky.

According to Microsoft estimates referenced earlier this year, approximately 16.3% of the global population was using generative AI tools as of early 2026. That means most people are still experimenting with the technology, and relatively few professionals are building structured workflows around it.

The implication is important: the field is still wide open.

Artificial intelligence does not simply replace tasks, it can also expand the scope of what professionals are able to do.

For example, instead of spending hours manually gathering information for a briefing or presentation, a professional could use AI to rapidly summarize multiple articles, extract key insights, and generate a first draft of a research memo in minutes. The human role then shifts from performing the basic task to interpreting the insights, refining the analysis, and making strategic decisions based on the information.
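To make that shift concrete, here is a minimal sketch of what such a workflow could look like in code. It assumes the OpenAI Python SDK with an API key set in the environment; the model name, prompts, and file names are illustrative placeholders, not a recommendation of any particular tool.

```python
# Minimal sketch: several source articles in, one draft research memo out.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative; any capable chat model would work

def summarize(article_text: str) -> str:
    """Condense one article into a few key-insight bullets."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Summarize this article in 3 concise bullet points."},
            {"role": "user", "content": article_text},
        ],
    )
    return resp.choices[0].message.content

def draft_memo(summaries: list[str]) -> str:
    """Fold the per-article bullets into a first-draft memo for human review."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Write a one-page research memo from these notes."},
            {"role": "user", "content": "\n\n".join(summaries)},
        ],
    )
    return resp.choices[0].message.content

# Hypothetical inputs: plain-text copies of the articles being briefed on.
articles = [open(path).read() for path in ["article_1.txt", "article_2.txt"]]
print(draft_memo([summarize(a) for a in articles]))
```

The point of the sketch is the division of labor: the model handles the gathering and first-pass drafting, while the professional's time shifts to verifying, refining, and deciding.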

Used this way, AI becomes less of a threat and more of a capability multiplier.

For women who want to maintain, and expand, the workplace gains made over the past several decades, engaging with AI cannot remain optional. It must move beyond occasional experimentation and toward intentional use in research, decision support, workflow design, and operational analysis.

Artificial intelligence will continue to reshape how work gets done across industries.

The question is not whether this transformation will happen. It already is.

The more important question is who chooses to learn how to work with these systems, and who chooses to sit on the sidelines while others define the next generation of work.

As International Women’s Day approaches, this moment may be less about celebrating past progress and more about thinking strategically about the next frontier of professional leverage.

Sources

CNBC. AI’s Got a Gender Gap: Women Are More Skeptical
https://www.cnbc.com/2026/03/06/gender-gap-in-ai-revealed-in-cnbc-surveymonkey-women-at-work-survey.html

CNBC. Gender Pay Gap Doubles Over the Course of Women’s Careers
https://www.cnbc.com/2026/03/06/gender-pay-gap-doubles-over-the-course-of-womens-careers-glassdoor-report.html


Video Review: Learning in the Age of AI: What Education Is Optimizing For and What Employers Should Be Watching

Artificial intelligence is forcing a long-overdue reckoning in education, not just in how students learn, but in what learning is actually for. In this piece, I reflect on insights from Stanford Graduate School of Education Dean Dan Schwartz and examine what AI-driven, individualized learning could mean for workforce readiness, employer expectations, and early-career hiring. Drawing on my experience as a former Career and Technical Education teacher and my current work in executive support, I explore the growing tension between education systems optimized for personalization and employers still structured around standardization, and why adaptability, self-learning, and human-in-the-loop thinking may become the most valuable skills of all.

This week, I took a deliberate trip down education lane to better understand how artificial intelligence is shaping learning, not just in theory, but in practice.

That curiosity is both personal and professional. I previously taught as a New York State–licensed Career and Technical Education (CTE) teacher in a New York City PROSE school, a model built on the premise that students learn differently and that education must make room for real-world, applied skill development. Today, I work in executive support, partnering closely with senior leaders and organizations as they navigate operational considerations.

So when I watched “Learning in the Age of AI: Critical Insights” featuring Stanford Graduate School of Education Dean Dan Schwartz, hosted by Alpha School co-founder MacKenzie Price, I wasn’t watching as a neutral observer. I was watching with a bias, and I think that matters.

Acknowledging My Bias Up Front

Dean Schwartz opens by noting that most people approach education with deeply held preconceived notions. I agree, and I include myself in that assessment.

My bias comes from teaching at the 11th and 12th grade level, the tail end of a student’s formal education. In CTE, there’s an unspoken contract: if I can’t help students leave with tangible, market-relevant skills, I’m not doing my job. While education absolutely exists to expose students to ideas and plant intellectual seeds, my lens is unapologetically workforce-adjacent. It’s no accident that Rockefeller established the GEB to produce good workers, not “good knowingness.” Public education was created because workers were (the first) widgets. That framing shaped how I heard everything that followed in the discussion.

AI as a Mirror: What Learning Science Is Finally Forcing Us to Admit

One of the most compelling points Dean Schwartz made is that AI has become a reflection of learning science itself. We used theories of how humans learn to train AI systems, and now AI is forcing us to confront an uncomfortable truth:

Many educational practices have been repeated for decades without strong empirical evidence that they actually work.

His example of traditional word problems landed for me. As educators, we often assume familiarity equals effectiveness. AI, ironically, is exposing where that assumption breaks down.

He also dismantled the idea that “learning” is a single process. Learning is actually multiple systems operating together (acquiring something new, retrieving known information, practicing fluency), each with different cognitive “appetites.”

That insight matters because while learning science increasingly understands these systems, education infrastructure hasn’t caught up. Schools are still built for standardization, not cognitive nuance.

From an executive operations perspective, this gap feels familiar. Organizations often know how work actually happens, but their systems, workflows, and incentives lag behind that knowledge.

Individualized Learning: A Promise With Employer Consequences

One of the most frequently cited benefits of AI in education is its ability to provide individualized learning at scale, something no single teacher managing 20+ students can realistically do.

In theory, this is fabulous news.

But here’s the question that stayed with me, especially given education’s historical role as a labor-force pipeline:

Will hyper-individualized education better prepare students for the workforce, or will it create such nuanced learning paths that employers are forced to fundamentally rethink how they recognize skill and talent?

Dean Schwartz emphasized creativity as a core competency for working effectively with AI. I agree. But if AI-driven education optimizes for highly personalized creativity, employers may soon face early-career candidates whose skills are deep but non-standardized, adaptive but difficult to benchmark.

That raises downstream questions for hiring, assessment, and workforce design, questions most employers are not yet asking loudly.

Automation vs. Transformation in Education Systems

Dean Schwartz voiced a concern that resonated with me: AI could simply automate existing educational systems, including the ones we actually want to change.

Dean Schwartz and Mrs. Price didn't fully unpack it, but I kept asking myself why this would be both good and bad. I guess I'll have to come back to that bit.

In rule-based domains like math, AI is naturally well-suited to grading, tutoring, and feedback. That creates efficiency and frees up teacher bandwidth. In flexible school models, like NYC PROSE schools, that bandwidth could be redirected toward individualized, applied learning.

But that assumes school systems can:

  • Recognize individualized learning as legitimate

  • Measure it meaningfully

  • Operationalize it at scale

Large, urban school systems already struggle with data integrity even under standardized testing regimes. AI doesn’t remove that problem, it raises the stakes. We will need to fundamentally redefine what data we care about, why we collect it, and how it informs decisions.

AI, Observation, and the Changing Role of Teachers

Dean Schwartz mentioned emerging tools that can analyze classroom engagement via cameras, identifying disengagement or emotional states at a high level.

This immediately raised another set of questions for me, particularly given current political and demographic realities.

Teachers are increasingly working with students whose parents do not speak English. Many of them rely on tools like Google Translate. But could AI evolve into something more powerful, a genuine bridge between parents, students, and teachers?

If AI tightens that feedback loop:

  • Do parents become more engaged?

  • Do expectations become clearer?

  • Does accountability improve?

From an operational standpoint, this would represent not just efficiency, but a redesign of stakeholder communication in education.

Everyone Is Becoming a Creator: Will That Shift Consumption?

Another insight that stood out was Dean Schwartz’s observation that AI is turning everyone into a creator, not just a consumer.

That made me pause.

If AI lowers the barrier to creation across disciplines, does that fundamentally alter American consumer culture? Do students, and eventually employees, approach work less as passive recipients and more as active co-producers?

For employers, this has implications for:

  • Training models

  • Performance evaluation

  • Intellectual ownership

  • Risk governance

This is not just an education issue, it’s an enterprise design issue.

Experiential Learning, Corporate L&D, and Decision Making

Dean Schwartz highlighted how AI enables experiential learning in fields like architecture and engineering, allowing students to experience their designs in real time.

My immediate question was: Why isn’t this more prevalent in corporate learning environments?

Could AI-enabled simulation help employees:

  • Understand the second-order effects of decisions

  • Practice risk-aware judgment

  • See consequences before they materialize

For organizations grappling with AI governance, compliance, and operational risk, this feels like an underutilized opportunity.

Employers, Skills, and the Case for Self-Learners

Employers often argue that schools, especially higher education, don’t teach job-ready skills. Dean Schwartz pushed back, noting that universities can’t realistically train for every employer’s needs. Their role is to provide a broad foundation that employers then deepen.

I agree, but I think the new signal employers should watch for is something else entirely:

The ability to self-learn, cross-pollinate ideas, and pursue curiosity beyond formal job scope.

That aligns directly with Dean Schwartz’s warning that we’ve underestimated how much knowledge humans still need in order to use AI well, including the ability to evaluate outputs, recognize quality, and iterate intelligently.

AI doesn’t reduce the need for fundamentals. It raises the bar.

Cheating and Workforce Readiness

One moment that unsettled me was the claim that 60% of K–12 students cheat, and that this number hasn’t increased with AI.  I still want to examine the source, but my compliance nose started twitching when I heard this.

This isn’t just a technology issue. It’s a character issue, and one with direct workforce consequences. If primary education tolerates or fails to meaningfully address this behavior, employers (again) inherit the downstream risk.

The question isn’t just how we catch cheating, but how we design systems that reinforce integrity, effort, and adaptive problem-solving.

Adaptability, Privacy, and Humans in the Loop

Dean Schwartz emphasized adaptability as the defining skill of the AI age, not creativity for creativity’s sake, but creation through adaptation. I agree wholeheartedly.

Parents raised valid concerns about privacy. Dean Schwartz suggested a compromise: if data is collected, it should be shared responsibly to improve learning tools, embedding social good into the system.

Another parent raised the idea that the future of work is managerial, humans managing agents at scale. That aligns directly with what many of us see coming: human-in-the-loop systems everywhere.

That, in my view, is the skill set students truly need to leave school with.
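For readers who have not seen the pattern spelled out, here is a minimal sketch of what a human-in-the-loop gate can look like in code. The agent's planning step is a stand-in function I made up for illustration, not any real agent framework; the shape of the loop is the point.

```python
# Minimal sketch of a human-in-the-loop gate: an "agent" proposes actions,
# and nothing executes until a person reviews and approves each one.

def propose_actions(task: str) -> list[str]:
    # Stand-in for a real AI agent's planning step (illustrative only).
    return [f"draft email about {task}", f"schedule meeting about {task}"]

def execute(action: str) -> None:
    # Stand-in for actually performing the approved action.
    print(f"EXECUTED: {action}")

def run_with_human_in_the_loop(task: str) -> None:
    for action in propose_actions(task):
        answer = input(f"Agent proposes: {action!r}. Approve? [y/N] ")
        if answer.strip().lower() == "y":
            execute(action)
        else:
            print(f"SKIPPED: {action}")  # the human retains final judgment

run_with_human_in_the_loop("Q3 budget review")
```

Managing agents at scale is essentially this loop repeated across many agents at once, which is why review, prioritization, and judgment become the core skills.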

Is Four Years Necessary?

Finally, the question many institutions are asking themselves: Is a four-year degree still necessary?

If so:

  • What makes those four years valuable?

  • What should they cost?

  • What should graduates actually produce for employers?

Those questions are not just academic, they're also operational.

Plain + Simple

AI is not just changing how students learn. It’s also quietly renegotiating the contract between education and employers.  

YouTube: https://www.youtube.com/watch?v=TJhtzJlSYIk 


Article Review: "The AI Question Every Job Candidate Should Be Prepared to Answer" and the One Companies Are Avoiding

Hiring has not collapsed in the age of artificial intelligence. It has stalled. As companies slow hiring and delay workforce decisions, AI is quietly reshaping the labor market in ways that are harder to measure but impossible to ignore.

In a low hiring, low firing economy, job candidates are increasingly expected to explain how they create value by working with AI, not resisting it. But this shift raises a larger question for employers. AI is not just a productivity tool. It is a risk multiplier that affects data governance, compliance, cybersecurity, and workforce strategy.

Despite claims that AI has not yet disrupted jobs, stalled hiring, frozen internal mobility, and delayed decision making suggest otherwise. Workforce disruption does not require mass layoffs to be real.

The real issue is not whether companies will deploy AI. It is whether leadership understands employee value, skills, and risk well enough to deploy it responsibly.

Trevor Laurence Jockims, professor of writing, literature, and contemporary culture at NYU, published a thoughtful piece for CNBC titled “The AI Question Every Job Candidate Should Be Prepared to Answer.” It is the kind of article that lands well at the beginning of the year: measured, forward-looking, and grounded in the reality that 2026 is shaping up to be a year of low hiring and low firing.

And on one central point, I could not agree more.

Jockims argues that employees who keep their jobs, and candidates who want them, must be able to articulate how they bring unique value by working with artificial intelligence, not in opposition to it. Not as passive users. Not as reluctant adopters. But as professionals who understand how AI can be integrated into their role in a way that is specific, thoughtful, and differentiated.

That framing is exactly right.

What we have seen over the past year is not mass displacement overnight, but something quieter and arguably more destabilizing: stagnation. Hiring slowed to a crawl. Internal mobility froze. And in many cases, employees were pushed out not because they were underperforming, but because leadership panicked, mistaking resistance to AI for redundancy and mistaking AI efficiency for a magic wand.

I have read countless stories of companies that preemptively cut headcount out of fear that employees would refuse to train the very systems that could replace them, and out of exuberance that maybe they did not need so many employees post-COVID anyway. And yes, I understand both sides.

Even as someone deeply engaged in learning about AI governance, compliance, and ethics, I remain skeptical, intentionally so. My concerns are not abstract. They are structural: data center environmental impact, privacy erosion, immature data governance frameworks, and a cybersecurity landscape where bad actors often wield the same tools as the rest of us, only more aggressively and with no holds barred.

AI does not just change workflows. It multiplies everything. Compliance, legal, and risk teams, lean ones especially, are nowhere near prepared for the volume of oversight this technology demands. If anything, I do not think most compliance practitioners have fully forecasted how much additional work AI creates for them and will continue to create as time moves forward.

That said, the article makes another point worth applauding: 40% of freelancers on Fiverr are already using AI to take on more work. That’s a positive signal. When used correctly, AI can expand capacity, elevate output, and allow individuals to move up the value chain. Less busywork. More judgment. More strategy.

Where I part ways with the article is the soft assertion (if we’re calling it that), implied or otherwise, that we have not yet seen AI disrupt the labor market.  With respect, that position requires an impressive level of denial.  When Sam Altman and Dario Amodei have openly expressed concerns about workforce displacement, and when you look at the 2025 labor reports in totality, it becomes very difficult to argue that nothing has changed. Yes, 2025 GDP made it out alive due to some sectors carrying others. But hiring stalled in ways that cannot be explained solely by interest rates or post pandemic normalization.

Something has frozen decision making.  And while we may not yet be able to quantify the exact percentage attributable to AI, pretending it is not a major factor does not make companies prudent, it makes them unprepared.

What I did strongly agree with is the article’s emphasis on a skills based labor market. Tenure no longer guarantees relevance. Time served does not equal skills maintained. But that cuts both ways.

We have already seen what happens when companies rush headlong into AI-first workforce decisions without truly understanding what their people do. Klarna’s decision to lay off 40% of its workforce during an AI policy shift, only to rehire many of them later, is a textbook example. Organizations often have a dangerously shallow understanding of individual contributions. They see outputs, not connective tissue. And AI does not replace invisible work nearly as easily as PowerPoint decks suggest.

The McKinsey projection cited in the article, that more than half of U.S. work hours could be automated, may very well prove true once we have the benefit of a longer time horizon. However, other jobs will grow. That part is inevitable.

The real question for 2026 is not whether companies will deploy AI, but how deliberately they will do it.

  1. Will they invest in learning and development for existing employees?

  2. Will they operationalize policies that already exist but live in binders no one opens?

  3. Will they add automation incrementally and then actually measure whether it produces good, usable data?

  4. Will they pair the chase for efficiency with governance, reflection, and accountability?

The smartest organizations will do all of the above. The rest will lurch forward, then scramble backward.

For job candidates, the AI question is not “Can you use it?” It is “Can you explain, clearly and credibly, how you create value because of it, without surrendering judgment, ethics, or responsibility?”


And for companies, the question they are avoiding is simpler: Do you actually know what your people do… before you decide a machine can do it better?

That is the conversation we should be having this year.


CNBC’s article: https://www.cnbc.com/2026/01/10/jobs-careers-ai.html
