China is Practicing. We're Still Planning.
Two weeks ago, I saw something that stood out like a dandelion in fresh-cut grass: a social media post showing people in Beijing gathered outside the Baidu office, waiting to use agentic AI, specifically OpenClaw, and sitting around teaching each other how in the meantime. And it wasn’t just young people. It was everybody. Different generations and, I’d surmise, different backgrounds, all in the same space, younger folks helping older learners and everybody learning at the same time. It blew me away, but if I’m being honest, it didn’t surprise me. Because what I was looking at wasn’t just a tech demo. It was a society practicing how to learn together.
As an (idle) career and technical educator, I’ve often said that public schools could shift outcomes almost overnight if we created structured opportunities for parents and students to learn together. Not occasionally. Not as an afterthought. But intentionally: once a week, or even a couple of times a month, with families required to engage in learning side by side. Because the reality is, you’ve got students going home with material their parents don’t understand, and parents who have been out of formal education for years, sometimes decades, trying to support learning in a world that has moved on without them. And we act like that gap doesn’t matter. It does. Now layer AI on top of that, and the gap becomes even wider.
If we’re serious about AI dominance, then we have to be serious about lifelong learning, not as a catchy slogan, but as a system. Not just for workers. Not just for students. For families. That’s why what’s happening in China matters. Reporting from CNBC highlighted how companies like Baidu and Tencent are pushing public-facing agentic AI adoption through OpenClaw, with people showing up in large numbers to learn how to use it. What stood out wasn’t just the technology; it was the normalization. This wasn’t gated. This was public. In my opinion, AI dominance won’t be about which country’s companies build the best models. It’ll be about which country builds the society that learns and adapts fastest.
On Friday, the White House released its National Policy Framework for Artificial Intelligence, and I took the time to sit with it overnight. There’s a lot in the document that I agree with, and there are areas where I think we need to tweak. Starting with protecting children and empowering parents: I’m glad that’s where the framework begins, because we missed this moment with social media. We let things scale before putting real guardrails in place, and we’ve been playing catch-up ever since. This framework is trying to get ahead of that by calling for stronger parental controls, protections against sexual exploitation and harm, and clear limits on how children’s data can be used in model training and advertising. It also makes an important point: Congress should avoid vague standards that lead to excessive litigation and should not prevent states from enforcing their own child protection laws, even when AI is involved. That felt fitting, a win for states right up front.
In the section on safeguarding and strengthening American communities, I think the framework gets some key things right, especially around protecting seniors from AI-enabled scams. That is already happening, and it’s only going to get more advanced. I also appreciate the focus on including small businesses, providing grants, tax incentives, and technical assistance to help them adopt AI tools. But I want to be clear about something: access is not the same as understanding. If we want people and communities to actually benefit from AI, we cannot just hand them tools. We have to create environments where they can learn how to use those tools with depth. That’s what stood out to me when I saw what China is doing with agentic AI. They are not just building technology; they are building users.
When it comes to intellectual property and supporting creators, I was genuinely glad to see the word “creator” centered in the framework, because it acknowledges what our economy has been for a while now and where it’s heading. The framework recognizes that there is a real and valid debate over whether training AI models on copyrighted material is lawful, and it supports letting the courts resolve that issue. I agree with that approach. But when it comes to the suggestion that Congress should consider licensing frameworks without addressing when, or whether, licensing is required, I have questions. If Congress is not going to define that, then who is? Leaving that level of ambiguity in place creates uncertainty and litigation, and litigation and uncertainty slow innovation.
On the issue of free speech, I think the framework is on solid ground. It makes clear that artificial intelligence should not be used to suppress lawful expression or be manipulated based on ideology or partisanship, and I think there should certainly be mechanisms for people to seek redress if government overreach occurs. But when we move into the section on enabling innovation and ensuring American AI dominance, that’s where I think we need to take a harder look. The framework states that Congress should not create a new federal AI regulatory body and should instead rely on existing agencies and the subject-matter expertise within them. I understand the concern about expanding bureaucracy, but I don’t think that approach is sufficient. Artificial intelligence cuts across too many domains (education, labor, national security, infrastructure) for us to rely solely on existing structures without a centralized layer of coordination. Without that, we risk fragmentation, and fragmentation slows response. We have already seen how a lack of early coordination in emerging technologies (e.g., cryptocurrency) can delay national direction. If we are serious about AI, then we need a hybrid approach, one that leverages existing expertise but also creates alignment across the system.
The section on educating Americans and developing an AI-ready workforce is the reason I wrote this in the first place. I have no objections here. The framework calls for integrating AI into education and workforce programs, expanding research on workforce shifts, and supporting institutions in launching demonstration projects and youth development initiatives. That is exactly the direction we should be moving in. But I would push it even further. Public schools should become hubs for cross-generational learning, parents and students learning together, not in isolation. And if government feels that is too large to take on alone, then employers should step in with structured programs that allow employees and their children to learn AI tools together, supported by tax incentives. That is an effective way to normalize cross-generational learning at the societal level.
Finally, when it comes to establishing a federal framework while preempting burdensome state laws, I understand the intent. Artificial intelligence is inherently interstate, and a fragmented regulatory landscape can slow innovation. But we cannot structure this in a way that sidelines states. States have been at the forefront of some of the most meaningful protections we have, particularly around children, fraud, and consumer safety, and the framework does preserve some of that authority. Still, I believe states should play a more active role in shaping the broader direction of AI policy, particularly as it relates to data centers and their effects on local communities. Their proximity to communities provides insight that a purely federal approach cannot replicate. And one area where I strongly disagree is the idea that states should not be allowed to penalize AI developers for third-party misuse of their models. We already have regulatory parameters that require companies to conduct due diligence around third parties and risk. That expectation should not disappear simply because AI is involved. If anything, it should grow stronger, and states should be allowed to course-correct.
When I step back and look at all of this, the OpenClaw learning environments in China, the White House framework, and the broader direction we are heading, I keep coming back to the same conclusion. We are spending a lot of time thinking about how to regulate and structure artificial intelligence, and not enough time thinking about how to learn it and use it strategically as a nation. That is a big gap. Because the countries that lead in AI will not just be the ones with the most advanced models. They will be the ones where learning AI feels agile, public, and shared; where parents understand what their children are using; where workers can adapt without starting from zero; and where communities are not left behind as technology moves swiftly forward.
The question we should be asking now that this framework has dropped is not just whether we can build AI, but whether we can build a society that knows how to use it, independently and interdependently, at scale, before another country does. Because right now, other countries are practicing. And we are still planning. And this...THIS is the war I want to win: U.S. AI dominance.
Sources:
The White House. “A National Policy Framework for Artificial Intelligence.” March 20, 2026.
CNBC. “How China is getting everyone on OpenClaw, from gearheads to grandmas.” March 18, 2026. https://www.cnbc.com/2026/03/18/china-openclaw-baidu-tencent-ai.html