Article Review: "The AI Question Every Job Candidate Should Be Prepared to Answer" and the One Companies Are Avoiding

Trevor Laurence Jockims, professor of writing, literature, and contemporary culture at NYU, published a thoughtful piece for CNBC titled, “The AI Question Every Job Candidate Should Be Prepared to Answer.” It is the kind of article that lands well at the beginning of the year: measured, forward-looking, and grounded in the reality that 2026 is shaping up to be a year of low hiring and low firing.

And on one central point, I could not agree more.

Jockims argues that employees who keep their jobs, and candidates who want them, must be able to articulate how they bring unique value by working with artificial intelligence, not in opposition to it. Not as passive users. Not as reluctant adopters. But as professionals who understand how AI can be integrated into their role in a way that is specific, thoughtful, and differentiated.

That framing is exactly right.

What we have seen over the past year is not mass displacement overnight, but something quieter and arguably more destabilizing: stagnation. Hiring slowed to a crawl. Internal mobility froze. And in many cases, employees were pushed out not because they were underperforming, but because leadership panicked, mistaking resistance to AI for redundancy and mistaking AI efficiency for a magic wand.

I have read countless stories of companies that preemptively cut headcount, partly out of fear that employees would refuse to train the very systems that could replace them, and partly out of exuberance that maybe they did not need so many employees post-COVID anyway. And yes, I understand the impulse on both sides.

Even as someone deeply engaged in learning about AI governance, compliance, and ethics, I remain skeptical, intentionally so. My concerns are not abstract. They are structural: data center environmental impact, privacy erosion, immature data governance frameworks, and a cybersecurity landscape where bad actors wield the same tools as the rest of us, only more aggressively and with no holds barred.

AI does not just change workflows. It multiplies everything. Compliance, legal, and risk teams, lean ones especially, are nowhere near prepared for the volume of oversight this technology demands. If anything, I do not think most compliance practitioners have fully forecasted how much additional work AI creates for them and will continue to create.

That said, the article makes another point worth applauding: 40% of freelancers on Fiverr are already using AI to take on more work. That is a positive signal. When used correctly, AI can expand capacity, elevate output, and allow individuals to move up the value chain. Less busywork. More judgment. More strategy.

Where I part ways with the article is its soft assertion, implied or otherwise, that we have not yet seen AI disrupt the labor market. With respect, that position requires an impressive level of denial. When Sam Altman and Dario Amodei have openly expressed concerns about workforce displacement, and when you look at the 2025 labor reports in totality, it becomes very difficult to argue that nothing has changed. Yes, 2025 GDP made it out alive because some sectors carried others. But hiring stalled in ways that cannot be explained solely by interest rates or post-pandemic normalization.

Something has frozen decision making. And while we may not yet be able to quantify the exact percentage attributable to AI, pretending it is not a major factor does not make companies prudent; it makes them unprepared.

What I did strongly agree with is the article’s emphasis on a skills based labor market. Tenure no longer guarantees relevance. Time served does not equal skills maintained. But that cuts both ways.

We have already seen what happens when companies rush headlong into AI-first workforce decisions without truly understanding what their people do. Klarna’s decision to lay off 40% of its workforce during an AI policy shift, only to rehire many of them later, is a textbook example. Organizations often have a dangerously shallow understanding of individual contributions. They see outputs, not connective tissue. And AI does not replace invisible work nearly as easily as PowerPoint decks suggest.

The McKinsey projection cited in the article, that more than half of U.S. work hours could be automated, may very well prove true once we have the benefit of a longer time horizon. But new jobs will also grow. That part is inevitable.

The real question for 2026 is not whether companies will deploy AI, but how deliberately they will do it.

  1. Will they invest in learning and development for existing employees?

  2. Will they operationalize policies that already exist but live in binders no one opens?

  3. Will they add automation incrementally and then actually measure whether it produces good, usable data?

  4. Will they temper efficiency promises with governance, reflection, and accountability?

The smartest organizations will do all of the above. The rest will lurch forward, then scramble backward.

For job candidates, the AI question is not “Can you use it?” It is “Can you explain, clearly and credibly, how you create value because of it, without surrendering judgment, ethics, or responsibility?”


And for companies, the question they are avoiding is simpler. Do you actually know what your people do… before you decide a machine can do it better?

That is the conversation we should be having this year.


CNBC’s article: https://www.cnbc.com/2026/01/10/jobs-careers-ai.html
