AI adoption is no longer a matter of "give the team a tool and they will use it." In 2026, one thing is becoming much clearer to me: internal AI usage should be managed like a product.
The signals from the last few days point in the same direction. Lenny's Newsletter covered Sendbird's internal AI adoption system: quests, token leaderboards, a skills marketplace, and visible rewards for completed automations. TechCrunch reported Cloudflare's claim that AI made some roles obsolete, even as the company posted record revenue. Product Hunt and Hacker News are full of agentic coding, agent analytics, vibe coding tools, and workflow automation products.
The pattern is bigger than any single tool. AI adoption is moving beyond training sessions and tool access. It is becoming an internal product problem that needs real product management.
"Use AI more" is not a product strategy
In many companies, AI rollout still looks like this: buy licenses, announce them in Slack, run a few workshops, and wait for usage to grow.
That is like launching a SaaS product without onboarding and hoping users will figure out the value on their own. Availability does not create behavior change. If people do not know when to use the tool, what job it improves, what success looks like, and where the risk boundary is, adoption will stay shallow.
This is where a PM lens helps. A good Product Manager does not only ask whether the feature is ready. They ask: at what moment will the user need this, what motivation will they have, and what result will tell them it was worth repeating?
The same question applies inside the company.
Internal users are still users
The interesting part of the Sendbird example is not only the tooling. It is the behavior design. Instead of telling everyone to "use AI," they turned real internal problems into quests. A team can say, "I want to automate this recurring report." Someone else can pick it up. The win becomes visible. Time saved, risk level, affected teams, and reuse potential become part of the system.
That is not just an enablement program. It is a product surface.
Internal users are no less complex than external customers. They also have limited time, strong habits, uneven motivation, and a healthy fear of breaking something. Access to an AI tool is not enough. They need a trigger, a safe first use case, a feedback loop, and a visible sense of progress.
These are product problems.
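To make the quest idea concrete, here is a minimal sketch of what a quest record might look like, using the fields mentioned above (time saved, risk level, affected teams, reuse potential). All names and the scoring rule are illustrative assumptions, not Sendbird's actual system.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Quest:
    """One internal automation quest, e.g. 'automate this recurring report'."""
    title: str
    owner: str                       # team that raised the problem
    claimed_by: Optional[str] = None # whoever picks it up
    hours_saved_per_week: float = 0.0
    risk_level: str = "low"          # low / medium / high
    affected_teams: list[str] = field(default_factory=list)
    reusable: bool = False
    completed: bool = False

    def score(self) -> float:
        """A visible 'win' score: reward time saved, boost reusable work."""
        return self.hours_saved_per_week * (2.0 if self.reusable else 1.0)

# A completed quest becomes a visible win, e.g. on a leaderboard.
q = Quest(title="Automate weekly revenue report", owner="finance",
          claimed_by="data-eng", hours_saved_per_week=3.0,
          affected_teams=["finance", "sales"], reusable=True, completed=True)
print(q.score())  # 6.0
```

The point of the structure is not the code itself: once a win is a record with a score, it can be ranked, rewarded, and reused, which is what turns an enablement program into a product surface.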
Token budget may become a product metric
Most companies still measure AI adoption with weak signals: how many people logged in, how many licenses are active, how many prompts were sent.
Those numbers can be useful at the start, but they are not enough. Prompt count is not value. Sometimes it is waste.
Better questions are more concrete:
- Which workflow actually got faster?
- Which team reduced a recurring weekly task?
- Which output passed human review and made it into real work?
- Which AI usage was stopped because the risk was too high?
- Was the token spend proportional to saved time or better quality?
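Questions like these map onto an event log, not a login count. A hedged sketch, assuming a hypothetical record of (team, task, tokens spent, accepted after human review):

```python
from collections import defaultdict

# Hypothetical usage events: (team, task, tokens, accepted_after_review)
events = [
    ("finance", "weekly-report", 12_000, True),
    ("finance", "weekly-report", 9_000, True),
    ("support", "ticket-drafts", 30_000, False),
    ("support", "ticket-drafts", 15_000, True),
]

def adoption_signals(events):
    """Per team: how many outputs survived review, and tokens per accepted win."""
    by_team = defaultdict(list)
    for team, _task, tokens, accepted in events:
        by_team[team].append((tokens, accepted))
    out = {}
    for team, rows in by_team.items():
        wins = sum(1 for _, ok in rows if ok)
        spent = sum(t for t, _ in rows)
        out[team] = {
            "outputs": len(rows),
            "accepted": wins,
            "tokens_per_accepted": spent / wins if wins else float("inf"),
        }
    return out

print(adoption_signals(events)["finance"])
# {'outputs': 2, 'accepted': 2, 'tokens_per_accepted': 10500.0}
```

Even this toy version answers a sharper question than prompt count: it separates output that made it into real work from output that was token spend with nothing to show for it.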
A token budget is not just a finance control. It can become a product design tool. If usage is unlimited and invisible, learning becomes noisy. If every experiment feels punished, people stop trying. Good design sits between those extremes: a visible budget, clear limits, and enough room for valuable exploration.
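One way to make that middle ground concrete is a per-team budget that is visible, enforces a hard cap, and reserves a slice for tagged experiments so routine usage cannot crowd out exploration. The numbers and policy below are illustrative assumptions, not a recommendation:

```python
class TokenBudget:
    """Per-team monthly token budget: visible, capped, with room to explore."""

    def __init__(self, monthly_cap: int, explore_share: float = 0.2):
        self.cap = monthly_cap
        # A protected slice of the cap is reserved for tagged experiments.
        self.explore_reserve = int(monthly_cap * explore_share)
        self.spent_routine = 0
        self.spent_explore = 0

    def spend(self, tokens: int, experiment: bool = False) -> bool:
        """Record usage; return False if the relevant pool is exhausted."""
        if experiment:
            if self.spent_explore + tokens > self.explore_reserve:
                return False
            self.spent_explore += tokens
        else:
            routine_pool = self.cap - self.explore_reserve
            if self.spent_routine + tokens > routine_pool:
                return False
            self.spent_routine += tokens
        return True

    def remaining(self) -> int:
        return self.cap - self.spent_routine - self.spent_explore

budget = TokenBudget(monthly_cap=1_000_000)
budget.spend(300_000)                  # routine use draws from the main pool
budget.spend(50_000, experiment=True)  # exploration draws from the reserve
print(budget.remaining())  # 650000
```

The design choice worth noticing is the reserved slice: it makes experimentation a budgeted right rather than something people sneak past finance, which is exactly the "room for valuable exploration" the paragraph above argues for.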
Who owns AI adoption?
This is the practical question.
IT can distribute tools. HR can organize training. Finance can track cost. Engineering can build integrations. But if adoption itself is a product problem, someone has to own the full journey.
That person does not have to be called an AI Program Manager. But they need to think like a PM: segments, use cases, activation moments, retention, trust, feedback loops, and success metrics.
Examples like Cloudflare's show that AI is beginning to affect org design. a16z's labor-market argument is a useful counterweight: AI will not affect every job, team, or task in the same way. Both things can be true. AI may not create a universal job apocalypse, while still repricing specific task bundles very quickly.
That is why unmanaged internal adoption creates a strange split. The curious minority moves fast. Everyone else watches from a distance. Over time, that gap becomes a performance gap.
Manage it like an internal product
The weakest AI adoption strategy is to leave it at the level of "culture change."
Culture matters, of course. But culture does not grow on top of undesigned workflows. Teams need to know which tasks can be delegated to AI, when to trust the output, when to stop, how to review the result, and how the win becomes visible to others.
My takeaway is simple: if internal AI usage is not growing, the first question should not be "are people resisting?"
The better question is:
Did we actually design the adoption experience like a product?