Zenken's Road to Company-Wide ChatGPT Enterprise Rollout
In November 2024, Zenken Corporation became the first company in Japan to roll out ChatGPT Enterprise to every employee.
This is the story of how we got there, told by someone in the middle of it: me — Okada.
Timeline
March 2023 — GPT-4 launches. We start using it via ChatGPT Plus.
July 2023 — A few dozen people inside the company start using the GPT-4 API.
January 2024 — A dedicated AI division is launched. ChatGPT Enterprise goes live for an initial 150 users.
July 2024 — All sales staff get Enterprise accounts.
November 2024 — Japan’s first company-wide ChatGPT Enterprise rollout.
March 2025 — We publish results: about ¥50M in outsourcing-cost savings (CY 2024).
A side note: Most of the larger companies I’ve talked with rolled out generative AI top-down. Zenken did it bottom-up, so think of this as a bottom-up case study.
It started with GPT-4
I first touched ChatGPT in early 2023. Back then GPT-3.5 was the flagship and even that was already a major break from what “AI” had meant up to that point.
After about 1–2 months on ChatGPT, GPT-4 arrived — a stunning leap. I jumped on it immediately, and it became clear: we have to use this in the company.
At that time, only one or two other people inside Zenken even knew about GPT-4. ChatGPT Enterprise didn’t exist yet. There was only the consumer offering, ChatGPT Plus — the very early days of the early days.
So the journey began with the most boring step possible: getting internal approval to use ChatGPT Plus.
I helped grow the ChatGPT Plus user base internally to about 10 people. In summer 2023, the GPT-4 API began to roll out, and we joined the waitlist. Once we were through, our entire department at the time (~20 people) started using it.
Then big news landed: ChatGPT Enterprise.
There was no OpenAI Japan back then, so everything was English-only. I sent a contact request via ChatGPT itself — and waited. After a month, then two, with no reply, I started to give up.
Around the same time, Microsoft announced the general availability of Microsoft 365 Copilot, and an internal discussion started: “If OpenAI doesn’t get back to us, do we go with Microsoft?” To pile on, ChatGPT Plus paused new signups, and OpenAI’s CEO was briefly fired.
The temperature on adopting generative AI inside the company was high. But OpenAI itself was looking shaky. I held the line and kept arguing that we had to stick with ChatGPT.
After three months of back-and-forth, OpenAI finally got in touch about ChatGPT Enterprise. With the recent signup freeze fresh in everyone’s mind, the message was clear: we couldn’t miss this window. Adoption discussions accelerated.
With no internal or external precedent for generative AI adoption, approving what was a multi-million-yen capital outlay was not simple for management. “Just install it” doesn’t win approval. We needed a clear cost-benefit story.
What was the business case for ChatGPT Enterprise?
The fear was real: nobody knew how much productivity would actually rise, and the money would still be going out the door regardless.
The angle we landed on was reducing outsourcing costs. Zenken’s flagship business is content marketing, and we were filling shortfalls with external writers and contractors.
By bringing the outsourced content production back in-house using ChatGPT, we saw a clear path to at least covering the rollout cost. (I’ll skip the internal details, but explaining, getting buy-in, and securing commitments across the relevant teams was a serious lift.)
In the worst case, we’d break even within a year — so we framed it to the executive sponsor as “no downside, please let us try.” After repeated conversations, the rollout was approved.
What carries the day isn’t just the cost-benefit logic. It’s also the conviction that generative AI is essential, the willingness to commit to actually delivering, and the trust you’ve earned that you will.
A dedicated AI division
At the time, I was running a newly launched B2B content-marketing service. It was small but profitable and on track.
When the rollout was approved, I handed that service entirely to my reports and asked our division head to let me work full-time on AI. The AI division was officially founded in January 2024.
The AI division started with two people, and remains two people supporting 450 employees today.
Why dedicated, not shared? With no precedent and almost no internal AI literacy, a shared role was simply not viable. Among the other companies I’ve talked to that tried this with shared roles, I’ve yet to hear of one where adoption and habituation went well.
Once approved, here’s what to do
Establish guidelines
- Scope (especially if there are group companies)
- User-account policies
- Copyright and intellectual property
- Prohibited content for generation
- Image-generation policies
- Governance and escalation paths
You need at least these as the minimum-viable internal guideline. If you’re not rolling out company-wide, you also need to decide whether to allow free-tier ChatGPT, and write a separate guideline for that case.
The most important piece is copyright and intellectual property. You need to land internal understanding around how rights to AI-generated content are claimed, and that prompts themselves are intellectual property. This is the kind of legal grounding that needs to land slowly and carefully — and your legal team’s involvement is non-negotiable.
Training and ongoing-education plans
Decide your delivery format first. For new rollouts, we recommend online — Zoom, Teams, or Google Meet — depending on your scale. Always record sessions and store the videos somewhere people can re-watch them.
In the early days, run real-time online sessions to build internal momentum. Once things are mature enough, restrict real-time sessions to “topics likely to generate Q&A,” and shift everything else to video. That works well.
Training-design tips
My training style is “join and leave whenever, watch passively while you do other work.” Traditional “training” means everyone gathers in a room and stays from start to finish, but that format gives many people an excuse to skip. What matters is engagement — how many people you can actually pull in.
A more important point: use Zoom Webinar or Google Meet Live. With these, you can force participants’ camera and audio off, removing the “I’m being watched” friction and eliminating accidental hot mics. Note that Google Meet Live doesn’t have a comments feature, so you can’t surface participant voices through chat. If that matters, use a regular meeting link with the host’s “force everyone’s video and audio off” capability — the chat feature stays available.
Online delivery requires stable networking. If wired Ethernet is available, use it. Remote work is now common, so remote-friendly delivery is non-negotiable.
Past training topics
A non-exhaustive sampling:
- ChatGPT Enterprise from zero
- How to build GPTs / new feature walkthrough
- Auto-creating documents and calendar events with code
- ChatGPT’s new search feature
- “Zero internal inquiries” with GPTs
- GPTs that work with no prompting
- Useful ChatGPT tips
- Daily reports that impress your manager — written in ChatGPT
- Document understanding and text generation (chats, content, email)
- Aggregating and reviewing daily reports with ChatGPT × GAS
- Webinar: GPT-driven product/market understanding workflows
- Meeting sharing and minutes-free information sharing with GPTs
- Faster chat / email handling, daily-report GPTs
- Reducing internal inquiries with NotebookLM
- Per-model ChatGPT comparison
- Prompting o1
- When to use Gemini vs. ChatGPT
- Comparing Gemini 2.5 Pro and ChatGPT
- Information search with AI (NotebookLM, Gemini, ChatGPT)
- Case studies (employees presenting)
- 4o image generation
- o3, o4-mini walkthrough and updates
- Image tone alignment and image-library features
- May 2025 ChatGPT operations walkthrough
That’s only a fraction. At our peak we run 10+ training hours per month. Prompts and workflows that worked become outdated fast, so we revise and republish them as updated versions.
Zenken uses Google Workspace company-wide, so Gemini and NotebookLM are also available — and from 2025 we run training that covers those too.
Foundational concepts everyone needs to understand
- The knowledge cutoff
- The difference between GPT models and reasoning models
- What a prompt is
- The importance of language precision
- Zero-shot and few-shot
- The difference between GPTs (custom GPT) and Projects, and when to use which
These are non-negotiable. The classic “what is generative AI?” intro slide isn’t really needed — anyone who wants to know that can ask ChatGPT.
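To make “zero-shot and few-shot” concrete, here is a small illustration of how the two prompt styles differ. The helper functions and example texts are hypothetical, written for this article rather than taken from Zenken’s training materials:

```python
# Zero-shot: ask the model directly, with only the instruction and the input.
# Few-shot: prepend a handful of worked examples so the model can copy the pattern.

def zero_shot(task: str, text: str) -> str:
    """Build a zero-shot prompt: just the instruction and the input."""
    return f"{task}\n\n{text}"

def few_shot(task: str, examples: list[tuple[str, str]], text: str) -> str:
    """Build a few-shot prompt: instruction, worked examples, then the new input."""
    shots = "\n\n".join(f"Input: {inp}\nOutput: {out}" for inp, out in examples)
    return f"{task}\n\n{shots}\n\nInput: {text}\nOutput:"

prompt = few_shot(
    "Classify the sentiment of each customer comment as positive or negative.",
    [("Great support, fast reply.", "positive"),
     ("The invoice was wrong twice.", "negative")],
    "Setup took five minutes and just worked.",
)
print(prompt)
```

In training, showing both prompts side by side and comparing the model’s answers is usually enough for people to grasp why a couple of examples can stabilize output format and tone.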
Making ChatGPT actually stick inside the company
The single most important thing is resonance. Training tends to fixate on “what it can do,” but at the entrance you need people to feel that AI is interesting and to resonate with it.
Spark interest first, and let users experiment on their own. Build sessions that invite that.
Next, build internal allies who share that resonance. With only two people in the AI division, we’re physically constrained. For initiatives small and large, we need to recruit cooperative allies who can grow into the company’s next AI leaders.
The hardest hurdle is mindset. There’s always a contingent of people who, at first, are skeptical and stand-offish. It may sound blunt, but deprioritize them: focus on the people who resonate, and let them go build momentum in their own departments.
Once the rest of their environment starts working ChatGPT-first, the skeptics have no choice but to come along. Time solves most of this problem.
The mindsets that need to change
To make adoption stick, you have to overturn defaults that everyone has been operating under.
For example:
- Read every word of a document → use AI summaries. Stop reading long reports and meeting minutes end-to-end. Have ChatGPT summarize, then check the points.
- Build documents from scratch → generate a draft and edit. Stop facing a blank page. Have ChatGPT draft, then refine.
- Search FAQs and manuals each time → ask AI in natural language. Stop searching thick manuals. Build a custom GPT and just ask it.
- Brainstorm from scratch → start from AI-generated ideas. Get ChatGPT to surface initial proposals or copy lines, then debate from there.
- Translate / draft English emails by hand → let AI translate and draft. Have ChatGPT translate and native-check; save time.
- Hand-take meeting minutes → automate from audio with AI. Transcribe recordings with a voice-to-text tool, summarize with ChatGPT, and auto-generate the minutes.
- Look up code yourself → ask AI for samples and explanations. Don’t just Google and paste. Tell ChatGPT what you want to do and get the code.
- Analyze surveys / large datasets by hand → auto-summarize and chart with AI. Hand the data to AI and ask “summarize the key points.”
- Proofread / review yourself → use AI editing. Ask ChatGPT to “make this politer” or “rewrite more simply.”
- Plan tasks and TODOs in a notebook / Excel → have AI design the optimal flow and TODO list. Tell it your goal and let it sequence and prioritize.
The trap people fall into is “I’ll do it myself first, and ask ChatGPT when I get stuck.” That is a major mistake.
The point is to make ChatGPT the starting point of every workflow. About 80% of the work in many flows can be processed quickly by ChatGPT. Don’t use it in fragmented bursts — process everything you can with ChatGPT first, then add your own corrections and additions at the end. Spending your remaining time finishing rather than drafting is one of the highest-leverage optimizations, and it raises the quality of your work as a side effect.
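A minimal sketch of that ChatGPT-first habit: collect everything the model needs into one drafting prompt up front, and reserve human time for the final edit. The helper names, the `gpt-4o` model string, and the OpenAI SDK usage here are my illustration under stated assumptions, not Zenken’s internal tooling:

```python
# Draft-first workflow sketch: the model produces the bulk of the draft,
# and the human pass comes last. Assumes the `openai` package is installed
# and OPENAI_API_KEY is set; the model name is an assumption.
import os

def build_draft_prompt(goal: str, notes: str) -> str:
    """Put the goal and all raw material up front, so the human edit comes last."""
    return (
        f"Goal: {goal}\n"
        f"Raw notes:\n{notes}\n\n"
        "Write a complete first draft. Keep the structure simple; "
        "a human will revise it afterwards."
    )

def draft_with_chatgpt(goal: str, notes: str) -> str:
    """Send the drafting prompt to the API and return the model's draft."""
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: use whatever model your plan provides
        messages=[{"role": "user", "content": build_draft_prompt(goal, notes)}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(draft_with_chatgpt(
        "weekly sales report",
        "3 new leads; 1 churn; demo feedback positive",
    ))
```

The design point is the order of operations: the prompt builder runs before any human writing happens, so the person’s first touch on the document is an edit, not a blank page.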
Workflow redesign requires manager / team-lead involvement — not just frontline
The people with the highest resolution on the business and on the work itself are managers and above. I’ve never seen workflow optimization go well when the front line was left alone to figure it out.
What managers should be doing in the field is systematizing and pushing the system down. Workflow design isn’t simple — it requires medium-to-long-term thinking and visibility into stakeholders and dependencies. If you neglect this, your department falls into the negative spiral of “productivity isn’t going up.”
Leaving it entirely to the front line means decisions don’t move forward — things slow to a crawl. And in many cases, 90% of meetings end up being “review/discuss” with no execution: 30 minutes of chat, no decision, repeat.
The biggest problem is when an organization’s leader doesn’t actively guide AI optimization — at which point AI optimization itself becomes person-dependent. Without an org-level look at workflows, individuals start optimizing in their own way (“how can I work most efficiently?”), the strong AI users get even more efficient, their methods become a black box, and overall organizational productivity does not move.
Calling for “we’re all doing this” is the leader’s most important role — and they have to understand generative AI better than anyone else in their team.
How we communicate internally
We created a chat room that every employee is invited to, owned by the AI division, and we pump information out through it. Occasionally we run polls, share case studies, or have light back-and-forth.
The most important thing is to never let it lapse. A channel that posts once a week is meaningless. We post anything — even small things — that we believe will be useful to at least some part of the company.
Effectively, we’re operating an AI-only internal version of X.
This also gives us a feedback loop: people who consistently react with emoji are the ones reaching for information actively, and we recruit them to get involved.
What matters here is the broadcaster’s mental endurance. At the start, reactions are very thin. As internal understanding of ChatGPT grows, reactions gradually increase. Until you reach that stage, you have to keep posting without letting the lack of reactions discourage you.
The information you share also can’t be too technical, or no one will engage. Prioritize relatable, light material; share the result of actually trying it yourself. That builds trust.
If you’re going to do this full-time, getting trustworthiness and authority behind every piece of information you share is the path to success.
Wrap-up
I’ve been driving ChatGPT adoption inside Zenken since GPT-4’s launch in 2023. The atmosphere then was largely “can AI even be useful?” and everything was groping in the dark. I started by getting myself onto ChatGPT Plus, experiencing the value firsthand, then slowly expanded usage around me. The very start was a tiny minority — just a few people. But seeing the speed of generative AI’s progress and its real impact convinced me we had to use it across the entire company.
The hardest part was building the case: “Why now?” “Why this investment?” To convince the executives, raw enthusiasm wasn’t enough — I had to repeatedly model out concrete cost-benefit scenarios, especially around outsourcing-cost reductions, and negotiate persistently across teams. The result — ¥50M of outsourcing-cost savings in the first year — is a major milestone.
Zenken stood up an AI division to support 450 employees with two people. We focused heavily on guidelines, copyright/IP handling, and — above all — design choices that make people understand and enjoy using the tool. Training was deliberately not push-style: we built it to be approachable, with “join and leave anytime” and “passive watching is fine” as design principles. From experience, getting people to feel “this is interesting, let me try” is the first step toward adoption that sticks.
We’ve also been intentional about mindset. Replace the default of “look up, write, read everything yourself” with “automate 80% with AI, spend your time on the final 20%” — and spread that across the company. We don’t leave it at the frontline either; we always pull in management to standardize workflows and prevent personal-black-box drift.
For internal communication: never let it lapse, lighter topics matter too, and keep going even when reactions are thin. Operating it like an internal X has built genuine resonance and a chain of action.
Looking back, this was a string of unprecedented attempts. The single biggest engine has been the heat behind “let’s change the company with AI” and the ability to pull peers along with you. We’ll keep pushing into the next stage of AI adoption from here.