GitHub’s internal playbook for building an AI-powered workforce
July 22, 2025 // 16 min read
How GitHub scaled AI fluency across its workforce by focusing on people, not just technology.
Published via GitHub Executive Insights | Authored by Matt Nigh, Program Management Director of AI for Everyone
Generative AI presents one of the most significant opportunities to accelerate business performance in a generation, and the race is on to capture its value. However, the critical challenge is not about recognizing the potential of AI, but about enabling it at scale.
Many companies invest heavily in AI tools, only to see adoption confined to a small group of early enthusiasts. The result is an investment that fails to translate into broad productivity gains, leaving immense value on the table. The difference between a high-performing, AI-fluent organization and one that stalls is a deliberate strategy for enablement.
These companies' mistake is treating AI adoption as a technology problem when it is, in fact, a change management problem: they treat it like installing software when it's actually rewiring how people work. The difference between success and failure isn't buying licenses. It's building the human infrastructure that turns skeptical employees into power users.
This is the internal playbook developed and implemented by GitHub to build AI fluency across its global workforce. The strategies detailed here are the product of the AI for Everyone initiative, which guides our company's efforts to embed AI into the fabric of how we work. What follows is not a collection of theories, but a practical, actionable blueprint for building that same road-tested system at your organization.
GitHub’s operating model for AI enablement
A successful AI enablement effort is not a single initiative, but a holistic system of mutually reinforcing components. It requires a thoughtful blend of top-down strategy and grassroots momentum to build an ecosystem where AI fluency can thrive.
The foundation of this ecosystem is built on executive support and clear policies and guardrails. Visible sponsorship from leadership provides the strategic vision and investment necessary to get started, while well-defined policies create a safe environment for employees to experiment and innovate.
Beyond this foundation, your AI enablement model requires the following components, which we’ll refer to as the eight pillars:
Pillar | What it is |
---|---|
AI Advocates | A volunteer network of internal champions who scale adoption through peer-to-peer influence and feedback. |
Clear policies and guardrails | Simple rules and guidelines that empower employees to use AI confidently and responsibly. |
Learning and development opportunities | A learning ecosystem that provides accessible pathways curated from exceptional external training sources. |
Data-driven metrics | A multi-phased measurement framework to track adoption, engagement, and business impact. |
Dedicated responsible individual | A central owner who orchestrates the program, enables others, and drives the overall strategy. |
Executive support | Visible leadership commitment that provides strategic vision, investment, and transparent communication. |
Right-fit tooling | A portfolio of vetted first-party and third-party tools suited for a variety of roles and use cases. |
Communities of practice | Dedicated forums for peer-to-peer learning, knowledge sharing, and collaborative problem-solving. |
With your foundation set, focus on three connected elements. First, equip teams with vetted AI tools and human support systems: an advocates program creates internal champions driving adoption, while communities of practice enable peer learning. Second, amplify these networks through structured L&D that builds skills systematically. Third, assign a directly responsible individual (DRI) to drive investment decisions and use data-driven metrics to measure impact and evolve the program.
Put the framework into action
Understanding the core components of our AI enablement program is the first step. Activating them requires a deliberate and strategic approach. This section provides a detailed guide for putting each of the eight pillars into practice, starting with the most critical element for gaining initial momentum: executive support.
Executive support: How to set the tone
Successful AI adoption begins at the top. It is not enough to simply provide tools; leadership must actively and consistently champion the "why" behind the company's AI strategy. This means translating high-level goals into clear, tangible reasoning and benefits that resonate with an employee's daily work.
When employees understand the vision, they are more likely to engage. For example, leadership can explain the "why" with messages like these:
- For engineers: We're investing in AI to eliminate toil from your daily work. Our goal is for Copilot to handle writing boilerplate code, generating unit tests, and summarizing complex pull requests so you can spend more of your day in a flow state, solving the hard, creative problems that you enjoy.
- For the whole company: Our AI strategy is about shipping better products to our customers, faster. By using AI to augment our skills, we can accelerate innovation and focus on the high-value, creative work that drives our business forward.
Beyond articulating the vision, a critical part of a leader's role is to be pragmatic and transparent about the impact of AI on the workforce. The introduction of a transformative technology like AI will inevitably automate tasks and change the nature of many jobs. Ignoring this reality creates uncertainty and fear, which are significant barriers to adoption. Employees who are anxious about their future are less likely to embrace the very tools designed to help the company evolve.
Leaders must address this reality directly. This means moving beyond simple reassurances and focusing on a clear strategy for upskilling and role evolution.
In conversation, don’t say: “Your job is safe.”
Do say: “This is how our jobs will change, and this is how we will support you in developing the new skills required to succeed.”
This pragmatic approach builds trust by treating employees as partners in the transition.
This message should also be tailored for different audiences. For the management layer, the focus is not just about their personal AI usage, but about equipping them to lead their teams through this change. Executives should challenge managers to rethink their teams' workflows, to identify tasks ripe for automation, and to redefine what high-value work looks like. The goal is ultimately to arm managers with the context they need to coach their teams, adapt their goals, and connect AI adoption to tangible improvements in team performance and innovation.
For senior-level individual contributors (ICs), the message must be about expanding their influence and impact. Their role is not just to use AI, but to become the architects of how AI is used effectively across their teams. Senior ICs wield significant internal influence; their adoption of new practices sets the standard for others. Therefore, the conversation should challenge them to embrace a dual mandate: first, to use AI as a force multiplier to elevate their own work, and second, to act as internal experts who scale AI fluency to others. That mastery makes them credible mentors, and it is how you create a compounding return on your most valuable talent.
Policies and tooling: Providing clarity and access
Widespread AI adoption requires clear guardrails. Employees will not experiment with new tools if they are uncertain about what is permissible, so establishing a clear and accessible Acceptable Use Policy is a critical prerequisite for success. This is not only about compliance; it's about enabling employees to use AI confidently and responsibly.
To be effective, these policies must be developed in partnership with key stakeholders, including IT, HR, Security, and Legal, to ensure a comprehensive approach to risk. The final policy should be centralized in a single, easy-to-find document that clearly lists all approved AI tools, and the types of data appropriate for each tool.
A successful model for AI usage policies is a tiered approach to tooling, which provides a simple framework for making safe decisions. Rather than a long list of prohibited actions, a tiered system clarifies what is approved and provides a safe default for everything else. An effective framework looks like this:
- Tier 1: Fully vetted and approved tools. This tier is for AI tools that have undergone rigorous internal security and legal review, making them safe for use with confidential company and customer data. This category should include your company's own first-party AI products (such as GitHub Copilot) and any enterprise-grade third-party tools you have procured. Employees should know that any tool in this tier is a safe choice for their daily work.
- Tier 2: Unvetted public and consumer tools. This tier serves as a catch-all for the vast ecosystem of public AI tools that have not been officially contracted or vetted by your company. The policy here is simple and universal: These tools should only be used with public, non-sensitive data. This provides a clear, default guardrail that empowers employees to experiment with new and emerging technologies without putting company data at risk.
This tiered model removes the guesswork. It gives employees a straightforward mental model: If a tool isn't on the "Fully vetted" list, treat it as public and use only public data. This simple, clear guidance is the key to unlocking responsible usage at scale.
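To make the tiers concrete, the policy can also be expressed as a small machine-readable registry that internal tooling (a Slack bot, an intranet page) can query. The following Python sketch is illustrative only: the tool names, the `DataClass` enum, and the `allowed_data_class` helper are hypothetical, not part of GitHub's actual policy.

```python
# Hypothetical machine-readable version of a tiered AI tool policy.
# Tool names and classifications are illustrative examples.
from enum import Enum


class DataClass(Enum):
    PUBLIC = 1        # safe for any tool
    CONFIDENTIAL = 2  # Tier 1 tools only


# Tier 1: fully vetted tools, safe for confidential company data.
# Anything absent from this set is treated as Tier 2 by default.
TIER_1_TOOLS = {"github-copilot", "enterprise-chat-assistant"}


def allowed_data_class(tool: str) -> DataClass:
    """Return the most sensitive data class a tool may handle.

    Unknown tools fall through to the safe default: public data only.
    """
    if tool.lower() in TIER_1_TOOLS:
        return DataClass.CONFIDENTIAL
    return DataClass.PUBLIC


# An unvetted consumer tool defaults to public, non-sensitive data.
assert allowed_data_class("shiny-new-chatbot") is DataClass.PUBLIC
```

The design choice worth copying is the default: the lookup never errors on an unknown tool, it returns the most restrictive tier, mirroring the "treat it as public" guidance above.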
AI advocates: Your grassroots champions
While top-down support and clear policies are essential, lasting adoption is driven by peer-to-peer influence. An AI advocates program is a powerful mechanism for scaling this influence. With this program, your mission is to build a volunteer network of internal champions who can drive adoption from the ground up, acting as a bridge between the central enablement program and individual teams. These advocates translate high-level strategy into tangible, org-specific use cases to build momentum organically.
The most effective way to build this network is to simply ask for volunteers. A formal nomination process is often unnecessary; a company-wide call for those who are passionate about AI will naturally surface the right people. This self-selection process ensures your advocates are intrinsically motivated and genuinely interested in helping their peers succeed, making them credible and effective champions for the program.
What advocates do
The role of an advocate is multifaceted: acting as a local expert, a community builder, and a vital feedback channel. Their primary functions are to:
- Act as internal champions: Advocates serve as the go-to AI subject matter experts for their organization. They mentor their peers, answer day-to-day questions, and help colleagues overcome practical hurdles, effectively lowering the barrier to entry for those less familiar with AI.
- Amplify peer-to-peer learning: A crucial function is to make the value of AI tangible and relatable. Advocates do this by identifying and showcasing real-world use cases and success stories from within their own teams. This peer-driven demonstration is often more powerful than a formal training session.
- Act as a voice for their area: Advocates create a critical feedback loop for the enablement program. By representing their teams' perspectives, they provide invaluable, on-the-ground insights into what's working, what's not, and where the biggest opportunities for AI adoption lie. This allows the program to iterate and improve based on real user needs.
- Help curate and co-lead training: Advocates are the "voice of the customer" for the enablement program. They surface the specific needs, pain points, and desired use cases of their organization. This allows the central program to move beyond generic instruction and partner with advocates to co-lead targeted, high-impact training sessions that address real-world challenges.
Supporting your advocates
Advocate programs can only be effective with dedicated support from the central program. This support should be practical and value-driven, giving advocates the resources and access they need to be credible leaders. Key support mechanisms include:
- Fostering a self-supporting advocate community: Create an environment where advocates can support each other, such as a dedicated communication channel (e.g., a private Slack channel) and regular, advocate-led check-ins. The goal is to build a peer-to-peer network that shares best practices, troubleshoots challenges, and evolves into a self-managed group, scaling expertise organically across the organization.
- A direct line to leadership: Advocates should have direct access to someone who represents the voice of leadership, such as the DRI for AI enablement or a program sponsor.
- A "Train the Trainer" philosophy: The central program should actively work to develop advocates' skills as trainers and AI subject matter experts. This goes beyond just giving them information. It involves teaching them how to effectively mentor their peers and lead workshops, transforming them into an extension of the core enablement team.
Communities of practice: Fostering collaboration
While an Advocates program provides targeted, high-touch support, scaling AI fluency across an entire organization requires broader forums for collaboration. This is where Communities of Practice (CoPs) come in: dedicated spaces where employees can connect, ask questions, and share knowledge organically. These communities are the connective tissue of a successful enablement program, breaking down silos and ensuring that valuable insights don't get lost in private conversations.
Another goal: bringing structure to the organic interest in AI without stifling it. Most companies already have nascent communities in the form of scattered chat channels or email threads. An effective enablement program identifies these pockets of activity and formalizes them into a cohesive network. This involves:
- Establishing dedicated, purpose-driven communities: Instead of a single, monolithic AI channel, create distinct communities for different user groups and purposes. This allows for more focused and relevant conversations. A good starting point is to create:
  - A general-use community (e.g., a Slack channel like #how-do-i-ai) for broad, non-technical questions and company-wide announcements.
  - A developer-focused community (e.g., #copilot-users) for technical use cases, deep dives, developer-specific questions, and advanced techniques.
  - Function-specific communities (e.g., #ai-for-sales) as needed for groups like marketing, sales, or finance, which have unique use cases and workflows.
- Defining clear charters and leadership: Each community should have a clear, documented purpose and a designated leader or group of leaders (possibly drawn from your AI Advocates). This ensures that conversations stay on track and that the community remains a valuable resource.
- Sustaining momentum: The work doesn't end once the channels are created. The enablement program should work to sustain momentum by actively showcasing interesting use cases from the communities, using them as a platform to announce new features or training, and evolving them over time.
By intentionally fostering these communities, you can create a scalable, self-sustaining engine for peer-to-peer learning that is essential for achieving enterprise-wide AI fluency.
Curated learning and development: Lowering the barrier
Providing access to tools is not enough; you must also provide accessible pathways to proficiency. A dedicated Learning & Development (L&D) workstream is essential for minimizing the learning curve and ensuring that all employees, regardless of their technical background, can gain practical AI skills relevant to their roles. The objective is to provide a multi-faceted learning ecosystem that caters to different needs and learning styles.
At GitHub, we’ve built an L&D site that curates content from internal learnings and external resources.
An effective L&D strategy is built on several key investments:
- A centralized resource hub: To combat information overload, create a single, easy-to-navigate internal site that serves as the central source of truth for all things AI. This hub should go beyond a simple list of links; it should be a dynamic showcase of internal innovation, featuring real-world use cases, best practices, and projects built by employees. This not only provides valuable learning material but also inspires adoption by demonstrating what's possible.
- Core AI learning paths: The foundation of any L&D program should be just a few clear, "zero-to-one" learning paths that take a complete beginner to basic competency. Instead of creating your own content, look for exceptional external content that already exists. AI features and functionality are changing quickly, and it is not sound to invest in developing internal training that might be irrelevant in only a few months.
- Building blocks for technical users: For more advanced technical staff, the goal is to accelerate their work, not just teach them the basics. An effective way to do this is by providing a library of pre-built, reusable AI components that act as building blocks. These can be templated files, cloneable repositories, or reusable workflows that handle common AI tasks (see the sketch after this list). This allows technical users to build their own custom AI-powered solutions faster and more efficiently, without having to reinvent the wheel.
- Integration with onboarding: To establish AI as a core part of the company culture, it must be integrated into the new-hire experience. Partner with the onboarding team to introduce relevant AI skills and tools from day one. This sends a clear signal to new employees that AI fluency is a key competency for success at the company.
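To illustrate the building-blocks idea, here is a minimal Python sketch of one such component: a vetted, parameterized prompt template that teams can import instead of re-authoring. The `PromptTemplate` class and the `SUMMARIZE_PR` example are hypothetical, not an actual GitHub library.

```python
# Illustrative reusable "building block": a vetted prompt template.
# Class and template names are hypothetical examples.
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptTemplate:
    """A reviewed, parameterized prompt that teams can reuse as-is."""
    name: str
    template: str

    def render(self, **kwargs: str) -> str:
        # Fill the template's placeholders with caller-supplied values.
        return self.template.format(**kwargs)


SUMMARIZE_PR = PromptTemplate(
    name="summarize-pull-request",
    template=(
        "Summarize the following pull request diff for a reviewer.\n"
        "Focus on behavior changes and risk.\n\nDiff:\n{diff}"
    ),
)

# A team plugs in its own data instead of re-authoring the prompt.
prompt = SUMMARIZE_PR.render(diff="- old_line\n+ new_line")
```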
Dedicated program leadership: Driving the program
A successful enablement program is an active, living system, not a static set of resources. It requires a dedicated owner, a Directly Responsible Individual (DRI) or a small team, to act as the central orchestrator. This leadership is the glue that connects the various components, from executive strategy to grassroots advocacy, into a cohesive and effective whole.
A critical question leaders must ask is whether their investment in enablement reflects the level of AI fluency they expect from their company. If the goal is enterprise-wide fluency, the investment must extend beyond simply purchasing licenses. This is where the DRI becomes essential.
The mission of this role is to scale others, not to build a fiefdom. The DRI is an enabler, not a gatekeeper, whose primary function is to amplify the work of others and remove obstacles to adoption. This involves a blend of strategic and tactical responsibilities:
- Owning the program strategy and roadmap: The DRI is responsible for the overall program strategy, including defining workstreams, managing the monthly planning process, and ensuring alignment with senior sponsors and company objectives.
- Leading change management: The DRI is the company's expert on driving AI fluency. They own the comprehensive change management plan, ensuring that the introduction of new AI capabilities is a smooth, well-communicated process that minimizes disruption and maximizes adoption.
- Acting as a central AI consultant: The DRI serves as an expert for the organization, providing 1:1 support and office hours to help employees and advocates tackle complex problems and develop sophisticated use cases.
- Amplifying internal success and innovation: A key part of the role is to find and broadcast success stories. The DRI actively looks for innovative uses of AI within the company and showcases them in communities and workshops, creating a virtuous cycle of inspiration and adoption.
- Managing the AI tooling and policy lifecycle: The DRI acts as the central intake point for new AI tool requests and partners with IT, Security, and Legal to manage the end-to-end process of evaluation, procurement, and policy-setting. This streamlines a potentially complex workflow and ensures consistency across the organization.
- Owning adoption and fluency metrics: The DRI is accountable for the health of the enablement program and is responsible for tracking leading indicators of fluency (e.g., monthly active users (MAU), monthly engaged users (MEU), and user segmentation). This is a distinct effort focused on proving that the program's initiatives are effectively driving usage and maturing how employees use AI.
- Demonstrating business ROI: As a separate but related effort, the DRI is accountable for demonstrating the program's business value. This involves correlating the adoption and fluency data to lagging indicators of business impact (e.g., productivity gains, improvements in code quality, increased developer satisfaction). The DRI's role is to deliver a clear, data-driven narrative to leadership that shows how investment in enablement directly translates to a measurable return on investment.
At GitHub, this function is formally resourced with a Program Director and a Program Manager who partner to drive the "AI for Everyone" initiative. This level of dedicated ownership ensures that the program has the focus and accountability required to succeed at an enterprise scale.
Metrics: Measuring for success
To justify investment and guide the evolution of an enablement program, you must measure what matters. A robust measurement framework moves beyond simple license assignment to provide a nuanced understanding of how, where, and how deeply AI is being adopted across the organization. Since industry standards for measuring AI's ROI are still emerging, a multi-phased approach is most effective, starting with broad adoption metrics and maturing toward measuring business impact.
Phase 1: Measuring breadth of adoption
The first step is to understand the basic reach of your AI tools. This provides a baseline and tracks the initial success of your enablement efforts.
- Monthly Active Users (MAU): The percentage of licensed employees who use an AI tool at least once in a given month. This is your primary indicator of overall adoption.
- Monthly Engaged Users (MEU): A stricter version of MAU, this tracks the percentage of employees who use a tool multiple days per month. The exact threshold should reflect your company’s definition of an engaged user. A growing MEU indicates that users are moving beyond initial experimentation and beginning to form a habit.
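As a rough illustration, both metrics can be computed from a raw usage log. This Python sketch assumes a simple `(user_id, usage_date)` event schema and a five-day engagement threshold; adjust both to match your own systems and your definition of an engaged user.

```python
# Sketch of Phase 1 metrics from one month of usage events.
# The event schema and the 5-day threshold are assumptions.
from collections import defaultdict
from datetime import date

ENGAGED_DAYS_THRESHOLD = 5  # your definition of "multiple days"


def mau_and_meu(events: list[tuple[str, date]],
                licensed_employees: int) -> tuple[float, float]:
    """Return (MAU, MEU) as fractions of licensed employees."""
    active_days: dict[str, set[date]] = defaultdict(set)
    for user, day in events:
        active_days[user].add(day)

    mau = len(active_days) / licensed_employees
    meu = sum(1 for days in active_days.values()
              if len(days) >= ENGAGED_DAYS_THRESHOLD) / licensed_employees
    return mau, meu
```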
Phase 2: Measuring depth of engagement
Once a majority of employees are actively using AI, the focus shifts from breadth to depth. Are they integrating AI into their core workflows, or is usage still superficial?
- User segmentation: Go beyond a single "active" number by segmenting users based on their frequency of use. A simple model could be:
- Dedicated users: Active 10+ days per month. These are your power users.
- Occasional users: Active 2-9 days per month.
- Tire kickers: Active only 1 day per month.
- Tracking the shift in these segments over time provides a much richer picture of adoption maturity. For example, a key goal should be to convert "Tire kickers" into "Occasional" or "Dedicated" users.
- Total AI events: This is a raw measure of interaction volume (e.g., number of prompts, interactions, code completions, etc.). A steady increase in total events per active user indicates that AI is becoming more deeply embedded in daily workflows.
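A minimal sketch of this segmentation, assuming you already have each user's active-day count for the month (the band edges mirror the example model above):

```python
# Segment users by monthly active days; thresholds match the model above.
from collections import Counter


def segment(active_days: int) -> str:
    if active_days >= 10:
        return "dedicated"
    if active_days >= 2:
        return "occasional"
    return "tire_kicker"


def segment_counts(per_user_days: dict[str, int]) -> Counter:
    """Count users per segment; compare month over month to see shifts."""
    return Counter(segment(d) for d in per_user_days.values() if d > 0)


# segment_counts({"ada": 14, "lin": 3, "sam": 1})
# -> Counter({'dedicated': 1, 'occasional': 1, 'tire_kicker': 1})
```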
Phase 3: Measuring business impact
Once AI adoption has been established, the focus shifts to demonstrating its tangible business value. This involves understanding how AI usage impacts existing metrics and identifying new areas to measure.
For a comprehensive guide to measuring engineering system performance and business outcomes, including relevant AI metrics and their calculation, we recommend consulting GitHub's Engineering System Success Playbook (ESSP). The ESSP outlines a balanced and comprehensive approach, helping organizations assign and track metrics across key "zones" such as Developer Happiness, Quality, Velocity, and Business Outcomes. This playbook also provides detailed guidance on leading and lagging indicators, ensuring a holistic view of performance improvements.
Specifically, the ESSP highlights key metrics related to AI, such as:
- AI leverage: This metric quantifies the realized opportunity from effective engagement with AI by calculating the difference between potential and current AI-driven productivity gains across engineering employees. A higher AI leverage indicates reduced manual engineering effort or accelerated/enhanced quality of delivery with increased cost efficiency.
- Cycle time (or lead time): This measures the amount of time it takes for a commit to get into production (see the sketch after this list). A decrease in cycle time, often influenced by AI-assisted development, suggests improved efficiency and faster responses to market demands.
- Code churn: While not explicitly defined as a direct ESSP metric, the playbook discusses how AI can simplify and remove redundant code, which can impact churn. Assessing whether AI-generated code requires more or less rework than human-written code can be a strong signal of code quality.
- Pull request size: Monitoring the size and complexity of pull requests helps ensure that AI is not inadvertently encouraging practices that slow down code review, as AI can sometimes lead to larger, more complex pull requests.
- Developer wellbeing: This metric tracks changes in job satisfaction. A happier, less burned-out team, potentially aided by AI tools that reduce toil, is generally more productive and innovative.
- Perceived productivity: This involves directly asking employees how AI has impacted their ability to focus on valuable, high-impact work. This qualitative data provides a powerful narrative to complement quantitative metrics.
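As a sketch of the cycle time calculation referenced above: the timestamps would come from your version control and deployment pipeline, and the data shape here is an assumption.

```python
# Median commit-to-production time, in hours.
from datetime import datetime
from statistics import median


def median_cycle_time_hours(
    pairs: list[tuple[datetime, datetime]],  # (commit_time, deployed_time)
) -> float:
    return median((deployed - committed).total_seconds() / 3600
                  for committed, deployed in pairs)
```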
Executing on enablement: A strategic checklist
This checklist provides a practical, phased approach to implementing the framework described in this playbook.
Phase 1: Foundational steps (first 30 days)
- Secure executive sponsorship: Identify and gain commitment from a C-level sponsor who will provide budget, publicly champion the program, and consistently communicate the "why" behind the AI strategy.
- Appoint a DRI: Designate an owner for the AI enablement program who is accountable for its success and has the authority to coordinate across functions.
- Draft a v1 usage policy: In partnership with Legal, Security, and IT, create and publish a simple, tiered policy (e.g., vetted vs. unvetted tools) to provide clear guardrails and unblock safe experimentation.
- Establish initial metrics: Instrument your systems to track foundational adoption metrics like Monthly Active Users (MAU) and Monthly Engaged Users (MEU), and create a baseline dashboard to track initial progress.
- Announce the program: Work with your executive sponsor and communications team to deliver a clear, company-wide announcement that outlines the vision for AI, the resources available, and what to expect next.
Phase 2: Building momentum (first 90 days)
- Launch the AI advocates program: Put out a company-wide call for volunteers, run a simple onboarding session to align on the advocates' role, and establish their dedicated communication channel.
- Establish communities of practice: Create and seed initial conversations in a general-use AI channel and a developer-focused channel, each with a clear charter and a designated community lead (like an advocate).
- Launch a centralized resource hub: Create a v1 internal site that aggregates links to approved tools, the usage policy, and the first set of curated learning paths.
- Begin showcasing success: Task the DRI and advocates to actively look for early wins and interesting use cases, then share these stories in the Communities of Practice to build social proof and inspire others.
- Launch an onboarding module: Partner with HR to create and integrate a small, self-service AI enablement module into the standard new-hire onboarding process.
Phase 3: Scaling and measuring (ongoing)
- Implement a "Train the Trainer" program: Formalize the process of upskilling advocates by providing them with resources and coaching on how to effectively mentor their peers and lead workshops.
- Develop a business ROI dashboard: Begin the work of instrumenting key business and engineering systems to correlate adoption data (MAU/MEU) with lagging indicators of business impact (e.g., cycle time, code churn, sales productivity); a sketch of this correlation step follows this list.
- Conduct qualitative surveys: Launch the first in a series of regular, lightweight surveys to the workforce to capture perceived impact on productivity and well-being, and to gather direct feedback on the program.
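As a sketch of the correlation step called out in the ROI dashboard item above: the numbers are illustrative placeholders, and a correlation like this supports, but does not prove, a causal story.

```python
# Correlate monthly engaged use with a lagging indicator (cycle time).
# Values are illustrative placeholders, not real GitHub data.
import pandas as pd

monthly = pd.DataFrame({
    "meu_pct":          [0.22, 0.31, 0.40, 0.48, 0.55],
    "cycle_time_hours": [52.0, 49.5, 44.0, 41.2, 38.9],
})

# A strong negative correlation (MEU up, cycle time down) is the kind
# of signal the ROI narrative is built on.
print(monthly["meu_pct"].corr(monthly["cycle_time_hours"]))
```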
The path to AI fluency
Investing in AI tools without a deliberate enablement strategy is a recipe for wasted resources. A systematic, multi-faceted program is what separates a high-performing, AI-fluent organization from one that fails to realize the value of its technology investment.
There is no silver bullet for AI adoption success. It requires a sustained, data-driven effort. It requires executive support, clear policies, grassroots advocates, and a commitment to measuring what matters. And it requires building a robust, adaptable system of capabilities. For leaders who commit to this systematic approach, the return is a more productive, more innovative, and more effective organization.
Want to learn more about the strategic role of AI and other innovations at GitHub? Explore Executive Insights for more thought leadership on the future of technology and business.