How to Survive the Shift to AI: The Skills Your Team Needs Now (Or Get Left Behind)

TL;DR
  • The gap between people who say they're "good with computers" and people who can actually do work with AI is enormous, and it widens every quarter.
  • Goldman Sachs estimates 300 million jobs globally are exposed to AI automation, and roughly 25% of US work hours can be automated today [1].
  • Anthropic's own usage data shows Computer Programmers, Customer Service Reps, and Data Entry Keyers are already the most exposed occupations, with 75%, 70%, and 67% task coverage respectively [2].
  • The "digital native" myth is dead. Multiple studies since 2021 show Gen Z students don't understand file systems, directories, or terminals because they grew up on search bars and walled gardens [3].
  • One peer of mine spends $60,000 a month on human labor and calls the extra $1,000 a month on Claude, working around the clock beside that team, a no-brainer. That's the new ceiling for one operator.
  • Your team needs ten specific skills this year. We list them below with the courses and tools to get there.

The 25-minute meeting

A few weeks ago I was on a call with members of my team. We needed to do something simple. Generate an SSH key on each person’s laptop and paste the public key into a chat thread so I could add it to a server. Three minutes per person. Maybe four if someone had never done it.

Before we started, I asked who was comfortable in a terminal. Multiple people said they were good. “Yeah, I use it all the time.” Cool.

It took us twenty-five minutes.

Not because the command was hard. The command is one line. It took twenty-five minutes because people couldn’t figure out how to copy text out of a terminal window. Couldn’t use arrow keys to recall the last command. Pressed CTRL+C and got confused when nothing happened the way they expected. Tried to right-click. Tried to drag-select. Pasted the wrong half of the key. One person closed the terminal by accident and didn’t know how to reopen it.
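For the record, the task really is that small. Here is a sketch of the whole thing (the ed25519 key type and the demo file path are common choices for illustration; in real use you would accept the default `~/.ssh` path and set a passphrase):

```shell
# Generate a new SSH keypair, non-interactively for this demo:
# -t ed25519  key type, -N ""  empty passphrase, -f  output path.
ssh-keygen -t ed25519 -C "you@example.com" -N "" -f ./demo_key -q

# Print the PUBLIC key (.pub) -- this is the half you paste into the chat.
# The private key (demo_key, no extension) never leaves your machine.
cat ./demo_key.pub
```

On macOS, `cat ./demo_key.pub | pbcopy` puts it straight on the clipboard, which sidesteps the drag-select problem entirely.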

These are smart people. They build campaigns. They run ad accounts. They write briefs that clients pay for. And they got stuck on copy-paste in a black rectangle.

I’m not telling this story to dunk on anyone. I’m telling it because that meeting is the most honest diagnostic I’ve gotten on the state of my own company in the last two years. The gap between “I’m good with computers” and “I can actually do work with computers in a way that compounds with AI” is enormous. And it’s the gap that decides who has a desk job in five years.

This post is what I told my team afterward, expanded.

Two things are true at once

This isn’t a complaint and it isn’t a lecture. It’s a diagnosis. Both of these statements are true at the same time, and you have to hold both of them in your head without flinching.

The first thing that’s true is that there are real, structural, generational reasons the gap exists. Most people on my team grew up on iPhones. Their first computer was a touchscreen. They’ve never been told by an operating system, “I don’t understand that command, try again.” They’ve spent their entire adult lives inside walled gardens that hide the file system, hide the network stack, and hide the actual machine. That’s not their fault. That’s a product decision Apple and Google made, and it worked, and it sold a lot of phones.

The second thing that’s true is that none of that matters anymore. The work changed. The tools changed. AI ate the boring middle of office work. And whether or not the gap exists for fair reasons, the gap still has to close, because the work on the other side of it is the only work that pays.

You can be sympathetic to why and still be honest about what.

You can be sympathetic to why someone hasn't learned a thing, and still be honest that the thing has to be learned. Those are not in conflict. Pretending they are is how teams die slowly.

The “digital native” myth, with receipts

For about a decade, every consultant deck assumed Gen Z would be the most technically capable workforce in history. They grew up with technology. Of course they’d be good at it.

Then computer science professors started writing about what they were actually seeing in classrooms. In 2021, The Verge published a piece that quoted multiple professors who’d realized their students didn’t understand file directories [3]. An astrophysics professor at Cal Poly Humboldt was running a simulation lab and her students kept getting “file not found” errors. She asked where they’d saved their work. They didn’t know. They didn’t understand the question. The concept of “a place where the file lives” wasn’t in their model of computers.

That piece set off years of follow-up. A George Mason astronomy professor said he now spends class time teaching students about file extensions and terminal commands before he can teach them astronomy [3]. The Stack Overflow podcast did a whole episode on it [4]. PC Gamer wrote about it. The Verge’s reporter pointed out that a generation raised on search bars and apps doesn’t see hard drives the way older users do, because operating systems hide the file structure on purpose.

This isn’t a story about Gen Z being lazy or dumb. They’re not. The astrophysics students are doing astrophysics. The point is that “grew up with technology” turned out to mean “grew up with the abstraction layer,” and the abstraction layer is exactly what AI tools require you to peel back.

To get real value out of Claude Code or Cursor or any agentic system, you need to understand:

  • where files live
  • how processes run
  • what a path is
  • what an environment variable is
  • how to copy something out of one window and into another without losing your mind

These are 1995 skills. And a lot of people who can swipe through TikTok at 60 frames per second have never been forced to learn them.

What "digital native" was supposed to mean

  • Comfortable with new tools
  • Can teach themselves software fast
  • Understands how computers work because they grew up with them
  • Adapts to new platforms easily

What it actually means in practice

  • Comfortable with consumer apps that hide complexity
  • Can sign up for SaaS, can't configure it
  • Has never seen a terminal, a directory tree, or a config file
  • Adapts to new UIs, not new systems

Why millennials are weirdly well positioned

I’m a millennial. I grew up on DOS, Windows 3.1, 95, 98, ME, XP, and 7, in that order. Switched to Mac in 2006. Built and broke and rebuilt machines with parts from local shops. Ran Linux on a laptop in college because the Wi-Fi card cost less if you bought one without an OS. I’m at home in a terminal. I’m also at home in a Figma file.

I’m not bragging. I’m describing a specific accident of timing. Anyone born roughly between 1982 and 1995 had to learn the underlying systems because the abstraction layer didn’t exist yet, and then we got to enjoy the fancy abstraction layer when it arrived. We can drop down a level when we need to. We can read an error message that includes a stack trace and not panic. We can SSH into a box. We can also use Notion and Linear without a manual.

Gen X and older sometimes dismiss AI as hype because they remember three previous cycles of “this changes everything” that didn’t. Fair instinct, wrong call this time. Gen Z grew up on the abstraction layer and often can’t drop below it, which means they get stuck the moment AI tools demand a real working knowledge of computers.

Millennials are uniquely positioned to bridge those worlds. That advantage is worth approximately zero if we don’t actually use it.

What AI just ate

Goldman Sachs put a number on this in early 2023 and updated it through 2025 and 2026. About 300 million full-time jobs globally are exposed to automation by generative AI [1]. In the US specifically, the technology can automate tasks that account for roughly 25% of all work hours. Office and administrative support has the highest exposure at 46% of tasks. Legal work is 44%. Architecture and engineering is 37%. Business and financial operations is 35% [5].

Goldman’s economist Joseph Briggs estimates 6 to 7% of US workers will be displaced over the transition, with younger entry-level knowledge workers hit first [1]. An April 2026 Goldman note found AI had already cut monthly US payroll growth by about 16,000 jobs in the prior twelve months [6].

Anthropic’s own data, drawn from millions of real Claude conversations, makes it more concrete. Their Economic Index ranks occupations by how much of their tasks Claude is actually doing right now. Computer Programmers top the list at 75% task coverage. Customer Service Representatives are at 70%. Data Entry Keyers are at 67%. Translators, Copy Writers, Tax Preparers, Bookkeepers, all clustered in that same band [2].

If your team’s job description has significant overlap with that list, the work is changing under your feet whether anyone tells you or not.

  • 300M: global jobs exposed to AI automation (Goldman Sachs)
  • 25%: of US work hours that can be automated today
  • 75%: of Computer Programmer tasks already covered by Claude usage
  • 46%: of office/admin support tasks exposed to automation

I want to make this less abstract. A few weeks ago I got a long message from an offshore SEO contractor we’d been working with. The message was an unprompted explanation of why his deliverables hadn’t been performing. It walked through the playbook he was running. Manual citation building. Free guest post outreach with no responses. Some link insertion services. The kind of work that maybe moved the needle in 2017 and definitely doesn’t in 2026.

The message itself was clearly written by AI. You could tell from the rhythm.

I’m not telling you that story to embarrass one person. The contractor’s individual situation isn’t the point. The point is the entire category. Manual citation building, free guest post outreach, basic on-page edits, lightweight backlink work, pasting boilerplate into outreach templates. All of it is now done faster, cheaper, and with better US-business context by tools like Search Atlas, Surfer, and direct Claude prompts. A category of work that supported tens of thousands of contractors globally just got eaten by software that costs $99 a month.

The honest response, if you’re in that category, is to either retrain into something AI can’t do yet, or to specialize in the parts of SEO that genuinely require judgment (technical audits on edge-case CMSes, real PR outreach, content strategy tied to brand voice). Pretending nothing changed and continuing to ship 2017-era work is the worst option, and it’s also the most common one.

The MIT NANDA report from 2025 confirmed this pattern in the data. The lead author told Axios there weren’t widespread layoffs from AI yet. What was happening instead was that companies were quietly not renewing their offshore contracts. “Jobs most impacted were already low priority or outsourced” [7]. The first wave of AI displacement is global outsourcing, not US W2s. That’s a temporary reprieve, not a permanent one.

What AI hasn’t eaten yet

Here’s where I want to be clear-eyed instead of doom-y.

Plenty of work is still safe. The Anthropic data shows roughly 30% of workers had zero AI task coverage in their occupations, because the tasks happen too rarely in digital form to even be measured [2]. That list includes Cooks, Motorcycle Mechanics, Lifeguards, Bartenders, Dishwashers, Dressing Room Attendants. Goldman’s analysis put building maintenance at 1% exposure and installation/repair at 4%.

If you can fix a transmission, frame a wall, run wire, weld a rail, or run a kitchen line at 8pm on a Saturday, you are extremely fine. Probably the most fine you’ve been in twenty years.

Inside knowledge work, what survives is a specific list:

  • Real client relationships, where someone trusts you because you’ve shown up for them
  • Strategic synthesis across messy, contradictory inputs
  • Judgment under genuine uncertainty
  • Taste, in the design or copy or product sense
  • In-person trust, the kind that closes a deal in a room
  • Coordinating other humans, including being the person who decides what AI should do
  • Truly creative work that requires lived experience or new ideas, not remixed ones
  • Anything physical, anything regulated heavily, anything where the cost of a wrong answer is somebody’s life or somebody’s lawsuit

Notice that almost none of those are the day-to-day tasks of a typical agency role circa 2022. Almost all of them are higher-leverage, more senior, more relationship-driven, and require more, not fewer, hard skills underneath. The middle of the pyramid is the part that’s collapsing.

The new ceiling for one operator

I have a friend in the industry, a peer of mine who’s a couple of years older. We came up around the same time. He’s gone deep on AI in a way I’ve only partially matched, and he’s worth describing because he’s the most concrete picture I have of what one person can do now.

He runs five Claude Max accounts. Five. Two hundred dollars a month each. He uses them as parallel workers, kicking off long-running tasks across different projects so he never has to wait on a single conversation. He builds Telegram bots for clients so they can edit their own websites by sending a message. He generated 22 SVG videos using Remotion in two hours, total cost about 86 cents. He shipped two small iPhone apps over a single weekend last month. He’s running a real product company, plus a few side products with paying users (Tridesk, ProxyBox, VoiceTrail, SalesConnector).

His own line on it: “I spend $60K a month on humans. Spending $1K a month on Claude that works around the clock and never complains is a no-brainer.”

He’s not a special genius. The tools are special. He’s just been willing to live in them. Every weekend for the last 18 months he’s been building things, breaking things, learning the rough edges, and getting faster. The compounding is brutal at this point. The gap between him and someone who refuses to install Claude Code is no longer a skill gap. It’s a category gap.

That’s the new ceiling. Not for a 50-person dev shop. For one person.

If you’re running an agency, a consulting practice, or a small product team, that’s the bar your competitors are about to clear. The boutique two-person shop that knows how to operate this way will out-deliver a fifteen-person agency that doesn’t. We’ve already watched it happen in copywriting. It’s happening right now in design and front-end. It’s coming for SEO, for paid media operations, for finance back-office, for legal review.

The skills your team actually needs

This is the playbook part. It’s also the part most teams fail at, because the skills feel basic and unglamorous and people skip them. Don’t skip them.

These are the ten skills. Each one is the floor, not the ceiling. If someone on your team can’t do these, they can’t use AI tools well, full stop.

1. Terminal basics

cd, ls, pwd, mkdir, rm, arrow keys for command history, CTRL+C to kill a process, CTRL+L to clear, copy-paste in your specific terminal app. SSH into a server. Generate an SSH key. Read a path. Knowing this stops 90% of the friction in agentic AI tools.
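The entire floor fits on one screen. A minimal warm-up session, safe to run anywhere (the directory and file names are throwaway):

```shell
# Make a scratch directory and move into it.
mkdir -p demo-project
cd demo-project

# Where am I? What's here? (pwd prints the path, ls lists contents)
pwd
ls -la

# Create a file, confirm it exists, delete it.
touch notes.txt
ls
rm notes.txt

# Back out and clean up after yourself.
cd ..
rm -r demo-project
```

Ten minutes of typing these until they are boring removes most of the friction in the 25-minute-meeting story above.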

2. Git fundamentals

Clone, branch, commit, push, pull, merge, resolve a conflict, read a diff, undo a mistake. Doesn't have to be fancy. Has to be there. Claude Code, Cursor, and every modern AI workflow assumes Git literacy as a baseline.
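The whole local loop, end to end, is short enough to memorize. A sketch using a throwaway repo (the identity values are placeholders for the demo; `git init -b main` needs Git 2.28 or newer):

```shell
# Create a repo and set a local identity for the demo.
mkdir -p git-demo && cd git-demo
git init -q -b main
git config user.email "you@example.com"
git config user.name  "Demo User"

# First commit.
echo "hello" > README.md
git add README.md
git commit -q -m "Initial commit"

# Branch, change, read the diff, commit, merge back.
git checkout -q -b feature
echo "more" >> README.md
git diff                      # review exactly what changed
git commit -qam "Add a line"
git checkout -q main
git merge -q feature
```

Undoing a mistake is usually `git checkout -- <file>` for unstaged changes or `git revert <commit>` for committed ones; knowing those two covers most emergencies.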

3. Prompting like a manager

Treat the model like a brilliant junior employee with infinite patience and zero context. Give it the goal, the constraints, the format you want, and an example. Iterate. Don't dump a one-liner and complain when it misreads you.
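In practice that looks less like a question and more like a brief. A sketch, with invented campaign details purely for illustration:

```
Goal: a 150-word LinkedIn post announcing our Q3 case study.
Context: B2B SaaS client; audience is marketing directors; dry, plain tone.
Constraints: no emojis, no "game-changer", exactly one CTA at the end.
Format: one hook line, two short paragraphs, then the CTA.
Example: [paste a past post whose tone you liked]
```

The point is the structure, not the wording: goal, context, constraints, format, example. A junior employee would need all five; so does the model.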

4. Claude Code or Cursor workflow

Pick one. Live in it daily. Learn how to plan a task, run it, review the diff, accept or reject, run tests. This is the single highest-leverage skill on this list.

5. MCP servers

Model Context Protocol lets you wire AI assistants into actual tools (your CRM, your database, your file system, your project manager). Knowing what an MCP server is, how to install one, and how to write a basic config is what separates "I use ChatGPT for emails" from "I run my business through agents."
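As a concrete anchor, wiring a filesystem MCP server into Claude Desktop is a few lines of JSON in its config file. This follows the common `npx`-launched server pattern; treat the package name and path as an illustration, not a recipe for your exact setup:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects"
      ]
    }
  }
}
```

Each entry is just "a name, a command to launch the server, and its arguments." Once you see that, adding a CRM or database server is the same move with a different command.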

6. Reading and editing Markdown / YAML

Front matter on a Jekyll post. A GitHub Action workflow. A package.json. A docker-compose.yml. If indentation in a YAML file makes you sweat, you can't ship anything modern. This is a one-afternoon skill that most people never bother to learn.
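For example, the front matter on a Jekyll post is just a YAML block between two `---` fences, where colons and space-indentation carry all the structure (the author values here are invented for illustration):

```yaml
---
layout: post
title: "How to Survive the Shift to AI"
date: 2026-04-27
categories: [ai, team]
# Nested values are indented with spaces, never tabs:
author:
  name: Example Author
  role: Editor
---
```

If you can read that without sweating, a GitHub Action workflow or a docker-compose.yml is the same grammar with different keys.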

7. APIs at a conceptual level

What's a GET versus a POST. What's an API key. What's a rate limit. What's JSON. You don't have to write a backend. You have to understand what's happening when an AI agent calls an API on your behalf, because when it breaks, only you can fix it.
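The whole concept fits in two curl calls. To keep this runnable offline, the sketch stands up a throwaway local server in place of a real API; the Authorization header is a placeholder showing where an API key normally rides:

```shell
# Stand up a throwaway local "API" so the calls actually run.
tmp=$(mktemp -d)
echo '{"clients": []}' > "$tmp/clients"
python3 -m http.server 8765 --directory "$tmp" >/dev/null 2>&1 &
srv=$!
sleep 2

# GET: read data. The key (placeholder here) rides in a header.
body=$(curl -s "http://localhost:8765/clients" \
  -H "Authorization: Bearer $API_KEY")
echo "$body"

# POST: send data. -d carries a JSON body; Content-Type says so.
# (This demo server ignores POSTs; a real API would create the record.)
curl -s -o /dev/null -X POST "http://localhost:8765/clients" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "Acme Co"}'

kill $srv
```

GET reads, POST writes, the key proves who you are, and JSON is the envelope. That is 90% of what an agent is doing when it "calls an API on your behalf."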

8. Knowing when to verify

The biggest failure mode with AI is blind trust. The BCG / Harvard study found consultants who used GPT-4 outside its capability zone were 19 percentage points less likely to get the right answer than consultants without AI. Learn the smell of a hallucination. Verify numbers, citations, and code that touches money.

9. Async work with long-running agents

Agentic tools take minutes, sometimes hours. The skill is queueing up several tasks, doing other work, and returning to review. People who treat AI like a synchronous chatbot lose the productivity gain entirely.

10. Build, don't wait

Stop waiting for a SaaS company to ship the feature you need. Build the tool. A custom internal dashboard, a Slack bot, a one-off scraper, an automation that pings you when a client's site goes down. With Claude Code, the cost of a small custom tool dropped from "weeks of dev time" to "an afternoon." Treat that as the new normal.
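The site-down pinger mentioned above is a good first build because the whole thing is a dozen lines. A sketch; the URL and the alert action are placeholders you would swap for your client's site and a Slack webhook:

```shell
# Minimal "ping me when a client's site goes down" check -- a sketch.
check_site() {
  # Ask for just the HTTP status code, nothing else.
  status=$(curl -s -o /dev/null --max-time 5 -w "%{http_code}" "$1")
  if [ "$status" = "200" ]; then
    echo "OK: $1"
  else
    echo "ALERT: $1 returned $status"
    # Real version: POST to a Slack webhook or send an email here.
  fi
}

# Demo against a closed local port, so it alerts deterministically:
check_site "http://127.0.0.1:9"
```

Drop a real URL in, schedule it with cron every five minutes, and you have the monitor the paragraph describes, built in an afternoon minus the afternoon.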

The point of the list isn't that everyone becomes a software engineer. The point is that the floor moved. Office knowledge work in 2026 is closer to amateur software engineering than it is to office knowledge work in 2018. That's not an opinion. It's just where the tools landed.

Why this is hard, even for smart people

The BCG / Harvard study from 2023 is worth knowing because it's the cleanest data on what happens when you give knowledge workers AI [8]. 758 consultants. Random assignment. Real consulting tasks. The headline numbers were huge: consultants with GPT-4 access completed 12.2% more tasks, 25.1% faster, and produced work rated 40% higher in quality. Bottom-quartile performers improved by 43%. The technology was a level-up across the board.

There's a less-quoted finding in the same paper. On tasks outside GPT-4's capability frontier, consultants using AI did worse than consultants without it. The authors called it the "jagged technological frontier." Inside the frontier, AI is a superpower. Outside it, AI is a confident liar that drags you with it.

The skill is knowing which side of the line you're on. That requires actual judgment, which requires actual familiarity with how the model behaves, which requires using it every day for months. There's no shortcut.

GitHub's controlled study on Copilot showed effects of similar magnitude on the dev side. Developers given Copilot completed an HTTP server task 55.8% faster than the control group [9]. The catch: that's the average across people who already wrote code for a living. For someone who can't open a terminal, the multiplier is zero, because the runway never starts.

The MIT counter-evidence, taken seriously

I want to be honest about the other side of the data, because the AI hype industry has incentives to lie to you. In August 2025, MIT's NANDA initiative published "The GenAI Divide: State of AI in Business 2025." The viral headline was that 95% of enterprise generative AI pilots failed to produce measurable financial returns [10].
Across $30 to $40 billion in spending, only about 5% of integrated AI deployments showed real P&L impact. Most pilots stalled.

The report has real critics. Marketing AI Institute pointed out that the success bar (measurable P&L impact within six months) ignored efficiency gains, churn reduction, and pipeline velocity, and that the underlying sample was 52 interviews [11]. Reasonable critique. But the core finding is still useful: most companies are throwing AI at the wall and getting nothing back.

The 5% that succeeds is doing specific things: targeting back-office automation instead of sales gimmicks, partnering with vendors instead of building from scratch, empowering line managers instead of central AI labs, and giving the tools enough context and feedback to actually learn the work.

What that means for a smaller team is good news, weirdly. The big enterprise pilots are failing for big enterprise reasons. A small team with a clear use case, a willing operator, and a $200 Claude Max subscription can be in the 5% almost by default.
5%: the share of enterprise AI pilots that produce measurable financial returns. The rest fail not because the tech doesn't work, but because the company doesn't.
Adoption by generation, the data we actually have

The Anthropic Economic Index also tracks AI adoption by US state and country, and it's wildly uneven. The top 20% of US states account for about 40% of population-adjusted Claude usage [12]. The Census Bureau's Business Trends survey shows AI adoption among US firms more than doubled in two years, from 3.7% in fall 2023 to 9.7% in August 2025, with the Information sector roughly 10x more adopted than Accommodation and Food Services [12].

Generational data is messier because it's mostly self-report. Gallup's 2025 numbers show frequent AI use at work nearly doubled in two years. Younger workers report higher use, but the BCG study and others show that experience with the tool matters more than age. The most successful AI users in Anthropic's latest report are not the youngest. They're the most experienced: the people who treat Claude as a collaborator and iterate, instead of dumping a single prompt and accepting the first answer [13].

That's a hopeful finding for any team that decides to invest in real training. Age isn't destiny. Hours in the tool are.

What we're doing this quarter

Talking about a gap and not closing it is the most common form of corporate cope. So this quarter we're paying for the tools, running the workshops, and setting the expectation, in case that's useful as a template. We'll see how it goes. I expect about two-thirds of the team to be visibly more capable in 90 days. I expect a third to struggle, and some of them will choose to opt out, and that's fair.

The honest hard truth

Not everyone makes it across this gap. Some people choose not to. Some people can't, for reasons that don't have anything to do with effort. The kindest and the most honest thing a company can do is be clear about it now, instead of pretending later.

If you're a manager, you owe your team the truth that the work is changing, the tools are changing, and the floor for office work in 2026 is higher than it was in 2022.
You owe them the time, the tools, and the training to clear the new floor. You also owe them the honesty that if they decide not to clear it, the work that's left for them is probably not the same work, and probably not at the same desk.

If you're on a team and you've been quietly hoping AI was a fad, this is the part where I tell you it isn't. The shift you can see in Anthropic's data and Goldman's data and MIT's data is not a vibe. It's already showing up as offshore contracts not getting renewed, junior dev roles not getting backfilled, content marketing teams getting cut in half. It's coming for the next layer up, and it'll keep moving.

The good news is that the entry point is laughably cheap. $20 a month for Claude Pro. $200 a month for Claude Max if you're serious. A free GitHub account. A free Cursor download. An afternoon to learn enough terminal to be dangerous. The cost of becoming AI-native in 2026 is roughly one nice dinner per month and ten focused hours. The cost of not becoming AI-native is that the work you currently do gets done by someone else's agent for $1.

What actually works

Here's the short list of what we've actually deployed and gotten value from. No affiliate spam, just what's working in our shop right now.

Claude Max ($200/mo)

The single best dollar-for-dollar productivity tool we've found. Run multiple long-running tasks in parallel, get Claude Code, and stop hitting rate limits. Worth it the moment you have one real task per week.

Claude Code (CLI)

Agentic coding in your terminal. Pairs with Claude Max. Best workflow we've found for everything from one-off scripts to full app builds. Learn it before Cursor if you're starting from zero.

Cursor

If you prefer a full IDE, Cursor is the right call. Strong Claude integration, smart autocomplete, in-line edits. $20 a month. Pay for it.

ClickUp + Claude integration

We use ClickUp as the spine of project management. The Claude MCP integration with ClickUp lets us ask "what's overdue across all client accounts" and get a real answer in five seconds.

Anthropic's free prompt engineering course

Found at the Anthropic docs site. Two hours. Covers everything from basic prompting to chained workflows. We make every new hire complete it in week one.

The Missing Semester (MIT)

MIT's free crash course on terminal, shell scripting, Git, and the developer toolchain. It's the single best resource for anyone who skipped these skills in college. Free, public, evergreen.

Search Atlas (for SEO teams)

If your shop still does SEO, this replaces about $5,000 a month of offshore manual work with a unified platform. The point isn't to fire your contractors. The point is to use the tool yourself and decide what's still worth outsourcing.

What we don't recommend yet

Most agentic "do my whole job" SaaS that costs $500+ a month per seat. The MIT data is right that these mostly stall. Build narrow tools you control instead. The rare exceptions are vertical-specific (legal, accounting) where the vendor knows your domain better than you do.

Acceptance

I used to get genuinely frustrated about the gap. I thought: how is it possible that a person can't copy text out of a terminal in 2026? How is it possible that a contractor sends me an AI-written explanation of why his manual work didn't perform? I'm not frustrated about it anymore. I just see it.

The gap is real. The people on the wrong side of it didn't get there on purpose. Apple sold them a beautiful walled garden and they paid for it and they liked it. Google sold them a search bar that made directories obsolete. Schools stopped teaching computers and started teaching apps. The skills my generation accidentally learned because we had no choice are skills that nobody teaches anymore on purpose. That's a real thing. It explains a lot. It excuses none of it.

You can't change everyone. You can change what your team learns this quarter. You can pay for the tools, run the workshops, set the expectation, and watch who shows up. The ones who do show up are going to compound at a rate that doesn't make any sense compared to what was possible four years ago. The ones who don't will gradually move to other work. Some of that work will be fine. Some of it won't.

I'm writing this in April 2026. I'll write the follow-up in April 2027 with whatever we got right and whatever we got wrong. If you're reading this and you run a team, the only thing I'd say is: the cost of starting now is small, the cost of starting in twelve months is enormous, and the cost of never starting is your business.

Pick the floor. Train the team. Own a few tools. Ship the work. The rest sorts itself out.

Sources

| # | Source |
|---|--------|
| 1 | Goldman Sachs Research, "How Will AI Affect the US Labor Market?" (2025/2026 update) — [https://www.goldmansachs.com/insights/articles/how-will-ai-affect-the-us-labor-market](https://www.goldmansachs.com/insights/articles/how-will-ai-affect-the-us-labor-market) |
| 2 | Anthropic, "Anthropic Economic Index: Labor Market Impacts" — [https://www.anthropic.com/research/labor-market-impacts](https://www.anthropic.com/research/labor-market-impacts) |
| 3 | The Verge, "File Not Found: A generation that grew up with Google is forcing professors to rethink their lesson plans" (Sept 2021) — [https://www.theverge.com/22684730/students-file-folder-directory-structure-education-gen-z](https://www.theverge.com/22684730/students-file-folder-directory-structure-education-gen-z) |
| 4 | Stack Overflow Blog, "Gen Z doesn't understand file structures (Ep. 415)" — [https://stackoverflow.blog/2022/02/15/gen-z-doesnt-understand-file-structures-ep-415/](https://stackoverflow.blog/2022/02/15/gen-z-doesnt-understand-file-structures-ep-415/) |
| 5 | Goldman Sachs / Briggs & Kodnani, "The Potentially Large Effects of Artificial Intelligence on Economic Growth" (March 2023) — [https://www.gspublishing.com/content/research/en/reports/2023/03/27/d64e052b-0f6e-45d7-967b-d7be35fabd16.html](https://www.gspublishing.com/content/research/en/reports/2023/03/27/d64e052b-0f6e-45d7-967b-d7be35fabd16.html) |
| 6 | Goldman Sachs, AI payroll-impact note (April 2026) summary — [https://www.prismnews.com/workplace/goldman-sachs/goldman-sachs-says-ai-is-already-slowing-payroll-growth](https://www.prismnews.com/workplace/goldman-sachs/goldman-sachs-says-ai-is-already-slowing-payroll-growth) |
| 7 | Axios, "AI is already replacing offshore jobs — with U.S. workers a long-term target" (Aug 2025) — [https://www.axios.com/2025/08/18/ai-jobs-layoffs](https://www.axios.com/2025/08/18/ai-jobs-layoffs) |
| 8 | Dell'Acqua et al., "Navigating the Jagged Technological Frontier" (Harvard / BCG / MIT / Wharton, Sept 2023) summary — [https://www.thecrimson.com/article/2023/10/13/jagged-edge-ai-bcg/](https://www.thecrimson.com/article/2023/10/13/jagged-edge-ai-bcg/) |
| 9 | Peng et al., "The Impact of AI on Developer Productivity: Evidence from GitHub Copilot" (arXiv 2302.06590) — [https://arxiv.org/abs/2302.06590](https://arxiv.org/abs/2302.06590) |
| 10 | Fortune coverage of MIT NANDA, "GenAI Divide: State of AI in Business 2025" — [https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/](https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/) |
| 11 | Marketing AI Institute critique of MIT methodology — [https://www.marketingaiinstitute.com/blog/mit-study-ai-pilots](https://www.marketingaiinstitute.com/blog/mit-study-ai-pilots) |
| 12 | Anthropic Economic Index, September 2025 report (geographic and enterprise adoption) — [https://www.anthropic.com/research/anthropic-economic-index-september-2025-report](https://www.anthropic.com/research/anthropic-economic-index-september-2025-report) |
| 13 | Built In coverage of Anthropic Economic Index, April 2026 — [https://builtin.com/articles/anthropic-economic-index-2026-ai-jobs-report](https://builtin.com/articles/anthropic-economic-index-2026-ai-jobs-report) |
