I have seen this study cited often enough that I keep a copy-paste response for it. And no, there aren't a bunch of other studies with any sort of conclusive evidence to support this claim either. I have looked, and would welcome any with good analysis.
"""
1. The sample is extremely narrow (16 elite open-source maintainers doing ~2-hour issues on large repos they know intimately), so any measured slowdown applies only to that sliver of work, not “developers” or “software engineering” in general.
2. The treatment is really “Cursor + Claude, often in a different IDE than participants normally use, after light onboarding,” so the result could reflect tool/UX friction or unfamiliar workflows rather than an inherent slowdown from AI assistance itself.
3. The only primary outcome is self-reported time-to-completion; there is no direct measurement of code quality, scope of work, or long-term value, so a longer duration could just mean “more or better work done,” not lower productivity.
4. With 246 issues from 16 people and substantial modeling choices (e.g., regression adjustment using forecasted times, clustering decisions), the reported ~19% slowdown is statistically fragile and heavily model-dependent, making it weak evidence for a robust, general slowdown effect.
"""
Any developer (who was a developer before March 2023) that is actively using these tools and understands the nuances of how to search the vector space (prompt) is being sped up substantially.
I think we agree on the limitations of the study--I literally began my comment with "for seasoned maintainers of open source repos". I'm not sure if in your first statement ("there are no studies to back up this claim... I welcome good analysis") you are referring to claims that support an AI speedup. If so, we agree that good analysis is needed. But if you think there already is good data:
Can you link any? All I've seen is stuff like Anthropic claiming 90% of internal code is written by Claude--I think we'd agree that we need an unbiased source and better metrics than "code written". My concern is that whenever AI usage by professional developers is studied empirically, as far as I have seen, the results never corroborate your claim: "Any developer (who was a developer before March 2023) that is actively using these tools and understands the nuances of how to search the vector space (prompt) is being sped up substantially."
I'm open to it being possible, but as someone who was a developer before March 2023 and is surrounded by many professionals who were too, our results are more lukewarm than what I see boosters claim. It speeds up certain types of work, but not everything, and not in a way that adds up to all work being "sped up substantially".
I need to see data, and all the data I've seen goes the other way. Did you see the recent Substack post looking at public GitHub data that shows no increase in the trend of PRs all the way up to August 2025? All the hard data I've seen is much, much more middling than what people who have something to sell AI-wise are claiming.
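For anyone who wants to reproduce that kind of check, here's a rough sketch against the GitHub search API -- the query, the month bucketing, and whether this matches what that post actually did are all my assumptions; a serious version would go through GH Archive/BigQuery:

    import requests

    # Count merged PRs across public GitHub for a given month via the search
    # API. Unauthenticated search is heavily rate-limited and total_count is
    # approximate for huge result sets, but it's enough to eyeball a trend.
    def merged_prs(year: int, month: int) -> int:
        start = f"{year}-{month:02d}-01"
        end = f"{year}-{month:02d}-28"  # crude month end; fine for a trend line
        resp = requests.get(
            "https://api.github.com/search/issues",
            params={"q": f"is:pr is:merged merged:{start}..{end}", "per_page": 1},
            headers={"Accept": "application/vnd.github+json"},
        )
        resp.raise_for_status()
        return resp.json()["total_count"]

    # August of each year, e.g. to see whether 2023-2025 breaks the prior trend.
    for year in (2022, 2023, 2024, 2025):
        print(year, merged_prs(year, 8))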
"""
1. The sample is extremely narrow (16 elite open-source maintainers doing ~2-hour issues on large repos they know intimately), so any measured slowdown applies only to that sliver of work, not “developers” or “software engineering” in general.
2. The treatment is really “Cursor + Claude, often in a different IDE than participants normally use, after light onboarding,” so the result could reflect tool/UX friction or unfamiliar workflows rather than an inherent slowdown from AI assistance itself.
3. The only primary outcome is self-reported time-to-completion; there is no direct measurement of code quality, scope of work, or long-term value, so a longer duration could just mean “more or better work done,” not lower productivity.
4. With 246 issues from 16 people and substantial modeling choices (e.g., regression adjustment using forecasted times, clustering decisions), the reported ~19% slowdown is statistically fragile and heavily model-dependent, making it weak evidence for a robust, general slowdown effect.
"""
Any developer (who was a developer before March 2023) that is actively using these tools and understands the nuances of how to search the vector space (prompt) is being sped up substantially.