Like everyone else on the planet, I’ve been following GitHub Copilot since its launch. It is an impressive achievement and a remarkable milestone for the deep learning industry, that’s for sure. We are clearly at the early stages of applying deep learning to software development, and it is somewhat unsettling to ponder what the future might hold for this field.

Like many others, however, I worry about code quality issues and the risk of license infringements[^1]. I am also concerned that the advent of Copilot-like tools might fundamentally change the software developer experience, if not the software developer role as a whole, and for the worse.

I wrote down some notes in preparation for an in-depth Copilot article, but then I stumbled upon Jeremy Howard’s ‘[Is Copilot a blessing, or a curse?][3]’. In that piece, Jeremy covers all of my points and then some. Also, given his background, Jeremy’s musings on deep learning carry far more weight than mine. My advice is to read his article. I especially appreciate his critique of Copilot’s much-advertised role as an “AI pair programmer”:

> GitHub markets Copilot as a “pair programmer”. But I’m not sure this really captures what it’s doing. A good pair programmer is someone who helps you question your assumptions, identify hidden problems, and see the bigger picture. Copilot doesn’t do any of those things – quite the opposite, it blindly assumes that your assumptions are appropriate and focuses entirely on churning out code based on the immediate context of where your text cursor is right now.

He then mentions both automation and anchoring biases and explains how they might influence the developers using advanced AI-powered automation tools like Copilot.

The code proposed by Copilot seems to solve most problems, yes, but it appears to be of average quality at best. Jeremy explains why: Copilot trains on public repositories, with no filter on the overall quality of the material at hand (admittedly, a hard thing to achieve). The developer is expected to carefully review the suggestions, and that’s where automation and anchoring biases might affect judgment. Besides, who enjoys doing code reviews? I certainly don’t. Any day, I’d rather take on the challenge and churn out my own solution. Yes, it might require effort and time, or see me googling for some help (those Stack Overflow hints have usually been reviewed, amended, and commented on by fellow programmers; both quality and review, right there). When my solution works, I am thrilled. That feeling of self-accomplishment and satisfaction is what I enjoy the most. It’s what I look forward to in the morning when I sit at my desk.

I also don’t want to renounce deep understanding. When we delegate code creation, we’re taking a step toward shallow knowledge in our field. Eric Sink’s ‘[Will deep understanding still be valuable?][4]’ offers an excellent discussion of this topic:

> In my nearly 4 decades of writing code, I have consistently found that the most valuable thing is to know how things work. Nothing in software development is more effective than the ability to see deeper. […] I am utterly convinced that deep understanding is important. But increasingly, I feel like I’m swimming upstream. It seems like most people in our industry care far more about “how to do” rather than “how does it work”.

Copilot is great and feels like magic[^2]. And precisely for that reason, at my company, we’re not going to adopt it.


[^1]: For example, see Armin Ronacher’s demonstration of Copilot regurgitating famous, GPL-licensed code.

[^2]: “Any sufficiently advanced technology is indistinguishable from magic.” (Arthur C. Clarke)

[3]: https://www.fast.ai/2021/07/19/copilot/
[4]: https://ericsink.com/entries/depth.html