The near-future of AI-assisted software engineering
"Will AI replace programming" is a lazy, unimaginative trope. It will not replace software engineers anytime soon. It will, however, give them another useful tool, increasing reliability and productivity.
Here is how we can shift our thinking from polar extremes toward practical solutions for software engineering with AI.
AI is not reliable enough for general problems
How should we think about AI in general, even outside of software? Firstly, the absolute best AIs are 80-90% correct for narrow, well-defined, one-dimensional tasks. Given this, they are only suitable for business and other processes where 80-90% accuracy is good enough, and where the cost of losing or correcting the remaining 10-20% is low enough.
When you start seeing problems through this lens, the number of suitable tasks drops precipitously. Errors in 10-20% of cases accrue quickly at scale; a software system with even single-digit percentage error rates is often considered unusable.
Kind problems and wicked problems
The book "Range: Why Generalists Triumph in a Specialized World" by David Epstein puts forth that there are generally two types of problems we face:
- Kind problems, suited for specialists, which are usually constrained to certain environments or challenges, subject to rigid, unchanging rules.
- Wicked problems, suited for generalists, that occur in uncertain environments, with ill-defined challenges, few known rules and that are rapidly changing.
In this context, AI is a specialist, suited to kind problems. But we know that in software engineering, as in business, codifying the solution is only a minuscule part of the work. Getting there in the first place is a wicked problem.
As a side note, the notorious uncertainty of estimating software development work is further evidence that software is a wicked problem: if it were kind, subject to rigid, unchanging rules and environments, estimation would be easy.
Producing code for a constrained algorithm may be relatively easy, but matching what code to write to the underlying problem is anything but.
Implications of working with wicked problems
If software engineering at large consists of wicked problems, and if AI is still 10-20% wrong for even kind problems, what does this mean for us?
Even if we can break down a wicked problem into a set of kind problems, the 10-20% error rate is concerning. The effort of correcting it likely exceeds the effort of a human simply writing the code (at least, that is my subjective experience from trying it). We cannot simply accept a 10-20% error rate. Put in the perspective of production systems with a 99.9% reliability target, accepting the AI's output as-is would exceed our error budget by 100-200x.
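To make the error-budget comparison concrete, here is the arithmetic as a small sketch (assuming a 99.9% reliability target, i.e. a 0.1% error budget, and the 10-20% AI error rate discussed above):

```python
# Error budget for a system with a 99.9% reliability target.
reliability_target = 0.999
error_budget = 1 - reliability_target  # 0.1% of outputs may be wrong

# Assumed AI error rate range from the discussion above.
ai_error_low, ai_error_high = 0.10, 0.20

# How many times the AI's raw error rate exceeds the budget.
overshoot_low = ai_error_low / error_budget    # ~100x
overshoot_high = ai_error_high / error_budget  # ~200x
print(f"Raw AI output exceeds the error budget by "
      f"{overshoot_low:.0f}-{overshoot_high:.0f}x")
```

In other words, unreviewed AI output alone would burn through the entire error budget a hundred times over, which is why human review remains essential.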
But this doesn't mean that AI is useless, so long as it is used appropriately and guided by a human applying their own judgement.
We could instead see an AI assistant in software do the following tasks, which are approaching kind problems:
- Use it as a smarter "Stack Overflow". Trained on a corpus of texts about frequent software problems, an AI would likely retrieve plausibly correct answers to an engineer's queries faster than Google.
- As an advanced linter/static analysis tool: "find me potential bugs in this code".
- Optimization problems: "Analyze the O(n) complexity of this algorithm, and suggest improvements to achieve the same outputs, given the same inputs".
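As a sketch of the last item, consider the kind of rewrite such an assistant might suggest: replacing a quadratic duplicate check with a linear one that produces the same outputs for the same inputs. This is a hypothetical, illustrative example; the function names are my own.

```python
def has_duplicates_quadratic(items):
    """Original version: O(n^2) pairwise comparison."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicates_linear(items):
    """Suggested rewrite: O(n) on average, using a set of seen items.
    Same inputs, same outputs (for hashable items)."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Verifying that the two versions agree on the same inputs is exactly the kind of constrained, checkable task where a 10-20% suggestion error rate is tolerable, because a human (or a test suite) can cheaply confirm or reject the rewrite.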
Shifting our thinking from "AI replacing everything" to "AI linting and proofing constrained kind problems" takes us away from black-and-white extremes. All of a sudden, we have real practical problems with real practical solutions, feasible to solve with AI in the coming years. Rather than jumping straight to replacing the most complex human tasks, we should take baby steps and solve the tasks that raise our productivity.
The future isn't AI supplanting humans. The future is AI assisted humans raising their productivity to super-human levels.