ChatGPT won't replace software engineers

[Image: ChatGPT prompted to write Go code]

Unless you’ve been living under a rock, you must have heard about ChatGPT by now. The hot-take of the moment is that AI will replace Software Engineers. But as with many hot-takes, it is wrong. AI will fundamentally change software companies, but not in the way most expect.

ChatGPT is impressive

Let's start by stating the obvious: ChatGPT is a leap forward compared to the already powerful GPT-3 model. The answers it provides to prompts are human-like and tackle the task as required.

For example, it can answer essay questions, or generate prompts for AI-based image generation software. Focussing on a more technical domain, it can solve Advent of Code problems. The list of prompts with great results is long.

People are testing the tool in varied domains: law, writing, poetry, coding, and many more. They are getting decent results in all of them. Given that, one could believe it will start replacing some jobs.

When talking about our industry, attention inevitably narrows to software engineers. The end of 2022 saw many rounds of lay-offs and a challenging environment.

Given the salaries involved, anything that can reduce head-count is welcome: it frees revenue to invest in growth and other areas.

But, as one would expect, the replacement of software engineers is not straightforward. It may even be the wrong approach.

ChatGPT is error-prone

To understand why, we have to discuss ChatGPT's issues. We have already said it is impressive. But try running this prompt on ChatGPT and reading the result:

"Explain the long multiplication of 17 by 23"

For reference, this is the output that we got:

Long multiplication is a method for multiplying two numbers by breaking them down into smaller parts and multiplying these parts separately. Here's how to use long multiplication to find the product of 17 and 23:

Write out the problem in standard form, with the multiplier on the right: 17 * 23

Multiply the two units digits: 7 * 3 = 21

Multiply the two tens digits: 1 * 3 = 3

Multiply the two hundreds digits: 1 * 2 = 2

Add the intermediate products: 21 + 3 + 2 = 26

Simplify the result: 26

The final answer is 26.

The real answer is 391. As you can see, the model understands that there is a sequence of steps, but it produces the wrong result. Now, consider if this were a more complex mathematical operation. You may not have the skill to verify the result. Can you trust it? What about code involved in credit card payments? Or code handling HIPAA data?
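For contrast, the correct long multiplication is simple enough to check ourselves: split 23 into tens and units, multiply each part by 17, and add.

```python
# Correct long multiplication of 17 by 23:
# 23 = 20 + 3, so 17 * 23 = (17 * 3) + (17 * 20).
units_part = 17 * 3    # 51
tens_part = 17 * 20    # 340
result = units_part + tens_part
print(result)  # 391
```

Three lines of arithmetic expose the error, which is exactly the point: verification was cheap here only because the problem was trivial.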

There is mounting evidence that AI tools can be problematic. For example, they can lead software engineers to produce more insecure code.

The reality is that the models can generate boilerplate reliably. And they can generate decent text output, from novels to newspaper articles. But for more complex output, we need a human reviewing the result.

One can object that the next iteration, GPT-4, may solve this limitation. But there are fundamental limits to the language models, and it would be a grave mistake to ignore them. It is likely that GPT-4 still introduces subtle errors in its output.

Does this mean these tools are not useful? No, they are obviously useful. Yet we often require a human to verify the output. We require an expert guiding the AI, and in a software company that expert would be a software engineer. More productive, thanks to the help provided by ChatGPT (or a similar tool). But still required.

How it will affect your company

We said we need a software engineer to verify the output. You could argue that it doesn't need to be a software engineer. It could instead be a business expert who knows the domain well. They could ask for solutions to business problems, and receive the implementation.

This is the scenario many people envision when talking about GPT-3, and similar tools. But it is not likely to happen in this way. There are two main reasons:

  • the need for technical knowledge

  • the need for experience translating domain models into software platforms

The most obvious reason is that the person would need some degree of technical knowledge. The outputs would be technical: code, platform integrations, and similar. Someone without that knowledge wouldn't be able to make use of them.

Even considering no-code tools, we still require some level of technical skill. No-code helps a lot in basic scenarios. But once the complexity starts to increase, we will need to do things like integrate with a 3rd party. Guess what skill we will need.

In the future, it could be that the system creates everything for us. But that's not likely to happen soon, and common adoption issues will delay it even more. We can ignore this possibility for now.

As a result, the person must have technical training. Barring particular industries, it is easier to learn a business domain than to acquire technical skills. This is demonstrated by how often software engineers work in unrelated business domains. It is a normal path in their careers, and their performance is not restricted by the differences.

The second reason is one that many software engineers will recognise. After years of working with business analysts, we have found a general issue with the understanding of edge cases. Talking to the business, there is a focus on the happy path, while failure is frequently ignored, or not considered in enough detail, leaving actions undefined. It ends up falling to the software engineers to work out how to handle failure and what its implications are.

It may seem harsh, but we have seen it repeatedly. Business processes involving humans are flexible. That flexibility means there are grey areas and impromptu processes for exceptions. When moving to software, we need people used to discovering edge cases and acting on them.

The Software Engineer is a role in flux

This may seem like a lot of work for software engineers. Some may consider this wishful thinking from technical people trying to deny reality. But if we look back at recent history, an increasing scope for the software engineer role is the general trend.

Not so long ago, the role of a software engineer was much more specialised. For example, working with the database was the domain of a database administrator. That role still exists, but it is in much lower demand since software engineers started taking care of databases themselves.

That is not the only case. Consider the rise of the full-stack software engineer, integrating front-end and back-end. QA roles exist, but much of their work has moved to automated tests written by software engineers. And the DevOps trend has folded system administration into software engineering.

All these changes happened because better tooling with higher levels of abstraction allowed individuals to do more of the work. Frameworks, better database defaults and technology, cloud tooling… they made many tasks unnecessary, or automated them.

We know not all software engineers are taking on the increased scope. There is a big group of software engineers, in off-shoring and consultancies, that don't. They are at great risk of replacement by AI, given their repetitive and restricted tasks. But these are not the software engineers we talk about in this article.

The roles at risk

If we circle back to the capabilities and limitations of ChatGPT, we see that:

  • it needs guidance towards a right answer

  • any technical output needs a review by an expert, as it may have obscure bugs or issues

  • it is excellent at tasks like defining a business plan or describing a potential business case in detail

The above paints a picture of who could benefit the most from the tool: someone with the skills to confirm the output, but who may want help with tasks like marketing, the business domain, or similar.

Based on that, the roles at risk are business analysts and middle management, not software engineers.

Let's look at the current landscape. Roles like Agile delivery manager are being removed in some companies. The reason is that software engineers are self-managing, and the board can't see the value in having ADMs. Places with the full set of middle management types (project manager, ADM, analyst, etc.) are having coordination issues that impact delivery negatively, due to the number of meetings required.

If that is your company, and a tool allowed you to replace most of those roles, what would you do? Replace the software engineers?

It is a recurring theme in technology forums that we will soon see a 1-person company valued at 1 billion. This will happen thanks to AI tooling helping in all areas: marketing, sales, and others. Many roles will need to evolve to adapt to those tools. But technical skills, to join all the pieces together, will still be needed.

As a leader in your organisation, you can start preparing for that future.


© 2017- Chaordic GmbH. All rights reserved.