Artificial intelligence already influences both how solutions are created and how they are operated. While AI has proven itself in operations through AIOps, its value in creating solutions is harder to pin down. One reason may be that AI touches application development in three distinct ways: first, AI-supported development of a solution; second, development of a solution that offers AI features; and third, the combination of both.
AI enthusiasts envision a future in which machines take over large parts of developing and operating solutions.
Until recently, this future seemed a long way off. A publication from 2017 estimated that it would be around 2040 before machines largely take over the coding of new solutions from humans, from design to logic.
That point in time may have come closer.
Domain-specific solutions have been around for a long time. IDEs such as Visual Studio already integrate AI-supported features that assist developers with coding. A collection of other tools can be found on GitHub; fittingly, many of these tools also use GitHub as their training basis.
Facebook's Sapienz is a milestone in automated testing: it not only executes tests, but also creates the test cases for an application itself. And that is not all: Facebook's SapFix complements Sapienz by supporting the correction of the bugs found. The whole thing is available as open source on GitHub.
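To make the idea of generated test cases concrete, here is a minimal sketch in Python. It is emphatically not Sapienz's actual algorithm (which uses multi-objective search over app interactions); it only illustrates the core principle of searching for inputs that make a function under test fail, with a deliberately buggy example function:

```python
import random

def normalize(values):
    """Function under test: scale values so they sum to 1.
    Deliberate bug: raises ZeroDivisionError when the values sum to zero."""
    total = sum(values)
    return [v / total for v in values]

def search_for_crash(seed=0, trials=1000):
    """Toy automated test generation: randomly sample inputs and return
    the first one that makes the function under test raise an exception."""
    rng = random.Random(seed)
    for _ in range(trials):
        candidate = [rng.randint(-3, 3) for _ in range(rng.randint(1, 4))]
        try:
            normalize(candidate)
        except Exception:
            return candidate  # a generated failing test case
    return None

print(search_for_crash())  # some list whose elements sum to zero
```

A tool like Sapienz additionally has to decide which of thousands of such failing cases are worth reporting and minimise them to the shortest reproducing sequence.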
On a non-technical level, current no-code tools make it easier to create new products, be it websites, designs, data analysis or models. Shopify, Wix and WordPress are no-code tools that millions of people already use to do things themselves without consulting designers or developers.
The domain of data scientists is also affected. AutoML tools drastically reduce the time it takes to get AI into production. Tools such as Apteo allow almost anyone to deploy AI models without coding knowledge.
AI is also playing a growing role in project management, where historical project data can be used to derive time and effort estimates, identify risks and automate decisions.
Apart from these domain-specific solutions, a far more general approach was introduced a few years ago by Andrej Karpathy as Software 2.0. The idea is not to solve a problem with a hand-coded algorithm, but to treat the solution as the response of a neural network to a set of requirements. The network is trained to produce the right outputs on as comprehensive a set of inputs as possible. This clearly moves in the direction of the vision mentioned above.
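Software 2.0 in miniature can be sketched in a few lines of Python: instead of hand-coding the Fahrenheit-to-Celsius formula, we fit a tiny model's parameters from input/output examples. This is an illustrative sketch of the idea, not Karpathy's actual setup:

```python
def train(pairs, lr=0.1, epochs=5000):
    """Fit y = w*x + b by gradient descent on the mean squared error.
    Inputs are rescaled so plain gradient descent stays stable."""
    scale = max(abs(x) for x, _ in pairs) or 1.0
    data = [(x / scale, y) for x, y in pairs]
    w, b, n = 0.0, 0.0, len(data)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            err = (w * x + b) - y
            gw += 2 * err * x / n
            gb += 2 * err / n
        w -= lr * gw
        b -= lr * gb
    return lambda x: w * (x / scale) + b

# The "requirements" are given as examples instead of code:
examples = [(32, 0), (212, 100), (98.6, 37), (0, -17.78)]
fahrenheit_to_celsius = train(examples)
print(round(fahrenheit_to_celsius(50), 1))  # close to 10.0
```

The program's behaviour is now determined by the training data, not by an explicitly written formula, which is exactly the shift Software 2.0 describes.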
GPT stands for Generative Pretrained Transformer. GPT-3 is the latest in this series of natural language processing (NLP) models from OpenAI, a research organisation primarily funded by Elon Musk and Microsoft. The primary task of the GPT series is to generate text, under constraints or requirements formulated in natural language, that cannot be distinguished from human-written content.
For this purpose, the model is trained on a gigantic corpus of text, which lets it handle new topics quite successfully without additional training. Gigantic is the right word: GPT-3 has absorbed most of what people have published online in English. From this it generates a statistically plausible response to the text input it receives. And because there is so much data on the web from which to judge which response is most plausible, the predictions are usually quite accurate. GPT-3 can write poems on a subject, generate essays like this one, or flood blogs with fake news.
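The phrase "statistically plausible response" can be illustrated with a toy next-word predictor. GPT-3 does something vastly more sophisticated (a transformer over hundreds of billions of tokens), but the core idea, predicting a plausible continuation from statistics over seen text, is the same:

```python
from collections import defaultdict, Counter

# A tiny training corpus standing in for "most of the text on the Internet".
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most plausible next word."""
    return follows[word].most_common(1)[0][0]

print(predict("sat"))  # -> "on", since "sat" is always followed by "on"
```

Scale the corpus up by ten orders of magnitude and replace the counting with a transformer, and "plausible" starts to look a lot like "correct".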
GPT-3 does not require task-specific fine-tuning, although it can be fine-tuned for special tasks. And that is what some AI enthusiasts did when they realised that code is ultimately just text: since sources such as GitHub were part of the training data, it takes little effort to get GPT-3 to respond to natural-language requirements with code.
It is not all perfect, of course, but the following examples suggest what will be possible in the future. What they all have in common is that code is generated in response to a request formulated in natural language:
For example, "Check if a string is a palindrome" yields correct Python code. Of course, GPT-3 could simply have found that solution on GitHub. That explanation fails, however, for the request "Give me the indices in a list of strings that are palindromes with at least 7 characters". This input, too, is answered with correct Python code:
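Reconstructed here for illustration (the model's verbatim output is not reproduced in this article), the generated code for the two prompts plausibly looks like this:

```python
# "Check if a string is a palindrome"
def is_palindrome(s):
    return s == s[::-1]

# "Give me the indices in a list of strings that are palindromes
#  with at least 7 characters"
def long_palindrome_indices(strings):
    return [i for i, s in enumerate(strings)
            if len(s) >= 7 and is_palindrome(s)]

print(long_palindrome_indices(["racecar", "hello", "rotator", "noon"]))
# -> [0, 2]  ("noon" is a palindrome, but shorter than 7 characters)
```

The second prompt combines filtering, a length condition and index tracking, which is unlikely to exist verbatim anywhere in the training data.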
Other examples demonstrate GPT-3's ability to generate elements for websites. For example, "Make me a button that looks like a watermelon" or "A large text that says 'WELCOME TO MY NEWSLETTER' and a blue subscribe button" produce correct results. Indirect formulations such as "Make me a button in Donald Trump's hair colour" are also implemented correctly.
Generating the Google homepage based on a verbal description also leads to a very good result, as a tweet shows.
This is all very impressive and indicates where the journey is heading. After all, GPT-3 is still "just" a language predictor: a statistical model that, thanks to the scope of its training data, has a high probability of delivering the expected output. But GPT-3 does not "think" and has no "mind of its own". It is only logical that requests like "Write an algorithm that is better than you" lead to problems. The model lacks the ability (and the motivation) to reflect on such requests: because output is generated word by word, there is no consistent mental model of the kind we humans have. Such a model could, however, be the basis for a deeper understanding of the world, as outlined in a paper by the Association for Computational Linguistics. The OpenAI research team addresses further limitations of the model in their article.
Innovative solutions that require unique ideas can therefore not be expected from GPT-3. On the other hand, one has to ask: does every line of code a developer writes require a unique idea leading to an innovative solution? Not really. Sometimes our work is genuinely boring.
Therefore: NLP tools such as GPT-3 can, should, and will massively reduce the effort spent on everyday tasks like generating variations of the same design or creating simple websites based on common principles.
So what will happen in the next few years? Of course there will still be designers, developers, data scientists, project managers, etc. in the area of solution development. And there will be very powerful tools that will increasingly take over and automate parts of the work we do now and once again massively increase productivity in solution development. The ability to delegate coding to AI gives us the bandwidth to implement a larger number of requests in the same amount of time.
In a few years, the "developer" will be largely obsolete as a "writer", but not as a problem solver.
And this is precisely where the focus of all employees in developing solutions will be: solving problems.
We can expect a further new aspect: solution developers must have a precise picture of what "their AI" can and cannot do, and how to manage it. The knowledge, skills and thinking this requires must be given even more emphasis in training.
It is time to introduce development teams to the AI topic.