
The AI-driven company: the AI system generator
Moving from using AI agents as augments to fully autonomous generation of systems, humans will be more concerned with asking the right questions than building the solutions, says Jan Bosch.
After the business process maturity ladder and the first three steps on the R&D maturity ladder, ie AI assistants, AI compensators and AI superchargers, we discuss the fourth level: the AI system generator. Here, the aim is a fundamental shift from augmenting humans in their roles to fully autonomous end-to-end creation of systems. Although it doesn’t really matter how things work behind the scenes (when did you last check out the internals of the compiler you use to build code?), it’s important to note that in most cases, the AI system generator isn’t a single agent but rather a set of coordinated agents that together, by and large, replicate the full lifecycle of a software engineering process.
Of course, there will be humans involved in formulating the intent of the system, but the intent is the input to the AI system generator and everything from that point on is conducted autonomously by the agents. Once the system has been generated, humans can provide input in the form of additional, complementary or correcting intents, which causes the generator to regenerate all or parts of the system. Some may think that it’s virtually impossible for AI agents to create systems like this, but anyone who has looked at systems such as Lovable and its many competitors knows that we’re moving toward “no code” solutions where users do exactly this: provide an intent and iterate on the generated solution where necessary.
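This intent-in, system-out loop can be sketched in a few lines. A minimal sketch, in Python, with entirely hypothetical names (`GeneratedSystem`, `generate`, `refine`) standing in for whatever a real generator exposes; the point is only the shape of the interaction: an initial intent produces a system, and corrections trigger regeneration from the accumulated intents.

```python
from dataclasses import dataclass, field


@dataclass
class GeneratedSystem:
    # The accumulated intents the system was generated from
    intents: list[str] = field(default_factory=list)


def generate(intent: str) -> GeneratedSystem:
    """Stand-in for the AI system generator: consume an intent,
    autonomously produce a system (here, just a record of the intent)."""
    return GeneratedSystem(intents=[intent])


def refine(system: GeneratedSystem, correction: str) -> GeneratedSystem:
    """Humans supply a complementary or correcting intent; the generator
    regenerates all or parts of the system from all intents so far."""
    return GeneratedSystem(intents=system.intents + [correction])


system = generate("web shop with product catalog and checkout")
system = refine(system, "add support for invoice payments")
```

The human never touches the generated artifacts directly; iteration happens exclusively at the level of intents, exactly as with today's "no code" tools.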
It might not strictly be necessary for non-critical systems, but in most cases, humans want to review the requirements generated from the prompt or intent, the high-level design of the system, the test cases for the components and the system as a whole, and the documentation and deployment process. Although the proof is in the pudding, ie the deployed, running system, we often want some guarantees along the way that the generated system indeed performs as we intended.
This means that the workflow of the AI system generator mirrors many of the steps we would see in a human development process. First, intent capture requires natural language processing to ensure that the underlying organizational needs are indeed adequately recorded.
Second, the system needs to be broken down structurally in terms of an architecture and components, as well as in terms of tasks that need to be performed. In this context, it’s important to note that components may be obtained from open or commercial sources and then need to be configured. Alternatively, we need to generate the code for the component. Finally, it’s of course entirely feasible to use machine learning or deep learning models to realize the functionality.
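The three ways of realizing a component can be captured in a toy decision rule. A hypothetical sketch (the predicates and enum names are illustrative, not from any real generator), showing only that the generator must choose per component between reuse, code generation and a learned model:

```python
from enum import Enum, auto


class Realization(Enum):
    REUSE = auto()     # configure an open-source or commercial component
    GENERATE = auto()  # generate the code for the component
    LEARN = auto()     # realize the functionality with an ML/DL model


def choose_realization(component_exists: bool, behavior_specifiable: bool) -> Realization:
    """Toy decision rule: prefer reusing an existing component; otherwise
    generate code if the behavior can be fully specified; otherwise learn it.
    A real generator would also weigh cost, licensing and quality."""
    if component_exists:
        return Realization.REUSE
    if behavior_specifiable:
        return Realization.GENERATE
    return Realization.LEARN
```

In practice this choice is made per component in the architectural breakdown, so a single generated system typically mixes all three realization strategies.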
Third, the AI system generator needs to generate tests of all kinds, ie unit, integration and system, to ensure that the generated system works as intended. And it needs to generate the required reports, user manuals and artefacts needed for regulatory compliance and certification.
Finally, the generated system needs to be delivered. Preferably, it’s deployed directly, similar to today’s DevOps pipelines, but often, companies want a human in the loop for the final step toward deployment. However, especially in cases where the system is periodically regenerated, partially or completely, due to changing intents and requirements, the more automated the process is, the smoother the workflow will be.
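The four steps above can be sketched as a simple pipeline. A minimal sketch in Python with hypothetical function names and toy data structures; it shows the shape of the flow (intent capture, decomposition, test generation, delivery with an optional human in the loop), not an actual implementation:

```python
def capture_intent(prompt: str) -> dict:
    # 1. Record the underlying organizational needs from natural language
    return {"intent": prompt, "requirements": [f"requirement derived from: {prompt}"]}


def decompose(spec: dict) -> dict:
    # 2. Break the system down into an architecture of components
    #    (each to be reused, generated or learned) and tasks
    spec["components"] = ["ui", "api", "storage"]
    return spec


def generate_tests(spec: dict) -> dict:
    # 3. Unit, integration and system tests, plus reports and
    #    compliance artefacts
    spec["tests"] = [f"{c}_unit_test" for c in spec["components"]] + ["system_test"]
    return spec


def deliver(spec: dict, human_approval: bool = True) -> dict:
    # 4. Deploy, optionally keeping a human in the loop for the final step
    spec["deployed"] = human_approval
    return spec


spec = deliver(generate_tests(decompose(capture_intent("inventory tracker"))))
```

When intents change, the whole chain simply reruns, which is why the more automated the final delivery step is, the smoother periodic regeneration becomes.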
The advantages of this approach are obvious. The speed at which we can generate systems improves by orders of magnitude. It isn’t uncommon for smaller systems to be generated in minutes or hours. Even if it takes a few days, it’s still vastly faster than traditional approaches. Furthermore, human errors will be far fewer and, ideally, our ways of working on the system are highly reproducible, resulting in much higher consistency and quality. And, of course, it gives huge strategic leverage, as it allows us humans to focus on what to build and why, leaving the how to the AI system generator.
For all the positives, we need to be aware of some challenges. We have to ensure that automatically generated systems can indeed be relied upon to the extent required by companies. Also, in the context of regulatory compliance, eg for safety, security or other aspects, we need to make sure that the generated artefacts cover these needs. This brings us to accountability: who’s responsible for a system that was generated by AI? In the end, it will have to be the humans in the company, even as their role changes from builders to supervisors.
Moving from using AI agents as augments to fully autonomous generation of systems, humans become the supervisors, validators and strategists. They’ll be more concerned with asking the right questions than building the system. To paraphrase Albert Einstein, who said: “If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions,” in a world where AI generates systems, we should focus our energy on framing intent, not coding solutions.