How Programming Intelligence Changes the Role of AI
Moving beyond black-box models toward transparent, controllable systems.
The growing non-determinism of AI-based systems has been degrading the solutions they deliver. AI looks and feels too much like human intelligence, which, naturally, is not deterministic. Roger Penrose, the mathematical physicist, goes further and argues that "the conscious aspects of our minds are not explicable in computational terms." If the goal of AI is to mimic how we humans think, then its output will also be non-deterministic. Now, that's not what you'd expect from a computer program. So, how can you tame AI in a way that gives you exactly what you want? Keep reading to find out.
This article is brought to you with the help of our supporter, Apideck.
Apideck helps you integrate with multiple APIs through a single connection. Instead of building and maintaining integrations one by one, use our Unified APIs to connect with hundreds of leading providers across Accounting, HRIS, CRM, Ecommerce, and more.
All connections are built for secure, real-time data exchange, so you can trust that the data you access is always accurate and protected.
Ship integrations 10x faster, reduce dev costs, and accelerate your product roadmap without the integration bottleneck.
AI has been synonymous with large language models (LLMs) for too long now. Whenever we entertain the idea of AI, we're, in fact, thinking about LLMs. These models have been on our collective mind for a couple of years as the only way to provide, interact with, and manage AI systems. However, LLMs work as black boxes: there's no way to fully understand what they're doing. Take Air Canada, for example. In February 2024, the airline was held liable after its chatbot hallucinated a policy that bereavement-related fares could be refunded retroactively, which was simply not true. Why the chatbot came up with that answer is something no one can fully explain. What we all recognize, though, is how unpredictable LLMs are, and how difficult, practically impossible, it is to debug their actions. While this can be a minor nuisance in many settings, in areas such as medicine, finance, or law, not knowing how LLMs work can have serious implications. And that's why we need to take control.
One way to take control is to stop using AI altogether and go back to pure deterministic programming. But that's not what we want. We want to use AI to augment our ability to stay in charge of the machines. To do that, we need to be able to program the AI system so it becomes easy to control. We can limit what the AI does to the bare minimum and leave everything else to deterministic, programmable systems. Using tools inside workflows is a sound approach: if each tool does its job well, you can use the AI to interface with the end user and to decide which tools to call to reach a specific outcome. The AI won't be doing the bulk of the work. Instead, it will direct and orchestrate how the work gets done, as in the sketch below. But even with this approach, losing control can happen easily.
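To make that division of labor concrete, here's a minimal sketch in Python. The `ask_model` helper is a hypothetical stand-in for whatever LLM call you'd actually make (stubbed here so the example runs), and the tools are ordinary, deterministic functions; the model's only job is to pick which one runs.

```python
# Minimal sketch: the LLM only routes requests to deterministic tools.
# `ask_model` is a hypothetical stand-in for a real LLM call.

def get_order_status(order_id: str) -> str:
    """Deterministic tool: look up an order in your own systems."""
    return f"Order {order_id} is in transit."

def issue_refund(order_id: str) -> str:
    """Deterministic tool: trigger a refund through your own API."""
    return f"Refund for order {order_id} has been queued."

TOOLS = {
    "get_order_status": get_order_status,
    "issue_refund": issue_refund,
}

def ask_model(prompt: str) -> str:
    """Stub for an LLM call; replace with your provider's SDK."""
    return "get_order_status"  # pretend the model chose this tool

def handle_request(user_message: str, order_id: str) -> str:
    # The model's only job is routing: pick exactly one tool name.
    tool_name = ask_model(
        f"User said: {user_message}\n"
        f"Reply with exactly one tool name from: {', '.join(TOOLS)}"
    ).strip()
    if tool_name not in TOOLS:
        return "Sorry, I can't help with that."  # deterministic fallback
    return TOOLS[tool_name](order_id)

print(handle_request("Where is my package?", "A1234"))
```

The actual work, looking up an order or issuing a refund, stays in code you fully control and can test; the model never performs it directly.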
After all, the AI is still in charge of calling the tools. One tactic to make the orchestration more deterministic is to add constraints to the prompts and the code around them. Examples of constraints include limiting the choice of tools, halting the workflow whenever there's an error (instead of letting the AI attempt to correct it), and giving precise instructions on how to interpret tool responses. If adding constraints isn't enough, you can always make the AI ask for user approval before interacting with a tool. This way, you always have the last word before the AI makes anything happen. The sketch below combines these tactics.
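This sketch builds on the `TOOLS` dictionary from the previous example. The whitelist, the approval prompt, and the halt-on-error behavior are illustrative choices, not a prescribed pattern.

```python
# Sketch of the constraints above: a tool whitelist, a human approval
# gate, and halting on the first error instead of letting the model retry.
# Reuses the TOOLS dictionary from the previous sketch.

ALLOWED_TOOLS = {"get_order_status"}  # constraint: limit the choice of tools

def run_step(tool_name: str, order_id: str) -> str:
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is not allowed.")

    # Constraint: the user gets the last word before anything happens.
    answer = input(f"Run '{tool_name}' for order {order_id}? [y/N] ")
    if answer.strip().lower() != "y":
        return "Step skipped by user."

    try:
        return TOOLS[tool_name](order_id)
    except Exception as exc:
        # Constraint: halt the workflow instead of improvising a fix.
        raise RuntimeError(f"Workflow halted: {exc}") from exc
```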
One direct advantage of making the AI's actions deterministic and programmable is an increased ability to understand what is going on. Suddenly, AI stops feeling like "magic" and moves into the realm of programming. Yes, the part that interfaces with end users is still an LLM. Yes, the orchestration is still done by an LLM, but in a very constrained and deterministic way. That makes workflows more predictable, which is what you need to make safe decisions about how to solve problems, especially in areas like law or healthcare. If we want to use AI to solve serious real-life problems, then we need to control it. And, in situations where things don't go as planned, we want to be able to undo its actions as quickly as possible. Doing that requires that each step of the workflow is reversible and that the whole AI-managed orchestration is well understood.
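One way to get that reversibility is to register an explicit compensating action alongside every step, so the orchestration can unwind whatever has already run. The sketch below is illustrative; `Step` and `run_with_rollback` are made-up names, not a library API.

```python
# Sketch: every workflow step carries a compensating "undo" action,
# so a failed run can be rolled back in reverse order.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    run: Callable[[], None]
    undo: Callable[[], None]  # compensating action that reverses `run`

def run_with_rollback(steps: List[Step]) -> None:
    completed: List[Step] = []
    try:
        for step in steps:
            step.run()
            completed.append(step)
    except Exception:
        # Something went wrong: undo every completed step, newest first,
        # so the whole AI-managed workflow ends up back where it started.
        for step in reversed(completed):
            step.undo()
        raise
```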
Understanding the decisions behind AI-managed workflows creates a sense of safety and puts humans back in control. After all, AI is supposed to be solving problems for us. It's only natural that we should be the ones defining the workflow goals, the tools that are available to the AI, and how it's allowed to behave. The AI system should be a mere executor of our intentions.
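In practice, that intention can be captured as a plain, human-written policy that the orchestration code enforces before the LLM is ever consulted. The field names below are made up for illustration.

```python
# Illustrative policy: humans define the goal, the tools, and the behavior;
# the AI only executes within these bounds.
WORKFLOW_POLICY = {
    "goal": "Answer refund questions for existing orders",
    "allowed_tools": ["get_order_status", "issue_refund"],
    "require_approval_for": ["issue_refund"],  # human sign-off on anything irreversible
    "on_error": "halt",                        # never let the model improvise a fix
}
```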
As you can see, being able to program AI systems brings numerous advantages. Instead of being black boxes that no one understands, AI systems become programmable and thus deterministic. Humans regain much of the agency that's been delegated to AI until now. This is an approach that will only grow in adoption as more developers and designers build AI solutions that are focused on programmability.