API Translation Layers
Not all systems understand the API you're building, so they need help from a translation layer.
You want your API to be used as much as possible. That's how I see it, at least. Anything you can do to make your API easy to integrate with is worth it. That applies both to humans and, especially at this point, to machines. The more friction you add to the steps someone takes to understand your API, the more difficult it will be for them to use it. Creating a version of your API that everyone understands feels almost impossible. So, the next best thing is to use translation layers. Let's look at a few options to see how they work. Stay with me.
This article is brought to you with the help of our supporter, n8n.
n8n is the fastest way to plug AI into your own data. Build autonomous, multi-step agents, using any model, including self-hosted. Use over 400 integrations to build powerful AI-native workflows.
People don't use what they don't understand. It's as simple as that. In the case of APIs, it means that consumers can't connect using tools, technologies, or frameworks they don't use or recognize. So, for example, if you launch a gRPC API and the only architectural style your consumers can work with is REST, you're doomed. This kind of situation happens not because consumers aren't open to trying different technologies. It's because they're using tools or applications that only support certain architectural styles. In those scenarios, consumers try to find a way to convert—or translate—the operations your API exposes into something the tool or application they're using supports.
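To make the gRPC-to-REST idea concrete, here's a minimal sketch of the routing step such a translation layer performs. This is plain Python with hypothetical service and route names; real gateways (grpc-gateway, for instance) derive these mappings from annotations in the .proto file rather than hard-coding them.

```python
# Sketch of the routing step a REST-to-gRPC translation layer performs.
# CountryService, ListCountries, and GetCountry are hypothetical names.

def rest_to_grpc(method: str, path: str) -> tuple[str, dict]:
    """Map a REST call onto a gRPC method name and request message."""
    # Fixed routes: REST method + path -> gRPC method + empty request.
    routes = {
        ("GET", "/v1/countries"): ("CountryService.ListCountries", {}),
    }
    parts = path.split("/")
    # Match /v1/countries/{code} to a GetCountry call carrying the code.
    if method == "GET" and len(parts) == 4 and parts[:3] == ["", "v1", "countries"]:
        return ("CountryService.GetCountry", {"code": parts[3]})
    if (method, path) in routes:
        return routes[(method, path)]
    raise ValueError(f"no gRPC route for {method} {path}")
```

The point is that the consumer keeps speaking REST; the layer in the middle decides which gRPC method each request maps to.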
A scenario that's simple to understand is the consumption of a GraphQL API by a tool that doesn't support it. Since GraphQL uses HTTP as its transport layer, you can, in theory, make an HTTP POST or GET request to the GraphQL endpoint sending the query you want to execute. Suppose you're trying to use a GraphQL service to get a list of countries. The service lets you execute a query on countries and retrieve their names. The following is an example of the GraphQL query you'd send to the service.
query {
countries {
name
}
}
Even though this is a simple query, if you're inside an application that doesn't support GraphQL, you won't be able to run it as is. Fortunately, most GraphQL services offer a translation layer that lets you execute queries using HTTP GET. You'd be able to run the above query by sending an HTTP GET request passing the contents of the query inside the query
URL parameter. Appended to the endpoint's path, it would look like ?query=query%20%7Bcountries%7Bname%7D%7D (note that the curly braces need to be URL-encoded too). Easy, right?
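Building that URL by hand is error-prone, so in practice you'd URL-encode the query programmatically. Here's a minimal sketch in Python using only the standard library; the /graphql path on example.com is an assumption, since endpoints vary by service.

```python
from urllib.parse import urlencode

def graphql_get_url(base: str, query: str) -> str:
    """Carry a GraphQL query in the `query` URL parameter of an HTTP GET."""
    # urlencode percent-encodes the braces and encodes spaces as "+".
    return f"{base}?{urlencode({'query': query})}"

url = graphql_get_url("https://example.com/graphql",
                      "query { countries { name } }")
```

Any HTTP client can then issue a plain GET to that URL, no GraphQL support required.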
Let's look at another situation. Now you're the API producer and you want to make sure that Personally Identifiable Information (PII) isn't shared with consumers. One solution is to anonymize PII so that consumers can't identify a particular individual. You could do that by changing the code of the API service, or you could add a translation layer to your API gateway. You could define a list of rules for identifying PII fields. Then, any time the gateway sees one of those fields on a response, it anonymizes its value. From a consumer perspective, this solution would be totally invisible. From your point of view as the API producer, this solution wouldn't involve any changes to the API service itself, making its cost of development and maintenance quite low.
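A minimal sketch of that gateway rule, assuming the rules are simply a list of field names to mask (the names below are illustrative, not a complete PII taxonomy):

```python
# Example rule list a gateway might apply to responses; illustrative only.
PII_FIELDS = {"email", "ssn", "phone"}

def anonymize(payload):
    """Recursively mask any response field flagged as PII by the rules."""
    if isinstance(payload, dict):
        return {k: "***" if k in PII_FIELDS else anonymize(v)
                for k, v in payload.items()}
    if isinstance(payload, list):
        return [anonymize(item) for item in payload]
    return payload
```

Running every response body through a transform like this keeps the anonymization out of the API service's code and inside the gateway, which is exactly what makes the approach cheap to maintain.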
Another scenario is the one many of us face when using an AI agent to connect to one or several APIs. The first hurdle the AI agent faces is finding the right API to connect to. Then comes the biggest difficulty: actually making a request to the API and obtaining a meaningful response. LangChain and other similar technologies have been solving the issue of mapping APIs to agentic workflows. However, they haven't been able to make sure the API responses agents get are meaningful. That's where a more recent technology shines. I'm talking about the Model Context Protocol (MCP), an open protocol created by Anthropic. Instead of trying to "massage" request and response data to make agents and APIs understand each other, MCP offers a new, standardized API. It uses JSON-RPC as the communication layer between an agent and any API. The translation happens at a point that receives the JSON-RPC requests and routes them to the external API. That translation point is called a "tool" and is what MCP exposes to end users. With this solution, the whole translation becomes completely invisible to the user. Additionally, producers don't have to change their APIs to make them compatible with AI agents. In the end, everyone wins.
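As a rough sketch of that routing, here's a toy JSON-RPC 2.0 handler in the spirit of MCP's tools/call method. The get_countries function is a hypothetical stand-in for a real HTTP client calling the external API; a real MCP server would also handle capabilities, errors, and transport.

```python
import json

def get_countries():
    # Stand-in for a real call to the external countries API.
    return ["Portugal", "Spain"]

# The "tools" the translation point exposes to the agent.
TOOLS = {"get_countries": get_countries}

def handle(request_json: str) -> dict:
    """Receive a JSON-RPC request and route it to the backing API call."""
    req = json.loads(request_json)
    tool = TOOLS[req["params"]["name"]]
    result = tool(**req["params"].get("arguments", {}))
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

# What an agent's tool-call request looks like on the wire.
request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "get_countries", "arguments": {}},
})
```

The agent only ever sees the tool and its JSON-RPC envelope; the external API behind it can stay exactly as it is.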
Altogether, these and other translation layers are what power the interactions between end users and APIs. Whether you like it or not, API translation layers will continue to exist. They don't diminish the value of existing APIs. In fact, I'd argue that by building and using these translation layers you're acknowledging the relevance of existing APIs.