

LintGPT Opens the Door to Non-technical API Design
AI is paving the way for new API tools that anyone can use
This article is brought to you with the help of our supporters: Zuplo and Speakeasy.
Zuplo is the only API Gateway that runs your OpenAPI definition directly. If you care about high-quality developer documentation, API analytics, and zero configuration deployments, give Zuplo a try today.
Speakeasy offers you everything you need to create great integration experiences for your APIs: from native-language SDKs and Terraform providers to friction-free docs. Sign up now and start generating SDKs for your API with zero effort.
If anyone tells you that validating an OpenAPI definition document can only be done by an engineer, they're wrong. That's the belief of Aidan Cunniffe, the CEO of Optic, a company focused on building tools that make it easy to document, validate, and test APIs. The thesis behind Optic is that accurate, well-designed, high-quality APIs see higher adoption and retention. And that, ultimately, translates into better business results.
A big part of having a well-designed API is following a style guide. And I don't mean simply writing a document and asking people to follow it. API style guides are machine-readable documents that scripts and build systems can use. The goal is to verify that an API definition adheres to the rules defined in the style guide and, if it doesn't, trigger an alert so the issue gets fixed. Style guides aren't just a way to enforce certain rules on APIs. They're part of something larger called API governance, a set of policies and practices aimed at regulating how APIs are created and managed in an organization.
Until recently, most API style guides were created using rules configuration files. These are machine-readable documents written in JSON or YAML that represent the rules the API definition should follow. The process of checking whether an API definition follows the rules is called linting. The term comes from lint, a tool created by Stephen C. Johnson in 1978 to check for common issues in C code. Johnson needed to verify that his code would compile on different machines, so he built a tool to help him. Since then, a number of linters have been created for different purposes. Today, the most used API linter is Spectral, an open-source tool that supports custom rulesets and works with OpenAPI and AsyncAPI.
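To make the idea concrete, here's a minimal Spectral ruleset. The rule name and message are my own illustration, but the overall shape, a YAML file where each rule has a `given` JSONPath selector and a `then` assertion, is how Spectral rulesets work:

```yaml
# .spectral.yaml: a minimal Spectral ruleset (rule name and message are illustrative)
extends: ["spectral:oas"] # start from Spectral's built-in OpenAPI rules
rules:
  operation-must-have-description:
    description: Every operation should explain what it does.
    severity: error
    given: "$.paths[*][get,put,post,delete,options,head,patch,trace]"
    then:
      field: description
      function: truthy # fail when the field is missing or empty
```

A build system can then run `spectral lint openapi.yaml` and fail the build on any error-level violation.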
Cunniffe and his team identified several hurdles related to the creation of Spectral rulesets. They're not easy to write, since they involve regular expressions or, even worse, JavaScript functions. The rules themselves aren't easy to understand unless you see them in action. Additionally, it's hard to enforce rules about abstract concepts in API responses. One example is trying to prevent the use of the word "error": API designers can easily circumvent the rule by using words such as "issue" or "problem" instead, as the sketch below shows. Another example is enforcing pagination on certain resources. With Spectral, you have to specify in detail the type of pagination that you want to enforce, and that can be complex.
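Here's an illustrative sketch of such a word-matching rule (the rule name is mine). Because Spectral's `pattern` function matches literal text, a description that says "issue" sails right past it:

```yaml
# Illustrative sketch: a word-matching rule is easy to circumvent.
rules:
  no-error-in-response-descriptions:
    description: Avoid the vague word "error" in response descriptions.
    severity: warn
    given: "$.paths[*][*].responses[*].description"
    then:
      function: pattern
      functionOptions:
        notMatch: "\\berror\\b" # "issue" or "problem" would still pass
```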
Having established what the issue is, Cunniffe proposes a solution that uses AI to translate simple rules written in plain language into something that Optic can understand. Imagine being able to write a rule as simple as "Numeric properties should define a default and minimum." Writing the same rule using Spectral is cumbersome and error-prone. What properties do you consider to be "numeric"? Should "default" and "minimum" use the same data type as the numeric property itself?
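For comparison, here's my best attempt at that rule in Spectral. Even this short sketch quietly takes positions on the questions above: it treats both `number` and `integer` as numeric, and it has to use Spectral's `defined` function rather than `truthy`, because a perfectly valid default of `0` would fail a truthiness check:

```yaml
# Illustrative attempt at "Numeric properties should define a default and minimum".
rules:
  numeric-properties-define-default-and-minimum:
    description: Numeric properties should define a default and minimum.
    severity: warn
    given: "$..properties[?(@.type == 'number' || @.type == 'integer')]"
    then:
      - field: default
        function: defined # not "truthy": a default of 0 is falsy but valid
      - field: minimum
        function: defined
```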
Enter LintGPT, the solution Optic released recently. It uses a fine-tuned AI model to understand rules you write in your own style and apply them to OpenAPI definition documents. According to Cunniffe, the model has been trained on a corpus of OpenAPI documents, so it understands the terminology and applies it correctly. I was skeptical at first about what the system could do, and I couldn't see how it differed from what you can already do with AI tools such as ChatGPT. One big difference is that LintGPT is deterministic: given the same input, it always generates the same output. That matters a lot when you want to run style validation in a CI pipeline. Another difference is that it only runs the validation rules against changes to the OpenAPI document, so validating an API costs far less than running the rules against the whole definition on every build.
Ultimately, from a product perspective, the goal of LintGPT is to empower non-technical people to work with API style guides. And by being able to contribute to style guides, they're also directly contributing to API governance inside their organizations. This is, in my opinion, a big shift towards a practice where non-technical roles within an organization can be active players in governing how APIs are created. I was able to try LintGPT and experiment with different rules written in natural language. I had access to an early version of the tool, which I ran locally using my own OpenAI API account.
The first thing I did was try a rule to make sure that all error responses follow the HTTP problem details format (RFC 9457). The rule I wrote is as simple as "Error responses must follow the HTTP problem details format." Notice that I didn't specify what the HTTP problem details format is, and I didn't even have to mention that there's an RFC behind it. The output that LintGPT produced gave me enough information to fix the issue. It mentioned that a response in my OpenAPI definition didn't follow the HTTP problem details format, explained why the format is important, and even told me what fields I needed to add to the response to comply with the RFC. Notice that LintGPT reported the violation as an error because I used the word "must" in my rule.
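For reference, a response that satisfies the rule looks something like the snippet below. The schema is my own minimal reading of RFC 9457, which defines five members, all of them optional, for the `application/problem+json` media type:

```yaml
# An error response shaped after RFC 9457 problem details.
responses:
  "404":
    description: The requested resource was not found.
    content:
      application/problem+json:
        schema:
          type: object
          properties:
            type: # URI that identifies the problem type
              type: string
              format: uri
            title: # short, human-readable summary of the problem type
              type: string
            status: # the HTTP status code, duplicated for convenience
              type: integer
            detail: # explanation specific to this occurrence
              type: string
            instance: # URI that identifies this specific occurrence
              type: string
              format: uri
```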
As a side note, it's interesting to see that, in this case, LintGPT assumes that the `type`, `title`, and `status` fields are required when they're not. My guess is that it's not possible to identify a correct HTTP problem details response unless you define a number of required fields. Let's try a different rule to see what it does. This time my rule is "Listing operations should allow sorting." LintGPT detected the issue and provided a comprehensive warning. Once I added a sort parameter, the issue was fixed. It's fascinating that I didn't have to specify what type of sorting I was looking for or the name of the sorting parameter to check.
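The fix itself was only a few lines of OpenAPI. The path, parameter name, and sorting syntax below are simply the convention I chose; LintGPT was satisfied without me having to declare any of them up front:

```yaml
# The kind of change that resolved the warning (path and parameter name are my choice).
paths:
  /orders:
    get:
      summary: List orders
      parameters:
        - name: sort
          in: query
          required: false
          description: Comma-separated fields to sort by, e.g. "created_at" or "-total".
          schema:
            type: string
```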
The approach Optic is taking sounds great, so what's preventing anyone else from building a similar tool? I raised with Cunniffe the fact that LintGPT depends on OpenAI, and he acknowledged that this is currently a risk. However, he believes the linting system he built can run against any AI service, and Optic could even host its own AI models in the future. What's even more interesting, from my perspective, is how this approach could be replicated in other areas of API design. Imagine being able to design a whole API just by writing user stories in natural language. Or using direct consumer feedback to improve the design of an API. What Optic is presenting today with LintGPT is just the beginning of a new way of building API products, one where non-technical people can participate.