Microservice Best Practices: API-first design approach in the dawn of agentic AI

Introduction

I started this article almost two years ago and then almost forgot to finish it. Due to the incredible dawn of the agentic era in software engineering, I decided to pick it up again and complete it. The original idea was to show you how important it is to design the software interfaces (UI and API) before you start implementing your application or microservice. However, with AI assistants and agents, the whole process of designing and implementing a microservice is changing dramatically. The design-first approach is crucial to leverage the full potential of AI in software development. Long story short: giving a coding assistant or agent a well-defined OpenAPI specification will improve code generation dramatically. But let’s start from the beginning.

In this article, I will focus on microservice development. Most microservices provide an API to the external world. No matter how complex this API becomes, it will be an integral part of most microservices. In the API-first design approach you design your API before you implement any functionality. This also means APIs are first-class citizens in your project! When I talk about the API, I mean the complete API: request and response payloads, error codes, security schemes, etc.

Why is it a good idea to start with the API? These are the benefits of this approach:

  1. Development teams can work in parallel (consumer and provider)
  2. Reduce unnecessary implementation iterations (costs)
  3. Reduce the risk of failure (early communication between consumer and provider)
  4. Better understanding of the domain model (focus on the business value)
  5. Very good source (context) for AI assistants and agents to generate code

In this article, I will focus on REST APIs and show you, using an example, how to apply the OpenAPI specification for API-first design.

Designing an API can be done in various ways and depends on your technical background. As a programmer, you will mostly start by writing interfaces and entity classes in your programming language. If you are more of an architect, you are used to an interface definition language (IDL) like OpenAPI. These two different approaches are also called code-first and API-first.

API-first vs Code-first

Many articles have already been written on this topic, so I don’t want to repeat the advantages and disadvantages of these techniques. One article I liked can be found here:

https://medium.com/@tiokachiu/api-first-vs-code-first-choosing-the-right-approach-for-your-project-868443e73052

It is a matter of personal preference which approach you choose.

Two years ago, when I started this article, I wrote: “I personally like the ‘hybrid approaches’; this fits best with my software developer background and my knowledge of the advantages of IDLs. On the other side, I am not a big fan of code generation. Most results are difficult to maintain. Again, this is only my view, feel free to choose your way!”

Today I use Copilot a lot, and I have to admit the results are usable with some rework. If you have good context, like an OpenAPI specification and a good existing code base, you can stick to the design-first approach and let Copilot update the OpenAPI specification and then the code for you.

Nevertheless, for my last two projects I switched to the code-first approach after the first version of my OpenAPI specifications. I let the springdoc-openapi Maven plugin generate the OpenAPI specification from my code base. See the chapter [Switch to code-first approach](#switch-to-code-first-approach) for more details.

Let’s start and check which tools and services can help you with API-first design at the beginning of your project. I would also recommend slicing your domain model in a way that keeps the features manageable. This helps you keep complexity low and encapsulate subdomains better. It also helps you develop with AI and review features in a more isolated way.

Software development lifecycle

From my experience, it is good to separate software development into phases.

  • Requirements phase
  • Design phase
  • Implementation phase
  • Test phase

It sounds a bit like the old waterfall model, but it is not when you slice your features/domains into small increments (user stories). You can still work in an agile way, but you have to focus on different aspects in each phase and also use different tools, techniques and methodologies. On the other hand, you need a good understanding of when to switch between these phases.

I know this is basic stuff, but I want to repeat it because I often see that some phases, like requirements and design, are not done properly. These two phases are the most important and need human interaction. Implementation and testing will more and more be done by AI assistants and agents.

So back to the design phase and the API-first approach.

Design phase

I will now show you, using an example, how to design a REST API. The best and most widely used way to describe REST APIs is the OpenAPI specification, a standard for describing REST APIs in a machine-readable format.

The OpenAPI specification can be written in JSON or YAML and is a very powerful way to describe your API: the endpoints, request and response payloads, error codes, security schemes and much more. It is also a good way to communicate between the consumer and the provider of the API. AI assistants and agents can also create valid OpenAPI specifications, as the schemas are well defined.
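As a minimal sketch of what such a specification looks like, here is a YAML fragment for a hypothetical "Order" domain (the resource name, fields and security scheme are illustrative assumptions, not from a real project):

```yaml
openapi: 3.0.1
info:
  title: Order Service API
  version: 1.0.0
paths:
  /orders/{orderId}:
    get:
      summary: Get a single order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested order
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Order"
        "404":
          description: Order not found
      security:
        - basicAuth: []
components:
  schemas:
    Order:
      type: object
      properties:
        id:
          type: string
        status:
          type: string
  securitySchemes:
    basicAuth:
      type: http
      scheme: basic
```

Note how endpoints, payload schemas, error codes and the security scheme all live in one file — exactly the pieces a consumer needs to start working against the API.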

However, when you start with the design, you should already have gathered some requirements and information. Nowadays, I start directly with a project on GitHub and write everything in Markdown files. This is a good way to document your requirements and design decisions. These files are a perfect input resource for AI assistants and agents as well. Use them as instructions and add them as context to your prompts.

Next, I will introduce some tools and techniques to work with OpenAPI specifications.

AI, Copilot, Gemini and Co.

I often gather information about the domain and domain-specific semantics upfront and collect it, for example, in a glossary file. This will help you use the right terms in your API specification.

Once all the information is collected, let the AI do the work for you and generate a first draft of your OpenAPI specification.

Example prompt:

Create an OpenAPI specification in version 3.0.1 for the xy domain. Create typical CRUD operations for it. Please check the requirements file #file:Requirements.md and #file:Glossary.md for more details.

The generated result is often a very good starting point. Next, I fine-tune the specification and add more details, manually or with AI assistance. To get feedback from the consumers of the API, I share the specification with them and use the Swagger viewer to present the API in a human-readable format.

For that I use two extensions in Visual Studio Code.

Swagger Viewer

This extension helps you view your OpenAPI specification in a human-readable format, which is also well suited for presenting your API to its consumers, whether technical or business people. Both will understand the Swagger UI!

In case your IDE doesn’t have a plugin for showing OpenAPI specifications in a human-readable way, or you want to use a more recent OpenAPI version, I recommend the Swagger Editor: API Editor - Download or Try it in the Cloud

JSON / YAML editor

OpenAPI specifications can be written in JSON or YAML. Whichever you prefer, make sure you have good support for these languages in your IDE. Also check whether you need to provide the right OpenAPI schema files to allow your IDE to validate the file and support auto-completion.

Normal auto-completion and AI auto-completion will help you write the specification properly without knowing all the details of the OpenAPI specification.

Similar to the requirements and glossary files, I store my OpenAPI specification in my GitHub project. Again, this lets me use the file as context for AI assistants and agents.

Implementation

When you reach a certain level of detail and consistency in your OpenAPI specification, it is time to start with the implementation. I am using AI assistants and agents more and more to generate code. However, I still want control over the generated code, so I generate the code step by step. Again, this is a personal decision; you can choose your own granularity. I prefer to generate code in small increments, which helps me review the generated code and be more concrete in my prompts. In the end, this process is influenced by how I implemented microservices in the past and how I would distribute implementation tasks to a team of developers.

Nevertheless, for the initial setup of my microservice project I still use the Maven archetype (see my old article Jump-Start your next Cumulocity microservice project in java). I tried this with AI but failed; it is not easy to use AI to generate deterministic code. Next, I will show you my step-by-step approach to implementing a microservice with AI support.

  1. I start with the data model and REST controllers.
Please generate the data model (request and response entities) using Lombok and REST controllers in Java/Spring Boot for the following OpenAPI specification #file:OpenAPI.yaml.
  2. After that, I let AI generate the service layer interfaces.
Please generate the service layer interfaces and a service bean with stub methods for the REST controller #file:MyController.java. Please use the following service as a template #githubRepo:<your-repo>/template/TemplateService.java
  3. First review (pull request) of the generated code.

This is important to get a good understanding of the generated code and to improve it manually. Sometimes the generated code gives you a new perspective on the API design, and you have to adjust the OpenAPI specification. This is an iterative process.
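To make steps 1 and 2 more concrete, here is a sketch of the kind of artifacts they produce, again for the hypothetical "Order" domain. In the real project the DTOs would use Lombok (@Data, @Builder, ...) and the controller would carry Spring annotations (@RestController, @GetMapping, ...); plain Java is used here so the sketch stays self-contained, with the annotations indicated in comments.

```java
// Response entity derived from the OpenAPI "Order" schema.
// In the real project: a Lombok-annotated class instead of a record.
record OrderResponse(String id, String status) {}

// Service layer interface: the stub methods the controller delegates to.
interface OrderService {
    OrderResponse getOrder(String orderId);   // backs GET /orders/{orderId}
    OrderResponse createOrder(String status); // backs POST /orders
}

// Controller skeleton: maps HTTP endpoints onto the service interface.
// In the real project: annotated with @RestController.
class OrderController {
    private final OrderService service;

    OrderController(OrderService service) {
        this.service = service;
    }

    // In the real project: @GetMapping("/orders/{orderId}")
    OrderResponse getOrder(String orderId) {
        return service.getOrder(orderId);
    }
}
```

Keeping the controller thin like this makes the first review (step 3) easy: you mainly check that the endpoints and entities match the OpenAPI specification.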

  4. Service layer implementation

The service layer implementation is often more complex, so I let AI generate it feature by feature. This means I give the AI agent a prompt with a specific task, for example:

Please implement the method #sym:createXy(xy) in the service bean #file:MyServiceImpl.java. Please use the following repository as a template #githubRepo:<your-repo>/template/TemplateRepository.java.

For a single feature, I often use the agent mode of Copilot, which generates the unit tests and validates the implementation against these tests. If the feature is more complex, I split the implementation into smaller tasks and let the AI generate the code step by step. When the implementation is done, I do a final review and merge the code.
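The result of such a prompt might look like the following sketch of a service implementation for the hypothetical Order domain. To keep the example self-contained it is backed by an in-memory map instead of a real Spring Data repository; the structure (create method, lookup with a not-found error) is what you would review in the pull request.

```java
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a generated service bean; in the real project this would be a
// @Service class delegating to a repository instead of the in-memory map.
class OrderServiceImpl {
    record Order(String id, String status) {}

    private final Map<String, Order> store = new ConcurrentHashMap<>();
    private final AtomicLong sequence = new AtomicLong();

    // The "createXy" style method requested in the prompt above.
    Order createOrder(String status) {
        Order order = new Order(String.valueOf(sequence.incrementAndGet()), status);
        store.put(order.id(), order);
        return order;
    }

    Order getOrder(String id) {
        Order order = store.get(id);
        if (order == null) {
            // Would be translated to an HTTP 404 by the controller layer.
            throw new NoSuchElementException("order " + id);
        }
        return order;
    }
}
```

Because each feature is this small, the agent-generated unit tests and your own review can cover it completely before you move to the next one.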

Switch to code-first approach

When you have reached the first version of your API, I suggest switching to the code-first approach. This means you let the code generate the OpenAPI specification. You can then overwrite the hand-written OpenAPI specification with the generated one and do a final review.

In my last projects, I use Maven plugins to generate the OpenAPI specification and the API documentation as Markdown during the integration tests of my build process. The springdoc-openapi Maven plugin generates the OpenAPI specification from my code, which helps keep the specification and the code in sync. It also means that I have to run the integration tests locally and commit the changes to the OpenAPI specification and Markdown files.

pom.xml snippet:

		<!-- in the <dependencies> section -->
		<dependency>
			<groupId>org.springdoc</groupId>
			<artifactId>springdoc-openapi-ui</artifactId>
			<version>${springdoc-openapi.version}</version>
		</dependency>
		<dependency>
			<groupId>org.springdoc</groupId>
			<artifactId>springdoc-openapi-security</artifactId>
			<version>${springdoc-openapi.version}</version>
		</dependency>

		<!-- in the <build><plugins> section -->
			<plugin>
				<!-- fetches the OpenAPI spec from the running service during the integration tests -->
				<groupId>org.springdoc</groupId>
				<artifactId>springdoc-openapi-maven-plugin</artifactId>
				<version>1.4</version>
				<executions>
					<execution>
						<id>integration-test</id>
						<goals>
							<goal>generate</goal>
						</goals>
					</execution>
				</executions>
				<configuration>
					<apiDocsUrl>http://localhost:8080/v3/api-docs</apiDocsUrl>
					<outputFileName>openapi.json</outputFileName>
					<outputDir>${project.basedir}/docs</outputDir>
				</configuration>
			</plugin>
			<plugin>
				<!-- converts the OpenAPI document to Markdown -->
				<groupId>org.openapitools</groupId>
				<artifactId>openapi-generator-maven-plugin</artifactId>
				<version>${openapi-generator-maven-plugin-version}</version>
				<executions>
					<execution>
						<id>export-openapi-to-html-doc</id>
						<phase>post-integration-test</phase>
						<goals>
							<goal>generate</goal>
						</goals>
						<configuration>
							<inputSpec>${project.basedir}/docs/openapi.json</inputSpec>
							<generatorName>markdown</generatorName>
							<output>${project.basedir}/docs</output>
						</configuration>
					</execution>
				</executions>
			</plugin>

Example: cumulocity-microservice-service-request-mgmt/docs/README.md at develop · Cumulocity-IoT/cumulocity-microservice-service-request-mgmt · GitHub

With that, I no longer need to worry about the API documentation; it is always up to date and available in my GitHub project.


Conclusion

AI assistants and agents are set to revolutionize software development. Consequently, the importance of a solid design will only increase. A well-defined OpenAPI specification, or any other standardized schema and documentation, is crucial for achieving better results from these AI tools. All these artifacts should be stored in your GitHub project. Even if you don’t heavily rely on AI support, the API-first design approach helps reduce the risk of failure and improves communication between the API’s consumers and providers.

If your design and requirements aren’t crystal clear, don’t be surprised if the final implementation doesn’t match your vision.

Furthermore, this step-by-step approach, combined with Git pull requests (reviews), helps you maintain control over the generated code and prevents you from getting lost in the complexities of AI-generated code, documentation and tests.
