How to Integrate OpenAI's GPT-3 with Java and Spring Boot in 5 Minutes
This article explains how to build a simple AI agent using Java and Spring Boot, integrating with OpenAI's powerful language models.
Initial Setup: API Key and Dependencies
Before we start developing, it's essential to secure your OpenAI API key. Log into your OpenAI account, navigate to the API keys section, and generate a new secret key.
Note: For security reasons, never hardcode your API key directly in the codebase. Instead, store it as an environment variable. For this example, we'll name it OpenAI_API_key.
Once the key is set up in your environment, you can reference it in your Spring Boot application.properties file:
openai.api.key=${OpenAI_API_key}
Next, you need to add the required dependency for the OpenAI GPT-3 Java client to your build.gradle file. We will use a community-maintained library that simplifies the interaction.
implementation 'com.theokanning.openai-gpt3-java:service:0.18.2'
After adding the dependency, rebuild your project to ensure it's correctly downloaded and linked.
Building the Core Components
Our application architecture will be structured into four main parts:
1. Configuration: To set up the OpenAI service bean.
2. Data Objects: For handling request and response data.
3. Service Layer: To contain the business logic for communicating with the API.
4. Controller: To expose the functionality via a REST endpoint.
1. OpenAI Configuration
First, create a configuration class named OpenAIConfig.java within a config package. This class is annotated with @Configuration and is responsible for creating the OpenAiService bean, which is the central component for making API calls.
package com.example.config;

import com.theokanning.openai.service.OpenAiService;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.time.Duration;

@Configuration
public class OpenAIConfig {

    @Value("${openai.api.key}")
    private String openAiApiKey;

    @Bean
    public OpenAiService openAiService() {
        // Using a timeout is recommended
        return new OpenAiService(openAiApiKey, Duration.ofSeconds(60));
    }
}
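If you'd rather not hardcode the timeout, you could externalize it as well. The following is a minimal sketch that replaces the field and bean method above, assuming a custom property named openai.api.timeout-seconds (that name is our own choice for this example, not something the library defines):

@Value("${openai.api.timeout-seconds:60}")
private long timeoutSeconds;

@Bean
public OpenAiService openAiService() {
    // Falls back to 60 seconds if the property is not set
    return new OpenAiService(openAiApiKey, Duration.ofSeconds(timeoutSeconds));
}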
2. Request and Response Data Objects
We need to define Plain Old Java Objects (POJOs) to structure our API's request and response. We'll use Lombok to reduce boilerplate code for getters and setters.
Create a ChatRequest.java class. It holds the user's message along with an optional system message, which we will put to use later in this article.
package com.example.dto;

import lombok.Data;

@Data
public class ChatRequest {
    private String message;
    private String systemMessage; // Optional; used later to guide the model's behavior
}
Next, create the ChatResponse.java class to structure the data sent back to the client.
package com.example.dto;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class ChatResponse {
    private String response;
}
3. The Chat Service
The ChatService class, annotated with @Service, will handle the logic of preparing the request and calling the OpenAI API. We inject the OpenAiService bean we configured earlier via the constructor.
The primary method, getChatResponse, performs three key steps:
1. Prepares the chat messages for the API.
2. Invokes the OpenAI chat completion endpoint.
3. Extracts and returns the AI-generated content.
package com.example.service;

import com.theokanning.openai.completion.chat.ChatCompletionRequest;
import com.theokanning.openai.completion.chat.ChatMessage;
import com.theokanning.openai.service.OpenAiService;
import com.example.dto.ChatRequest;
import org.springframework.stereotype.Service;

import java.util.ArrayList;
import java.util.List;

@Service
public class ChatService {

    private final OpenAiService openAiService;

    public ChatService(OpenAiService openAiService) {
        this.openAiService = openAiService;
    }

    public String getChatResponse(ChatRequest chatRequest) {
        // Prepare the messages for the API call
        List<ChatMessage> messages = new ArrayList<>();

        // Add system message if provided
        if (chatRequest.getSystemMessage() != null && !chatRequest.getSystemMessage().isEmpty()) {
            messages.add(new ChatMessage("system", chatRequest.getSystemMessage()));
        }

        // Add user message
        messages.add(new ChatMessage("user", chatRequest.getMessage()));

        // Create the chat completion request
        ChatCompletionRequest completionRequest = ChatCompletionRequest.builder()
                .model("gpt-3.5-turbo") // Or any other model
                .messages(messages)
                .build();

        // Call the API and get the response
        return openAiService.createChatCompletion(completionRequest)
                .getChoices().get(0).getMessage().getContent();
    }
}
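The request builder also exposes optional tuning parameters. As a sketch, the request above could be extended with temperature and maxTokens; check the builder methods available in the client version you installed, since names can vary between releases:

ChatCompletionRequest completionRequest = ChatCompletionRequest.builder()
        .model("gpt-3.5-turbo")
        .messages(messages)
        .temperature(0.7)   // Lower values make answers more deterministic
        .maxTokens(256)     // Caps the length of the generated reply
        .build();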
4. The REST Controller
Finally, the ChatController exposes our service through a RESTful endpoint. It's a standard @RestController that defines a POST mapping at /chat.
package com.example.controller;

import com.example.dto.ChatRequest;
import com.example.dto.ChatResponse;
import com.example.service.ChatService;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ChatController {

    private final ChatService chatService;

    public ChatController(ChatService chatService) {
        this.chatService = chatService;
    }

    @PostMapping("/chat")
    public ChatResponse chat(@RequestBody ChatRequest chatRequest) {
        String aiResponse = chatService.getChatResponse(chatRequest);
        return new ChatResponse(aiResponse);
    }
}
Testing the Application
With all the components in place, start your Spring Boot application. You can test the /chat endpoint using a tool like Postman or cURL.
Send a POST request to http://localhost:8080/chat with the following JSON body:
{
  "message": "Tell me a short joke about programming."
}
The API should return a joke, confirming the integration is working. For example:
Why do programmers always mix up Christmas and Halloween? Because Oct 31 == Dec 25.
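If you'd rather exercise the endpoint from code instead of Postman or cURL, a quick client-side check with Spring's RestTemplate might look like the sketch below. The class name is just an example, and it assumes the application from this article is running locally on port 8080:

package com.example.client;

import com.example.dto.ChatRequest;
import com.example.dto.ChatResponse;
import org.springframework.web.client.RestTemplate;

public class ChatEndpointCheck {

    public static void main(String[] args) {
        ChatRequest request = new ChatRequest();
        request.setMessage("Tell me a short joke about programming.");

        // Posts the request to the running application and maps the JSON reply to ChatResponse
        ChatResponse response = new RestTemplate()
                .postForObject("http://localhost:8080/chat", request, ChatResponse.class);

        System.out.println(response.getResponse());
    }
}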
Enhancing with System Messages
A powerful feature of the OpenAI API is the "system" message, which allows you to guide the AI's behavior, persona, and response format. We've already added the handling for it in our ChatService.
To test it, send a request that includes the systemMessage field.
Example 1: Responding as a Character
{
  "systemMessage": "You are Jack Sparrow. Respond as a pirate.",
  "message": "List the wonders of the world."
}
The response will be styled with pirate-like phrasing.
Example 2: Controlling Output Format
{
  "systemMessage": "Always respond in two languages: first English, then Spanish.",
  "message": "What is the capital of France?"
}
The model will now provide its answer first in English and then translate it into Spanish, demonstrating how you can control the output directly.
Why Programmatic Integration Is Powerful
Integrating an AI model directly into your application offers far more control and flexibility than using a standard web UI. With programmatic access, you can implement advanced features like function calling, force the model to return structured JSON for further processing, and build complex, automated workflows. This opens up a world of possibilities for creating truly intelligent and interactive applications.
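As a small illustration of the structured-output idea, you could use the system message to ask the model for a fixed JSON shape and parse the reply with Jackson, which Spring Boot already ships. The class, record name, and JSON shape below are purely illustrative, and the model is not guaranteed to comply, so real code should validate the output before trusting it:

package com.example.service;

import com.example.dto.ChatRequest;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StructuredChatExample {

    // Illustrative target shape we ask the model to produce
    record CapitalAnswer(String country, String capital) {}

    public static CapitalAnswer askForCapital(ChatService chatService, String country) throws Exception {
        ChatRequest request = new ChatRequest();
        request.setSystemMessage(
                "Respond only with a JSON object of the form {\"country\": string, \"capital\": string}.");
        request.setMessage("What is the capital of " + country + "?");

        // The model's reply is plain text; here we expect it to be the requested JSON object
        String raw = chatService.getChatResponse(request);
        return new ObjectMapper().readValue(raw, CapitalAnswer.class);
    }
}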