Using Spring AI With LLMs To Generate Java Tests

The AIDocumentLibraryChat project has been extended to generate test code (Java code has been tested). The project can generate test code for publicly available GitHub projects. The URL of the class to test can be provided; then the class is loaded, the imports are analyzed, and the dependent classes in the project are also loaded. That gives the LLM the chance to consider the imported source classes while generating mocks for the tests. A testUrl can be provided to give the LLM an example to base the generated test on. The granite-code and deepseek-coder-v2 models have been tested with Ollama.

The goal is to test how well the LLMs can help developers create tests.

Implementation

Configuration

To select the LLM model, the application-ollama.properties file needs to be updated:

spring.ai.ollama.base-url=${OLLAMA-BASE-URL:http://localhost:11434}
spring.ai.ollama.embedding.enabled=false
spring.ai.embedding.transformer.enabled=true
document-token-limit=150
embedding-token-limit=500
spring.liquibase.change-log=classpath:/dbchangelog/db.changelog-master-ollama.xml

...

# generate code
#spring.ai.ollama.chat.model=granite-code:20b
#spring.ai.ollama.chat.options.num-ctx=8192

spring.ai.ollama.chat.options.num-thread=8
spring.ai.ollama.chat.options.keep_alive=1s

spring.ai.ollama.chat.model=deepseek-coder-v2:16b
spring.ai.ollama.chat.options.num-ctx=65536

The spring.ai.ollama.chat.model property selects the LLM code model to use.

The spring.ai.ollama.chat.options.num-ctx property sets the number of tokens in the context window. The context window contains the tokens required by the request and the tokens required by the response.

The spring.ai.ollama.chat.options.num-thread property can be used if Ollama does not choose the right number of cores to use. The spring.ai.ollama.chat.options.keep_alive property sets the number of seconds the context window is retained.

Controller

The interface to get the sources and to generate the tests is the controller:

@RestController
@RequestMapping("rest/code-generation")
public class CodeGenerationController {
  private final CodeGenerationService codeGenerationService;

  public CodeGenerationController(CodeGenerationService
    codeGenerationService) {
    this.codeGenerationService = codeGenerationService;
  }

  @GetMapping("/test")
  public String getGenerateTests(@RequestParam("url") String url,
    @RequestParam(name = "testUrl", required = false) String testUrl) {
    return this.codeGenerationService.generateTest(URLDecoder.decode(url,
      StandardCharsets.UTF_8),
      Optional.ofNullable(testUrl).map(myValue -> URLDecoder.decode(myValue,
        StandardCharsets.UTF_8)));
  }

  @GetMapping("/sources")
  public GithubSources getSources(@RequestParam("url") String url,
    @RequestParam(name = "testUrl", required = false) String testUrl) {
    var sources = this.codeGenerationService.createTestSources(
      URLDecoder.decode(url, StandardCharsets.UTF_8), true);
    var test = Optional.ofNullable(testUrl).map(myTestUrl ->
      this.codeGenerationService.createTestSources(
        URLDecoder.decode(myTestUrl, StandardCharsets.UTF_8), false))
      .orElse(new GithubSource("none", "none", List.of(), List.of()));
    return new GithubSources(sources, test);
  }
}

The CodeGenerationController has the method getSources(...). It gets the URL, and optionally the testUrl, for the class to generate tests for and for the optional test example. It decodes the request parameters and calls the createTestSources(...) method with them. The method returns the GithubSources with the sources of the class under test, its dependencies in the project, and the test example.

The method getGenerateTests(...) gets the url for the test class and the optional testUrl, URL-decodes them, and calls the generateTest(...) method of the CodeGenerationService.
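Because the controller URL-decodes both parameters, a client has to encode them. A minimal client sketch, assuming the application runs on localhost:8080 and using one of the test URLs from the conclusion below:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class CodeGenerationClient {
  public static void main(String[] args) throws Exception {
    // The controller URL-decodes the parameter, so the client has to encode it.
    var classUrl = URLEncoder.encode(
      "https://github.com/Angular2Guy/MovieManager/blob/master/backend/src/main/java/ch/xxx/moviemanager/adapter/controller/ActorController.java",
      StandardCharsets.UTF_8);
    var request = HttpRequest.newBuilder()
      .uri(URI.create("http://localhost:8080/rest/code-generation/test?url=" + classUrl))
      .GET().build();
    // The response body contains the source of the generated test class.
    var response = HttpClient.newHttpClient()
      .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body());
  }
}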

Service

The CodeGenerationService collects the classes from GitHub and generates the test code for the class under test.

The service with the prompts looks like this:

@Service
public class CodeGenerationService {
  private static final Logger LOGGER = LoggerFactory
    .getLogger(CodeGenerationService.class);
  private final GithubClient githubClient;
  private final ChatClient chatClient;
  private final String ollamaPrompt = """
    You are an assistant to generate spring tests for the class under test.
    Analyse the classes provided and generate tests for all methods. Base
    your tests on the example.
    Generate and implement the test methods. Generate and implement complete
    tests methods.
    Generate the complete source of the test class.

    Generate tests for this class:
    {classToTest}

    Use these classes as context for the tests:
    {contextClasses}

    {testExample}
    """;
  private final String ollamaPrompt1 = """
    You are an assistant to generate a spring test class for the source
    class.
    1. Analyse the source class
    2. Analyse the context classes for the classes used by the source class
    3. Analyse the class in test example to base the code of the generated
    test class on it.
    4. Generate a test class for the source class, use the context classes as
    sources for it and base the code of the test class on the test example.
    Generate the complete source code of the test class implementing the
    tests.

    {testExample}

    Use these context classes as extension for the source class:
    {contextClasses}

    Generate the complete source code of the test class implementing the
    tests.
    Generate tests for this source class:
    {classToTest}
    """;
  @Value("${spring.ai.ollama.chat.options.num-ctx:0}")
  private Long contextWindowSize;

  public CodeGenerationService(GithubClient githubClient, ChatClient
    chatClient) {
    this.githubClient = githubClient;
    this.chatClient = chatClient;
  }

This is the CodeGenerationService with the GithubClient and the ChatClient. The GithubClient is used to load the sources from a publicly available repository, and the ChatClient is the Spring AI interface to access the AI/LLM.

The ollamaPrompt is the prompt for the IBM Granite LLM with a context window of 8k tokens. The {classToTest} placeholder is replaced with the source code of the class under test. The {contextClasses} placeholder can be replaced with the dependent classes of the class under test, and the optional {testExample} placeholder can be replaced with a test class that can serve as an example for the code generation.

The ollamaPrompt1 is the prompt for the Deepseek Coder V2 LLM. This LLM can "understand" or work with a chain-of-thought prompt and has a context window of more than 64k tokens. The {...} placeholders work the same as in the ollamaPrompt. The large context window enables the addition of context classes to the code generation.

The contextWindowSize property is injected by Spring to check if the context window of the LLM is big enough to add the {contextClasses} to the prompt.
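The placeholder replacement is done with Spring AI's PromptTemplate, as the generateTest(...) method below shows. A minimal sketch of the mechanism with a made-up template string:

import java.util.Map;
import org.springframework.ai.chat.prompt.PromptTemplate;

public class PromptTemplateDemo {
  public static void main(String[] args) {
    // Hypothetical mini template with the same placeholder syntax as the prompts above.
    var template = new PromptTemplate("""
      Generate tests for this class:
      {classToTest}
      """, Map.of("classToTest", "public class Greeter {}"));
    // createMessage() returns the message with the placeholders replaced.
    System.out.println(template.createMessage().getContent());
    // create() builds the Prompt that can be passed to the ChatClient.
    var prompt = template.create();
  }
}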

The method createTestSources(...) collects and returns the sources for the AI/LLM prompts:

public GithubSource createTestSources(String url, final boolean
  referencedSources) {
  final var myUrl = url.replace("https://github.com",
    GithubClient.GITHUB_BASE_URL).replace("/blob", "");
  var result = this.githubClient.readSourceFile(myUrl);
  final var isComment = new AtomicBoolean(false);
  final var sourceLines = result.lines().stream().map(myLine ->
      myLine.replaceAll("[\\t]", "").trim())
    .filter(myLine -> !myLine.isBlank()).filter(myLine ->
      filterComments(isComment, myLine)).toList();
  final var basePackage = List.of(result.sourcePackage()
    .split("\\.")).stream().limit(2)
    .collect(Collectors.joining("."));
  final var dependencies = this.createDependencies(referencedSources, myUrl,
    sourceLines, basePackage);
  return new GithubSource(result.sourceName(), result.sourcePackage(),
    sourceLines, dependencies);
}

private List<GithubSource> createDependencies(final boolean
  referencedSources, final String myUrl, final List<String> sourceLines,
  final String basePackage) {
  return sourceLines.stream().filter(x -> referencedSources)
    .filter(myLine -> myLine.contains("import"))
    .filter(myLine -> myLine.contains(basePackage))
    .map(myLine -> String.format("%s%s%s",
      myUrl.split(basePackage.replace(".", "/"))[0].trim(),
      myLine.split("import")[1].split(";")[0].replaceAll("\\.",
        "/").trim(), myUrl.substring(myUrl.lastIndexOf('.'))))
    .map(myLine -> this.createTestSources(myLine, false)).toList();
}

private boolean filterComments(AtomicBoolean isComment, String myLine) {
  var result1 = true;
  if (myLine.contains("/*") || isComment.get()) {
    isComment.set(true);
    result1 = false;
  }
  if (myLine.contains("*/")) {
    isComment.set(false);
    result1 = false;
  }
  result1 = result1 && !myLine.trim().startsWith("//");
  return result1;
}

The method createTestSources(...) provides the GithubSource records with the source code of the GitHub source url and, depending on the value of referencedSources, the sources of the dependent classes in the project.

To do that, the myUrl is created to get the raw source code of the class. Then the githubClient is used to read the source file as a string. The source string is then turned into source lines without formatting and comments with the method filterComments(...).
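A small, self-contained sketch of what that filtering does to a few hypothetical source lines:

import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

public class FilterCommentsDemo {
  // Same logic as the filterComments(...) method above.
  static boolean filterComments(AtomicBoolean isComment, String myLine) {
    var result = true;
    if (myLine.contains("/*") || isComment.get()) {
      isComment.set(true);
      result = false;
    }
    if (myLine.contains("*/")) {
      isComment.set(false);
      result = false;
    }
    return result && !myLine.trim().startsWith("//");
  }

  public static void main(String[] args) {
    var isComment = new AtomicBoolean(false);
    var lines = List.of("/* javadoc", " * text", " */",
      "public class Greeter {", "// a line comment", "}");
    // Only the lines outside of block comments and line comments are kept.
    var filtered = lines.stream()
      .filter(myLine -> filterComments(isComment, myLine)).toList();
    System.out.println(filtered); // prints: [public class Greeter {, }]
  }
}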

To read the dependent classes in the project, the base package is used. For example, in a package ch.xxx.aidoclibchat.usecase.service, the base package is ch.xxx. The method createDependencies(...) is used to create the GithubSource records for the dependent classes in the base packages. The basePackage parameter is used to filter out the classes, and then the method createTestSources(...) is called recursively with the parameter referencedSources set to false to stop the recursion. That is how the GithubSource records of the dependent classes are created.
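To make the mapping from import line to source URL concrete, here is a worked sketch with hypothetical values; that GithubClient.GITHUB_BASE_URL points to raw.githubusercontent.com is an assumption:

public class DependencyUrlDemo {
  public static void main(String[] args) {
    // Hypothetical raw source URL of the class under test.
    var myUrl = "https://raw.githubusercontent.com/Angular2Guy/MovieManager/master"
      + "/backend/src/main/java/ch/xxx/moviemanager/usecase/service/ActorService.java";
    // Hypothetical import line of a dependent class in the project.
    var importLine = "import ch.xxx.moviemanager.domain.model.dto.ActorDto;";
    var basePackage = "ch.xxx";

    // Everything before the base package path is the repository prefix.
    var prefix = myUrl.split(basePackage.replace(".", "/"))[0].trim();
    // The imported package is turned into a file path.
    var classPath = importLine.split("import")[1].split(";")[0]
      .replaceAll("\\.", "/").trim();
    // The file extension is taken from the URL of the class under test.
    var extension = myUrl.substring(myUrl.lastIndexOf('.'));
    System.out.println(prefix + classPath + extension);
    // -> .../src/main/java/ch/xxx/moviemanager/domain/model/dto/ActorDto.java
  }
}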

The method generateTest(...) is used to create the test sources for the class under test with the AI/LLM:

public String generateTest(String url, Optional<String> testUrlOpt) {
  var start = Instant.now();
  var githubSource = this.createTestSources(url, true);
  var githubTestSource = testUrlOpt.map(testUrl ->
    this.createTestSources(testUrl, false))
      .orElse(new GithubSource(null, null, List.of(), List.of()));
  String contextClasses = githubSource.dependencies().stream()
    .filter(x -> this.contextWindowSize >= 16 * 1024)
    .map(myGithubSource -> myGithubSource.sourceName() + ":" +
      System.getProperty("line.separator")
      + myGithubSource.lines().stream()
        .collect(Collectors.joining(System.getProperty("line.separator"))))
    .collect(Collectors.joining(System.getProperty("line.separator")));
  String testExample = Optional.ofNullable(githubTestSource.sourceName())
    .map(x -> "Use this as test example class:" +
      System.getProperty("line.separator") +
      githubTestSource.lines().stream()
        .collect(Collectors.joining(System.getProperty("line.separator"))))
    .orElse("");
  String classToTest = githubSource.lines().stream()
    .collect(Collectors.joining(System.getProperty("line.separator")));
  LOGGER.debug(new PromptTemplate(this.contextWindowSize >= 16 * 1024 ?
    this.ollamaPrompt1 : this.ollamaPrompt, Map.of("classToTest",
      classToTest, "contextClasses", contextClasses, "testExample",
      testExample)).createMessage().getContent());
  LOGGER.info("Generation started with context window: {}",
    this.contextWindowSize);
  var response = chatClient.call(new PromptTemplate(
    this.contextWindowSize >= 16 * 1024 ? this.ollamaPrompt1 :
      this.ollamaPrompt, Map.of("classToTest", classToTest, "contextClasses",
      contextClasses, "testExample", testExample)).create());
  if ((Instant.now().getEpochSecond() - start.getEpochSecond()) >= 300) {
    LOGGER.info(response.getResult().getOutput().getContent());
  }
  LOGGER.info("Prompt tokens: " +
    response.getMetadata().getUsage().getPromptTokens());
  LOGGER.info("Generation tokens: " +
    response.getMetadata().getUsage().getGenerationTokens());
  LOGGER.info("Total tokens: " +
    response.getMetadata().getUsage().getTotalTokens());
  LOGGER.info("Time in seconds: {}", (Instant.now().toEpochMilli() -
    start.toEpochMilli()) / 1000.0);
  return response.getResult().getOutput().getContent();
}

To do that, the createTestSources(...) method is used to create the records with the source lines. Then the string contextClasses is created to replace the {contextClasses} placeholder in the prompt. If the context window is smaller than 16k tokens, the string is empty to have enough tokens for the class under test and the test example class. Then the optional testExample string is created to replace the {testExample} placeholder in the prompt. If no testUrl is provided, the string is empty. Then the classToTest string is created to replace the {classToTest} placeholder in the prompt.

The chatClient is called to send the prompt to the AI/LLM. The prompt is selected based on the size of the context window in the contextWindowSize property. The PromptTemplate replaces the placeholders with the prepared strings.

The response is used to log the amount of prompt tokens, generation tokens, and total tokens to be able to check if the context window boundary was honored. Then the time to generate the test source is logged and the test source is returned. If the generation of the test source took more than 5 minutes, the test source is logged as protection against browser timeouts.

Conclusion

Both models have been tested to generate Spring controller tests and Spring service tests. The test URLs were:

http://localhost:8080/rest/code-generation/test?url=https://github.com/Angular2Guy/MovieManager/blob/master/backend/src/main/java/ch/xxx/moviemanager/adapter/controller/ActorController.java&testUrl=https://github.com/Angular2Guy/MovieManager/blob/master/backend/src/test/java/ch/xxx/moviemanager/adapter/controller/MovieControllerTest.java
http://localhost:8080/rest/code-generation/test?url=https://github.com/Angular2Guy/MovieManager/blob/master/backend/src/main/java/ch/xxx/moviemanager/usecase/service/ActorService.java&testUrl=https://github.com/Angular2Guy/MovieManager/blob/master/backend/src/test/java/ch/xxx/moviemanager/usecase/service/MovieServiceTest.java

The granite-code:20b LLM on Ollama has a context window of 8k tokens. That is too small to provide the contextClasses and still have enough tokens for a response. That means the LLM just had the class under test and the test example to work with.

The deepseek-coder-v2:16b LLM on Ollama has a context window of more than 64k tokens. That enabled the addition of the contextClasses to the prompt, and it is able to work with a chain-of-thought prompt.

Results

The Granite-Code LLM was able to generate a buggy but useful basis for a Spring service test. No test worked, but the missing parts could be explained by the missing context classes. The Spring controller test was not as good. It missed too much code to be useful as a basis. The test generation took more than 10 minutes on a medium-power laptop CPU.

The Deepseek-Coder-V2 LLM was able to create a Spring service test with the majority of the tests working. That was a good basis to work with, and the missing parts were easy to fix. The Spring controller test had more bugs but was a useful basis to start from. The test generation took less than ten minutes on a medium-power laptop CPU.

Opinion

The Deepseek-Coder-V2 LLM can help with writing tests for Spring applications. For productive use, GPU acceleration is required. The LLM is not able to create non-trivial code correctly, even with context classes available. The help an LLM can provide is very limited because LLMs do not understand the code. Code is just characters for an LLM, and without an understanding of language syntax, the results are not impressive. The developer has to be able to fix all the bugs in the tests. That means it just saves some time typing the tests.

The experience with GitHub Copilot is similar to the Granite-Code LLM. As of September 2024, the context window is too small to do good code generation, and the code completion suggestions have to be ignored too often.

Is an LLM a help -> yes.

Is the LLM a big timesaver -> no.
