Merge branch 'spring-projects:main' into main
zhangqian9158 authored May 27, 2024
2 parents ff4a262 + 5226d0b commit 10c1f7d
Showing 278 changed files with 5,511 additions and 1,230 deletions.
89 changes: 37 additions & 52 deletions README.md
@@ -14,53 +14,53 @@ On our march to release 1.0.0 M1 we have made several breaking changes. Apologi

**(22.05.2024)**

A major change was made that took the 'old' `ChatClient` and moved its functionality into `ChatModel`. The 'new' `ChatClient` now takes an instance of `ChatModel`. This was done to support a fluent API for creating and executing prompts, in a style similar to other client classes in the Spring ecosystem such as `RestClient`, `WebClient`, and `JdbcClient`. Refer to the [JavaDoc](https://docs.spring.io/spring-ai/docs/1.0.0-SNAPSHOT/api/) for more information on the fluent API; proper reference documentation is coming shortly.

We renamed the 'old' `ModelClient` to `Model` and renamed the implementing classes accordingly; for example, `ImageClient` was renamed to `ImageModel`. The `Model` implementations represent the portability layer that converts between the Spring AI API and the underlying AI model API.
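Conceptually, the renamed `Model` hierarchy is a small generic contract that each provider implements. The following is a simplified, self-contained sketch of that idea — not the actual Spring AI interfaces, whose request and response types are richer:

```java
// Simplified portability-layer sketch: a Model adapts a provider-specific
// API behind a common request/response contract.
interface Model<TReq, TRes> {
	TRes call(TReq request);
}

// Stand-in "ImageModel"-style implementation, for illustration only.
class EchoImageModel implements Model<String, String> {
	public String call(String request) {
		return "image-for:" + request;
	}
}
```

Each concrete class (`ChatModel`, `ImageModel`, and so on) plays the `EchoImageModel` role here: it translates the common request type into provider API calls and wraps the provider's reply in the common response type.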

### Adapting to the changes

NOTE: The `ChatClient` class is now in the package `org.springframework.ai.chat.client`

#### Approach 1

Now, instead of getting an autoconfigured `ChatClient` instance, you will get a `ChatModel` instance. The `call` method signatures remain the same after the renaming.
To adapt your code, refactor every use of the type `ChatClient` to `ChatModel`.
Here is an example of the existing code before the change:

```java
@RestController
public class OldSimpleAiController {

	private final ChatClient chatClient;

	public OldSimpleAiController(ChatClient chatClient) {
		this.chatClient = chatClient;
	}

	@GetMapping("/ai/simple")
	Map<String, String> completion(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
		return Map.of("generation", chatClient.call(message));
	}
}
```

Now after the changes this will be:

```java
@RestController
public class SimpleAiController {

	private final ChatModel chatModel;

	public SimpleAiController(ChatModel chatModel) {
		this.chatModel = chatModel;
	}

	@GetMapping("/ai/simple")
	Map<String, String> completion(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
		return Map.of("generation", chatModel.call(message));
	}
}
```

@@ -79,17 +79,16 @@ Here is an example of existing code before the change

```java
@RestController
class OldSimpleAiController {

	ChatClient chatClient;

	OldSimpleAiController(ChatClient chatClient) {
		this.chatClient = chatClient;
	}

	@GetMapping("/ai/simple")
	Map<String, String> completion(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
		return Map.of(
				"generation",
				chatClient.call(message)
		);
	}
}
```


Now after the changes this will be:

```java
@RestController
class SimpleAiController {

	private final ChatClient chatClient;

	SimpleAiController(ChatClient.Builder builder) {
		this.chatClient = builder.build();
	}

	@GetMapping("/ai/simple")
	Map<String, String> completion(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
		return Map.of(
				"generation",
				chatClient.prompt().user(message).call().content()
		);
	}
}
```

In your `@Configuration` class, you need to define a `@Bean` as shown below:

```java
@Configuration
public class ApplicationConfiguration {

	@Bean
	ChatClient chatClient(ChatModel chatModel) {
		return ChatClient.builder(chatModel).build();
	}
}
```

NOTE: The `ChatModel` instance is made available to you through autoconfiguration.
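The fluent chain used above (`prompt().user(…).call().content()`) can be sketched in isolation with hypothetical stand-in classes. This is not the real Spring AI API, only the shape of it:

```java
// Stand-in classes mimicking the fluent prompt().user(...).call().content()
// chain of the new ChatClient; names and behavior are illustrative only.
class SketchChatClient {

	PromptSpec prompt() {
		return new PromptSpec();
	}

	static class PromptSpec {
		private String userText = "";

		PromptSpec user(String text) {
			this.userText = text;
			return this;
		}

		// A real client would call the model here; the sketch just echoes.
		CallResponse call() {
			return new CallResponse("echo: " + userText);
		}
	}

	static class CallResponse {
		private final String content;

		CallResponse(String content) {
			this.content = content;
		}

		String content() {
			return content;
		}
	}
}
```

Each step returns an object exposing only the next legal operations, which is what makes the style read like `RestClient`, `WebClient`, and `JdbcClient`.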

@@ -144,7 +129,7 @@ There is a tag in the GitHub repository called `v1.0.0-SNAPSHOT-before-chatclient-changes`
```bash
git checkout tags/v1.0.0-SNAPSHOT-before-chatclient-changes

./mvnw clean install -DskipTests
```


4 changes: 2 additions & 2 deletions models/spring-ai-anthropic/pom.xml
@@ -10,8 +10,8 @@
 </parent>
 <artifactId>spring-ai-anthropic</artifactId>
 <packaging>jar</packaging>
-<name>Spring AI Anthropic Chat Client</name>
-<description>Anthropic support</description>
+<name>Spring AI Model - Anthropic</name>
+<description>Anthropic models support</description>
 <url>https://github.com/spring-projects/spring-ai</url>

 <scm>
@@ -26,7 +26,7 @@

 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-import org.springframework.ai.chat.ChatModel;
+import org.springframework.ai.chat.model.ChatModel;
 import reactor.core.publisher.Flux;

 import org.springframework.ai.anthropic.api.AnthropicApi;
@@ -39,9 +39,9 @@
 import org.springframework.ai.anthropic.api.AnthropicApi.StreamResponse;
 import org.springframework.ai.anthropic.api.AnthropicApi.Usage;
 import org.springframework.ai.anthropic.metadata.AnthropicChatResponseMetadata;
-import org.springframework.ai.chat.ChatResponse;
-import org.springframework.ai.chat.Generation;
-import org.springframework.ai.chat.StreamingChatModel;
+import org.springframework.ai.chat.model.ChatResponse;
+import org.springframework.ai.chat.model.Generation;
+import org.springframework.ai.chat.model.StreamingChatModel;
 import org.springframework.ai.chat.messages.MessageType;
 import org.springframework.ai.chat.metadata.ChatGenerationMetadata;
 import org.springframework.ai.chat.prompt.ChatOptions;
@@ -29,10 +29,10 @@

 import org.springframework.ai.anthropic.api.AnthropicApi;
 import org.springframework.ai.anthropic.api.tool.MockWeatherService;
-import org.springframework.ai.chat.ChatModel;
-import org.springframework.ai.chat.ChatResponse;
-import org.springframework.ai.chat.Generation;
-import org.springframework.ai.chat.StreamingChatModel;
+import org.springframework.ai.chat.model.ChatModel;
+import org.springframework.ai.chat.model.ChatResponse;
+import org.springframework.ai.chat.model.Generation;
+import org.springframework.ai.chat.model.StreamingChatModel;
 import org.springframework.ai.chat.messages.AssistantMessage;
 import org.springframework.ai.chat.messages.Media;
 import org.springframework.ai.chat.messages.Message;
4 changes: 2 additions & 2 deletions models/spring-ai-azure-openai/pom.xml
@@ -9,8 +9,8 @@
 </parent>
 <artifactId>spring-ai-azure-openai</artifactId>
 <packaging>jar</packaging>
-<name>Spring AI Azure OpenAI</name>
-<description>OpenAI support</description>
+<name>Spring AI Model - Azure OpenAI</name>
+<description>Azure OpenAI models support</description>
 <url>https://github.com/spring-projects/spring-ai</url>

 <scm>
@@ -32,16 +32,19 @@
 import com.azure.ai.openai.models.ContentFilterResultsForPrompt;
 import com.azure.ai.openai.models.FunctionCall;
 import com.azure.ai.openai.models.FunctionDefinition;
+import com.azure.ai.openai.models.ChatCompletionsJsonResponseFormat;
+import com.azure.ai.openai.models.ChatCompletionsTextResponseFormat;
+import com.azure.ai.openai.models.ChatCompletionsResponseFormat;
 import com.azure.core.util.BinaryData;
 import com.azure.core.util.IterableStream;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 import org.springframework.ai.azure.openai.metadata.AzureOpenAiChatResponseMetadata;
-import org.springframework.ai.chat.ChatModel;
-import org.springframework.ai.chat.ChatResponse;
-import org.springframework.ai.chat.Generation;
-import org.springframework.ai.chat.StreamingChatModel;
+import org.springframework.ai.chat.model.ChatModel;
+import org.springframework.ai.chat.model.ChatResponse;
+import org.springframework.ai.chat.model.Generation;
+import org.springframework.ai.chat.model.StreamingChatModel;
 import org.springframework.ai.chat.messages.Message;
 import org.springframework.ai.chat.metadata.ChatGenerationMetadata;
 import org.springframework.ai.chat.metadata.PromptMetadata;
@@ -180,8 +183,8 @@ public Flux<ChatResponse> stream(Prompt prompt) {
 	isFunctionCall.set(false);
 	return true;
 }
-return false;
-}, false)
+return !isFunctionCall.get();
+})
.concatMapIterable(window -> {
final var reduce = window.reduce(MergeUtils.emptyChatCompletions(), MergeUtils::mergeChatCompletions);
return List.of(reduce);
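The predicate change above alters when `windowUntil` cuts a window of streamed chunks. The grouping behavior of that operator can be sketched with plain collections — a stand-in for the Reactor operator in its default cut-inclusive mode, not the reactive API itself:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Plain-collections analogue of Flux.windowUntil: each window contains items
// up to and including the first item for which the boundary predicate is true.
class Windows {

	static <T> List<List<T>> windowUntil(List<T> items, Predicate<T> boundary) {
		List<List<T>> windows = new ArrayList<>();
		List<T> current = new ArrayList<>();
		for (T item : items) {
			current.add(item);
			if (boundary.test(item)) {
				windows.add(current);
				current = new ArrayList<>();
			}
		}
		if (!current.isEmpty()) {
			windows.add(current); // trailing partial window
		}
		return windows;
	}
}
```

In the streaming code, the predicate decides whether a chunk closes the current window (for example, the end of a tool-call sequence), so each window can then be reduced into a single merged chat completion.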
@@ -349,6 +352,11 @@ private ChatCompletionsOptions merge(ChatCompletionsOptions fromAzureOptions,
mergedAzureOptions.setPresencePenalty(toSpringAiOptions.getPresencePenalty().doubleValue());
}

mergedAzureOptions.setResponseFormat(fromAzureOptions.getResponseFormat());
if (mergedAzureOptions.getResponseFormat() == null && toSpringAiOptions.getResponseFormat() != null) {
mergedAzureOptions.setResponseFormat(toAzureResponseFormat(toSpringAiOptions.getResponseFormat()));
}

mergedAzureOptions.setN(fromAzureOptions.getN() != null ? fromAzureOptions.getN() : toSpringAiOptions.getN());

mergedAzureOptions
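The response-format merge added above follows the same precedence pattern as the surrounding options: a value already present on the Azure-side request wins, and the Spring AI option is converted and applied only as a fallback. A self-contained sketch of that precedence rule, using a stand-in enum rather than the SDK types:

```java
// Stand-in types illustrating the merge precedence used above: the
// request-scoped (Azure-side) value wins; the Spring AI default is the fallback.
class OptionsMerge {

	enum Format { TEXT, JSON }

	static Format mergeResponseFormat(Format fromRequest, Format fromDefaults) {
		if (fromRequest != null) {
			return fromRequest;
		}
		return fromDefaults; // may still be null if neither side set it
	}
}
```

In the real code the fallback branch additionally runs the Spring AI value through `toAzureResponseFormat` to produce the Azure SDK object.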
@@ -417,6 +425,10 @@ private ChatCompletionsOptions merge(AzureOpenAiChatOptions fromSpringAiOptions,
mergedAzureOptions.setModel(fromSpringAiOptions.getDeploymentName());
}

if (fromSpringAiOptions.getResponseFormat() != null) {
mergedAzureOptions.setResponseFormat(toAzureResponseFormat(fromSpringAiOptions.getResponseFormat()));
}

return mergedAzureOptions;
}

@@ -465,6 +477,9 @@ private ChatCompletionsOptions merge(ChatCompletionsOptions fromOptions, ChatCom
if (fromOptions.getModel() != null) {
mergedOptions.setModel(fromOptions.getModel());
}
if (fromOptions.getResponseFormat() != null) {
mergedOptions.setResponseFormat(fromOptions.getResponseFormat());
}

return mergedOptions;
}
@@ -509,6 +524,9 @@ private ChatCompletionsOptions copy(ChatCompletionsOptions fromOptions) {
if (fromOptions.getModel() != null) {
copyOptions.setModel(fromOptions.getModel());
}
if (fromOptions.getResponseFormat() != null) {
copyOptions.setResponseFormat(fromOptions.getResponseFormat());
}

return copyOptions;
}
@@ -590,4 +608,16 @@ protected boolean isToolFunctionCall(ChatCompletions chatCompletions) {
return choice.getFinishReason() == CompletionsFinishReason.TOOL_CALLS;
}

/**
 * Maps the Spring AI response format to the Azure response format.
 * @param responseFormat the Spring AI response format
 * @return the Azure response format
 */
private ChatCompletionsResponseFormat toAzureResponseFormat(AzureOpenAiResponseFormat responseFormat) {
if (responseFormat == AzureOpenAiResponseFormat.JSON) {
return new ChatCompletionsJsonResponseFormat();
}
return new ChatCompletionsTextResponseFormat();
}

}
@@ -126,6 +126,14 @@ public class AzureOpenAiChatOptions implements FunctionCallingOptions, ChatOptio
@JsonProperty(value = "deployment_name")
private String deploymentName;

/**
* The response format expected from the Azure OpenAI model
* @see org.springframework.ai.azure.openai.AzureOpenAiResponseFormat for supported
* formats
*/
@JsonProperty("response_format")
private AzureOpenAiResponseFormat responseFormat;

/**
* OpenAI Tool Function Callbacks to register with the ChatModel. For Prompt Options
* the functionCallbacks are automatically enabled for the duration of the prompt
@@ -239,6 +247,12 @@ public Builder withFunction(String functionName) {
return this;
}

public Builder withResponseFormat(AzureOpenAiResponseFormat responseFormat) {
Assert.notNull(responseFormat, "responseFormat must not be null");
this.options.responseFormat = responseFormat;
return this;
}

public AzureOpenAiChatOptions build() {
return this.options;
}
@@ -356,6 +370,14 @@ public void setFunctions(Set<String> functions) {
this.functions = functions;
}

public AzureOpenAiResponseFormat getResponseFormat() {
return this.responseFormat;
}

public void setResponseFormat(AzureOpenAiResponseFormat responseFormat) {
this.responseFormat = responseFormat;
}

public static AzureOpenAiChatOptions fromOptions(AzureOpenAiChatOptions fromOptions) {
return builder().withDeploymentName(fromOptions.getDeploymentName())
.withFrequencyPenalty(
@@ -0,0 +1,41 @@
/*
* Copyright 2023 - 2024 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.ai.azure.openai;

/**
* Utility enumeration for representing the response format that may be requested from the
* Azure OpenAI model. Please check <a href=
* "https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format">OpenAI
* API documentation</a> for more details.
*/
public enum AzureOpenAiResponseFormat {

// default value used by OpenAI
TEXT,
/*
* From the OpenAI API documentation: Compatibility: Compatible with GPT-4 Turbo and
* all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Caveats: This enables JSON
* mode, which guarantees the message the model generates is valid JSON. Important:
* when using JSON mode, you must also instruct the model to produce JSON yourself via
* a system or user message. Without this, the model may generate an unending stream
* of whitespace until the generation reaches the token limit, resulting in a
* long-running and seemingly "stuck" request. Also note that the message content may
* be partially cut off if finish_reason="length", which indicates the generation
* exceeded max_tokens or the conversation exceeded the max context length.
*/
JSON

}
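Putting the new option to use, JSON mode would be requested roughly as below. This is a hedged configuration sketch against the Spring AI Azure OpenAI module: the deployment name is a placeholder, `chatModel` is assumed to be the autoconfigured Azure `ChatModel`, and, per the caveat in the enum's comment, the prompt text itself must still ask for JSON:

```java
import org.springframework.ai.azure.openai.AzureOpenAiChatOptions;
import org.springframework.ai.azure.openai.AzureOpenAiResponseFormat;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;

// Build per-request options that switch the model into JSON mode.
AzureOpenAiChatOptions options = AzureOpenAiChatOptions.builder()
		.withDeploymentName("gpt-4o") // placeholder deployment name
		.withResponseFormat(AzureOpenAiResponseFormat.JSON)
		.build();

// The prompt must explicitly request JSON output, or the model may stall
// emitting whitespace until the token limit is reached.
ChatResponse response = chatModel.call(
		new Prompt("List three primary colors as a JSON array of strings.", options));
```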