🌿 Fern Regeneration -- October 4, 2024 #205

Merged · 1 commit · Oct 4, 2024
6 changes: 4 additions & 2 deletions package.json
@@ -23,7 +23,8 @@
"@aws-sdk/client-sagemaker": "^3.583.0",
"@aws-sdk/credential-providers": "^3.583.0",
"@aws-sdk/protocol-http": "^3.374.0",
"@aws-sdk/signature-v4": "^3.374.0"
"@aws-sdk/signature-v4": "^3.374.0",
"convict": "^6.2.4"
},
"devDependencies": {
"@types/url-join": "4.0.1",
@@ -39,7 +40,8 @@
"typescript": "4.6.4",
"@types/readable-stream": "^4.0.14",
"ts-loader": "^9.5.1",
"webpack": "^5.91.0"
"webpack": "^5.91.0",
"@types/convict": "^6.1.6"
},
"browser": {
"fs": false,
10 changes: 8 additions & 2 deletions reference.md
@@ -71,7 +71,9 @@ await client.checkApiKey();
<dl>
<dd>

Generates a message from the model in response to a provided conversation. To learn how to use the Chat API with Streaming and RAG follow our Text Generation guides.
Generates a message from the model in response to a provided conversation. To learn more about the features of the Chat API follow our [Text Generation guides](https://docs.cohere.com/v2/docs/chat-api).

Follow the [Migration Guide](https://docs.cohere.com/v2/docs/migrating-v1-to-v2) for instructions on moving from API v1 to API v2.

</dd>
</dl>
@@ -173,7 +175,9 @@ await client.v2.chatStream({
<dl>
<dd>

Generates a message from the model in response to a provided conversation. To learn how to use the Chat API with Streaming and RAG follow our Text Generation guides.
Generates a message from the model in response to a provided conversation. To learn more about the features of the Chat API follow our [Text Generation guides](https://docs.cohere.com/v2/docs/chat-api).

Follow the [Migration Guide](https://docs.cohere.com/v2/docs/migrating-v1-to-v2) for instructions on moving from API v1 to API v2.

</dd>
</dl>
@@ -266,6 +270,8 @@ If you want to learn more how to use the embedding model, have a look at the [Se
```typescript
await client.v2.embed({
model: "model",
inputType: Cohere.EmbedInputType.SearchDocument,
embeddingTypes: [Cohere.EmbeddingType.Float],
});
```
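The regenerated snippet above reflects that `inputType` and `embeddingTypes` are now required rather than optional. A minimal sketch of the request shape a caller must now supply (the model name and input text are illustrative):

```typescript
// v2 embed request shape after this regeneration; inputType and
// embeddingTypes must now always be supplied.
const embedRequest = {
  model: "embed-english-v3.0", // illustrative model name
  texts: ["machine learning is fun"],
  inputType: "search_document",
  embeddingTypes: ["float"],
};

console.log(embedRequest.inputType); // "search_document"
```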

12 changes: 9 additions & 3 deletions src/api/resources/v2/client/Client.ts
@@ -35,7 +35,9 @@ export class V2 {
constructor(protected readonly _options: V2.Options = {}) {}

/**
* Generates a message from the model in response to a provided conversation. To learn how to use the Chat API with Streaming and RAG follow our Text Generation guides.
* Generates a message from the model in response to a provided conversation. To learn more about the features of the Chat API follow our [Text Generation guides](https://docs.cohere.com/v2/docs/chat-api).
*
* Follow the [Migration Guide](https://docs.cohere.com/v2/docs/migrating-v1-to-v2) for instructions on moving from API v1 to API v2.
*/
public async chatStream(
request: Cohere.V2ChatStreamRequest,
@@ -183,7 +185,9 @@ export class V2 {
}

/**
* Generates a message from the model in response to a provided conversation. To learn how to use the Chat API with Streaming and RAG follow our Text Generation guides.
* Generates a message from the model in response to a provided conversation. To learn more about the features of the Chat API follow our [Text Generation guides](https://docs.cohere.com/v2/docs/chat-api).
*
* Follow the [Migration Guide](https://docs.cohere.com/v2/docs/migrating-v1-to-v2) for instructions on moving from API v1 to API v2.
*
* @param {Cohere.V2ChatRequest} request
* @param {V2.RequestOptions} requestOptions - Request-specific configuration.
@@ -364,7 +368,9 @@ export class V2 {
*
* @example
* await client.v2.embed({
* model: "model"
* model: "model",
* inputType: Cohere.EmbedInputType.SearchDocument,
* embeddingTypes: [Cohere.EmbeddingType.Float]
* })
*/
public async embed(
12 changes: 6 additions & 6 deletions src/api/resources/v2/client/requests/V2ChatRequest.ts
@@ -15,7 +15,7 @@ import * as Cohere from "../../../../index";
* }
*/
export interface V2ChatRequest {
/** The name of a compatible [Cohere model](https://docs.cohere.com/docs/models) (such as command-r or command-r-plus) or the ID of a [fine-tuned](https://docs.cohere.com/docs/chat-fine-tuning) model. */
/** The name of a compatible [Cohere model](https://docs.cohere.com/v2/docs/models) (such as command-r or command-r-plus) or the ID of a [fine-tuned](https://docs.cohere.com/v2/docs/chat-fine-tuning) model. */
model: string;
messages: Cohere.ChatMessages;
/**
@@ -33,19 +33,19 @@ export interface V2ChatRequest {
citationOptions?: Cohere.CitationOptions;
responseFormat?: Cohere.ResponseFormatV2;
/**
* Used to select the [safety instruction](/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.
* Used to select the [safety instruction](https://docs.cohere.com/v2/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.
* When `OFF` is specified, the safety instruction will be omitted.
*
* Safety modes are not yet configurable in combination with `tools`, `tool_results` and `documents` parameters.
*
* **Note**: This parameter is only compatible with models [Command R 08-2024](/docs/command-r#august-2024-release), [Command R+ 08-2024](/docs/command-r-plus#august-2024-release) and newer.
*
* Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
* **Note**: This parameter is only compatible with models [Command R 08-2024](https://docs.cohere.com/v2/docs/command-r#august-2024-release), [Command R+ 08-2024](https://docs.cohere.com/v2/docs/command-r-plus#august-2024-release) and newer.
*
*/
safetyMode?: Cohere.V2ChatRequestSafetyMode;
/**
* The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
* The maximum number of tokens the model will generate as part of the response.
*
* **Note**: Setting a low value may result in incomplete generations.
*
*/
maxTokens?: number;
12 changes: 6 additions & 6 deletions src/api/resources/v2/client/requests/V2ChatStreamRequest.ts
@@ -44,7 +44,7 @@ import * as Cohere from "../../../../index";
* }
*/
export interface V2ChatStreamRequest {
/** The name of a compatible [Cohere model](https://docs.cohere.com/docs/models) (such as command-r or command-r-plus) or the ID of a [fine-tuned](https://docs.cohere.com/docs/chat-fine-tuning) model. */
/** The name of a compatible [Cohere model](https://docs.cohere.com/v2/docs/models) (such as command-r or command-r-plus) or the ID of a [fine-tuned](https://docs.cohere.com/v2/docs/chat-fine-tuning) model. */
model: string;
messages: Cohere.ChatMessages;
/**
@@ -62,19 +62,19 @@ export interface V2ChatStreamRequest {
citationOptions?: Cohere.CitationOptions;
responseFormat?: Cohere.ResponseFormatV2;
/**
* Used to select the [safety instruction](/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.
* Used to select the [safety instruction](https://docs.cohere.com/v2/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.
* When `OFF` is specified, the safety instruction will be omitted.
*
* Safety modes are not yet configurable in combination with `tools`, `tool_results` and `documents` parameters.
*
* **Note**: This parameter is only compatible with models [Command R 08-2024](/docs/command-r#august-2024-release), [Command R+ 08-2024](/docs/command-r-plus#august-2024-release) and newer.
*
* Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
* **Note**: This parameter is only compatible with models [Command R 08-2024](https://docs.cohere.com/v2/docs/command-r#august-2024-release), [Command R+ 08-2024](https://docs.cohere.com/v2/docs/command-r-plus#august-2024-release) and newer.
*
*/
safetyMode?: Cohere.V2ChatStreamRequestSafetyMode;
/**
* The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
* The maximum number of tokens the model will generate as part of the response.
*
* **Note**: Setting a low value may result in incomplete generations.
*
*/
maxTokens?: number;
8 changes: 5 additions & 3 deletions src/api/resources/v2/client/requests/V2EmbedRequest.ts
@@ -7,7 +7,9 @@ import * as Cohere from "../../../../index";
/**
* @example
* {
* model: "model"
* model: "model",
* inputType: Cohere.EmbedInputType.SearchDocument,
* embeddingTypes: [Cohere.EmbeddingType.Float]
* }
*/
export interface V2EmbedRequest {
@@ -36,7 +38,7 @@ export interface V2EmbedRequest {
* * `embed-multilingual-v2.0` 768
*/
model: string;
inputType?: Cohere.EmbedInputType;
inputType: Cohere.EmbedInputType;
/**
* Specifies the types of embeddings you want to get back. Not required and default is None, which returns the Embed Floats response type. Can be one or more of the following types.
*
@@ -46,7 +48,7 @@
* * `"binary"`: Use this when you want to get back signed binary embeddings. Valid for only v3 models.
* * `"ubinary"`: Use this when you want to get back unsigned binary embeddings. Valid for only v3 models.
*/
embeddingTypes?: Cohere.EmbeddingType[];
embeddingTypes: Cohere.EmbeddingType[];
/**
* One of `NONE|START|END` to specify how the API will handle inputs longer than the maximum token length.
*
6 changes: 2 additions & 4 deletions src/api/resources/v2/types/V2ChatRequestSafetyMode.ts
@@ -3,14 +3,12 @@
*/

/**
* Used to select the [safety instruction](/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.
* Used to select the [safety instruction](https://docs.cohere.com/v2/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.
* When `OFF` is specified, the safety instruction will be omitted.
*
* Safety modes are not yet configurable in combination with `tools`, `tool_results` and `documents` parameters.
*
* **Note**: This parameter is only compatible with models [Command R 08-2024](/docs/command-r#august-2024-release), [Command R+ 08-2024](/docs/command-r-plus#august-2024-release) and newer.
*
* Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
* **Note**: This parameter is only compatible with models [Command R 08-2024](https://docs.cohere.com/v2/docs/command-r#august-2024-release), [Command R+ 08-2024](https://docs.cohere.com/v2/docs/command-r-plus#august-2024-release) and newer.
*/
export type V2ChatRequestSafetyMode = "CONTEXTUAL" | "STRICT" | "OFF";
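As the updated docstring notes, safety modes are not yet configurable together with `tools`, `tool_results`, or `documents`. A sketch of guarding against that combination client-side (the helper and request shape are illustrative, not part of the SDK):

```typescript
type V2ChatRequestSafetyMode = "CONTEXTUAL" | "STRICT" | "OFF";

// Illustrative, pared-down request shape.
interface ChatRequestSketch {
  tools?: unknown[];
  documents?: unknown[];
  safetyMode?: V2ChatRequestSafetyMode;
}

// Only attach a safety mode when the request carries no tools or documents,
// since safety modes are not yet configurable in combination with them.
function applySafetyMode(
  req: ChatRequestSketch,
  mode: V2ChatRequestSafetyMode
): ChatRequestSketch {
  if ((req.tools?.length ?? 0) > 0 || (req.documents?.length ?? 0) > 0) {
    return req; // leave unset: unsupported combination
  }
  return { ...req, safetyMode: mode };
}

console.log(applySafetyMode({}, "STRICT").safetyMode); // "STRICT"
```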

6 changes: 2 additions & 4 deletions src/api/resources/v2/types/V2ChatStreamRequestSafetyMode.ts
@@ -3,14 +3,12 @@
*/

/**
* Used to select the [safety instruction](/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.
* Used to select the [safety instruction](https://docs.cohere.com/v2/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.
* When `OFF` is specified, the safety instruction will be omitted.
*
* Safety modes are not yet configurable in combination with `tools`, `tool_results` and `documents` parameters.
*
* **Note**: This parameter is only compatible with models [Command R 08-2024](/docs/command-r#august-2024-release), [Command R+ 08-2024](/docs/command-r-plus#august-2024-release) and newer.
*
* Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
* **Note**: This parameter is only compatible with models [Command R 08-2024](https://docs.cohere.com/v2/docs/command-r#august-2024-release), [Command R+ 08-2024](https://docs.cohere.com/v2/docs/command-r-plus#august-2024-release) and newer.
*/
export type V2ChatStreamRequestSafetyMode = "CONTEXTUAL" | "STRICT" | "OFF";

1 change: 1 addition & 0 deletions src/api/types/AssistantMessage.ts
@@ -9,6 +9,7 @@ import * as Cohere from "../index";
*/
export interface AssistantMessage {
toolCalls?: Cohere.ToolCallV2[];
/** A chain-of-thought style reflection and plan that the model generates when working with Tools. */
toolPlan?: string;
content?: Cohere.AssistantMessageContent;
citations?: Cohere.Citation[];
1 change: 1 addition & 0 deletions src/api/types/AssistantMessageResponse.ts
@@ -10,6 +10,7 @@ import * as Cohere from "../index";
export interface AssistantMessageResponse {
role: "assistant";
toolCalls?: Cohere.ToolCallV2[];
/** A chain-of-thought style reflection and plan that the model generates when working with Tools. */
toolPlan?: string;
content?: Cohere.AssistantMessageResponseContentItem[];
citations?: Cohere.Citation[];
27 changes: 12 additions & 15 deletions src/api/types/ChatFinishReason.ts
@@ -4,22 +4,19 @@

/**
* The reason a chat request has finished.
*
* - **complete**: The model finished sending a complete message.
* - **max_tokens**: The number of generated tokens exceeded the model's context length or the value specified via the `max_tokens` parameter.
* - **stop_sequence**: One of the provided `stop_sequence` entries was reached in the model's generation.
* - **tool_call**: The model generated a Tool Call and is expecting a Tool Message in return
* - **error**: The generation failed due to an internal error
*/
export type ChatFinishReason =
| "complete"
| "stop_sequence"
| "max_tokens"
| "tool_call"
| "error"
| "content_blocked"
| "error_limit";
export type ChatFinishReason = "COMPLETE" | "STOP_SEQUENCE" | "MAX_TOKENS" | "TOOL_CALL" | "ERROR";

export const ChatFinishReason = {
Complete: "complete",
StopSequence: "stop_sequence",
MaxTokens: "max_tokens",
ToolCall: "tool_call",
Error: "error",
ContentBlocked: "content_blocked",
ErrorLimit: "error_limit",
Complete: "COMPLETE",
StopSequence: "STOP_SEQUENCE",
MaxTokens: "MAX_TOKENS",
ToolCall: "TOOL_CALL",
Error: "ERROR",
} as const;
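This change switches the finish-reason values to uppercase and drops the `content_blocked` and `error_limit` variants, so callers matching on the old lowercase strings need updating. A sketch of handling the new values (the `isUsable` helper is illustrative, not part of the SDK):

```typescript
// Finish reasons are now uppercase wire values (previously lowercase).
type ChatFinishReason = "COMPLETE" | "STOP_SEQUENCE" | "MAX_TOKENS" | "TOOL_CALL" | "ERROR";

// Illustrative handler: is the generation complete and usable as-is?
function isUsable(reason: ChatFinishReason): boolean {
  switch (reason) {
    case "COMPLETE":
    case "STOP_SEQUENCE":
      return true;
    case "MAX_TOKENS": // output may be truncated
    case "TOOL_CALL": // a tool message is expected next
    case "ERROR":
      return false;
  }
}

console.log(isUsable("COMPLETE")); // true
```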
2 changes: 1 addition & 1 deletion src/api/types/ChatMessages.ts
@@ -7,6 +7,6 @@ import * as Cohere from "../index";
/**
* A list of chat messages in chronological order, representing a conversation between the user and the model.
*
* Messages can be from `User`, `Assistant`, `Tool` and `System` roles. Learn more about messages and roles in [the Chat API guide](https://docs.cohere.com/docs/chat-api).
* Messages can be from `User`, `Assistant`, `Tool` and `System` roles. Learn more about messages and roles in [the Chat API guide](https://docs.cohere.com/v2/docs/chat-api).
*/
export type ChatMessages = Cohere.ChatMessageV2[];
3 changes: 3 additions & 0 deletions src/api/types/Citation.ts
@@ -8,8 +8,11 @@ import * as Cohere from "../index";
* Citation information containing sources and the text cited.
*/
export interface Citation {
/** Start index of the cited snippet in the original source text. */
start?: number;
/** End index of the cited snippet in the original source text. */
end?: number;
/** Text snippet that is being cited. */
text?: string;
sources?: Cohere.Source[];
}
2 changes: 1 addition & 1 deletion src/api/types/JsonResponseFormatV2.ts
@@ -4,7 +4,7 @@

export interface JsonResponseFormatV2 {
/**
* [BETA] A JSON schema object that the output will adhere to. There are some restrictions we have on the schema, refer to [our guide](/docs/structured-outputs-json#schema-constraints) for more information.
* A [JSON schema](https://json-schema.org/overview/what-is-jsonschema) object that the output will adhere to. There are some restrictions we have on the schema, refer to [our guide](/docs/structured-outputs-json#schema-constraints) for more information.
* Example (required name and age object):
*
* ```json
9 changes: 6 additions & 3 deletions src/api/types/ResponseFormatV2.ts
@@ -5,14 +5,17 @@
import * as Cohere from "../index";

/**
* Configuration for forcing the model output to adhere to the specified format. Supported on [Command R](https://docs.cohere.com/docs/command-r), [Command R+](https://docs.cohere.com/docs/command-r-plus) and newer models.
* Configuration for forcing the model output to adhere to the specified format. Supported on [Command R](https://docs.cohere.com/v2/docs/command-r), [Command R+](https://docs.cohere.com/v2/docs/command-r-plus) and newer models.
*
* The model can be forced into outputting JSON objects (with up to 5 levels of nesting) by setting `{ "type": "json_object" }`.
* The model can be forced into outputting JSON objects by setting `{ "type": "json_object" }`.
*
* A [JSON Schema](https://json-schema.org/) can optionally be provided, to ensure a specific structure.
*
* **Note**: When using `{ "type": "json_object" }` your `message` should always explicitly instruct the model to generate a JSON (eg: _"Generate a JSON ..."_) . Otherwise the model may end up getting stuck generating an infinite stream of characters and eventually run out of context length.
* **Limitation**: The parameter is not supported in RAG mode (when any of `connectors`, `documents`, `tools`, `tool_results` are provided).
*
* **Note**: When `json_schema` is not specified, the generated object can have up to 5 layers of nesting.
*
* **Limitation**: The parameter is not supported when used in combinations with the `documents` or `tools` parameters.
*/
export type ResponseFormatV2 = Cohere.ResponseFormatV2.Text | Cohere.ResponseFormatV2.JsonObject;
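The updated docstring describes forcing JSON output, optionally with a schema. A sketch of such a response format value (the `jsonSchema` field name follows the SDK's camelCase convention and is an assumption here, as is the example schema):

```typescript
// Illustrative response format: force a JSON object constrained by a schema
// requiring "name" and "age" fields.
const responseFormat = {
  type: "json_object" as const,
  jsonSchema: {
    type: "object",
    required: ["name", "age"],
    properties: {
      name: { type: "string" },
      age: { type: "integer" },
    },
  },
};

console.log(responseFormat.type); // "json_object"
```

Per the note above, the accompanying message should still explicitly ask the model to generate JSON.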

2 changes: 1 addition & 1 deletion src/api/types/ToolCallV2.ts
@@ -5,7 +5,7 @@
import * as Cohere from "../index";

/**
* A array of tool calls to be made.
* An array of tool calls to be made.
*/
export interface ToolCallV2 {
id?: string;
11 changes: 4 additions & 7 deletions src/serialization/resources/v2/client/requests/V2EmbedRequest.ts
@@ -14,11 +14,8 @@ export const V2EmbedRequest: core.serialization.Schema<serializers.V2EmbedReques
texts: core.serialization.list(core.serialization.string()).optional(),
images: core.serialization.list(core.serialization.string()).optional(),
model: core.serialization.string(),
inputType: core.serialization.property("input_type", EmbedInputType.optional()),
embeddingTypes: core.serialization.property(
"embedding_types",
core.serialization.list(EmbeddingType).optional()
),
inputType: core.serialization.property("input_type", EmbedInputType),
embeddingTypes: core.serialization.property("embedding_types", core.serialization.list(EmbeddingType)),
truncate: V2EmbedRequestTruncate.optional(),
});

@@ -27,8 +24,8 @@ export declare namespace V2EmbedRequest {
texts?: string[] | null;
images?: string[] | null;
model: string;
input_type?: EmbedInputType.Raw | null;
embedding_types?: EmbeddingType.Raw[] | null;
input_type: EmbedInputType.Raw;
embedding_types: EmbeddingType.Raw[];
truncate?: V2EmbedRequestTruncate.Raw | null;
}
}
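The serializer change above makes `input_type` and `embedding_types` mandatory on the wire and maps them from the camelCase request fields. A minimal sketch of that mapping (illustrative only, not the SDK's actual serialization machinery):

```typescript
// Pared-down request shape; inputType and embeddingTypes are required
// after this change.
interface V2EmbedRequestSketch {
  model: string;
  texts?: string[];
  inputType: string;
  embeddingTypes: string[];
}

// Sketch of the camelCase -> snake_case wire mapping the serializer performs.
function toWire(req: V2EmbedRequestSketch): Record<string, unknown> {
  return {
    model: req.model,
    texts: req.texts,
    input_type: req.inputType,
    embedding_types: req.embeddingTypes,
  };
}

const wire = toWire({
  model: "embed-english-v3.0", // illustrative model name
  texts: ["hello"],
  inputType: "search_document",
  embeddingTypes: ["float"],
});
console.log(wire.input_type); // "search_document"
```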
12 changes: 2 additions & 10 deletions src/serialization/types/ChatFinishReason.ts
@@ -7,16 +7,8 @@ import * as Cohere from "../../api/index";
import * as core from "../../core";

export const ChatFinishReason: core.serialization.Schema<serializers.ChatFinishReason.Raw, Cohere.ChatFinishReason> =
core.serialization.enum_([
"complete",
"stop_sequence",
"max_tokens",
"tool_call",
"error",
"content_blocked",
"error_limit",
]);
core.serialization.enum_(["COMPLETE", "STOP_SEQUENCE", "MAX_TOKENS", "TOOL_CALL", "ERROR"]);

export declare namespace ChatFinishReason {
type Raw = "complete" | "stop_sequence" | "max_tokens" | "tool_call" | "error" | "content_blocked" | "error_limit";
type Raw = "COMPLETE" | "STOP_SEQUENCE" | "MAX_TOKENS" | "TOOL_CALL" | "ERROR";
}