
Commit cceb1a4

Auto-generated API code (#3057)
1 parent 4fd0170 commit cceb1a4

4 files changed: +221 additions, -35 deletions

docs/reference/api-reference.md

Lines changed: 48 additions & 12 deletions
@@ -2297,6 +2297,16 @@ from the cluster state of the master node. In both cases the coordinating
 node will send requests for further information to each selected node.
 - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node.
 
+## client.cat.circuitBreaker [_cat.circuit_breaker]
+Get circuit breakers statistics
+
+[Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch#TODO)
+
+```ts
+client.cat.circuitBreaker()
+```
+
+
 ## client.cat.componentTemplates [_cat.component_templates]
 Get component templates.
 
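Once this addition lands, the new cat helper should be callable like any other generated API method. A minimal sketch, not part of the diff: the connection details are placeholders, and the optional `circuit_breaker_patterns` path parameter comes from the generated method further down in `src/api/api/cat.ts`.

```ts
import { Client } from '@elastic/elasticsearch'

// Placeholder connection details; point this at your own cluster.
const client = new Client({ node: 'http://localhost:9200' })

// List statistics for all circuit breakers.
const all = await client.cat.circuitBreaker()

// Restrict the listing to breakers matching a pattern (hypothetical pattern value).
const fielddata = await client.cat.circuitBreaker({ circuit_breaker_patterns: 'fielddata' })

console.log(all, fielddata)
```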
@@ -2792,6 +2802,16 @@ local cluster state. If `false` the list of selected nodes are computed
 from the cluster state of the master node. In both cases the coordinating
 node will send requests for further information to each selected node.
 - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node.
+- **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard expressions can match. If the request can target data streams, this argument
+determines whether wildcard expressions match hidden data streams. Supports a list of values,
+such as open,hidden.
+- **`allow_no_indices` (Optional, boolean)**: If false, the request returns an error if any wildcard expression, index alias, or _all value targets only
+missing or closed indices. This behavior applies even if the request targets other open indices. For example,
+a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
+- **`ignore_throttled` (Optional, boolean)**: If true, concrete, expanded or aliased indices are ignored when frozen.
+- **`ignore_unavailable` (Optional, boolean)**: If true, missing or closed indices are not included in the response.
+- **`allow_closed` (Optional, boolean)**: If true, allow closed indices to be returned in the response otherwise if false, keep the legacy behaviour
+of throwing an exception if index pattern matches closed indices
 
 ## client.cat.shards [_cat.shards]
 Get shard information.
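These query parameters pass through the generic querystring handling, so they can be set directly on the request object. A hedged sketch, reusing the configured `client` from the first example: the hunk does not name the method it documents, but in `src/api/api/cat.ts` the same parameters are added to the entry just before `cat.shards`, so `client.cat.segments` is assumed here and the index pattern is made up.

```ts
// Assumption: these options belong to the cat segments API (the hunk itself does not name it).
const segments = await client.cat.segments({
  index: 'metrics-*',                    // made-up index pattern
  expand_wildcards: ['open', 'hidden'],  // also match hidden indices/data streams
  allow_no_indices: true,                // do not error when a wildcard matches nothing
  ignore_unavailable: true,              // skip missing or closed indices
  allow_closed: false                    // keep the legacy error on closed indices
})
```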
@@ -5701,13 +5721,18 @@ To use the API, this parameter must be set to `true`.
 
 ## client.indices.downsample [_indices.downsample]
 Downsample an index.
-Aggregate a time series (TSDS) index and store pre-computed statistical summaries (`min`, `max`, `sum`, `value_count` and `avg`) for each metric field grouped by a configured time interval.
+Downsamples a time series (TSDS) index and reduces its size by keeping the last value or by pre-aggregating metrics:
+
+- When running in `aggregate` mode, it pre-calculates and stores statistical summaries (`min`, `max`, `sum`, `value_count` and `avg`)
+for each metric field grouped by a configured time interval and their dimensions.
+- When running in `last_value` mode, it keeps the last value for each metric in the configured interval and their dimensions.
+
 For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index.
 All documents within an hour interval are summarized and stored as a single document in the downsample index.
 
 NOTE: Only indices in a time series data stream are supported.
 Neither field nor document level security can be defined on the source index.
-The source index must be read only (`index.blocks.write: true`).
+The source index must be read-only (`index.blocks.write: true`).
 
 [Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-downsample)
 
@@ -5720,7 +5745,7 @@ client.indices.downsample({ index, target_index })
 #### Request (object) [_request_indices.downsample]
 - **`index` (string)**: Name of the time series index to downsample.
 - **`target_index` (string)**: Name of the index to create.
-- **`config` (Optional, { fixed_interval })**
+- **`config` (Optional, { fixed_interval, sampling_method })**
 
 ## client.indices.exists [_indices.exists]
 Check indices.
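The new `sampling_method` field on `config` selects between the two modes described above. A sketch of what a request might look like under this change; the index names and interval are illustrative, and the accepted values for `sampling_method` are assumed to mirror the `aggregate`/`last_value` modes named in the description.

```ts
// Source index must be a read-only TSDS backing index (index.blocks.write: true); names are made up.
await client.indices.downsample({
  index: '.ds-metrics-cpu-2025.01.01-000001',
  target_index: 'metrics-cpu-downsampled-1h',
  config: {
    fixed_interval: '1h',
    sampling_method: 'last_value' // or 'aggregate' for min/max/sum/value_count/avg summaries
  }
})
```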
@@ -6025,6 +6050,16 @@ Supports a list of values, such as `open,hidden`.
 - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node.
 If no response is received before the timeout expires, the request fails and returns an error.
 
+## client.indices.getAllSampleConfiguration [_indices.get_all_sample_configuration]
+Get sampling configurations for all indices and data streams
+
+[Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-all-sample-configuration)
+
+```ts
+client.indices.getAllSampleConfiguration()
+```
+
+
 ## client.indices.getDataLifecycle [_indices.get_data_lifecycle]
 Get data stream lifecycles.
 
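Like the other parameterless GET helpers, the generated method can be called with no arguments. A minimal sketch, assuming the configured `client` from the first example:

```ts
// Fetch the sampling configuration for every index and data stream in the cluster.
const samplingConfigs = await client.indices.getAllSampleConfiguration()
console.log(samplingConfigs)
```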
@@ -6489,7 +6524,7 @@ To target all data streams use `*` or `_all`.
 - **`data_retention` (Optional, string \| -1 \| 0)**: If defined, every document added to this data stream will be stored at least for this time frame.
 Any time after this duration the document could be deleted.
 When empty, every document in this data stream will be stored indefinitely.
-- **`downsampling` (Optional, { rounds })**: The downsampling configuration to execute for the managed backing index after rollover.
+- **`downsampling` (Optional, { after, fixed_interval }[])**: The downsampling configuration to execute for the managed backing index after rollover.
 - **`enabled` (Optional, boolean)**: If defined, it turns data stream lifecycle on/off (`true`/`false`) for this data stream. A data stream lifecycle
 that's disabled (enabled: `false`) will have no effect on the data stream.
 - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of data stream that wildcard patterns can match.
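The changed signature means `downsampling` is now an array of rounds (`{ after, fixed_interval }`) rather than an object wrapping a `rounds` key. The hunk does not name the method, but the parameters match the put data stream lifecycle API, so `client.indices.putDataLifecycle` is assumed in this sketch; the data stream name and intervals are illustrative.

```ts
await client.indices.putDataLifecycle({
  name: 'metrics-app-default',               // made-up data stream name
  data_retention: '90d',
  downsampling: [
    { after: '1d', fixed_interval: '1h' },   // first round: hourly rollup one day after rollover
    { after: '30d', fixed_interval: '1d' }   // later round: daily rollup after thirty days
  ]
})
```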
@@ -7568,7 +7603,7 @@ client.inference.completion({ inference_id, input })
 - **`inference_id` (string)**: The inference Id
 - **`input` (string \| string[])**: Inference input.
 Either a string or an array of strings.
-- **`task_settings` (Optional, User-defined value)**: Optional task settings
+- **`task_settings` (Optional, User-defined value)**: Task settings for the individual inference request. These settings are specific to the <task_type> you specified and override the task settings specified when initializing the service.
 - **`timeout` (Optional, string \| -1 \| 0)**: Specifies the amount of time to wait for the inference request to complete.
 
 ## client.inference.delete [_inference.delete]
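The reworded description spells out the override semantics: `task_settings` supplied on the request win over the settings stored on the endpoint. A sketch; the endpoint id and the setting key are illustrative and depend on the underlying service.

```ts
const completion = await client.inference.completion({
  inference_id: 'my-completion-endpoint',   // hypothetical endpoint id
  input: 'Summarize what index downsampling does in one sentence.',
  // Per-request override of the task settings configured when the endpoint was created.
  task_settings: { temperature: 0.2 }       // illustrative key; valid keys depend on the service
})
```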
@@ -7585,7 +7620,7 @@ client.inference.delete({ inference_id })
 #### Request (object) [_request_inference.delete]
 - **`inference_id` (string)**: The inference identifier.
 - **`task_type` (Optional, Enum("sparse_embedding" \| "text_embedding" \| "rerank" \| "completion" \| "chat_completion"))**: The task type
-- **`dry_run` (Optional, boolean)**: When true, the endpoint is not deleted and a list of ingest processors which reference this endpoint is returned.
+- **`dry_run` (Optional, boolean)**: When true, checks the semantic_text fields and inference processors that reference the endpoint and returns them in a list, but does not delete the endpoint.
 - **`force` (Optional, boolean)**: When true, the inference endpoint is forcefully deleted even if it is still being used by ingest processors or semantic text fields.
 
 ## client.inference.get [_inference.get]
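The clarified `dry_run` behavior makes it easy to check what still references an endpoint before actually removing it. A minimal sketch with a made-up endpoint id:

```ts
// Dry run: returns referencing semantic_text fields and inference processors, deletes nothing.
const references = await client.inference.delete({ inference_id: 'my-elser-endpoint', dry_run: true })
console.log(references)

// Force the deletion even if something still references the endpoint.
await client.inference.delete({ inference_id: 'my-elser-endpoint', force: true })
```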
@@ -7801,7 +7836,7 @@ client.inference.putAnthropic({ task_type, anthropic_inference_id, service, serv
 The only valid task type for the model to perform is `completion`.
 - **`anthropic_inference_id` (string)**: The unique identifier of the inference endpoint.
 - **`service` (Enum("anthropic"))**: The type of service supported for the specified task type. In this case, `anthropic`.
-- **`service_settings` ({ api_key, model_id, rate_limit })**: Settings used to install the inference model. These settings are specific to the `watsonxai` service.
+- **`service_settings` ({ api_key, model_id, rate_limit })**: Settings used to install the inference model. These settings are specific to the `anthropic` service.
 - **`chunking_settings` (Optional, { max_chunk_size, overlap, sentence_overlap, separator_group, separators, strategy })**: The chunking configuration object.
 - **`task_settings` (Optional, { max_tokens, temperature, top_k, top_p })**: Settings to configure the inference task.
 These settings are specific to the task type you specified.
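With the service name corrected (`anthropic`, not `watsonxai`), a create-endpoint call follows the signature shown in the hunk header. A sketch with placeholder credentials; the endpoint id and model id are illustrative.

```ts
await client.inference.putAnthropic({
  task_type: 'completion',
  anthropic_inference_id: 'my-anthropic-completion',   // made-up endpoint id
  service: 'anthropic',
  service_settings: {
    api_key: process.env.ANTHROPIC_API_KEY ?? '',      // placeholder credential
    model_id: 'claude-3-5-haiku-latest'                // illustrative model id
  },
  task_settings: { max_tokens: 1024 }
})
```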
@@ -7824,7 +7859,7 @@ client.inference.putAzureaistudio({ task_type, azureaistudio_inference_id, servi
 - **`task_type` (Enum("completion" \| "rerank" \| "text_embedding"))**: The type of the inference task that the model will perform.
 - **`azureaistudio_inference_id` (string)**: The unique identifier of the inference endpoint.
 - **`service` (Enum("azureaistudio"))**: The type of service supported for the specified task type. In this case, `azureaistudio`.
-- **`service_settings` ({ api_key, endpoint_type, target, provider, rate_limit })**: Settings used to install the inference model. These settings are specific to the `openai` service.
+- **`service_settings` ({ api_key, endpoint_type, target, provider, rate_limit })**: Settings used to install the inference model. These settings are specific to the `azureaistudio` service.
 - **`chunking_settings` (Optional, { max_chunk_size, overlap, sentence_overlap, separator_group, separators, strategy })**: The chunking configuration object.
 - **`task_settings` (Optional, { do_sample, max_new_tokens, temperature, top_p, user, return_documents, top_n })**: Settings to configure the inference task.
 These settings are specific to the task type you specified.
@@ -8346,7 +8381,7 @@ client.inference.sparseEmbedding({ inference_id, input })
 - **`inference_id` (string)**: The inference Id
 - **`input` (string \| string[])**: Inference input.
 Either a string or an array of strings.
-- **`task_settings` (Optional, User-defined value)**: Optional task settings
+- **`task_settings` (Optional, User-defined value)**: Task settings for the individual inference request. These settings are specific to the <task_type> you specified and override the task settings specified when initializing the service.
 - **`timeout` (Optional, string \| -1 \| 0)**: Specifies the amount of time to wait for the inference request to complete.
 
 ## client.inference.streamCompletion [_inference.stream_completion]
@@ -8372,7 +8407,7 @@ client.inference.streamCompletion({ inference_id, input })
 It can be a single string or an array.
 
 NOTE: Inference endpoints for the completion task type currently only support a single string as input.
-- **`task_settings` (Optional, User-defined value)**: Optional task settings
+- **`task_settings` (Optional, User-defined value)**: Task settings for the individual inference request. These settings are specific to the <task_type> you specified and override the task settings specified when initializing the service.
 - **`timeout` (Optional, string \| -1 \| 0)**: The amount of time to wait for the inference request to complete.
 
 ## client.inference.textEmbedding [_inference.text_embedding]
@@ -8400,7 +8435,7 @@ Accepted values depend on the configured inference service, refer to the relevan
 
 > info
 > The `input_type` parameter specified on the root level of the request body will take precedence over the `input_type` parameter specified in `task_settings`.
-- **`task_settings` (Optional, User-defined value)**: Optional task settings
+- **`task_settings` (Optional, User-defined value)**: Task settings for the individual inference request. These settings are specific to the <task_type> you specified and override the task settings specified when initializing the service.
 - **`timeout` (Optional, string \| -1 \| 0)**: Specifies the amount of time to wait for the inference request to complete.
 
 ## client.inference.update [_inference.update]
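The note above is the part worth remembering: a root-level `input_type` beats one nested in `task_settings`. A sketch; the endpoint id is hypothetical and the accepted `input_type` values depend on the configured service.

```ts
const embeddings = await client.inference.textEmbedding({
  inference_id: 'my-text-embedding',          // hypothetical endpoint id
  input: ['first passage', 'second passage'],
  input_type: 'search',                       // root-level value takes precedence...
  task_settings: { input_type: 'ingest' }     // ...over this nested value, per the note above
})
```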
@@ -12996,7 +13031,8 @@ It must not be negative.
 By default, you cannot page through more than 10,000 hits using the `from` and `size` parameters.
 To page through more hits, use the `search_after` parameter.
 - **`sort` (Optional, string \| { _score, _doc, _geo_distance, _script } \| string \| { _score, _doc, _geo_distance, _script }[])**: The sort definition.
-You can sort on `username`, `roles`, or `enabled`.
+You can sort on `name`, `description`, `metadata`, `applications.application`, `applications.privileges`,
+and `applications.resources`.
 In addition, sort can also be applied to the `_doc` field to sort by index order.
 - **`size` (Optional, number)**: The number of hits to return.
 It must not be negative.
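The corrected sort fields describe roles rather than users. The method is not named in this hunk, so `client.security.queryRole` is an assumption in this sketch:

```ts
// Assumption: this hunk documents the security query role API.
const roles = await client.security.queryRole({
  sort: ['name'],   // one of the newly documented sortable fields
  size: 50
})
console.log(roles)
```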

src/api/api/cat.ts

Lines changed: 74 additions & 2 deletions
@@ -58,6 +58,13 @@ export default class Cat {
         'master_timeout'
       ]
     },
+    'cat.circuit_breaker': {
+      path: [
+        'circuit_breaker_patterns'
+      ],
+      body: [],
+      query: []
+    },
     'cat.component_templates': {
       path: [
         'name'
@@ -251,7 +258,12 @@ export default class Cat {
         'h',
         's',
         'local',
-        'master_timeout'
+        'master_timeout',
+        'expand_wildcards',
+        'allow_no_indices',
+        'ignore_throttled',
+        'ignore_unavailable',
+        'allow_closed'
       ]
     },
     'cat.shards': {
@@ -451,6 +463,61 @@ export default class Cat {
     return await this.transport.request({ path, method, querystring, body, meta }, options)
   }
 
+  /**
+    * Get circuit breakers statistics
+    * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch#TODO | Elasticsearch API documentation}
+    */
+  async circuitBreaker (this: That, params?: T.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>
+  async circuitBreaker (this: That, params?: T.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>
+  async circuitBreaker (this: That, params?: T.TODO, options?: TransportRequestOptions): Promise<T.TODO>
+  async circuitBreaker (this: That, params?: T.TODO, options?: TransportRequestOptions): Promise<any> {
+    const {
+      path: acceptedPath
+    } = this[kAcceptedParams]['cat.circuit_breaker']
+
+    const userQuery = params?.querystring
+    const querystring: Record<string, any> = userQuery != null ? { ...userQuery } : {}
+
+    let body: Record<string, any> | string | undefined
+    const userBody = params?.body
+    if (userBody != null) {
+      if (typeof userBody === 'string') {
+        body = userBody
+      } else {
+        body = { ...userBody }
+      }
+    }
+
+    params = params ?? {}
+    for (const key in params) {
+      if (acceptedPath.includes(key)) {
+        continue
+      } else if (key !== 'body' && key !== 'querystring') {
+        querystring[key] = params[key]
+      }
+    }
+
+    let method = ''
+    let path = ''
+    if (params.circuit_breaker_patterns != null) {
+      method = 'GET'
+      path = `/_cat/circuit_breaker/${encodeURIComponent(params.circuit_breaker_patterns.toString())}`
+    } else {
+      method = 'GET'
+      path = '/_cat/circuit_breaker'
+    }
+    const meta: TransportRequestMetadata = {
+      name: 'cat.circuit_breaker',
+      pathParts: {
+        circuit_breaker_patterns: params.circuit_breaker_patterns
+      },
+      acceptedParams: [
+        'circuit_breaker_patterns'
+      ]
+    }
+    return await this.transport.request({ path, method, querystring, body, meta }, options)
+  }
+
   /**
     * Get component templates. Get information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.
     * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-component-templates | Elasticsearch API documentation}
@@ -1434,7 +1501,12 @@ export default class Cat {
         'h',
         's',
         'local',
-        'master_timeout'
+        'master_timeout',
+        'expand_wildcards',
+        'allow_no_indices',
+        'ignore_throttled',
+        'ignore_unavailable',
+        'allow_closed'
       ]
     }
     return await this.transport.request({ path, method, querystring, body, meta }, options)
