From ac2da09e6930e3988d1289717e2df5d4b7408f17 Mon Sep 17 00:00:00 2001
From: aws-sdk-cpp-automation

- * The byte value of the file to attach, encoded as Base-64 string. The maximum
- * size of all files that is attached is 10MB. You can attach a maximum of 5 files.
+ * The raw bytes of the file to attach. The maximum size of all files that is
+ * attached is 10MB. You can attach a maximum of 5 files.

 * The identifier for the model that you want to call. The
- * modelId to provide depends on the type of model that you use:
- * If you use a base model, specify the model ID or its ARN. For a
- * list of model IDs for base models, see Amazon
+ * modelId to provide depends on the type of model or throughput that
+ * you use: If you use a base model, specify the model ID or its
+ * ARN. For a list of model IDs for base models, see Amazon
 * Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User
+ * Guide. If you use an inference profile, specify the inference
+ * profile ID or its ARN. For a list of inference profile IDs, see Supported
+ * Regions and models for cross-region inference in the Amazon Bedrock User
 * Guide. If you use a provisioned model, specify the ARN of the
 * Provisioned Throughput. For more information, see Run
@@ -56,7 +60,9 @@ namespace Model
* more information, see Use
* a custom model in Amazon Bedrock in the Amazon Bedrock User Guide. The Converse API doesn't support imported
+ * models.

 * The ID for the model. The
- * modelId to provide depends on the type of model that you use:
- * If you use a base model,
- * specify the model ID or its ARN. For a list of model IDs for base models, see
+ * modelId to provide depends on the type of model or throughput that you use:
+ * If you use a base
+ * model, specify the model ID or its ARN. For a list of model IDs for base models,
+ * see Amazon
 * Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User
+ * Guide. If you use an inference profile, specify the inference
+ * profile ID or its ARN. For a list of inference profile IDs, see Supported
+ * Regions and models for cross-region inference in the Amazon Bedrock User
 * Guide. If you use a provisioned model, specify the ARN of the
 * Provisioned Throughput. For more information, see Run
@@ -77,7 +82,9 @@ namespace Model
* more information, see Use
* a custom model in Amazon Bedrock in the Amazon Bedrock User Guide. The Converse API doesn't support imported
+ * models.

 * The modelId to provide depends on the type of model that you use:
+ * If you use an imported
+ * model, specify the ARN of the imported model. You can get the model ARN from
+ * a successful call to CreateModelImportJob
+ * or from the Imported models page in the Amazon Bedrock console.

- * The modelId to provide depends on
- * the type of model that you use:
+ * The modelId to provide depends on the type of model that you use:
+ * If you use an imported
+ * model, specify the ARN of the imported model. You can get the model ARN from
+ * a successful call to CreateModelImportJob
+ * or from the Imported models page in the Amazon Bedrock console.
- * After you create a solution, you can’t change its configuration.
- * By default, all new solutions use automatic training. With automatic training,
- * you incur training costs while your solution is active. You can't stop automatic
- * training for a solution. To avoid unnecessary costs, make sure to delete the
- * solution when you are finished. For information about training costs, see Amazon Personalize
- * pricing.
- * Creates the configuration for training a model
- * (creating a solution version). This configuration includes the recipe to use for
- * model training and optional training configuration, such as columns to use in
- * training and feature transformation parameters. For more information about
- * configuring a solution, see
+ * By default, all new solutions use automatic training. With
+ * automatic training, you incur training costs while your solution is active. To
+ * avoid unnecessary costs, when you are finished you can update
+ * the solution to turn off automatic training. For information about training
+ * costs, see Amazon
+ * Personalize pricing.
+ * Creates the configuration for
+ * training a model (creating a solution version). This configuration includes the
+ * recipe to use for model training and optional training configuration, such as
+ * columns to use in training and feature transformation parameters. For more
+ * information about configuring a solution, see Creating
 * and configuring a solution.
By default, new solutions use automatic
* training to create solution versions every 7 days. You can change the training
- * frequency. Automatic solution version creation starts one hour after the
+ * frequency. Automatic solution version creation starts within one hour after the
* solution is ACTIVE. If you manually create a solution version within the hour,
* the solution skips the first automatic training. For more information, see Configuring
@@ -739,6 +739,8 @@ namespace Personalize
* If you use manual training, the status must be ACTIVE before you call
* CreateSolutionVersion
 * .
 * Related APIs
+ * UpdateSolution
 * CreateSolutionVersion

@@ -2393,6 +2395,41 @@ namespace Personalize
        return SubmitAsync(&PersonalizeClient::UpdateRecommender, request, handler, context);
    }

+        /**
+         * Updates an Amazon Personalize solution to use a different automatic training
+         * configuration. When you update a solution, you can change whether the solution
+         * uses automatic training, and you can change the training frequency. For more
+         * information about updating a solution, see Updating
+         * a solution.
+         * A solution update can be in one of the following
+         * states:
+         * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE
+         * FAILED
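The state sequence above is the usual poll-until-terminal pattern. A minimal sketch, using a hypothetical `describe_solution` callable as a stand-in for the real `DescribeSolution` operation (a real poller would also sleep between calls):

```python
# Poll a solution update until it reaches a terminal state.
# describe_solution is a hypothetical stub standing in for the real
# DescribeSolution API call; it returns the solution description dict.

TERMINAL_STATES = {"ACTIVE", "CREATE FAILED"}

def wait_for_solution_update(describe_solution, solution_arn):
    """Return the final status of the latest solution update."""
    while True:
        update = describe_solution(solution_arn)["latestSolutionUpdate"]
        if update["status"] in TERMINAL_STATES:
            return update["status"]

# Stubbed responses walking the documented state sequence.
states = iter(["CREATE PENDING", "CREATE IN_PROGRESS", "ACTIVE"])
stub = lambda arn: {"latestSolutionUpdate": {"status": next(states)}}
assert wait_for_solution_update(stub, "arn:aws:personalize:::solution/demo") == "ACTIVE"
```

The stub and the solution ARN here are illustrative only; the real response shape is whatever `DescribeSolution` returns.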
To get the status of a solution update, call the DescribeSolution
+ * API operation and find the status in the latestSolutionUpdate
 * .
+ * You can change the training frequency by specifying a
+ * schedulingExpression in the
 * AutoTrainingConfig as part of solution configuration. For more
* information about automatic training, see Configuring
- * automatic training. Automatic solution version creation starts one
- * hour after the solution is ACTIVE. If you manually create a solution version
- * within the hour, the solution skips the first automatic training.
- * After
- * training starts, you can get the solution version's Amazon Resource Name (ARN)
- * with the .
+ * automatic training. Automatic solution version creation starts
+ * within one hour after the solution is ACTIVE. If you manually create a solution
+ * version within the hour, the solution skips the first automatic training.
+ * After training starts, you can get the solution version's Amazon Resource
+ * Name (ARN) with the ListSolutionVersions
 * API operation. To get its status, use the DescribeSolutionVersion.

@@ -163,10 +163,11 @@ namespace Model
    ///@{
    /**
- *
The configuration to use with the solution. When performAutoML
- * is set to true, Amazon Personalize only evaluates the autoMLConfig
- * section of the solution configuration.
Amazon Personalize doesn't
- * support configuring the hpoObjective
at this time.
The configuration properties for the solution. When
+ * performAutoML
is set to true, Amazon Personalize only evaluates the
+ * autoMLConfig
section of the solution configuration.
Amazon Personalize doesn't support configuring the hpoObjective
+ * at this time.
The status of the recommender update.
- * A recommender can be in one of
- * the following states:
- * CREATE PENDING > CREATE IN_PROGRESS
- * > ACTIVE -or- CREATE FAILED
- * STOP PENDING > STOP
- * IN_PROGRESS > INACTIVE > START PENDING > START IN_PROGRESS >
- * ACTIVE
- * DELETE PENDING > DELETE IN_PROGRESS
+ * The status of the recommender update. A recommender update can be in one of
+ * the following states:
+ * CREATE PENDING > CREATE IN_PROGRESS > ACTIVE
+ * -or- CREATE FAILED
*/ inline const Aws::String& GetStatus() const{ return m_status; } inline bool StatusHasBeenSet() const { return m_statusHasBeenSet; } diff --git a/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/Solution.h b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/Solution.h index 4f88ad611bc..d1f4a065b53 100644 --- a/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/Solution.h +++ b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/Solution.h @@ -10,6 +10,7 @@ #includeAfter you create a solution, you can’t change its configuration. - * By default, all new solutions use automatic training. With automatic training, - * you incur training costs while your solution is active. You can't stop automatic - * training for a solution. To avoid unnecessary costs, make sure to delete the - * solution when you are finished. For information about training costs, see Amazon Personalize - * pricing.
- * An object that provides information about a
- * solution. A solution includes the custom recipe, customized parameters, and
- * trained models (Solution Versions) that Amazon Personalize uses to generate
+ * By default, all new solutions use automatic training. With
+ * automatic training, you incur training costs while your solution is active. To
+ * avoid unnecessary costs, when you are finished you can update
+ * the solution to turn off automatic training. For information about training
+ * costs, see Amazon
+ * Personalize pricing.
+ * An object that provides information
+ * about a solution. A solution includes the custom recipe, customized parameters,
+ * and trained models (Solution Versions) that Amazon Personalize uses to generate
 * recommendations.
After you create a solution, you can’t change its
* configuration. If you need to make changes, you can clone
@@ -248,6 +249,18 @@ namespace Model
inline Solution& WithLatestSolutionVersion(const SolutionVersionSummary& value) { SetLatestSolutionVersion(value); return *this;}
inline Solution& WithLatestSolutionVersion(SolutionVersionSummary&& value) { SetLatestSolutionVersion(std::move(value)); return *this;}
///@}
+
+ ///@{
+ /**
+   * Provides a summary of the latest updates to the solution.
+   * The configuration details of the solution update.
+   * See Also:
+   * AWS
+   * API Reference

+ * Provides a summary of the properties of a solution update. For a complete
+ * listing, call the DescribeSolution
+ * API.

+ * The configuration details of the solution update.
+ */ + inline const SolutionUpdateConfig& GetSolutionUpdateConfig() const{ return m_solutionUpdateConfig; } + inline bool SolutionUpdateConfigHasBeenSet() const { return m_solutionUpdateConfigHasBeenSet; } + inline void SetSolutionUpdateConfig(const SolutionUpdateConfig& value) { m_solutionUpdateConfigHasBeenSet = true; m_solutionUpdateConfig = value; } + inline void SetSolutionUpdateConfig(SolutionUpdateConfig&& value) { m_solutionUpdateConfigHasBeenSet = true; m_solutionUpdateConfig = std::move(value); } + inline SolutionUpdateSummary& WithSolutionUpdateConfig(const SolutionUpdateConfig& value) { SetSolutionUpdateConfig(value); return *this;} + inline SolutionUpdateSummary& WithSolutionUpdateConfig(SolutionUpdateConfig&& value) { SetSolutionUpdateConfig(std::move(value)); return *this;} + ///@} + + ///@{ + /** + *The status of the solution update. A solution update can be in one of the + * following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- + * CREATE FAILED
+ */ + inline const Aws::String& GetStatus() const{ return m_status; } + inline bool StatusHasBeenSet() const { return m_statusHasBeenSet; } + inline void SetStatus(const Aws::String& value) { m_statusHasBeenSet = true; m_status = value; } + inline void SetStatus(Aws::String&& value) { m_statusHasBeenSet = true; m_status = std::move(value); } + inline void SetStatus(const char* value) { m_statusHasBeenSet = true; m_status.assign(value); } + inline SolutionUpdateSummary& WithStatus(const Aws::String& value) { SetStatus(value); return *this;} + inline SolutionUpdateSummary& WithStatus(Aws::String&& value) { SetStatus(std::move(value)); return *this;} + inline SolutionUpdateSummary& WithStatus(const char* value) { SetStatus(value); return *this;} + ///@} + + ///@{ + /** + *Whether the solution automatically creates solution versions.
+ */ + inline bool GetPerformAutoTraining() const{ return m_performAutoTraining; } + inline bool PerformAutoTrainingHasBeenSet() const { return m_performAutoTrainingHasBeenSet; } + inline void SetPerformAutoTraining(bool value) { m_performAutoTrainingHasBeenSet = true; m_performAutoTraining = value; } + inline SolutionUpdateSummary& WithPerformAutoTraining(bool value) { SetPerformAutoTraining(value); return *this;} + ///@} + + ///@{ + /** + *The date and time (in Unix format) that the solution update was created.
+ */ + inline const Aws::Utils::DateTime& GetCreationDateTime() const{ return m_creationDateTime; } + inline bool CreationDateTimeHasBeenSet() const { return m_creationDateTimeHasBeenSet; } + inline void SetCreationDateTime(const Aws::Utils::DateTime& value) { m_creationDateTimeHasBeenSet = true; m_creationDateTime = value; } + inline void SetCreationDateTime(Aws::Utils::DateTime&& value) { m_creationDateTimeHasBeenSet = true; m_creationDateTime = std::move(value); } + inline SolutionUpdateSummary& WithCreationDateTime(const Aws::Utils::DateTime& value) { SetCreationDateTime(value); return *this;} + inline SolutionUpdateSummary& WithCreationDateTime(Aws::Utils::DateTime&& value) { SetCreationDateTime(std::move(value)); return *this;} + ///@} + + ///@{ + /** + *The date and time (in Unix time) that the solution update was last + * updated.
+ */ + inline const Aws::Utils::DateTime& GetLastUpdatedDateTime() const{ return m_lastUpdatedDateTime; } + inline bool LastUpdatedDateTimeHasBeenSet() const { return m_lastUpdatedDateTimeHasBeenSet; } + inline void SetLastUpdatedDateTime(const Aws::Utils::DateTime& value) { m_lastUpdatedDateTimeHasBeenSet = true; m_lastUpdatedDateTime = value; } + inline void SetLastUpdatedDateTime(Aws::Utils::DateTime&& value) { m_lastUpdatedDateTimeHasBeenSet = true; m_lastUpdatedDateTime = std::move(value); } + inline SolutionUpdateSummary& WithLastUpdatedDateTime(const Aws::Utils::DateTime& value) { SetLastUpdatedDateTime(value); return *this;} + inline SolutionUpdateSummary& WithLastUpdatedDateTime(Aws::Utils::DateTime&& value) { SetLastUpdatedDateTime(std::move(value)); return *this;} + ///@} + + ///@{ + /** + *If a solution update fails, the reason behind the failure.
+ */ + inline const Aws::String& GetFailureReason() const{ return m_failureReason; } + inline bool FailureReasonHasBeenSet() const { return m_failureReasonHasBeenSet; } + inline void SetFailureReason(const Aws::String& value) { m_failureReasonHasBeenSet = true; m_failureReason = value; } + inline void SetFailureReason(Aws::String&& value) { m_failureReasonHasBeenSet = true; m_failureReason = std::move(value); } + inline void SetFailureReason(const char* value) { m_failureReasonHasBeenSet = true; m_failureReason.assign(value); } + inline SolutionUpdateSummary& WithFailureReason(const Aws::String& value) { SetFailureReason(value); return *this;} + inline SolutionUpdateSummary& WithFailureReason(Aws::String&& value) { SetFailureReason(std::move(value)); return *this;} + inline SolutionUpdateSummary& WithFailureReason(const char* value) { SetFailureReason(value); return *this;} + ///@} + private: + + SolutionUpdateConfig m_solutionUpdateConfig; + bool m_solutionUpdateConfigHasBeenSet = false; + + Aws::String m_status; + bool m_statusHasBeenSet = false; + + bool m_performAutoTraining; + bool m_performAutoTrainingHasBeenSet = false; + + Aws::Utils::DateTime m_creationDateTime; + bool m_creationDateTimeHasBeenSet = false; + + Aws::Utils::DateTime m_lastUpdatedDateTime; + bool m_lastUpdatedDateTimeHasBeenSet = false; + + Aws::String m_failureReason; + bool m_failureReasonHasBeenSet = false; + }; + +} // namespace Model +} // namespace Personalize +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/UpdateSolutionRequest.h b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/UpdateSolutionRequest.h new file mode 100644 index 00000000000..ef00f4f927f --- /dev/null +++ b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/UpdateSolutionRequest.h @@ -0,0 +1,101 @@ +/** + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: Apache-2.0. 
+ */ + +#pragma once +#includeThe Amazon Resource Name (ARN) of the solution to update.
+ */ + inline const Aws::String& GetSolutionArn() const{ return m_solutionArn; } + inline bool SolutionArnHasBeenSet() const { return m_solutionArnHasBeenSet; } + inline void SetSolutionArn(const Aws::String& value) { m_solutionArnHasBeenSet = true; m_solutionArn = value; } + inline void SetSolutionArn(Aws::String&& value) { m_solutionArnHasBeenSet = true; m_solutionArn = std::move(value); } + inline void SetSolutionArn(const char* value) { m_solutionArnHasBeenSet = true; m_solutionArn.assign(value); } + inline UpdateSolutionRequest& WithSolutionArn(const Aws::String& value) { SetSolutionArn(value); return *this;} + inline UpdateSolutionRequest& WithSolutionArn(Aws::String&& value) { SetSolutionArn(std::move(value)); return *this;} + inline UpdateSolutionRequest& WithSolutionArn(const char* value) { SetSolutionArn(value); return *this;} + ///@} + + ///@{ + /** + *Whether the solution uses automatic training to create new solution versions
+ * (trained models). You can change the training frequency by specifying a
+ * schedulingExpression
in the AutoTrainingConfig
as part
+ * of solution configuration.
+ * If you turn on automatic training, the first
+ * automatic training starts within one hour after the solution update completes.
+ * If you manually create a solution version within the hour, the solution skips
+ * the first automatic training. For more information about automatic training, see
+ * Configuring
+ * automatic training.
+ * After training starts, you can get the solution
+ * version's Amazon Resource Name (ARN) with the ListSolutionVersions
+ * API operation. To get its status, use the DescribeSolutionVersion.
+ *
+ */ + inline bool GetPerformAutoTraining() const{ return m_performAutoTraining; } + inline bool PerformAutoTrainingHasBeenSet() const { return m_performAutoTrainingHasBeenSet; } + inline void SetPerformAutoTraining(bool value) { m_performAutoTrainingHasBeenSet = true; m_performAutoTraining = value; } + inline UpdateSolutionRequest& WithPerformAutoTraining(bool value) { SetPerformAutoTraining(value); return *this;} + ///@} + + ///@{ + /** + *The new configuration details of the solution.
+ */ + inline const SolutionUpdateConfig& GetSolutionUpdateConfig() const{ return m_solutionUpdateConfig; } + inline bool SolutionUpdateConfigHasBeenSet() const { return m_solutionUpdateConfigHasBeenSet; } + inline void SetSolutionUpdateConfig(const SolutionUpdateConfig& value) { m_solutionUpdateConfigHasBeenSet = true; m_solutionUpdateConfig = value; } + inline void SetSolutionUpdateConfig(SolutionUpdateConfig&& value) { m_solutionUpdateConfigHasBeenSet = true; m_solutionUpdateConfig = std::move(value); } + inline UpdateSolutionRequest& WithSolutionUpdateConfig(const SolutionUpdateConfig& value) { SetSolutionUpdateConfig(value); return *this;} + inline UpdateSolutionRequest& WithSolutionUpdateConfig(SolutionUpdateConfig&& value) { SetSolutionUpdateConfig(std::move(value)); return *this;} + ///@} + private: + + Aws::String m_solutionArn; + bool m_solutionArnHasBeenSet = false; + + bool m_performAutoTraining; + bool m_performAutoTrainingHasBeenSet = false; + + SolutionUpdateConfig m_solutionUpdateConfig; + bool m_solutionUpdateConfigHasBeenSet = false; + }; + +} // namespace Model +} // namespace Personalize +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/UpdateSolutionResult.h b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/UpdateSolutionResult.h new file mode 100644 index 00000000000..3e7e9df5717 --- /dev/null +++ b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/UpdateSolutionResult.h @@ -0,0 +1,67 @@ +/** + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: Apache-2.0. + */ + +#pragma once +#includeThe same solution Amazon Resource Name (ARN) as given in the request.
+ */ + inline const Aws::String& GetSolutionArn() const{ return m_solutionArn; } + inline void SetSolutionArn(const Aws::String& value) { m_solutionArn = value; } + inline void SetSolutionArn(Aws::String&& value) { m_solutionArn = std::move(value); } + inline void SetSolutionArn(const char* value) { m_solutionArn.assign(value); } + inline UpdateSolutionResult& WithSolutionArn(const Aws::String& value) { SetSolutionArn(value); return *this;} + inline UpdateSolutionResult& WithSolutionArn(Aws::String&& value) { SetSolutionArn(std::move(value)); return *this;} + inline UpdateSolutionResult& WithSolutionArn(const char* value) { SetSolutionArn(value); return *this;} + ///@} + + ///@{ + + inline const Aws::String& GetRequestId() const{ return m_requestId; } + inline void SetRequestId(const Aws::String& value) { m_requestId = value; } + inline void SetRequestId(Aws::String&& value) { m_requestId = std::move(value); } + inline void SetRequestId(const char* value) { m_requestId.assign(value); } + inline UpdateSolutionResult& WithRequestId(const Aws::String& value) { SetRequestId(value); return *this;} + inline UpdateSolutionResult& WithRequestId(Aws::String&& value) { SetRequestId(std::move(value)); return *this;} + inline UpdateSolutionResult& WithRequestId(const char* value) { SetRequestId(value); return *this;} + ///@} + private: + + Aws::String m_solutionArn; + + Aws::String m_requestId; + }; + +} // namespace Model +} // namespace Personalize +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-personalize/source/PersonalizeClient.cpp b/generated/src/aws-cpp-sdk-personalize/source/PersonalizeClient.cpp index 947e80f48c8..8e054eb50d7 100644 --- a/generated/src/aws-cpp-sdk-personalize/source/PersonalizeClient.cpp +++ b/generated/src/aws-cpp-sdk-personalize/source/PersonalizeClient.cpp @@ -91,6 +91,7 @@ #includeMinimum level of diagnostics to return. ERROR
returns only
+ * ERROR
diagnostics, whereas WARNING
returns both
+ * WARNING
and ERROR
diagnostics. The default is
+ * ERROR
.
The maximum number of diagnostics that are returned per call. The default and
+ * maximum value is 100. Setting the value to 0 will also use the default of
+ * 100.
If the number of diagnostics returned in the response exceeds
+ * maxResults
, the value of the truncated
field in the
+ * response will be set to true
.
The result value will be true
if the number of diagnostics found
+ * in the workflow definition exceeds maxResults
. When all diagnostics
+ * results are returned, the value will be false
.
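The `maxResults` / `truncated` contract described above can be sketched as follows. The `fetch_diagnostics` function is a hypothetical stand-in that mimics a service returning at most `maxResults` diagnostics and flagging when more were found:

```python
# Sketch of the documented semantics: at most max_results diagnostics are
# returned (default and maximum 100; 0 also means the default), and
# 'truncated' is set when the definition contains more than that.

def fetch_diagnostics(all_diagnostics, max_results=100):
    limit = max_results if max_results else 100  # 0 falls back to the default
    page = all_diagnostics[:limit]
    return {"diagnostics": page, "truncated": len(all_diagnostics) > limit}

found = [f"WARNING: unreachable step {i}" for i in range(150)]
resp = fetch_diagnostics(found, max_results=0)  # 0 behaves like the default 100
assert len(resp["diagnostics"]) == 100
assert resp["truncated"] is True
```

This is a pure-logic illustration of the parameter semantics, not the real API, which takes a workflow definition rather than a prebuilt list.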
- * Attaches an IAM policy to the specified resource. Use this to share a rule
- * group across accounts.
- * You must be the owner of the rule group to perform
- * this operation.
- * This action is subject to the following restrictions:
- * You can attach only one policy with each
- * PutPermissionPolicy request.
+ * Use this to share a rule group with other accounts.
+ * This action
+ * attaches an IAM policy to the specified resource. You must be the owner of the
+ * rule group to perform this operation.
+ * This action is subject to the
+ * following restrictions:
+ * You can attach only one policy with
+ * each PutPermissionPolicy request.
 * The ARN in the
 * request must be a valid WAF RuleGroup ARN and the rule group must exist
 * in the same Region.
 * The user making the request must be the
 * owner of the rule group.
+ * If a rule group has been shared with
+ * your account, you can access it through the call GetRuleGroup, and
+ * you can reference it in CreateWebACL and UpdateWebACL.
+ * Rule groups that are shared with you don't appear in your WAF console rule
+ * groups listing.
" + "documentation":"The raw bytes of the file to attach. The maximum size of all files that is attached is 10MB. You can attach a maximum of 5 files.
" }, "mediaType":{ "shape":"MimeType", @@ -2628,7 +2630,6 @@ "RetrievalFilterList":{ "type":"list", "member":{"shape":"RetrievalFilter"}, - "max":5, "min":2 }, "RetrievalResultConfluenceLocation":{ diff --git a/tools/code-generation/api-descriptions/bedrock-runtime-2023-09-30.normal.json b/tools/code-generation/api-descriptions/bedrock-runtime-2023-09-30.normal.json index 0989dc48c81..283f7d6eb4a 100644 --- a/tools/code-generation/api-descriptions/bedrock-runtime-2023-09-30.normal.json +++ b/tools/code-generation/api-descriptions/bedrock-runtime-2023-09-30.normal.json @@ -368,7 +368,7 @@ "members":{ "modelId":{ "shape":"ConversationalModelId", - "documentation":"The identifier for the model that you want to call.
The modelId
to provide depends on the type of model that you use:
If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
The identifier for the model that you want to call.
The modelId
to provide depends on the type of model or throughput that you use:
If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see Supported Regions and models for cross-region inference in the Amazon Bedrock User Guide.
If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
The Converse API doesn't support imported models.
", "location":"uri", "locationName":"modelId" }, @@ -542,7 +542,7 @@ "members":{ "modelId":{ "shape":"ConversationalModelId", - "documentation":"The ID for the model.
The modelId
to provide depends on the type of model that you use:
If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
The ID for the model.
The modelId
to provide depends on the type of model or throughput that you use:
If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see Supported Regions and models for cross-region inference in the Amazon Bedrock User Guide.
If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
The Converse API doesn't support imported models.
", "location":"uri", "locationName":"modelId" }, @@ -1448,7 +1448,7 @@ "type":"string", "max":2048, "min":1, - "pattern":"(arn:aws(-[^:]+)?:bedrock:[a-z0-9-]{1,20}:(([0-9]{12}:custom-model/[a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}/[a-z0-9]{12})|(:foundation-model/[a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}([.:]?[a-z0-9-]{1,63}))|([0-9]{12}:provisioned-model/[a-z0-9]{12})|([0-9]{12}:inference-profile/[a-zA-Z0-9-:.]+)))|([a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}([.:]?[a-z0-9-]{1,63}))|(([0-9a-zA-Z][_-]?)+)|([a-zA-Z0-9-:.]+)" + "pattern":"(arn:aws(-[^:]+)?:bedrock:[a-z0-9-]{1,20}:(([0-9]{12}:custom-model/[a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}/[a-z0-9]{12})|(:foundation-model/[a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}([.:]?[a-z0-9-]{1,63}))|([0-9]{12}:imported-model/[a-z0-9]{12})|([0-9]{12}:provisioned-model/[a-z0-9]{12})|([0-9]{12}:inference-profile/[a-zA-Z0-9-:.]+)))|([a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}([.:]?[a-z0-9-]{1,63}))|(([0-9a-zA-Z][_-]?)+)|([a-zA-Z0-9-:.]+)" }, "InvokeModelRequest":{ "type":"structure", @@ -1475,7 +1475,7 @@ }, "modelId":{ "shape":"InvokeModelIdentifier", - "documentation":"The unique identifier of the model to invoke to run inference.
The modelId
to provide depends on the type of model that you use:
If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
The unique identifier of the model to invoke to run inference.
The modelId
to provide depends on the type of model that you use:
If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
If you use an imported model, specify the ARN of the imported model. You can get the model ARN from a successful call to CreateModelImportJob or from the Imported models page in the Amazon Bedrock console.
The unique identifier of the model to invoke to run inference.
The modelId
to provide depends on the type of model that you use:
If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
The unique identifier of the model to invoke to run inference.
The modelId
to provide depends on the type of model that you use:
If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
If you use an imported model, specify the ARN of the imported model. You can get the model ARN from a successful call to CreateModelImportJob or from the Imported models page in the Amazon Bedrock console.
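The accepted modelId forms above differ mainly in the ARN's resource type. As a rough illustration (the resource-type names foundation-model, provisioned-model, and imported-model are assumptions about ARN layout, not taken from this model description), a client could pre-classify the value it is about to send:

```python
def classify_model_id(model_id: str) -> str:
    """Return a rough category for a Bedrock modelId value.

    Illustrative only: the resource-type strings below are assumed,
    not part of the API description above.
    """
    if not model_id.startswith("arn:"):
        return "model-id"  # a plain base-model ID such as "anthropic.claude-v2"
    # ARN layout: arn:partition:service:region:account:resource-type/resource-id
    parts = model_id.split(":", 5)
    if len(parts) < 6:
        return "unknown-arn"
    resource_type = parts[5].split("/", 1)[0]
    return {
        "foundation-model": "base-model-arn",
        "provisioned-model": "provisioned-model-arn",
        "imported-model": "imported-model-arn",
    }.get(resource_type, "unknown-arn")
```

A check like this only routes the value; the service remains the authority on which forms a given operation accepts.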
The model specified in the request is not ready to serve inference requests.
", + "documentation":"The model specified in the request is not ready to serve inference requests. The AWS SDK will automatically retry the operation up to 5 times. For information about configuring automatic retries, see Retry behavior in the AWS SDKs and Tools reference guide.
", "error":{ "httpStatusCode":429, "senderFault":true }, - "exception":true + "exception":true, + "retryable":{"throttling":false} }, "ModelOutputs":{ "type":"list", diff --git a/tools/code-generation/api-descriptions/personalize-2018-05-22.normal.json b/tools/code-generation/api-descriptions/personalize-2018-05-22.normal.json index 6e547142686..b0a368f1c59 100644 --- a/tools/code-generation/api-descriptions/personalize-2018-05-22.normal.json +++ b/tools/code-generation/api-descriptions/personalize-2018-05-22.normal.json @@ -11,7 +11,8 @@ "signatureVersion":"v4", "signingName":"personalize", "targetPrefix":"AmazonPersonalize", - "uid":"personalize-2018-05-22" + "uid":"personalize-2018-05-22", + "auth":["aws.auth#sigv4"] }, "operations":{ "CreateBatchInferenceJob":{ @@ -263,7 +264,7 @@ {"shape":"ResourceInUseException"}, {"shape":"TooManyTagsException"} ], - "documentation":"After you create a solution, you can’t change its configuration. By default, all new solutions use automatic training. With automatic training, you incur training costs while your solution is active. You can't stop automatic training for a solution. To avoid unnecessary costs, make sure to delete the solution when you are finished. For information about training costs, see Amazon Personalize pricing.
Creates the configuration for training a model (creating a solution version). This configuration includes the recipe to use for model training and optional training configuration, such as columns to use in training and feature transformation parameters. For more information about configuring a solution, see Creating and configuring a solution.
By default, new solutions use automatic training to create solution versions every 7 days. You can change the training frequency. Automatic solution version creation starts one hour after the solution is ACTIVE. If you manually create a solution version within the hour, the solution skips the first automatic training. For more information, see Configuring automatic training.
To turn off automatic training, set performAutoTraining to false. If you turn off automatic training, you must manually create a solution version by calling the CreateSolutionVersion operation.
After training starts, you can get the solution version's Amazon Resource Name (ARN) with the ListSolutionVersions API operation. To get its status, use the DescribeSolutionVersion.
After training completes, you can evaluate model accuracy by calling GetSolutionMetrics. When you are satisfied with the solution version, you deploy it using CreateCampaign. The campaign provides recommendations to a client through the GetRecommendations API.
Amazon Personalize doesn't support configuring the hpoObjective for solution hyperparameter optimization at this time.
Status
A solution can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
To get the status of the solution, call DescribeSolution. If you use manual training, the status must be ACTIVE before you call CreateSolutionVersion.
Related APIs
" + "documentation":"By default, all new solutions use automatic training. With automatic training, you incur training costs while your solution is active. To avoid unnecessary costs, when you are finished you can update the solution to turn off automatic training. For information about training costs, see Amazon Personalize pricing.
Creates the configuration for training a model (creating a solution version). This configuration includes the recipe to use for model training and optional training configuration, such as columns to use in training and feature transformation parameters. For more information about configuring a solution, see Creating and configuring a solution.
By default, new solutions use automatic training to create solution versions every 7 days. You can change the training frequency. Automatic solution version creation starts within one hour after the solution is ACTIVE. If you manually create a solution version within the hour, the solution skips the first automatic training. For more information, see Configuring automatic training.
To turn off automatic training, set performAutoTraining to false. If you turn off automatic training, you must manually create a solution version by calling the CreateSolutionVersion operation.
After training starts, you can get the solution version's Amazon Resource Name (ARN) with the ListSolutionVersions API operation. To get its status, use the DescribeSolutionVersion.
After training completes, you can evaluate model accuracy by calling GetSolutionMetrics. When you are satisfied with the solution version, you deploy it using CreateCampaign. The campaign provides recommendations to a client through the GetRecommendations API.
Amazon Personalize doesn't support configuring the hpoObjective for solution hyperparameter optimization at this time.
Status
A solution can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
To get the status of the solution, call DescribeSolution. If you use manual training, the status must be ACTIVE before you call CreateSolutionVersion.
Related APIs
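The status chains above (CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED, DELETE PENDING > DELETE IN_PROGRESS) can be read as a small state machine. A sketch of a poller built on that reading (the split into in-flight and terminal states is an illustrative interpretation, and the DescribeSolution wrapper is hypothetical):

```python
# In-flight vs. terminal states, transcribed from the status chains above.
IN_FLIGHT = {"CREATE PENDING", "CREATE IN_PROGRESS", "DELETE PENDING", "DELETE IN_PROGRESS"}
TERMINAL = {"ACTIVE", "CREATE FAILED"}

def may_create_solution_version(status: str) -> bool:
    """With manual training, CreateSolutionVersion requires ACTIVE status,
    per the documentation above."""
    return status == "ACTIVE"

def wait_for_solution(poll_status, max_polls=60):
    """Poll a status-returning callable (e.g. a DescribeSolution wrapper)
    until the solution leaves the in-flight states."""
    for _ in range(max_polls):
        status = poll_status()
        if status in TERMINAL:
            return status
    raise TimeoutError("solution did not reach a terminal state")
```

A real poller would also sleep between calls; the loop is kept bare so the state logic stands out.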
" }, "CreateSolutionVersion":{ "name":"CreateSolutionVersion", @@ -1113,6 +1114,23 @@ ], "documentation":"Updates the recommender to modify the recommender configuration. If you update the recommender to modify the columns used in training, Amazon Personalize automatically starts a full retraining of the models backing your recommender. While the update completes, you can still get recommendations from the recommender. The recommender uses the previous configuration until the update completes. To track the status of this update, use the latestRecommenderUpdate
returned in the DescribeRecommender operation.
Updates an Amazon Personalize solution to use a different automatic training configuration. When you update a solution, you can change whether the solution uses automatic training, and you can change the training frequency. For more information about updating a solution, see Updating a solution.
A solution update can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
To get the status of a solution update, call the DescribeSolution API operation and find the status in the latestSolutionUpdate.
Whether the solution uses automatic training to create new solution versions (trained models). The default is True and the solution automatically creates new solution versions every 7 days. You can change the training frequency by specifying a schedulingExpression in the AutoTrainingConfig as part of solution configuration. For more information about automatic training, see Configuring automatic training.
Automatic solution version creation starts one hour after the solution is ACTIVE. If you manually create a solution version within the hour, the solution skips the first automatic training.
After training starts, you can get the solution version's Amazon Resource Name (ARN) with the ListSolutionVersions API operation. To get its status, use the DescribeSolutionVersion.
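A solution with automatic training creates a version every 7 days by default, adjustable through the schedulingExpression. As an illustration, assuming a rate(N days) syntax for the expression (verify the exact accepted format against the AutoTrainingConfig documentation):

```python
import re

DEFAULT_TRAINING_FREQUENCY_DAYS = 7  # default cadence when no expression is set

def training_frequency_days(scheduling_expression=None):
    """Resolve the training cadence in days from an optional scheduling
    expression of the assumed form 'rate(N days)'."""
    if scheduling_expression is None:
        return DEFAULT_TRAINING_FREQUENCY_DAYS
    m = re.fullmatch(r"rate\((\d+) days?\)", scheduling_expression.strip())
    if not m:
        raise ValueError(f"unsupported scheduling expression: {scheduling_expression!r}")
    return int(m.group(1))
```

Resolving the cadence client-side is useful for estimating training cost, since costs accrue each time a solution version is created.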
" + "documentation":"Whether the solution uses automatic training to create new solution versions (trained models). The default is True
and the solution automatically creates new solution versions every 7 days. You can change the training frequency by specifying a schedulingExpression
in the AutoTrainingConfig
as part of solution configuration. For more information about automatic training, see Configuring automatic training.
Automatic solution version creation starts within one hour after the solution is ACTIVE. If you manually create a solution version within the hour, the solution skips the first automatic training.
After training starts, you can get the solution version's Amazon Resource Name (ARN) with the ListSolutionVersions API operation. To get its status, use the DescribeSolutionVersion.
" }, "recipeArn":{ "shape":"Arn", @@ -2260,7 +2278,7 @@ }, "solutionConfig":{ "shape":"SolutionConfig", - "documentation":"The configuration to use with the solution. When performAutoML
is set to true, Amazon Personalize only evaluates the autoMLConfig
section of the solution configuration.
Amazon Personalize doesn't support configuring the hpoObjective
at this time.
The configuration properties for the solution. When performAutoML
is set to true, Amazon Personalize only evaluates the autoMLConfig
section of the solution configuration.
Amazon Personalize doesn't support configuring the hpoObjective
at this time.
The status of the recommender update.
A recommender can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
STOP PENDING > STOP IN_PROGRESS > INACTIVE > START PENDING > START IN_PROGRESS > ACTIVE
DELETE PENDING > DELETE IN_PROGRESS
The status of the recommender update. A recommender update can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
" }, "failureReason":{ "shape":"FailureReason", @@ -4839,9 +4857,13 @@ "latestSolutionVersion":{ "shape":"SolutionVersionSummary", "documentation":"Describes the latest version of the solution, including the status and the ARN.
" + }, + "latestSolutionUpdate":{ + "shape":"SolutionUpdateSummary", + "documentation":"Provides a summary of the latest updates to the solution.
" } }, - "documentation":"After you create a solution, you can’t change its configuration. By default, all new solutions use automatic training. With automatic training, you incur training costs while your solution is active. You can't stop automatic training for a solution. To avoid unnecessary costs, make sure to delete the solution when you are finished. For information about training costs, see Amazon Personalize pricing.
An object that provides information about a solution. A solution includes the custom recipe, customized parameters, and trained models (Solution Versions) that Amazon Personalize uses to generate recommendations.
After you create a solution, you can’t change its configuration. If you need to make changes, you can clone the solution with the Amazon Personalize console or create a new one.
" + "documentation":"By default, all new solutions use automatic training. With automatic training, you incur training costs while your solution is active. To avoid unnecessary costs, when you are finished you can update the solution to turn off automatic training. For information about training costs, see Amazon Personalize pricing.
An object that provides information about a solution. A solution includes the custom recipe, customized parameters, and trained models (Solution Versions) that Amazon Personalize uses to generate recommendations.
After you create a solution, you can’t change its configuration. If you need to make changes, you can clone the solution with the Amazon Personalize console or create a new one.
" }, "SolutionConfig":{ "type":"structure", @@ -4911,6 +4933,43 @@ }, "documentation":"Provides a summary of the properties of a solution. For a complete listing, call the DescribeSolution API.
" }, + "SolutionUpdateConfig":{ + "type":"structure", + "members":{ + "autoTrainingConfig":{"shape":"AutoTrainingConfig"} + }, + "documentation":"The configuration details of the solution update.
" + }, + "SolutionUpdateSummary":{ + "type":"structure", + "members":{ + "solutionUpdateConfig":{ + "shape":"SolutionUpdateConfig", + "documentation":"The configuration details of the solution.
" + }, + "status":{ + "shape":"Status", + "documentation":"The status of the solution update. A solution update can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
" + }, + "performAutoTraining":{ + "shape":"PerformAutoTraining", + "documentation":"Whether the solution automatically creates solution versions.
" + }, + "creationDateTime":{ + "shape":"Date", + "documentation":"The date and time (in Unix format) that the solution update was created.
" + }, + "lastUpdatedDateTime":{ + "shape":"Date", + "documentation":"The date and time (in Unix time) that the solution update was last updated.
" + }, + "failureReason":{ + "shape":"FailureReason", + "documentation":"If a solution update fails, the reason behind the failure.
" + } + }, + "documentation":"Provides a summary of the properties of a solution update. For a complete listing, call the DescribeSolution API.
" + }, "SolutionVersion":{ "type":"structure", "members":{ @@ -5358,6 +5417,33 @@ "documentation":"The same recommender Amazon Resource Name (ARN) as given in the request.
" } } + }, + "UpdateSolutionRequest":{ + "type":"structure", + "required":["solutionArn"], + "members":{ + "solutionArn":{ + "shape":"Arn", + "documentation":"The Amazon Resource Name (ARN) of the solution to update.
" + }, + "performAutoTraining":{ + "shape":"PerformAutoTraining", + "documentation":"Whether the solution uses automatic training to create new solution versions (trained models). You can change the training frequency by specifying a schedulingExpression
in the AutoTrainingConfig
as part of solution configuration.
If you turn on automatic training, the first automatic training starts within one hour after the solution update completes. If you manually create a solution version within the hour, the solution skips the first automatic training. For more information about automatic training, see Configuring automatic training.
After training starts, you can get the solution version's Amazon Resource Name (ARN) with the ListSolutionVersions API operation. To get its status, use the DescribeSolutionVersion.
" + }, + "solutionUpdateConfig":{ + "shape":"SolutionUpdateConfig", + "documentation":"The new configuration details of the solution.
" + } + } + }, + "UpdateSolutionResponse":{ + "type":"structure", + "members":{ + "solutionArn":{ + "shape":"Arn", + "documentation":"The same solution Amazon Resource Name (ARN) as given in the request.
" + } + } } }, "documentation":"Amazon Personalize is a machine learning service that makes it easy to add individualized recommendations to customers.
" diff --git a/tools/code-generation/api-descriptions/quicksight-2018-04-01.normal.json b/tools/code-generation/api-descriptions/quicksight-2018-04-01.normal.json index 23bd0af6f83..65faa8277bb 100644 --- a/tools/code-generation/api-descriptions/quicksight-2018-04-01.normal.json +++ b/tools/code-generation/api-descriptions/quicksight-2018-04-01.normal.json @@ -6335,7 +6335,7 @@ "documentation":"A unique ID to identify a calculated column. During a dataset update, if the column ID of a calculated column matches that of an existing calculated column, Amazon QuickSight preserves the existing calculated column.
" }, "Expression":{ - "shape":"Expression", + "shape":"DataSetCalculatedFieldExpression", "documentation":"An expression that defines the calculated column.
" } }, @@ -6891,7 +6891,8 @@ "ColumnDescriptiveText":{ "type":"string", "max":500, - "min":0 + "min":0, + "sensitive":true }, "ColumnGroup":{ "type":"structure", @@ -10433,6 +10434,12 @@ "member":{"shape":"Arn"}, "max":100 }, + "DataSetCalculatedFieldExpression":{ + "type":"string", + "max":250000, + "min":1, + "sensitive":true + }, "DataSetConfiguration":{ "type":"structure", "members":{ diff --git a/tools/code-generation/api-descriptions/states-2016-11-23.normal.json b/tools/code-generation/api-descriptions/states-2016-11-23.normal.json index 9462c7dfec0..b7e565c574a 100644 --- a/tools/code-generation/api-descriptions/states-2016-11-23.normal.json +++ b/tools/code-generation/api-descriptions/states-2016-11-23.normal.json @@ -3964,10 +3964,23 @@ "type":{ "shape":"StateMachineType", "documentation":"The target type of state machine for this definition. The default is STANDARD
.
Minimum level of diagnostics to return. ERROR
returns only ERROR
diagnostics, whereas WARNING
returns both WARNING
and ERROR
diagnostics. The default is ERROR
.
The maximum number of diagnostics that are returned per call. The default and maximum value is 100. Setting the value to 0 will also use the default of 100.
If the number of diagnostics returned in the response exceeds maxResults
, the value of the truncated
field in the response will be set to true
.
If the result is OK
, this field will be empty. When there are errors, this field will contain an array of Diagnostic objects to help you troubleshoot.
The result value will be true
if the number of diagnostics found in the workflow definition exceeds maxResults
. When all diagnostics results are returned, the value will be false
.
Attaches an IAM policy to the specified resource. Use this to share a rule group across accounts.
You must be the owner of the rule group to perform this operation.
This action is subject to the following restrictions:
You can attach only one policy with each PutPermissionPolicy request.
The ARN in the request must be a valid WAF RuleGroup ARN and the rule group must exist in the same Region.
The user making the request must be the owner of the rule group.
Use this to share a rule group with other accounts.
This action attaches an IAM policy to the specified resource. You must be the owner of the rule group to perform this operation.
This action is subject to the following restrictions:
You can attach only one policy with each PutPermissionPolicy request.
The ARN in the request must be a valid WAF RuleGroup ARN and the rule group must exist in the same Region.
The user making the request must be the owner of the rule group.
If a rule group has been shared with your account, you can access it through the call GetRuleGroup, and you can reference it in CreateWebACL and UpdateWebACL. Rule groups that are shared with you don't appear in your WAF console rule groups listing.
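The restrictions above (valid WAF RuleGroup ARN, same Region, caller owns the rule group) can be checked client-side before calling PutPermissionPolicy. A hypothetical pre-flight check, assuming the usual arn:partition:wafv2:region:account:scope/rulegroup/name/id layout:

```python
def check_rule_group_arn(arn: str, expected_region: str, owner_account: str) -> None:
    """Raise ValueError if `arn` is not a WAF rule group ARN in the
    expected Region owned by `owner_account`. Illustrative only; the
    service performs the authoritative checks."""
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn" or parts[2] != "wafv2":
        raise ValueError(f"not a WAF ARN: {arn!r}")
    if parts[3] != expected_region:
        raise ValueError(f"rule group Region {parts[3]!r} != {expected_region!r}")
    if parts[4] != owner_account:
        raise ValueError("caller must own the rule group")
    if "rulegroup/" not in parts[5]:
        raise ValueError(f"not a rule group ARN: {arn!r}")
```

Failing fast on these locally gives a clearer error than the service-side rejection, though the service still enforces all three restrictions.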