diff --git a/.changes/1.35.27.json b/.changes/1.35.27.json new file mode 100644 index 0000000000..0326c2924b --- /dev/null +++ b/.changes/1.35.27.json @@ -0,0 +1,22 @@ +[ + { + "category": "``cloudtrail``", + "description": "Doc-only update for CloudTrail network activity events release (in preview)", + "type": "api-change" + }, + { + "category": "``ec2``", + "description": "Updates to documentation for the transit gateway security group referencing feature.", + "type": "api-change" + }, + { + "category": "``fsx``", + "description": "Doc-only update to address Lustre S3 hard-coded names.", + "type": "api-change" + }, + { + "category": "``worklink``", + "description": "The worklink client has been removed following the deprecation of the service.", + "type": "api-change" + } +] \ No newline at end of file diff --git a/.changes/next-release/api-change-worklink-80875.json b/.changes/next-release/api-change-worklink-80875.json deleted file mode 100644 index 71d9771878..0000000000 --- a/.changes/next-release/api-change-worklink-80875.json +++ /dev/null @@ -1,5 +0,0 @@ -{ - "type": "api-change", - "category": "``worklink``", - "description": "The worklink client has been removed following the deprecation of the service." -} diff --git a/CHANGELOG.rst b/CHANGELOG.rst index 84f4c44c6c..a0473493c2 100644 --- a/CHANGELOG.rst +++ b/CHANGELOG.rst @@ -2,6 +2,15 @@ CHANGELOG ========= +1.35.27 +======= + +* api-change:``cloudtrail``: Doc-only update for CloudTrail network activity events release (in preview) +* api-change:``ec2``: Updates to documentation for the transit gateway security group referencing feature. +* api-change:``fsx``: Doc-only update to address Lustre S3 hard-coded names. +* api-change:``worklink``: The worklink client has been removed following the deprecation of the service. + + 1.35.26 ======= diff --git a/botocore/__init__.py b/botocore/__init__.py index 518d8aad91..596f11f407 100644 --- a/botocore/__init__.py +++ b/botocore/__init__.py @@ -16,7 +16,7 @@ import os import re -__version__ = '1.35.26' +__version__ = '1.35.27' class NullHandler(logging.Handler): diff --git a/botocore/data/cloudtrail/2013-11-01/service-2.json b/botocore/data/cloudtrail/2013-11-01/service-2.json index 10db39af11..78ea1efa67 100644 --- a/botocore/data/cloudtrail/2013-11-01/service-2.json +++ b/botocore/data/cloudtrail/2013-11-01/service-2.json @@ -420,7 +420,7 @@ {"shape":"OperationNotPermittedException"}, {"shape":"NoManagementAccountSLRExistsException"} ], - "documentation":"
Describes the settings for the event selectors that you configured for your trail. The information returned for your event selectors includes the following:
If your event selector includes read-only events, write-only events, or all events. This applies to both management events and data events.
If your event selector includes management events.
If your event selector includes data events, the resources on which you are logging data events.
For more information about logging management and data events, see the following topics in the CloudTrail User Guide:
", + "documentation":"Describes the settings for the event selectors that you configured for your trail. The information returned for your event selectors includes the following:
If your event selector includes read-only events, write-only events, or all events. This applies to management events, data events, and network activity events.
If your event selector includes management events.
If your event selector includes network activity events, the event sources for which you are logging network activity events.
If your event selector includes data events, the resources on which you are logging data events.
For more information about logging management, data, and network activity events, see the following topics in the CloudTrail User Guide:
", "idempotent":true }, "GetImport":{ @@ -748,7 +748,7 @@ {"shape":"NoManagementAccountSLRExistsException"}, {"shape":"InsufficientDependencyServiceAccessPermissionException"} ], - "documentation":"Configures an event selector or advanced event selectors for your trail. Use event selectors or advanced event selectors to specify management and data event settings for your trail. If you want your trail to log Insights events, be sure the event selector enables logging of the Insights event types you want configured for your trail. For more information about logging Insights events, see Logging Insights events in the CloudTrail User Guide. By default, trails created without specific event selectors are configured to log all read and write management events, and no data events.
When an event occurs in your account, CloudTrail evaluates the event selectors or advanced event selectors in all trails. For each trail, if the event matches any event selector, the trail processes and logs the event. If the event doesn't match any event selector, the trail doesn't log the event.
Example
You create an event selector for a trail and specify that you want write-only events.
The EC2 GetConsoleOutput
and RunInstances
API operations occur in your account.
CloudTrail evaluates whether the events match your event selectors.
The RunInstances
is a write-only event and it matches your event selector. The trail logs the event.
The GetConsoleOutput
is a read-only event that doesn't match your event selector. The trail doesn't log the event.
The PutEventSelectors
operation must be called from the Region in which the trail was created; otherwise, an InvalidHomeRegionException
exception is thrown.
You can configure up to five event selectors for each trail. For more information, see Logging management events, Logging data events, and Quotas in CloudTrail in the CloudTrail User Guide.
You can add advanced event selectors, and conditions for your advanced event selectors, up to a maximum of 500 values for all conditions and selectors on a trail. You can use either AdvancedEventSelectors
or EventSelectors
, but not both. If you apply AdvancedEventSelectors
to a trail, any existing EventSelectors
are overwritten. For more information about advanced event selectors, see Logging data events in the CloudTrail User Guide.
Configures event selectors (also referred to as basic event selectors) or advanced event selectors for your trail. You can use either AdvancedEventSelectors
or EventSelectors
, but not both. If you apply AdvancedEventSelectors
to a trail, any existing EventSelectors
are overwritten.
You can use AdvancedEventSelectors
to log management events, data events for all resource types, and network activity events.
You can use EventSelectors
to log management events and data events for the following resource types:
AWS::DynamoDB::Table
AWS::Lambda::Function
AWS::S3::Object
You can't use EventSelectors
to log network activity events.
If you want your trail to log Insights events, be sure the event selector or advanced event selector enables logging of the Insights event types you want configured for your trail. For more information about logging Insights events, see Logging Insights events in the CloudTrail User Guide. By default, trails created without specific event selectors are configured to log all read and write management events, and no data events or network activity events.
When an event occurs in your account, CloudTrail evaluates the event selectors or advanced event selectors in all trails. For each trail, if the event matches any event selector, the trail processes and logs the event. If the event doesn't match any event selector, the trail doesn't log the event.
Example
You create an event selector for a trail and specify that you want to log write-only events.
The EC2 GetConsoleOutput
and RunInstances
API operations occur in your account.
CloudTrail evaluates whether the events match your event selectors.
The RunInstances
is a write-only event and it matches your event selector. The trail logs the event.
The GetConsoleOutput
is a read-only event that doesn't match your event selector. The trail doesn't log the event.
The PutEventSelectors
operation must be called from the Region in which the trail was created; otherwise, an InvalidHomeRegionException
exception is thrown.
You can configure up to five event selectors for each trail.
You can add advanced event selectors, and conditions for your advanced event selectors, up to a maximum of 500 values for all conditions and selectors on a trail. For more information, see Logging management events, Logging data events, Logging network activity events, and Quotas in CloudTrail in the CloudTrail User Guide.
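To make the selector behavior described above concrete, here is a hedged boto3 sketch (illustrative only, not part of this change; the trail name and bucket ARN are placeholders) that configures basic event selectors for write-only management events plus S3 object-level data events on one bucket:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Basic event selectors: write-only management events plus S3 object-level data
# events for one bucket (one of the three resource types basic selectors support).
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    EventSelectors=[
        {
            "ReadWriteType": "WriteOnly",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # Trailing slash: log data events for every object in the bucket.
                    "Values": ["arn:aws:s3:::amzn-s3-demo-bucket1/"],
                }
            ],
        }
    ],
)
```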
", "idempotent":true }, "PutInsightSelectors":{ @@ -897,7 +897,7 @@ {"shape":"NoManagementAccountSLRExistsException"}, {"shape":"InsufficientDependencyServiceAccessPermissionException"} ], - "documentation":"Starts the ingestion of live events on an event data store specified as either an ARN or the ID portion of the ARN. To start ingestion, the event data store Status
must be STOPPED_INGESTION
and the eventCategory
must be Management
, Data
, or ConfigurationItem
.
Starts the ingestion of live events on an event data store specified as either an ARN or the ID portion of the ARN. To start ingestion, the event data store Status
must be STOPPED_INGESTION
and the eventCategory
must be Management
, Data
, NetworkActivity
, or ConfigurationItem
.
Stops the ingestion of live events on an event data store specified as either an ARN or the ID portion of the ARN. To stop ingestion, the event data store Status
must be ENABLED
and the eventCategory
must be Management
, Data
, or ConfigurationItem
.
Stops the ingestion of live events on an event data store specified as either an ARN or the ID portion of the ARN. To stop ingestion, the event data store Status
must be ENABLED
and the eventCategory
must be Management
, Data
, NetworkActivity
, or ConfigurationItem
.
Updates an event data store. The required EventDataStore
value is an ARN or the ID portion of the ARN. Other parameters are optional, but at least one optional parameter must be specified, or CloudTrail throws an error. RetentionPeriod
is in days, and valid values are integers between 7 and 3653 if the BillingMode
is set to EXTENDABLE_RETENTION_PRICING
, or between 7 and 2557 if BillingMode
is set to FIXED_RETENTION_PRICING
. By default, TerminationProtection
is enabled.
For event data stores for CloudTrail events, AdvancedEventSelectors
includes or excludes management or data events in your event data store. For more information about AdvancedEventSelectors
, see AdvancedEventSelectors.
For event data stores for CloudTrail Insights events, Config configuration items, Audit Manager evidence, or non-Amazon Web Services events, AdvancedEventSelectors
includes events of that type in your event data store.
Updates an event data store. The required EventDataStore
value is an ARN or the ID portion of the ARN. Other parameters are optional, but at least one optional parameter must be specified, or CloudTrail throws an error. RetentionPeriod
is in days, and valid values are integers between 7 and 3653 if the BillingMode
is set to EXTENDABLE_RETENTION_PRICING
, or between 7 and 2557 if BillingMode
is set to FIXED_RETENTION_PRICING
. By default, TerminationProtection
is enabled.
For event data stores for CloudTrail events, AdvancedEventSelectors
includes or excludes management, data, or network activity events in your event data store. For more information about AdvancedEventSelectors
, see AdvancedEventSelectors.
For event data stores for CloudTrail Insights events, Config configuration items, Audit Manager evidence, or non-Amazon Web Services events, AdvancedEventSelectors
includes events of that type in your event data store.
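For reference, a minimal boto3 sketch of the update described above (illustrative only; the event data store identifier and retention values are placeholders):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Retain events for one year; 7-3653 days is the documented range when
# BillingMode is EXTENDABLE_RETENTION_PRICING.
cloudtrail.update_event_data_store(
    EventDataStore="eds-EXAMPLE1234567890",
    RetentionPeriod=365,
    BillingMode="EXTENDABLE_RETENTION_PRICING",
    TerminationProtectionEnabled=True,
)
```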
Contains all selector statements in an advanced event selector.
" } }, - "documentation":"Advanced event selectors let you create fine-grained selectors for CloudTrail management and data events. They help you control costs by logging only those events that are important to you. For more information about advanced event selectors, see Logging management events and Logging data events in the CloudTrail User Guide.
You cannot apply both event selectors and advanced event selectors to a trail.
Supported CloudTrail event record fields for management events
eventCategory
(required)
eventSource
readOnly
Supported CloudTrail event record fields for data events
eventCategory
(required)
resources.type
(required)
readOnly
eventName
resources.ARN
For event data stores for CloudTrail Insights events, Config configuration items, Audit Manager evidence, or events outside of Amazon Web Services, the only supported field is eventCategory
.
Advanced event selectors let you create fine-grained selectors for CloudTrail management, data, and network activity events. They help you control costs by logging only those events that are important to you. For more information about configuring advanced event selectors, see the Logging data events, Logging network activity events, and Logging management events topics in the CloudTrail User Guide.
You cannot apply both event selectors and advanced event selectors to a trail.
Supported CloudTrail event record fields for management events
eventCategory
(required)
eventSource
readOnly
Supported CloudTrail event record fields for data events
eventCategory
(required)
resources.type
(required)
readOnly
eventName
resources.ARN
Supported CloudTrail event record fields for network activity events
Network activity events is in preview release for CloudTrail and is subject to change.
eventCategory
(required)
eventSource
(required)
eventName
errorCode
- The only valid value for errorCode
is VpceAccessDenied
.
vpcEndpointId
For event data stores for CloudTrail Insights events, Config configuration items, Audit Manager evidence, or events outside of Amazon Web Services, the only supported field is eventCategory
.
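Putting the supported network activity fields together, a hedged sketch of a single advanced event selector (network activity events are in preview; the selector name and event source below are illustrative choices, not values from this change):

```python
# One advanced event selector covering the documented network activity fields.
network_activity_selector = {
    "Name": "Denied EC2 calls through VPC endpoints",
    "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["NetworkActivity"]},  # required
        {"Field": "eventSource", "Equals": ["ec2.amazonaws.com"]},  # required for this category
        {"Field": "errorCode", "Equals": ["VpceAccessDenied"]},     # only documented errorCode value
    ],
}
```

A selector shaped like this would be passed in the `AdvancedEventSelectors` list of a PutEventSelectors or event data store request.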
A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, Config configuration items, Audit Manager evidence, or events outside of Amazon Web Services, the field is used only for selecting events as filtering is not supported.
For CloudTrail management events, supported fields include readOnly
, eventCategory
, and eventSource
.
For CloudTrail data events, supported fields include readOnly
, eventCategory
, eventName
, resources.type
, and resources.ARN
.
For event data stores for CloudTrail Insights events, Config configuration items, Audit Manager evidence, or events outside of Amazon Web Services, the only supported field is eventCategory
.
readOnly
- Optional. Can be set to Equals
a value of true
or false
. If you do not add this field, CloudTrail logs both read
and write
events. A value of true
logs only read
events. A value of false
logs only write
events.
eventSource
- For filtering management events only. This can be set to NotEquals
kms.amazonaws.com
or NotEquals
rdsdata.amazonaws.com
.
eventName
- Can use any operator. You can use it to filter in or filter out any data event logged to CloudTrail, such as PutBucket
or GetSnapshotBlock
. You can have multiple values for this field, separated by commas.
eventCategory
- This is required and must be set to Equals
.
For CloudTrail management events, the value must be Management
.
For CloudTrail data events, the value must be Data
.
The following are used only for event data stores:
For CloudTrail Insights events, the value must be Insight
.
For Config configuration items, the value must be ConfigurationItem
.
For Audit Manager evidence, the value must be Evidence
.
For non-Amazon Web Services events, the value must be ActivityAuditLog
.
resources.type
- This field is required for CloudTrail data events. resources.type
can only use the Equals
operator, and the value can be one of the following:
AWS::DynamoDB::Table
AWS::Lambda::Function
AWS::S3::Object
AWS::AppConfig::Configuration
AWS::B2BI::Transformer
AWS::Bedrock::AgentAlias
AWS::Bedrock::KnowledgeBase
AWS::Cassandra::Table
AWS::CloudFront::KeyValueStore
AWS::CloudTrail::Channel
AWS::CodeWhisperer::Customization
AWS::CodeWhisperer::Profile
AWS::Cognito::IdentityPool
AWS::DynamoDB::Stream
AWS::EC2::Snapshot
AWS::EMRWAL::Workspace
AWS::FinSpace::Environment
AWS::Glue::Table
AWS::GreengrassV2::ComponentVersion
AWS::GreengrassV2::Deployment
AWS::GuardDuty::Detector
AWS::IoT::Certificate
AWS::IoT::Thing
AWS::IoTSiteWise::Asset
AWS::IoTSiteWise::TimeSeries
AWS::IoTTwinMaker::Entity
AWS::IoTTwinMaker::Workspace
AWS::KendraRanking::ExecutionPlan
AWS::KinesisVideo::Stream
AWS::ManagedBlockchain::Network
AWS::ManagedBlockchain::Node
AWS::MedicalImaging::Datastore
AWS::NeptuneGraph::Graph
AWS::PCAConnectorAD::Connector
AWS::QApps:QApp
AWS::QBusiness::Application
AWS::QBusiness::DataSource
AWS::QBusiness::Index
AWS::QBusiness::WebExperience
AWS::RDS::DBCluster
AWS::S3::AccessPoint
AWS::S3ObjectLambda::AccessPoint
AWS::S3Outposts::Object
AWS::SageMaker::Endpoint
AWS::SageMaker::ExperimentTrialComponent
AWS::SageMaker::FeatureGroup
AWS::ServiceDiscovery::Namespace
AWS::ServiceDiscovery::Service
AWS::SCN::Instance
AWS::SNS::PlatformEndpoint
AWS::SNS::Topic
AWS::SQS::Queue
AWS::SSM::ManagedNode
AWS::SSMMessages::ControlChannel
AWS::SWF::Domain
AWS::ThinClient::Device
AWS::ThinClient::Environment
AWS::Timestream::Database
AWS::Timestream::Table
AWS::VerifiedPermissions::PolicyStore
AWS::XRay::Trace
You can have only one resources.type
field per selector. To log data events on more than one resource type, add another selector.
resources.ARN
- You can use any operator with resources.ARN
, but if you use Equals
or NotEquals
, the value must exactly match the ARN of a valid resource of the type you've specified in the template as the value of resources.type.
You can't use the resources.ARN
field to filter resource types that do not have ARNs.
The resources.ARN
field can be set one of the following.
If resources.type equals AWS::S3::Object
, the ARN must be in one of the following formats. To log all data events for all objects in a specific S3 bucket, use the StartsWith
operator, and include only the bucket ARN as the matching value.
The trailing slash is intentional; do not exclude it. Replace the text between less than and greater than symbols (<>) with resource-specific information.
arn:<partition>:s3:::<bucket_name>/
arn:<partition>:s3:::<bucket_name>/<object_path>/
When resources.type equals AWS::DynamoDB::Table
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:dynamodb:<region>:<account_ID>:table/<table_name>
When resources.type equals AWS::Lambda::Function
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:lambda:<region>:<account_ID>:function:<function_name>
When resources.type equals AWS::AppConfig::Configuration
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:appconfig:<region>:<account_ID>:application/<application_ID>/environment/<environment_ID>/configuration/<configuration_profile_ID>
When resources.type equals AWS::B2BI::Transformer
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:b2bi:<region>:<account_ID>:transformer/<transformer_ID>
When resources.type equals AWS::Bedrock::AgentAlias
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:bedrock:<region>:<account_ID>:agent-alias/<agent_ID>/<alias_ID>
When resources.type equals AWS::Bedrock::KnowledgeBase
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:bedrock:<region>:<account_ID>:knowledge-base/<knowledge_base_ID>
When resources.type equals AWS::Cassandra::Table
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:cassandra:<region>:<account_ID>:/keyspace/<keyspace_name>/table/<table_name>
When resources.type equals AWS::CloudFront::KeyValueStore
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:cloudfront:<region>:<account_ID>:key-value-store/<KVS_name>
When resources.type equals AWS::CloudTrail::Channel
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:cloudtrail:<region>:<account_ID>:channel/<channel_UUID>
When resources.type equals AWS::CodeWhisperer::Customization
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:codewhisperer:<region>:<account_ID>:customization/<customization_ID>
When resources.type equals AWS::CodeWhisperer::Profile
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:codewhisperer:<region>:<account_ID>:profile/<profile_ID>
When resources.type equals AWS::Cognito::IdentityPool
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:cognito-identity:<region>:<account_ID>:identitypool/<identity_pool_ID>
When resources.type
equals AWS::DynamoDB::Stream
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:dynamodb:<region>:<account_ID>:table/<table_name>/stream/<date_time>
When resources.type
equals AWS::EC2::Snapshot
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:ec2:<region>::snapshot/<snapshot_ID>
When resources.type
equals AWS::EMRWAL::Workspace
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:emrwal:<region>:<account_ID>:workspace/<workspace_name>
When resources.type
equals AWS::FinSpace::Environment
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:finspace:<region>:<account_ID>:environment/<environment_ID>
When resources.type
equals AWS::Glue::Table
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:glue:<region>:<account_ID>:table/<database_name>/<table_name>
When resources.type
equals AWS::GreengrassV2::ComponentVersion
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:greengrass:<region>:<account_ID>:components/<component_name>
When resources.type
equals AWS::GreengrassV2::Deployment
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:greengrass:<region>:<account_ID>:deployments/<deployment_ID
When resources.type
equals AWS::GuardDuty::Detector
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:guardduty:<region>:<account_ID>:detector/<detector_ID>
When resources.type
equals AWS::IoT::Certificate
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:iot:<region>:<account_ID>:cert/<certificate_ID>
When resources.type
equals AWS::IoT::Thing
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:iot:<region>:<account_ID>:thing/<thing_ID>
When resources.type
equals AWS::IoTSiteWise::Asset
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:iotsitewise:<region>:<account_ID>:asset/<asset_ID>
When resources.type
equals AWS::IoTSiteWise::TimeSeries
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:iotsitewise:<region>:<account_ID>:timeseries/<timeseries_ID>
When resources.type
equals AWS::IoTTwinMaker::Entity
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:iottwinmaker:<region>:<account_ID>:workspace/<workspace_ID>/entity/<entity_ID>
When resources.type
equals AWS::IoTTwinMaker::Workspace
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:iottwinmaker:<region>:<account_ID>:workspace/<workspace_ID>
When resources.type
equals AWS::KendraRanking::ExecutionPlan
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:kendra-ranking:<region>:<account_ID>:rescore-execution-plan/<rescore_execution_plan_ID>
When resources.type
equals AWS::KinesisVideo::Stream
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:kinesisvideo:<region>:<account_ID>:stream/<stream_name>/<creation_time>
When resources.type
equals AWS::ManagedBlockchain::Network
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:managedblockchain:::networks/<network_name>
When resources.type
equals AWS::ManagedBlockchain::Node
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:managedblockchain:<region>:<account_ID>:nodes/<node_ID>
When resources.type
equals AWS::MedicalImaging::Datastore
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:medical-imaging:<region>:<account_ID>:datastore/<data_store_ID>
When resources.type
equals AWS::NeptuneGraph::Graph
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:neptune-graph:<region>:<account_ID>:graph/<graph_ID>
When resources.type
equals AWS::PCAConnectorAD::Connector
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:pca-connector-ad:<region>:<account_ID>:connector/<connector_ID>
When resources.type
equals AWS::QApps:QApp
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:qapps:<region>:<account_ID>:application/<application_UUID>/qapp/<qapp_UUID>
When resources.type
equals AWS::QBusiness::Application
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:qbusiness:<region>:<account_ID>:application/<application_ID>
When resources.type
equals AWS::QBusiness::DataSource
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:qbusiness:<region>:<account_ID>:application/<application_ID>/index/<index_ID>/data-source/<datasource_ID>
When resources.type
equals AWS::QBusiness::Index
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:qbusiness:<region>:<account_ID>:application/<application_ID>/index/<index_ID>
When resources.type
equals AWS::QBusiness::WebExperience
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:qbusiness:<region>:<account_ID>:application/<application_ID>/web-experience/<web_experience_ID>
When resources.type
equals AWS::RDS::DBCluster
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:rds:<region>:<account_ID>:cluster/<cluster_name>
When resources.type
equals AWS::S3::AccessPoint
, and the operator is set to Equals
or NotEquals
, the ARN must be in one of the following formats. To log events on all objects in an S3 access point, we recommend that you use only the access point ARN, don’t include the object path, and use the StartsWith
or NotStartsWith
operators.
arn:<partition>:s3:<region>:<account_ID>:accesspoint/<access_point_name>
arn:<partition>:s3:<region>:<account_ID>:accesspoint/<access_point_name>/object/<object_path>
When resources.type
equals AWS::S3ObjectLambda::AccessPoint
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:s3-object-lambda:<region>:<account_ID>:accesspoint/<access_point_name>
When resources.type
equals AWS::S3Outposts::Object
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:s3-outposts:<region>:<account_ID>:<object_path>
When resources.type
equals AWS::SageMaker::Endpoint
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:sagemaker:<region>:<account_ID>:endpoint/<endpoint_name>
When resources.type
equals AWS::SageMaker::ExperimentTrialComponent
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:sagemaker:<region>:<account_ID>:experiment-trial-component/<experiment_trial_component_name>
When resources.type
equals AWS::SageMaker::FeatureGroup
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:sagemaker:<region>:<account_ID>:feature-group/<feature_group_name>
When resources.type
equals AWS::SCN::Instance
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:scn:<region>:<account_ID>:instance/<instance_ID>
When resources.type
equals AWS::ServiceDiscovery::Namespace
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:servicediscovery:<region>:<account_ID>:namespace/<namespace_ID>
When resources.type
equals AWS::ServiceDiscovery::Service
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:servicediscovery:<region>:<account_ID>:service/<service_ID>
When resources.type
equals AWS::SNS::PlatformEndpoint
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:sns:<region>:<account_ID>:endpoint/<endpoint_type>/<endpoint_name>/<endpoint_ID>
When resources.type
equals AWS::SNS::Topic
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:sns:<region>:<account_ID>:<topic_name>
When resources.type
equals AWS::SQS::Queue
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:sqs:<region>:<account_ID>:<queue_name>
When resources.type
equals AWS::SSM::ManagedNode
, and the operator is set to Equals
or NotEquals
, the ARN must be in one of the following formats:
arn:<partition>:ssm:<region>:<account_ID>:managed-instance/<instance_ID>
arn:<partition>:ec2:<region>:<account_ID>:instance/<instance_ID>
When resources.type
equals AWS::SSMMessages::ControlChannel
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:ssmmessages:<region>:<account_ID>:control-channel/<channel_ID>
When resources.type
equals AWS::SWF::Domain
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:swf:<region>:<account_ID>:domain/<domain_name>
When resources.type
equals AWS::ThinClient::Device
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:thinclient:<region>:<account_ID>:device/<device_ID>
When resources.type
equals AWS::ThinClient::Environment
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:thinclient:<region>:<account_ID>:environment/<environment_ID>
When resources.type
equals AWS::Timestream::Database
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:timestream:<region>:<account_ID>:database/<database_name>
When resources.type
equals AWS::Timestream::Table
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:timestream:<region>:<account_ID>:database/<database_name>/table/<table_name>
When resources.type equals AWS::VerifiedPermissions::PolicyStore
, and the operator is set to Equals
or NotEquals
, the ARN must be in the following format:
arn:<partition>:verifiedpermissions:<region>:<account_ID>:policy-store/<policy_store_UUID>
A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, Config configuration items, Audit Manager evidence, or events outside of Amazon Web Services, the field is used only for selecting events as filtering is not supported.
For CloudTrail management events, supported fields include eventCategory
(required), eventSource
, and readOnly
.
For CloudTrail data events, supported fields include eventCategory
(required), resources.type
(required), eventName
, readOnly
, and resources.ARN
.
For CloudTrail network activity events, supported fields include eventCategory
(required), eventSource
(required), eventName
, errorCode
, and vpcEndpointId
.
For event data stores for CloudTrail Insights events, Config configuration items, Audit Manager evidence, or events outside of Amazon Web Services, the only supported field is eventCategory
.
readOnly
- This is an optional field that is only used for management events and data events. This field can be set to Equals
with a value of true
or false
. If you do not add this field, CloudTrail logs both read
and write
events. A value of true
logs only read
events. A value of false
logs only write
events.
eventSource
- This field is only used for management events and network activity events.
For management events, this is an optional field that can be set to NotEquals
kms.amazonaws.com
to exclude KMS management events, or NotEquals
rdsdata.amazonaws.com
to exclude RDS management events.
For network activity events, this is a required field that only uses the Equals
operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source.
The following are valid values for network activity events:
cloudtrail.amazonaws.com
ec2.amazonaws.com
kms.amazonaws.com
secretsmanager.amazonaws.com
eventName
- This is an optional field that is only used for data events and network activity events. You can use any operator with eventName
. You can use it to filter in or filter out specific events. You can have multiple values for this field, separated by commas.
eventCategory
- This field is required and must be set to Equals
.
For CloudTrail management events, the value must be Management
.
For CloudTrail data events, the value must be Data
.
For CloudTrail network activity events, the value must be NetworkActivity
.
The following are used only for event data stores:
For CloudTrail Insights events, the value must be Insight
.
For Config configuration items, the value must be ConfigurationItem
.
For Audit Manager evidence, the value must be Evidence
.
For non-Amazon Web Services events, the value must be ActivityAuditLog
.
errorCode
- This field is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid errorCode
is VpceAccessDenied
. errorCode
can only use the Equals
operator.
resources.type
- This field is required for CloudTrail data events. resources.type
can only use the Equals
operator.
The value can be one of the following:
AWS::AppConfig::Configuration
AWS::B2BI::Transformer
AWS::Bedrock::AgentAlias
AWS::Bedrock::FlowAlias
AWS::Bedrock::Guardrail
AWS::Bedrock::KnowledgeBase
AWS::Cassandra::Table
AWS::CloudFront::KeyValueStore
AWS::CloudTrail::Channel
AWS::CloudWatch::Metric
AWS::CodeWhisperer::Customization
AWS::CodeWhisperer::Profile
AWS::Cognito::IdentityPool
AWS::DynamoDB::Stream
AWS::DynamoDB::Table
AWS::EC2::Snapshot
AWS::EMRWAL::Workspace
AWS::FinSpace::Environment
AWS::Glue::Table
AWS::GreengrassV2::ComponentVersion
AWS::GreengrassV2::Deployment
AWS::GuardDuty::Detector
AWS::IoT::Certificate
AWS::IoT::Thing
AWS::IoTSiteWise::Asset
AWS::IoTSiteWise::TimeSeries
AWS::IoTTwinMaker::Entity
AWS::IoTTwinMaker::Workspace
AWS::KendraRanking::ExecutionPlan
AWS::Kinesis::Stream
AWS::Kinesis::StreamConsumer
AWS::KinesisVideo::Stream
AWS::Lambda::Function
AWS::MachineLearning::MlModel
AWS::ManagedBlockchain::Network
AWS::ManagedBlockchain::Node
AWS::MedicalImaging::Datastore
AWS::NeptuneGraph::Graph
AWS::One::UKey
AWS::One::User
AWS::PaymentCryptography::Alias
AWS::PaymentCryptography::Key
AWS::PCAConnectorAD::Connector
AWS::PCAConnectorSCEP::Connector
AWS::QApps:QApp
AWS::QBusiness::Application
AWS::QBusiness::DataSource
AWS::QBusiness::Index
AWS::QBusiness::WebExperience
AWS::RDS::DBCluster
AWS::RUM::AppMonitor
AWS::S3::AccessPoint
AWS::S3::Object
AWS::S3Express::Object
AWS::S3ObjectLambda::AccessPoint
AWS::S3Outposts::Object
AWS::SageMaker::Endpoint
AWS::SageMaker::ExperimentTrialComponent
AWS::SageMaker::FeatureGroup
AWS::ServiceDiscovery::Namespace
AWS::ServiceDiscovery::Service
AWS::SCN::Instance
AWS::SNS::PlatformEndpoint
AWS::SNS::Topic
AWS::SQS::Queue
AWS::SSM::ManagedNode
AWS::SSMMessages::ControlChannel
AWS::StepFunctions::StateMachine
AWS::SWF::Domain
AWS::ThinClient::Device
AWS::ThinClient::Environment
AWS::Timestream::Database
AWS::Timestream::Table
AWS::VerifiedPermissions::PolicyStore
AWS::XRay::Trace
You can have only one resources.type
field per selector. To log events on more than one resource type, add another selector.
resources.ARN
- The resources.ARN
is an optional field for data events. You can use any operator with resources.ARN
, but if you use Equals
or NotEquals
, the value must exactly match the ARN of a valid resource of the type you've specified in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the StartsWith
operator, and include only the bucket ARN as the matching value.
For information about filtering data events on the resources.ARN
field, see Filtering data events by resources.ARN in the CloudTrail User Guide.
You can't use the resources.ARN
field to filter resource types that do not have ARNs.
vpcEndpointId
- This field is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with vpcEndpointId
.
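A hedged example combining several of the data event fields above (eventCategory, resources.type, eventName, and a StartsWith match on resources.ARN); the trail and bucket names are placeholders:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Log only PutObject and DeleteObject data events for objects in one bucket.
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    AdvancedEventSelectors=[
        {
            "Name": "Writes to one bucket",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::S3::Object"]},
                {"Field": "eventName", "Equals": ["PutObject", "DeleteObject"]},
                {"Field": "resources.ARN", "StartsWith": ["arn:aws:s3:::amzn-s3-demo-bucket1/"]},
            ],
        }
    ],
)
```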
An array of Amazon Resource Name (ARN) strings or partial ARN strings for the specified resource type.
To log data events for all objects in all S3 buckets in your Amazon Web Services account, specify the prefix as arn:aws:s3
.
This also enables logging of data event activity performed by any user or role in your Amazon Web Services account, even if that activity is performed on a bucket that belongs to another Amazon Web Services account.
To log data events for all objects in an S3 bucket, specify the bucket and an empty object prefix such as arn:aws:s3:::bucket-1/
. The trail logs data events for all objects in this S3 bucket.
To log data events for specific objects, specify the S3 bucket and object prefix such as arn:aws:s3:::bucket-1/example-images
. The trail logs data events for objects in this S3 bucket that match the prefix.
To log data events for all Lambda functions in your Amazon Web Services account, specify the prefix as arn:aws:lambda
.
This also enables logging of Invoke
activity performed by any user or role in your Amazon Web Services account, even if that activity is performed on a function that belongs to another Amazon Web Services account.
To log data events for a specific Lambda function, specify the function ARN.
Lambda function ARNs are exact. For example, if you specify a function ARN arn:aws:lambda:us-west-2:111111111111:function:helloworld, data events will only be logged for arn:aws:lambda:us-west-2:111111111111:function:helloworld. They will not be logged for arn:aws:lambda:us-west-2:111111111111:function:helloworld2.
To log data events for all DynamoDB tables in your Amazon Web Services account, specify the prefix as arn:aws:dynamodb
.
An array of Amazon Resource Name (ARN) strings or partial ARN strings for the specified resource type.
To log data events for all objects in all S3 buckets in your Amazon Web Services account, specify the prefix as arn:aws:s3
.
This also enables logging of data event activity performed by any user or role in your Amazon Web Services account, even if that activity is performed on a bucket that belongs to another Amazon Web Services account.
To log data events for all objects in an S3 bucket, specify the bucket and an empty object prefix such as arn:aws:s3:::amzn-s3-demo-bucket1/
. The trail logs data events for all objects in this S3 bucket.
To log data events for specific objects, specify the S3 bucket and object prefix such as arn:aws:s3:::amzn-s3-demo-bucket1/example-images
. The trail logs data events for objects in this S3 bucket that match the prefix.
To log data events for all Lambda functions in your Amazon Web Services account, specify the prefix as arn:aws:lambda
.
This also enables logging of Invoke
activity performed by any user or role in your Amazon Web Services account, even if that activity is performed on a function that belongs to another Amazon Web Services account.
To log data events for a specific Lambda function, specify the function ARN.
Lambda function ARNs are exact. For example, if you specify a function ARN arn:aws:lambda:us-west-2:111111111111:function:helloworld, data events will only be logged for arn:aws:lambda:us-west-2:111111111111:function:helloworld. They will not be logged for arn:aws:lambda:us-west-2:111111111111:function:helloworld2.
To log data events for all DynamoDB tables in your Amazon Web Services account, specify the prefix as arn:aws:dynamodb
.
Data events provide information about the resource operations performed on or within a resource itself. These are also known as data plane operations. You can specify up to 250 data resources for a trail.
Configure the DataResource
to specify the resource type and resource ARNs for which you want to log data events.
You can specify the following resource types in your event selectors for your trail:
AWS::DynamoDB::Table
AWS::Lambda::Function
AWS::S3::Object
The total number of allowed data resources is 250. This number can be distributed between 1 and 5 event selectors, but the total cannot exceed 250 across all selectors for the trail.
If you are using advanced event selectors, the maximum total number of values for all conditions, across all advanced event selectors for the trail, is 500.
The following example demonstrates how logging works when you configure logging of all data events for an S3 bucket named bucket-1
. In this example, the CloudTrail user specified an empty prefix, and the option to log both Read
and Write
data events.
A user uploads an image file to bucket-1
.
The PutObject
API operation is an Amazon S3 object-level API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified an S3 bucket with an empty prefix, events that occur on any object in that bucket are logged. The trail processes and logs the event.
A user uploads an object to an Amazon S3 bucket named arn:aws:s3:::bucket-2
.
The PutObject
API operation occurred for an object in an S3 bucket that the CloudTrail user didn't specify for the trail. The trail doesn’t log the event.
The following example demonstrates how logging works when you configure logging of Lambda data events for a Lambda function named MyLambdaFunction, but not for all Lambda functions.
A user runs a script that includes a call to the MyLambdaFunction function and the MyOtherLambdaFunction function.
The Invoke
API operation on MyLambdaFunction is a Lambda API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified logging data events for MyLambdaFunction, any invocations of that function are logged. The trail processes and logs the event.
The Invoke
API operation on MyOtherLambdaFunction is a Lambda API. Because the CloudTrail user did not specify logging data events for all Lambda functions, the Invoke
operation for MyOtherLambdaFunction does not match the function specified for the trail. The trail doesn’t log the event.
You can configure the DataResource
in an EventSelector
to log data events for the following three resource types:
AWS::DynamoDB::Table
AWS::Lambda::Function
AWS::S3::Object
To log data events for all other resource types including objects stored in directory buckets, you must use AdvancedEventSelectors. You must also use AdvancedEventSelectors
if you want to filter on the eventName
field.
Configure the DataResource
to specify the resource type and resource ARNs for which you want to log data events.
The total number of allowed data resources is 250. This number can be distributed between 1 and 5 event selectors, but the total cannot exceed 250 across all selectors for the trail.
The following example demonstrates how logging works when you configure logging of all data events for a general purpose bucket named amzn-s3-demo-bucket1
. In this example, the CloudTrail user specified an empty prefix, and the option to log both Read
and Write
data events.
A user uploads an image file to amzn-s3-demo-bucket1
.
The PutObject
API operation is an Amazon S3 object-level API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified an S3 bucket with an empty prefix, events that occur on any object in that bucket are logged. The trail processes and logs the event.
A user uploads an object to an Amazon S3 bucket named arn:aws:s3:::amzn-s3-demo-bucket1
.
The PutObject
API operation occurred for an object in an S3 bucket that the CloudTrail user didn't specify for the trail. The trail doesn’t log the event.
The following example demonstrates how logging works when you configure logging of Lambda data events for a Lambda function named MyLambdaFunction, but not for all Lambda functions.
A user runs a script that includes a call to the MyLambdaFunction function and the MyOtherLambdaFunction function.
The Invoke
API operation on MyLambdaFunction is a Lambda API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified logging data events for MyLambdaFunction, any invocations of that function are logged. The trail processes and logs the event.
The Invoke
API operation on MyOtherLambdaFunction is a Lambda API. Because the CloudTrail user did not specify logging data events for all Lambda functions, the Invoke
operation for MyOtherLambdaFunction does not match the function specified for the trail. The trail doesn’t log the event.
CloudTrail supports data event logging for Amazon S3 objects, Lambda functions, and Amazon DynamoDB tables with basic event selectors. You can specify up to 250 resources for an individual event selector, but the total number of data resources cannot exceed 250 across all event selectors in a trail. This limit does not apply if you configure resource logging for all data events.
For more information, see Data Events and Limits in CloudTrail in the CloudTrail User Guide.
" + "documentation":"CloudTrail supports data event logging for Amazon S3 objects in standard S3 buckets, Lambda functions, and Amazon DynamoDB tables with basic event selectors. You can specify up to 250 resources for an individual event selector, but the total number of data resources cannot exceed 250 across all event selectors in a trail. This limit does not apply if you configure resource logging for all data events.
For more information, see Data Events and Limits in CloudTrail in the CloudTrail User Guide.
To log data events for all other resource types including objects stored in directory buckets, you must use AdvancedEventSelectors. You must also use AdvancedEventSelectors
if you want to filter on the eventName
field.
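Since basic event selectors stop at the three resource types above, a hedged sketch of logging data events for a resource type outside that set (here directory-bucket objects via AWS::S3Express::Object, one of the types listed for advanced selectors); the trail and selector names are placeholders:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Directory buckets are not covered by basic event selectors, so use an
# advanced event selector keyed on the S3 Express resource type.
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    AdvancedEventSelectors=[
        {
            "Name": "Directory bucket data events",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::S3Express::Object"]},
            ],
        }
    ],
)
```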
Specifies the settings for your event selectors. You can configure up to five event selectors for a trail. You can use either EventSelectors
or AdvancedEventSelectors
in a PutEventSelectors
request, but not both. If you apply EventSelectors
to a trail, any existing AdvancedEventSelectors
are overwritten.
Specifies the settings for your event selectors. You can use event selectors to log management events and data events for the following resource types:
AWS::DynamoDB::Table
AWS::Lambda::Function
AWS::S3::Object
You can't use event selectors to log network activity events.
You can configure up to five event selectors for a trail. You can use either EventSelectors
or AdvancedEventSelectors
in a PutEventSelectors
request, but not both. If you apply EventSelectors
to a trail, any existing AdvancedEventSelectors
are overwritten.
Specifies the settings for advanced event selectors. You can add advanced event selectors, and conditions for your advanced event selectors, up to a maximum of 500 values for all conditions and selectors on a trail. You can use either AdvancedEventSelectors
or EventSelectors
, but not both. If you apply AdvancedEventSelectors
to a trail, any existing EventSelectors
are overwritten. For more information about advanced event selectors, see Logging data events in the CloudTrail User Guide.
Specifies the settings for advanced event selectors. You can use advanced event selectors to log management events, data events for all resource types, and network activity events.
You can add advanced event selectors, and conditions for your advanced event selectors, up to a maximum of 500 values for all conditions and selectors on a trail. You can use either AdvancedEventSelectors
or EventSelectors
, but not both. If you apply AdvancedEventSelectors
to a trail, any existing EventSelectors
are overwritten. For more information about advanced event selectors, see Logging data events and Logging network activity events in the CloudTrail User Guide.
This parameter is in preview and may not be available for your account.
Enables you to reference a security group across VPCs attached to a transit gateway. Use this option to simplify security group management and control of instance-to-instance traffic across VPCs that are connected by transit gateway. You can also use this option to migrate from VPC peering (which was the only option that supported security group referencing) to transit gateways (which now also support security group referencing). This option is disabled by default and there are no additional costs to use this feature.
If you don't enable or disable SecurityGroupReferencingSupport in the request, the attachment will inherit the security group referencing support setting on the transit gateway.
" + "documentation":"Enables you to reference a security group across VPCs attached to a transit gateway to simplify security group management.
This option is disabled by default.
If you don't enable or disable SecurityGroupReferencingSupport in the request, the attachment will inherit the security group referencing support setting on the transit gateway.
For more information about security group referencing, see Security group referencing in the Amazon Web Services Transit Gateways Guide.
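A hedged boto3 sketch of setting this option when creating a VPC attachment (illustrative only; the transit gateway, VPC, and subnet IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Opt the new attachment in to security group referencing. If the option is
# omitted, the attachment inherits the transit gateway's setting.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
    Options={"SecurityGroupReferencingSupport": "enable"},
)
```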
" }, "Ipv6Support":{ "shape":"Ipv6SupportValue", @@ -44135,7 +44135,7 @@ }, "SecurityGroupReferencingSupport":{ "shape":"SecurityGroupReferencingSupportValue", - "documentation":"This parameter is in preview and may not be available for your account.
Enables you to reference a security group across VPCs attached to a transit gateway. Use this option to simplify security group management and control of instance-to-instance traffic across VPCs that are connected by transit gateway. You can also use this option to migrate from VPC peering (which was the only option that supported security group referencing) to transit gateways (which now also support security group referencing). This option is disabled by default and there are no additional costs to use this feature.
" + "documentation":"Enables you to reference a security group across VPCs attached to a transit gateway to simplify security group management.
This option is disabled by default.
For more information about security group referencing, see Security group referencing in the Amazon Web Services Transit Gateways Guide.
" }, "AutoAcceptSharedAttachments":{ "shape":"AutoAcceptSharedAttachmentsValue", @@ -44270,7 +44270,7 @@ }, "SecurityGroupReferencingSupport":{ "shape":"SecurityGroupReferencingSupportValue", - "documentation":"This parameter is in preview and may not be available for your account.
Enables you to reference a security group across VPCs attached to a transit gateway. Use this option to simplify security group management and control of instance-to-instance traffic across VPCs that are connected by transit gateway. You can also use this option to migrate from VPC peering (which was the only option that supported security group referencing) to transit gateways (which now also support security group referencing). This option is disabled by default and there are no additional costs to use this feature.
" + "documentation":"Enables you to reference a security group across VPCs attached to a transit gateway to simplify security group management.
This option is disabled by default.
For more information about security group referencing, see Security group referencing in the Amazon Web Services Transit Gateways Guide.
" }, "Ipv6Support":{ "shape":"Ipv6SupportValue", @@ -57703,7 +57703,7 @@ }, "SecurityGroupReferencingSupport":{ "shape":"SecurityGroupReferencingSupportValue", - "documentation":"This parameter is in preview and may not be available for your account.
Enables you to reference a security group across VPCs attached to a transit gateway. Use this option to simplify security group management and control of instance-to-instance traffic across VPCs that are connected by transit gateway. You can also use this option to migrate from VPC peering (which was the only option that supported security group referencing) to transit gateways (which now also support security group referencing). This option is disabled by default and there are no additional costs to use this feature.
", + "documentation":"Enables you to reference a security group across VPCs attached to a transit gateway to simplify security group management.
This option is enabled by default.
For more information about security group referencing, see Security group referencing in the Amazon Web Services Transit Gateways Guide.
", "locationName":"securityGroupReferencingSupport" }, "MulticastSupport":{ @@ -58103,7 +58103,7 @@ }, "SecurityGroupReferencingSupport":{ "shape":"SecurityGroupReferencingSupportValue", - "documentation":"This parameter is in preview and may not be available for your account.
Enables you to reference a security group across VPCs attached to a transit gateway. Use this option to simplify security group management and control of instance-to-instance traffic across VPCs that are connected by transit gateway. You can also use this option to migrate from VPC peering (which was the only option that supported security group referencing) to transit gateways (which now also support security group referencing). This option is disabled by default and there are no additional costs to use this feature.
" + "documentation":"Enables you to reference a security group across VPCs attached to a transit gateway to simplify security group management.
This option is disabled by default.
For more information about security group referencing, see Security group referencing in the Amazon Web Services Transit Gateways Guide.
" }, "MulticastSupport":{ "shape":"MulticastSupportValue", @@ -58560,7 +58560,7 @@ }, "SecurityGroupReferencingSupport":{ "shape":"SecurityGroupReferencingSupportValue", - "documentation":"This parameter is in preview and may not be available for your account.
Enables you to reference a security group across VPCs attached to a transit gateway. Use this option to simplify security group management and control of instance-to-instance traffic across VPCs that are connected by transit gateway. You can also use this option to migrate from VPC peering (which was the only option that supported security group referencing) to transit gateways (which now also support security group referencing). This option is disabled by default and there are no additional costs to use this feature.
", + "documentation":"Enables you to reference a security group across VPCs attached to a transit gateway to simplify security group management.
This option is disabled by default.
For more information about security group referencing, see Security group referencing in the Amazon Web Services Transit Gateways Guide.
", "locationName":"securityGroupReferencingSupport" }, "Ipv6Support":{ diff --git a/botocore/data/endpoints.json b/botocore/data/endpoints.json index a55056557b..a7a3389535 100644 --- a/botocore/data/endpoints.json +++ b/botocore/data/endpoints.json @@ -20913,6 +20913,7 @@ "vpc-lattice" : { "endpoints" : { "af-south-1" : { }, + "ap-east-1" : { }, "ap-northeast-1" : { }, "ap-northeast-2" : { }, "ap-south-1" : { }, @@ -20925,6 +20926,7 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, "us-east-2" : { }, diff --git a/botocore/data/fsx/2018-03-01/service-2.json b/botocore/data/fsx/2018-03-01/service-2.json index 16d7dda7a4..5197dc7076 100644 --- a/botocore/data/fsx/2018-03-01/service-2.json +++ b/botocore/data/fsx/2018-03-01/service-2.json @@ -1324,7 +1324,7 @@ }, "Path":{ "shape":"ArchivePath", - "documentation":"Required if Enabled
diff --git a/botocore/data/fsx/2018-03-01/service-2.json index 16d7dda7a4..5197dc7076 100644 --- a/botocore/data/fsx/2018-03-01/service-2.json +++ b/botocore/data/fsx/2018-03-01/service-2.json @@ -1324,7 +1324,7 @@ }, "Path":{ "shape":"ArchivePath",
- "documentation":"Required if Enabled is set to true. Specifies the location of the report on the file system's linked S3 data repository. An absolute path that defines where the completion report will be stored in the destination location. The Path you provide must be located within the file system’s ExportPath. An example Path value is \"s3://myBucket/myExportPath/optionalPrefix\". The report provides the following information for each file in the report: FilePath, FileStatus, and ErrorCode."
+ "documentation":"Required if Enabled is set to true. Specifies the location of the report on the file system's linked S3 data repository. An absolute path that defines where the completion report will be stored in the destination location. The Path you provide must be located within the file system’s ExportPath. An example Path value is \"s3://amzn-s3-demo-bucket/myExportPath/optionalPrefix\". The report provides the following information for each file in the report: FilePath, FileStatus, and ErrorCode.
- "documentation":"The path to the Amazon S3 data repository that will be linked to the file system. The path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/. This path specifies where in the S3 data repository files will be imported from or exported to."
+ "documentation":"The path to the Amazon S3 data repository that will be linked to the file system. The path can be an S3 bucket or prefix in the format s3://bucket-name/prefix/ (where prefix is optional). This path specifies where in the S3 data repository files will be imported from or exported to.
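This reads like the S3 path accepted when linking a data repository to an FSx for Lustre file system; assuming it is the DataRepositoryPath parameter of CreateDataRepositoryAssociation, a sketch with placeholder names:

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Link an S3 prefix to a directory on the Lustre file system
# (file system ID, file system path, and bucket are placeholders).
fsx.create_data_repository_association(
    FileSystemId="fs-0123456789abcdef0",
    FileSystemPath="/ns1",
    DataRepositoryPath="s3://amzn-s3-demo-bucket/prefix/",
    BatchImportMetaDataOnCreate=True,
)
```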
- "documentation":"A list of paths for the data repository task to use when the task is processed. If a path that you provide isn't valid, the task fails. If you don't provide paths, the default behavior is to export all files to S3 (for export tasks), import all files from S3 (for import tasks), or release all exported files that meet the last accessed time criteria (for release tasks).
For export tasks, the list contains paths on the FSx for Lustre file system from which the files are exported to the Amazon S3 bucket. The default path is the file system root directory. The paths you provide need to be relative to the mount point of the file system. If the mount point is /mnt/fsx and /mnt/fsx/path1 is a directory or file on the file system you want to export, then the path to provide is path1.
For import tasks, the list contains paths in the Amazon S3 bucket from which POSIX metadata changes are imported to the FSx for Lustre file system. The path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix (where myPrefix is optional).
For release tasks, the list contains directory or file paths on the FSx for Lustre file system from which to release exported files. If a directory is specified, files within the directory are released. If a file path is specified, only that file is released. To release all exported files in the file system, specify a forward slash (/) as the path. A file must also meet the last accessed time criteria specified in for the file to be released."
+ "documentation":"A list of paths for the data repository task to use when the task is processed. If a path that you provide isn't valid, the task fails. If you don't provide paths, the default behavior is to export all files to S3 (for export tasks), import all files from S3 (for import tasks), or release all exported files that meet the last accessed time criteria (for release tasks).
For export tasks, the list contains paths on the FSx for Lustre file system from which the files are exported to the Amazon S3 bucket. The default path is the file system root directory. The paths you provide need to be relative to the mount point of the file system. If the mount point is /mnt/fsx and /mnt/fsx/path1 is a directory or file on the file system you want to export, then the path to provide is path1.
For import tasks, the list contains paths in the Amazon S3 bucket from which POSIX metadata changes are imported to the FSx for Lustre file system. The path can be an S3 bucket or prefix in the format s3://bucket-name/prefix (where prefix is optional).
For release tasks, the list contains directory or file paths on the FSx for Lustre file system from which to release exported files. If a directory is specified, files within the directory are released. If a file path is specified, only that file is released. To release all exported files in the file system, specify a forward slash (/) as the path. A file must also meet the last accessed time criteria specified in for the file to be released.
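To make the per-task-type Paths semantics concrete, a brief sketch (IDs, bucket, and prefix are placeholders) of an import task scoped to one S3 prefix and a release task that targets the whole file system:

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Import POSIX metadata changes only from one S3 prefix.
fsx.create_data_repository_task(
    Type="IMPORT_METADATA_FROM_REPOSITORY",
    FileSystemId="fs-0123456789abcdef0",
    Paths=["s3://amzn-s3-demo-bucket/prefix"],
    Report={"Enabled": False},
)

# Release all exported files that also meet the last-accessed criteria.
fsx.create_data_repository_task(
    Type="RELEASE_DATA_FROM_FILESYSTEM",
    FileSystemId="fs-0123456789abcdef0",
    Paths=["/"],
    Report={"Enabled": False},
    ReleaseConfiguration={"DurationSinceLastAccess": {"Unit": "DAYS", "Value": 30}},
)
```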
- "documentation":"The path to the data repository that will be linked to the cache or file system.
For Amazon File Cache, the path can be an NFS data repository that will be linked to the cache. The path can be in one of two formats:
If you are not using the DataRepositorySubdirectories parameter, the path is to an NFS Export directory (or one of its subdirectories) in the format nfs://nfs-domain-name/exportpath. You can therefore link a single NFS Export to a single data repository association.
If you are using the DataRepositorySubdirectories parameter, the path is the domain name of the NFS file system in the format nfs://filer-domain-name, which indicates the root of the subdirectories specified with the DataRepositorySubdirectories parameter.
For Amazon File Cache, the path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/.
For Amazon FSx for Lustre, the path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/."
+ "documentation":"The path to the data repository that will be linked to the cache or file system.
For Amazon File Cache, the path can be an NFS data repository that will be linked to the cache. The path can be in one of two formats:
If you are not using the DataRepositorySubdirectories parameter, the path is to an NFS Export directory (or one of its subdirectories) in the format nfs://nfs-domain-name/exportpath. You can therefore link a single NFS Export to a single data repository association.
If you are using the DataRepositorySubdirectories parameter, the path is the domain name of the NFS file system in the format nfs://filer-domain-name, which indicates the root of the subdirectories specified with the DataRepositorySubdirectories parameter.
For Amazon File Cache, the path can be an S3 bucket or prefix in the format s3://bucket-name/prefix/ (where prefix is optional).
For Amazon FSx for Lustre, the path can be an S3 bucket or prefix in the format s3://bucket-name/prefix/ (where prefix is optional).
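Assuming this is the DataRepositoryPath field returned on a data repository association description, it can be read back as follows (the association ID is a placeholder):

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

resp = fsx.describe_data_repository_associations(
    AssociationIds=["dra-0123456789abcdef0"]  # placeholder ID
)
for assoc in resp["Associations"]:
    # DataRepositoryPath is the S3 or NFS path documented above.
    print(assoc["AssociationId"], assoc.get("DataRepositoryPath"))
```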
- "documentation":"The path to the S3 or NFS data repository that links to the cache. You must provide one of the following paths:
The path can be an NFS data repository that links to the cache. The path can be in one of two formats:
If you are not using the DataRepositorySubdirectories parameter, the path is to an NFS Export directory (or one of its subdirectories) in the format nfs://nfs-domain-name/exportpath. You can therefore link a single NFS Export to a single data repository association.
If you are using the DataRepositorySubdirectories parameter, the path is the domain name of the NFS file system in the format nfs://filer-domain-name, which indicates the root of the subdirectories specified with the DataRepositorySubdirectories parameter.
The path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/."
+ "documentation":"The path to the S3 or NFS data repository that links to the cache. You must provide one of the following paths:
The path can be an NFS data repository that links to the cache. The path can be in one of two formats:
If you are not using the DataRepositorySubdirectories parameter, the path is to an NFS Export directory (or one of its subdirectories) in the format nfs://nfs-domain-name/exportpath. You can therefore link a single NFS Export to a single data repository association.
If you are using the DataRepositorySubdirectories parameter, the path is the domain name of the NFS file system in the format nfs://filer-domain-name, which indicates the root of the subdirectories specified with the DataRepositorySubdirectories parameter.
The path can be an S3 bucket or prefix in the format s3://bucket-name/prefix/ (where prefix is optional).
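For the cache-side variant, a minimal create_file_cache sketch showing the nfs://filer-domain-name form together with DataRepositorySubdirectories; every name below is a placeholder and the sizing values are only examples:

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# The domain-name form of DataRepositoryPath is the one used together
# with DataRepositorySubdirectories (all values are placeholders).
fsx.create_file_cache(
    FileCacheType="LUSTRE",
    FileCacheTypeVersion="2.12",
    StorageCapacity=1200,
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "PerUnitStorageThroughput": 1000,
        "DeploymentType": "CACHE_1",
        "MetadataConfiguration": {"StorageCapacity": 2400},
    },
    DataRepositoryAssociations=[
        {
            "FileCachePath": "/ns1",
            "DataRepositoryPath": "nfs://filer-domain-name",
            "DataRepositorySubdirectories": ["subdir1", "subdir2"],
            "NFS": {"Version": "NFS3"},
        }
    ],
)
```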