Automated SDK generation @ aws-cloudformation-user-guide 19dc52cd3f2007d6d268b65b739ffb5ebf8c1e76 (#1960)

*Automated PR*
pulumi-bot authored Jan 3, 2025
1 parent 19a27fe commit e03ce48
Showing 56 changed files with 242 additions and 163 deletions.
2 changes: 1 addition & 1 deletion .docs.version
@@ -1 +1 @@
-209eb2812860ebcaae26946768d8bb066d64a1a8
+a45e7d1fb3467a9cff5bfadc703c5b974b79dabf
4 changes: 2 additions & 2 deletions aws-cloudformation-schema/aws-ecs-taskdefinition.json
@@ -446,7 +446,7 @@
"type" : "string"
}
},
"description" : "The configuration options to send to the log driver.\n The options you can specify depend on the log driver. Some of the options you can specify when you use the ``awslogs`` log driver to route logs to Amazon CloudWatch include the following:\n + awslogs-create-group Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false. Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group. + awslogs-region Required: Yes Specify the Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. + awslogs-group Required: Yes Make sure to specify a log group that the awslogs log driver sends its log streams to. + awslogs-stream-prefix Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type. Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id. If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console. + awslogs-datetime-format Required: No This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. + awslogs-multiline-pattern Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if awslogs-datetime-format is also configured. 
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. + mode Required: No Valid values: non-blocking | blocking This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted. If you use the blocking mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. + max-buffer-size Required: No Default value: 1m When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. \n To route logs using the ``splunk`` log router, you need to specify a ``splunk-token`` and a ``splunk-url``.\n When you use the ``awsfirelens`` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the ``log-driver-buffer-limit`` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n Other options you can specify when using ``awsfirelens`` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with ``region`` and a name for the log stream with ``delivery_stream``.\n When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with ``region`` and a data stream name with ``stream``.\n When you export logs to Amazon OpenSearch Service, you can specify options like ``Name``, ``Host`` (OpenSearch Service endpoint without protocol), ``Port``, ``Index``, ``Type``, ``Aws_auth``, ``Aws_region``, ``Suppress_Type_Name``, and ``tls``.\n When you export logs to Amazon S3, you can specify the bucket using the ``bucket`` option. You can also specify ``region``, ``total_file_size``, ``upload_timeout``, and ``use_put_object`` as options.\n This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: ``sudo docker version --format '{{.Server.APIVersion}}'``",
"description" : "The configuration options to send to the log driver.\n The options you can specify depend on the log driver. Some of the options you can specify when you use the ``awslogs`` log driver to route logs to Amazon CloudWatch include the following:\n + awslogs-create-group Required: No Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false. Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group. + awslogs-region Required: Yes Specify the Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option. + awslogs-group Required: Yes Make sure to specify a log group that the awslogs log driver sends its log streams to. + awslogs-stream-prefix Required: Yes, when using the Fargate launch type.Optional for the EC2 launch type, required for the Fargate launch type. Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id. If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console. + awslogs-datetime-format Required: No This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. + awslogs-multiline-pattern Required: No This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if awslogs-datetime-format is also configured. 
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance. + mode Required: No Valid values: non-blocking | blocking This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted. If you use the blocking mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure. If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. + max-buffer-size Required: No Default value: 1m When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. \n To route logs using the ``splunk`` log router, you need to specify a ``splunk-token`` and a ``splunk-url``.\n When you use the ``awsfirelens`` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the ``log-driver-buffer-limit`` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n Other options you can specify when using ``awsfirelens`` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with ``region`` and a name for the log stream with ``delivery_stream``.\n When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with ``region`` and a data stream name with ``stream``.\n When you export logs to Amazon OpenSearch Service, you can specify options like ``Name``, ``Host`` (OpenSearch Service endpoint without protocol), ``Port``, ``Index``, ``Type``, ``Aws_auth``, ``Aws_region``, ``Suppress_Type_Name``, and ``tls``. For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/).\n When you export logs to Amazon S3, you can specify the bucket using the ``bucket`` option. You can also specify ``region``, ``total_file_size``, ``upload_timeout``, and ``use_put_object`` as options.\n This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: ``sudo docker version --format '{{.Server.APIVersion}}'``",
"additionalProperties" : false,
"type" : "object"
},
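The options listed in the description above all land in a container definition's LogConfiguration.Options map. A minimal sketch in CloudFormation YAML, assuming the awslogs driver with non-blocking delivery (the resource, family, log-group, and prefix names are hypothetical placeholders):

    # Sketch: where the awslogs options described above are set.
    MyTaskDefinition:
      Type: AWS::ECS::TaskDefinition
      Properties:
        Family: my-task
        ContainerDefinitions:
          - Name: app
            Image: public.ecr.aws/docker/library/nginx:latest
            LogConfiguration:
              LogDriver: awslogs
              Options:
                awslogs-group: /ecs/my-app        # must already exist unless awslogs-create-group is used
                awslogs-region: us-east-1
                awslogs-stream-prefix: my-service # stream name becomes my-service/app/<task-id>
                mode: non-blocking                # buffer logs instead of blocking stdout/stderr writes
                max-buffer-size: 4m               # intermediate buffer for non-blocking mode (default 1m)

With mode: non-blocking, log lines that overflow the buffer are dropped rather than stalling the application; blocking, the alternative, trades availability for completeness, as the description explains.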
@@ -1032,7 +1032,7 @@
"type" : "string"
},
"EnableFaultInjection" : {
"description" : "",
"description" : "Enables fault injection and allows for fault injection requests to be accepted from the task's containers. The default value is ``false``.",
"type" : "boolean"
},
"ExecutionRoleArn" : {
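The second hunk fills in the previously empty EnableFaultInjection description. A one-line sketch of opting in, again with hypothetical names:

    # Sketch: enabling fault injection on a task definition (default is false).
    FaultTestTaskDefinition:
      Type: AWS::ECS::TaskDefinition
      Properties:
        Family: fault-test-task          # hypothetical
        EnableFaultInjection: true       # accept fault injection requests from the task's containers
        ContainerDefinitions:
          - Name: app
            Image: public.ecr.aws/docker/library/nginx:latest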
3 changes: 2 additions & 1 deletion aws-cloudformation-schema/aws-kafkaconnect-connector.json
@@ -7,7 +7,8 @@
"tagOnCreate" : true,
"tagUpdatable" : true,
"cloudFormationSystemTags" : true,
"tagProperty" : "/properties/Tags"
"tagProperty" : "/properties/Tags",
"permissions" : [ "kafkaconnect:ListTagsForResource", "kafkaconnect:UntagResource", "kafkaconnect:TagResource", "firehose:TagDeliveryStream" ]
},
"sourceUrl" : "https://github.com/aws-cloudformation/aws-cloudformation-resource-providers-kafkaconnect.git",
"properties" : {
3 changes: 2 additions & 1 deletion aws-cloudformation-schema/aws-kafkaconnect-customplugin.json
@@ -125,7 +125,8 @@
"tagOnCreate" : true,
"tagUpdatable" : true,
"cloudFormationSystemTags" : true,
"tagProperty" : "/properties/Tags"
"tagProperty" : "/properties/Tags",
"permissions" : [ "kafkaconnect:ListTagsForResource", "kafkaconnect:UntagResource", "kafkaconnect:TagResource" ]
},
"handlers" : {
"create" : {
@@ -67,7 +67,8 @@
"tagOnCreate" : true,
"tagUpdatable" : true,
"cloudFormationSystemTags" : true,
"tagProperty" : "/properties/Tags"
"tagProperty" : "/properties/Tags",
"permissions" : [ "kafkaconnect:ListTagsForResource", "kafkaconnect:UntagResource", "kafkaconnect:TagResource" ]
},
"handlers" : {
"create" : {
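All three Kafka Connect schema changes extend the same tagging block with an explicit permissions list. Rendered in YAML for readability (the schema files themselves are JSON), the resulting block looks roughly like this; only the connector schema carries firehose:TagDeliveryStream, presumably for connectors that deliver logs to a Firehose stream:

    # Resulting tagging block (connector schema shown; the custom-plugin and the
    # unnamed third schema carry the same block minus the firehose permission).
    tagging:
      tagOnCreate: true
      tagUpdatable: true
      cloudFormationSystemTags: true
      tagProperty: /properties/Tags
      permissions:
        - kafkaconnect:ListTagsForResource
        - kafkaconnect:UntagResource
        - kafkaconnect:TagResource
        - firehose:TagDeliveryStream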
2 changes: 1 addition & 1 deletion aws-cloudformation-schema/aws-logs-metricfilter.json
@@ -130,7 +130,7 @@
"maxLength" : 512
},
"ApplyOnTransformedLogs" : {
"description" : "",
"description" : "This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see [PutTransformer](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutTransformer.html).\n If this value is ``true``, the metric filter is applied on the transformed version of the log events instead of the original ingested log events.",
"type" : "boolean"
},
"FilterName" : {
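For context, ApplyOnTransformedLogs sits alongside the filter pattern on the resource. A hedged sketch in CloudFormation YAML (log group, metric, and namespace names are hypothetical; the log group is assumed to already have an active transformer):

    # Sketch: metric filter evaluated against transformer output, not raw events.
    ErrorCountFilter:
      Type: AWS::Logs::MetricFilter
      Properties:
        LogGroupName: /app/transformed       # hypothetical; assumed to have a transformer
        FilterPattern: '{ $.level = "ERROR" }'
        ApplyOnTransformedLogs: true         # match transformed events instead of ingested ones
        MetricTransformations:
          - MetricName: ErrorCount
            MetricNamespace: MyApp           # hypothetical
            MetricValue: "1"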
2 changes: 1 addition & 1 deletion aws-cloudformation-schema/aws-logs-subscriptionfilter.json
@@ -37,7 +37,7 @@
"enum" : [ "Random", "ByLogStream" ]
},
"ApplyOnTransformedLogs" : {
"description" : "",
"description" : "This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see [PutTransformer](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutTransformer.html).\n If this value is ``true``, the subscription filter is applied on the transformed version of the log events instead of the original ingested log events.",
"type" : "boolean"
}
},
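The subscription filter gains the same switch. A matching sketch (destination ARN and names are hypothetical placeholders):

    # Sketch: subscription filter forwarding transformed events to a Lambda function.
    TransformedSubscription:
      Type: AWS::Logs::SubscriptionFilter
      Properties:
        LogGroupName: /app/transformed       # hypothetical; assumed to have a transformer
        FilterPattern: ""                    # empty pattern forwards every event
        DestinationArn: arn:aws:lambda:us-east-1:123456789012:function:process-logs
        ApplyOnTransformedLogs: true         # filter the transformed version of the events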
5 changes: 4 additions & 1 deletion aws-cloudformation-schema/aws-wisdom-aiagent.json
@@ -298,10 +298,13 @@
},
"Type" : {
"$ref" : "#/definitions/AIAgentType"
+},
+"ModifiedTimeSeconds" : {
+"type" : "number"
}
},
"required" : [ "AssistantId", "Configuration", "Type" ],
"readOnlyProperties" : [ "/properties/AIAgentArn", "/properties/AIAgentId", "/properties/AssistantArn" ],
"readOnlyProperties" : [ "/properties/AIAgentArn", "/properties/AIAgentId", "/properties/AssistantArn", "/properties/ModifiedTimeSeconds" ],
"createOnlyProperties" : [ "/properties/AssistantId", "/properties/Name", "/properties/Tags", "/properties/Type" ],
"primaryIdentifier" : [ "/properties/AIAgentId", "/properties/AssistantId" ],
"additionalIdentifiers" : [ [ "/properties/AIAgentArn", "/properties/AssistantArn" ] ],
5 changes: 4 additions & 1 deletion aws-cloudformation-schema/aws-wisdom-aiprompt.json
@@ -101,10 +101,13 @@
},
"Type" : {
"$ref" : "#/definitions/AIPromptType"
+},
+"ModifiedTimeSeconds" : {
+"type" : "number"
}
},
"required" : [ "ApiFormat", "ModelId", "TemplateConfiguration", "TemplateType", "Type" ],
"readOnlyProperties" : [ "/properties/AIPromptArn", "/properties/AIPromptId", "/properties/AssistantArn" ],
"readOnlyProperties" : [ "/properties/AIPromptArn", "/properties/AIPromptId", "/properties/AssistantArn", "/properties/ModifiedTimeSeconds" ],
"createOnlyProperties" : [ "/properties/ApiFormat", "/properties/AssistantId", "/properties/ModelId", "/properties/Name", "/properties/Tags", "/properties/TemplateType", "/properties/Type" ],
"primaryIdentifier" : [ "/properties/AIPromptId", "/properties/AssistantId" ],
"additionalIdentifiers" : [ [ "/properties/AIPromptArn", "/properties/AssistantArn" ] ],
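Both Wisdom schemas add ModifiedTimeSeconds as a read-only number. Because it is listed under readOnlyProperties, it cannot be set in a template; assuming it is surfaced as a resource attribute (an assumption, though read-only schema properties usually are), it would be read roughly like this, with a hypothetical logical ID:

    # Sketch, assuming ModifiedTimeSeconds is exposed to Fn::GetAtt.
    Outputs:
      AgentLastModified:
        Description: Modification time of the AI agent in epoch seconds (assumed semantics)
        Value: !GetAtt MyAIAgent.ModifiedTimeSeconds   # MyAIAgent: a hypothetical AWS::Wisdom::AIAgent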
2 changes: 1 addition & 1 deletion meta/.botocore.version
@@ -1 +1 @@
-1.35.90
+1.35.91