diff --git a/VERSION b/VERSION
index 660f3a232aa..94dc9891dd4 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-1.11.442
\ No newline at end of file
+1.11.443
\ No newline at end of file
diff --git a/generated/src/aws-cpp-sdk-batch/include/aws/batch/model/LaunchTemplateSpecification.h b/generated/src/aws-cpp-sdk-batch/include/aws/batch/model/LaunchTemplateSpecification.h
index 65959c119b7..36a6ac0784f 100644
--- a/generated/src/aws-cpp-sdk-batch/include/aws/batch/model/LaunchTemplateSpecification.h
+++ b/generated/src/aws-cpp-sdk-batch/include/aws/batch/model/LaunchTemplateSpecification.h
@@ -6,6 +6,8 @@
 #pragma once
 #include

   /**
+   * <p>A launch template to use in place of the default launch template. You must
+   * specify either the launch template ID or launch template name in the request,
+   * but not both.</p> <p>You can specify up to ten (10) launch template overrides
+   * that are associated to unique instance types or families for each compute
+   * environment.</p> <note> <p>To unset all override templates for a compute
+   * environment, you can pass an empty array to the
+   * <code>UpdateComputeEnvironment.overrides</code> parameter, or not include the
+   * <code>overrides</code> parameter when submitting the
+   * <code>UpdateComputeEnvironment</code> API operation.</p> </note>
+   */
+
+  /**
+   * <p>An object that represents a launch template to use in place of the default
+   * launch template. You must specify either the launch template ID or launch
+   * template name in the request, but not both.</p> <p>If security groups are
+   * specified using both the <code>securityGroupIds</code> parameter of
+   * <code>CreateComputeEnvironment</code> and the launch template, the values in
+   * the <code>securityGroupIds</code> parameter of
+   * <code>CreateComputeEnvironment</code> will be used.</p> <p>You can define up
+   * to ten (10) overrides for each compute environment.</p> <note> <p>This object
+   * isn't applicable to jobs that are running on Fargate resources.</p> </note>
+   * <p>To unset all override templates for a compute environment, you can pass an
+   * empty array to the <code>UpdateComputeEnvironment.overrides</code> parameter,
+   * or not include the <code>overrides</code> parameter when submitting the
+   * <code>UpdateComputeEnvironment</code> API operation.</p>
+   */

   /**
-   * <p>The version number of the launch template, <code>$Latest</code>, or
-   * <code>$Default</code>.</p> <p>If the value is <code>$Latest</code>, the latest
-   * version of the launch template is used. If the value is <code>$Default</code>,
-   * the default version of the launch template is used.</p>
+   * <p>The version number of the launch template, <code>$Default</code>, or
+   * <code>$Latest</code>.</p> <p>If the value is <code>$Default</code>, the
+   * default version of the launch template is used. If the value is
+   * <code>$Latest</code>, the latest version of the launch template is used.</p>
    * <p>If the AMI ID that's used in a compute environment is from the launch
    * template, the AMI isn't changed when the compute environment is updated. It's
    * only changed if the <code>updateToLatestImageVersion</code> parameter for the
    * compute environment is set to <code>true</code>. During an infrastructure
-   * update, if either <code>$Latest</code> or <code>$Default</code> is specified,
+   * update, if either <code>$Default</code> or <code>$Latest</code> is specified,
    * Batch re-evaluates the launch template version, and it might use a different
    * version of the launch template. This is the case even if the launch template
    * isn't specified in the update. When updating a compute environment, changing the launch template
@@ -90,7 +92,7 @@ namespace Model
    * requires an infrastructure update of the compute environment. For more
    * information, see Updating compute environments in the <i>Batch User
    * Guide</i>.</p>
-   * <p>Default: <code>$Default</code>.</p>
+   * <p>Default: <code>$Default</code></p> <p>Latest: <code>$Latest</code></p>

+   * <p>To unset all override templates for a compute environment, you can pass an
+   * empty array to the <code>UpdateComputeEnvironment.overrides</code> parameter,
+   * or not include the <code>overrides</code> parameter when submitting the
+   * <code>UpdateComputeEnvironment</code> API operation.</p>
+   * <p>If security groups are specified using both the
+   * <code>securityGroupIds</code> parameter of
+   * <code>CreateComputeEnvironment</code> and the launch template, the values in
+   * the <code>securityGroupIds</code> parameter of
+   * <code>CreateComputeEnvironment</code> will be used.</p>
+   * <p><h3>See Also:</h3> AWS API Reference</p>

   /**
+   * <p>The ID of the launch template.</p> <note> <p>If you specify the
+   * <code>launchTemplateId</code> you can't specify the
+   * <code>launchTemplateName</code> as well.</p> </note>
+   */

   /**
+   * <p>The name of the launch template.</p> <note> <p>If you specify the
+   * <code>launchTemplateName</code> you can't specify the
+   * <code>launchTemplateId</code> as well.</p> </note>
+   */

   /**
+   * <p>The version number of the launch template, <code>$Default</code>, or
+   * <code>$Latest</code>.</p> <p>If the value is <code>$Default</code>, the
+   * default version of the launch template is used. If the value is
+   * <code>$Latest</code>, the latest version of the launch template is used.</p>
+   * <p>If the AMI ID that's used in a compute environment is from the launch
+   * template, the AMI isn't changed when the compute environment is updated. It's
+   * only changed if the <code>updateToLatestImageVersion</code> parameter for the
+   * compute environment is set to <code>true</code>. During an infrastructure
+   * update, if either <code>$Default</code> or <code>$Latest</code> is specified,
+   * Batch re-evaluates the launch template version, and it might use a different
+   * version of the launch template. This is the case even if the launch template
+   * isn't specified in the update. When updating a compute environment, changing
+   * the launch template requires an infrastructure update of the compute
+   * environment. For more information, see Updating compute environments in the
+   * <i>Batch User Guide</i>.</p> <p>Default: <code>$Default</code></p>
+   * <p>Latest: <code>$Latest</code></p>
+   */
+   * <p>The instance type or family that this override launch template should be
+   * applied to.</p> <p>This parameter is required when defining a launch template
+   * override.</p> <p>Information included in this parameter must meet the
+   * following requirements:</p> <ul> <li> <p>Must be a valid Amazon EC2 instance
+   * type or family.</p> </li> <li> <p> <code>optimal</code> isn't allowed.</p>
+   * </li> <li> <p> <code>targetInstanceTypes</code> can target only instance
+   * types and families that are included within the
+   * <code>ComputeResource.instanceTypes</code> set.
+   * <code>targetInstanceTypes</code> doesn't need to include all of the instances
+   * from the <code>instanceType</code> set, but at least a subset. For example, if
+   * <code>ComputeResource.instanceTypes</code> includes <code>[m5, g5]</code>,
+   * <code>targetInstanceTypes</code> can include <code>[m5.2xlarge]</code> and
+   * <code>[m5.large]</code> but not <code>[c5.large]</code>.</p> </li> <li> <p>
+   * <code>targetInstanceTypes</code> included within the same launch template
+   * override or across launch template overrides can't overlap for the same
+   * compute environment. For example, you can't define one launch template
+   * override to target an instance family and another define an instance type
+   * within this same family.</p> </li> </ul>
   * <p>Contains information about why a flow completed.</p> <p>This data type is
-   * used in the following API operations:</p>
   * <p>Contains information about an input into the prompt flow and where to send
-   * it.</p> <p>This data type is used in the following API operations:</p>
   * <p>Contains information about an input into the flow.</p> <p>This data type is
-   * used in the following API operations:</p>
   * <p>Contains information about the content in an output from prompt flow
-   * invocation.</p> <p>This data type is used in the following API operations:</p>
-   * <p>Contains information about an output from prompt flow invocation.</p>
-   * <p>This data type is used in the following API operations:</p>
   * <p>The output of the flow.</p> <p>This data type is used in the following API
-   * operations:</p>
+   * <p>Contains information about a trace, which tracks an input or output for a
+   * node in the flow.</p>
+   */
+    inline const FlowTraceEvent& GetFlowTraceEvent() const{ return m_flowTraceEvent; }
+    inline bool FlowTraceEventHasBeenSet() const { return m_flowTraceEventHasBeenSet; }
+    inline void SetFlowTraceEvent(const FlowTraceEvent& value) { m_flowTraceEventHasBeenSet = true; m_flowTraceEvent = value; }
+    inline void SetFlowTraceEvent(FlowTraceEvent&& value) { m_flowTraceEventHasBeenSet = true; m_flowTraceEvent = std::move(value); }
+    inline FlowResponseStream& WithFlowTraceEvent(const FlowTraceEvent& value) { SetFlowTraceEvent(value); return *this;}
+    inline FlowResponseStream& WithFlowTraceEvent(FlowTraceEvent&& value) { SetFlowTraceEvent(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
    /**
     * <p>An internal server error occurred. Retry your request.</p>
@@ -208,6 +219,9 @@ namespace Model
     FlowOutputEvent m_flowOutputEvent;
     bool m_flowOutputEventHasBeenSet = false;

+    FlowTraceEvent m_flowTraceEvent;
+    bool m_flowTraceEventHasBeenSet = false;
+
     InternalServerException m_internalServerException;
     bool m_internalServerExceptionHasBeenSet = false;
diff --git a/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTrace.h b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTrace.h
new file mode 100644
index 00000000000..2fdaa762ad4
--- /dev/null
+++ b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTrace.h
@@ -0,0 +1,95 @@
+/**
+ * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ * SPDX-License-Identifier: Apache-2.0.
+ */
+
+#pragma once
+#include
+
+  /**
+   * <p>Contains information about an input or output for a node in the flow. For
+   * more information, see Track each step in your prompt flow by viewing its
+   * trace in Amazon Bedrock.</p>
+   * <p>Contains information about an output from a condition node.</p>
+ */ + inline const FlowTraceConditionNodeResultEvent& GetConditionNodeResultTrace() const{ return m_conditionNodeResultTrace; } + inline bool ConditionNodeResultTraceHasBeenSet() const { return m_conditionNodeResultTraceHasBeenSet; } + inline void SetConditionNodeResultTrace(const FlowTraceConditionNodeResultEvent& value) { m_conditionNodeResultTraceHasBeenSet = true; m_conditionNodeResultTrace = value; } + inline void SetConditionNodeResultTrace(FlowTraceConditionNodeResultEvent&& value) { m_conditionNodeResultTraceHasBeenSet = true; m_conditionNodeResultTrace = std::move(value); } + inline FlowTrace& WithConditionNodeResultTrace(const FlowTraceConditionNodeResultEvent& value) { SetConditionNodeResultTrace(value); return *this;} + inline FlowTrace& WithConditionNodeResultTrace(FlowTraceConditionNodeResultEvent&& value) { SetConditionNodeResultTrace(std::move(value)); return *this;} + ///@} + + ///@{ + /** + *Contains information about the input into a node.
+ */ + inline const FlowTraceNodeInputEvent& GetNodeInputTrace() const{ return m_nodeInputTrace; } + inline bool NodeInputTraceHasBeenSet() const { return m_nodeInputTraceHasBeenSet; } + inline void SetNodeInputTrace(const FlowTraceNodeInputEvent& value) { m_nodeInputTraceHasBeenSet = true; m_nodeInputTrace = value; } + inline void SetNodeInputTrace(FlowTraceNodeInputEvent&& value) { m_nodeInputTraceHasBeenSet = true; m_nodeInputTrace = std::move(value); } + inline FlowTrace& WithNodeInputTrace(const FlowTraceNodeInputEvent& value) { SetNodeInputTrace(value); return *this;} + inline FlowTrace& WithNodeInputTrace(FlowTraceNodeInputEvent&& value) { SetNodeInputTrace(std::move(value)); return *this;} + ///@} + + ///@{ + /** + *Contains information about the output from a node.
+ */ + inline const FlowTraceNodeOutputEvent& GetNodeOutputTrace() const{ return m_nodeOutputTrace; } + inline bool NodeOutputTraceHasBeenSet() const { return m_nodeOutputTraceHasBeenSet; } + inline void SetNodeOutputTrace(const FlowTraceNodeOutputEvent& value) { m_nodeOutputTraceHasBeenSet = true; m_nodeOutputTrace = value; } + inline void SetNodeOutputTrace(FlowTraceNodeOutputEvent&& value) { m_nodeOutputTraceHasBeenSet = true; m_nodeOutputTrace = std::move(value); } + inline FlowTrace& WithNodeOutputTrace(const FlowTraceNodeOutputEvent& value) { SetNodeOutputTrace(value); return *this;} + inline FlowTrace& WithNodeOutputTrace(FlowTraceNodeOutputEvent&& value) { SetNodeOutputTrace(std::move(value)); return *this;} + ///@} + private: + + FlowTraceConditionNodeResultEvent m_conditionNodeResultTrace; + bool m_conditionNodeResultTraceHasBeenSet = false; + + FlowTraceNodeInputEvent m_nodeInputTrace; + bool m_nodeInputTraceHasBeenSet = false; + + FlowTraceNodeOutputEvent m_nodeOutputTrace; + bool m_nodeOutputTraceHasBeenSet = false; + }; + +} // namespace Model +} // namespace BedrockAgentRuntime +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceCondition.h b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceCondition.h new file mode 100644 index 00000000000..9c6d825c9b5 --- /dev/null +++ b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceCondition.h @@ -0,0 +1,65 @@ +/** + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: Apache-2.0. + */ + +#pragma once +#includeContains information about a condition that was satisfied. For more + * information, see Track + * each step in your prompt flow by viewing its trace in Amazon + * Bedrock.
The name of the condition.
+ */ + inline const Aws::String& GetConditionName() const{ return m_conditionName; } + inline bool ConditionNameHasBeenSet() const { return m_conditionNameHasBeenSet; } + inline void SetConditionName(const Aws::String& value) { m_conditionNameHasBeenSet = true; m_conditionName = value; } + inline void SetConditionName(Aws::String&& value) { m_conditionNameHasBeenSet = true; m_conditionName = std::move(value); } + inline void SetConditionName(const char* value) { m_conditionNameHasBeenSet = true; m_conditionName.assign(value); } + inline FlowTraceCondition& WithConditionName(const Aws::String& value) { SetConditionName(value); return *this;} + inline FlowTraceCondition& WithConditionName(Aws::String&& value) { SetConditionName(std::move(value)); return *this;} + inline FlowTraceCondition& WithConditionName(const char* value) { SetConditionName(value); return *this;} + ///@} + private: + + Aws::String m_conditionName; + bool m_conditionNameHasBeenSet = false; + }; + +} // namespace Model +} // namespace BedrockAgentRuntime +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceConditionNodeResultEvent.h b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceConditionNodeResultEvent.h new file mode 100644 index 00000000000..2f2b7b20226 --- /dev/null +++ b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceConditionNodeResultEvent.h @@ -0,0 +1,101 @@ +/** + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: Apache-2.0. + */ + +#pragma once +#includeContains information about an output from a condition node. For more + * information, see Track + * each step in your prompt flow by viewing its trace in Amazon + * Bedrock.
The name of the condition node.
+ */ + inline const Aws::String& GetNodeName() const{ return m_nodeName; } + inline bool NodeNameHasBeenSet() const { return m_nodeNameHasBeenSet; } + inline void SetNodeName(const Aws::String& value) { m_nodeNameHasBeenSet = true; m_nodeName = value; } + inline void SetNodeName(Aws::String&& value) { m_nodeNameHasBeenSet = true; m_nodeName = std::move(value); } + inline void SetNodeName(const char* value) { m_nodeNameHasBeenSet = true; m_nodeName.assign(value); } + inline FlowTraceConditionNodeResultEvent& WithNodeName(const Aws::String& value) { SetNodeName(value); return *this;} + inline FlowTraceConditionNodeResultEvent& WithNodeName(Aws::String&& value) { SetNodeName(std::move(value)); return *this;} + inline FlowTraceConditionNodeResultEvent& WithNodeName(const char* value) { SetNodeName(value); return *this;} + ///@} + + ///@{ + /** + *An array of objects containing information about the conditions that were + * satisfied.
+ */ + inline const Aws::VectorThe date and time that the trace was returned.
+ */ + inline const Aws::Utils::DateTime& GetTimestamp() const{ return m_timestamp; } + inline bool TimestampHasBeenSet() const { return m_timestampHasBeenSet; } + inline void SetTimestamp(const Aws::Utils::DateTime& value) { m_timestampHasBeenSet = true; m_timestamp = value; } + inline void SetTimestamp(Aws::Utils::DateTime&& value) { m_timestampHasBeenSet = true; m_timestamp = std::move(value); } + inline FlowTraceConditionNodeResultEvent& WithTimestamp(const Aws::Utils::DateTime& value) { SetTimestamp(value); return *this;} + inline FlowTraceConditionNodeResultEvent& WithTimestamp(Aws::Utils::DateTime&& value) { SetTimestamp(std::move(value)); return *this;} + ///@} + private: + + Aws::String m_nodeName; + bool m_nodeNameHasBeenSet = false; + + Aws::VectorContains information about a trace, which tracks an input or output for a + * node in the flow. For more information, see Track + * each step in your prompt flow by viewing its trace in Amazon + * Bedrock.
The trace object containing information about an input or output for a node + * in the flow.
+ */ + inline const FlowTrace& GetTrace() const{ return m_trace; } + inline bool TraceHasBeenSet() const { return m_traceHasBeenSet; } + inline void SetTrace(const FlowTrace& value) { m_traceHasBeenSet = true; m_trace = value; } + inline void SetTrace(FlowTrace&& value) { m_traceHasBeenSet = true; m_trace = std::move(value); } + inline FlowTraceEvent& WithTrace(const FlowTrace& value) { SetTrace(value); return *this;} + inline FlowTraceEvent& WithTrace(FlowTrace&& value) { SetTrace(std::move(value)); return *this;} + ///@} + private: + + FlowTrace m_trace; + bool m_traceHasBeenSet = false; + }; + +} // namespace Model +} // namespace BedrockAgentRuntime +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceNodeInputContent.h b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceNodeInputContent.h new file mode 100644 index 00000000000..e0c103837dd --- /dev/null +++ b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceNodeInputContent.h @@ -0,0 +1,62 @@ +/** + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: Apache-2.0. + */ + +#pragma once +#includeContains the content of the node input. For more information, see Track + * each step in your prompt flow by viewing its trace in Amazon + * Bedrock.
The content of the node input.
+ */ + inline Aws::Utils::DocumentView GetDocument() const{ return m_document; } + inline bool DocumentHasBeenSet() const { return m_documentHasBeenSet; } + inline void SetDocument(const Aws::Utils::Document& value) { m_documentHasBeenSet = true; m_document = value; } + inline void SetDocument(Aws::Utils::Document&& value) { m_documentHasBeenSet = true; m_document = std::move(value); } + inline FlowTraceNodeInputContent& WithDocument(const Aws::Utils::Document& value) { SetDocument(value); return *this;} + inline FlowTraceNodeInputContent& WithDocument(Aws::Utils::Document&& value) { SetDocument(std::move(value)); return *this;} + ///@} + private: + + Aws::Utils::Document m_document; + bool m_documentHasBeenSet = false; + }; + +} // namespace Model +} // namespace BedrockAgentRuntime +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceNodeInputEvent.h b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceNodeInputEvent.h new file mode 100644 index 00000000000..1cbe9d640d0 --- /dev/null +++ b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceNodeInputEvent.h @@ -0,0 +1,100 @@ +/** + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: Apache-2.0. + */ + +#pragma once +#includeContains information about the input into a node. For more information, see + * Track + * each step in your prompt flow by viewing its trace in Amazon + * Bedrock.
An array of objects containing information about each field in the input.
+ */ + inline const Aws::VectorThe name of the node that received the input.
+ */ + inline const Aws::String& GetNodeName() const{ return m_nodeName; } + inline bool NodeNameHasBeenSet() const { return m_nodeNameHasBeenSet; } + inline void SetNodeName(const Aws::String& value) { m_nodeNameHasBeenSet = true; m_nodeName = value; } + inline void SetNodeName(Aws::String&& value) { m_nodeNameHasBeenSet = true; m_nodeName = std::move(value); } + inline void SetNodeName(const char* value) { m_nodeNameHasBeenSet = true; m_nodeName.assign(value); } + inline FlowTraceNodeInputEvent& WithNodeName(const Aws::String& value) { SetNodeName(value); return *this;} + inline FlowTraceNodeInputEvent& WithNodeName(Aws::String&& value) { SetNodeName(std::move(value)); return *this;} + inline FlowTraceNodeInputEvent& WithNodeName(const char* value) { SetNodeName(value); return *this;} + ///@} + + ///@{ + /** + *The date and time that the trace was returned.
+ */ + inline const Aws::Utils::DateTime& GetTimestamp() const{ return m_timestamp; } + inline bool TimestampHasBeenSet() const { return m_timestampHasBeenSet; } + inline void SetTimestamp(const Aws::Utils::DateTime& value) { m_timestampHasBeenSet = true; m_timestamp = value; } + inline void SetTimestamp(Aws::Utils::DateTime&& value) { m_timestampHasBeenSet = true; m_timestamp = std::move(value); } + inline FlowTraceNodeInputEvent& WithTimestamp(const Aws::Utils::DateTime& value) { SetTimestamp(value); return *this;} + inline FlowTraceNodeInputEvent& WithTimestamp(Aws::Utils::DateTime&& value) { SetTimestamp(std::move(value)); return *this;} + ///@} + private: + + Aws::VectorContains information about a field in the input into a node. For more + * information, see Track + * each step in your prompt flow by viewing its trace in Amazon + * Bedrock.
The content of the node input.
+ */ + inline const FlowTraceNodeInputContent& GetContent() const{ return m_content; } + inline bool ContentHasBeenSet() const { return m_contentHasBeenSet; } + inline void SetContent(const FlowTraceNodeInputContent& value) { m_contentHasBeenSet = true; m_content = value; } + inline void SetContent(FlowTraceNodeInputContent&& value) { m_contentHasBeenSet = true; m_content = std::move(value); } + inline FlowTraceNodeInputField& WithContent(const FlowTraceNodeInputContent& value) { SetContent(value); return *this;} + inline FlowTraceNodeInputField& WithContent(FlowTraceNodeInputContent&& value) { SetContent(std::move(value)); return *this;} + ///@} + + ///@{ + /** + *The name of the node input.
+ */ + inline const Aws::String& GetNodeInputName() const{ return m_nodeInputName; } + inline bool NodeInputNameHasBeenSet() const { return m_nodeInputNameHasBeenSet; } + inline void SetNodeInputName(const Aws::String& value) { m_nodeInputNameHasBeenSet = true; m_nodeInputName = value; } + inline void SetNodeInputName(Aws::String&& value) { m_nodeInputNameHasBeenSet = true; m_nodeInputName = std::move(value); } + inline void SetNodeInputName(const char* value) { m_nodeInputNameHasBeenSet = true; m_nodeInputName.assign(value); } + inline FlowTraceNodeInputField& WithNodeInputName(const Aws::String& value) { SetNodeInputName(value); return *this;} + inline FlowTraceNodeInputField& WithNodeInputName(Aws::String&& value) { SetNodeInputName(std::move(value)); return *this;} + inline FlowTraceNodeInputField& WithNodeInputName(const char* value) { SetNodeInputName(value); return *this;} + ///@} + private: + + FlowTraceNodeInputContent m_content; + bool m_contentHasBeenSet = false; + + Aws::String m_nodeInputName; + bool m_nodeInputNameHasBeenSet = false; + }; + +} // namespace Model +} // namespace BedrockAgentRuntime +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceNodeOutputContent.h b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceNodeOutputContent.h new file mode 100644 index 00000000000..aae69254452 --- /dev/null +++ b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceNodeOutputContent.h @@ -0,0 +1,62 @@ +/** + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: Apache-2.0. + */ + +#pragma once +#includeContains the content of the node output. For more information, see Track + * each step in your prompt flow by viewing its trace in Amazon + * Bedrock.
The content of the node output.
+ */ + inline Aws::Utils::DocumentView GetDocument() const{ return m_document; } + inline bool DocumentHasBeenSet() const { return m_documentHasBeenSet; } + inline void SetDocument(const Aws::Utils::Document& value) { m_documentHasBeenSet = true; m_document = value; } + inline void SetDocument(Aws::Utils::Document&& value) { m_documentHasBeenSet = true; m_document = std::move(value); } + inline FlowTraceNodeOutputContent& WithDocument(const Aws::Utils::Document& value) { SetDocument(value); return *this;} + inline FlowTraceNodeOutputContent& WithDocument(Aws::Utils::Document&& value) { SetDocument(std::move(value)); return *this;} + ///@} + private: + + Aws::Utils::Document m_document; + bool m_documentHasBeenSet = false; + }; + +} // namespace Model +} // namespace BedrockAgentRuntime +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceNodeOutputEvent.h b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceNodeOutputEvent.h new file mode 100644 index 00000000000..d11b877c36a --- /dev/null +++ b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/FlowTraceNodeOutputEvent.h @@ -0,0 +1,101 @@ +/** + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: Apache-2.0. + */ + +#pragma once +#includeContains information about the output from a node. For more information, see + * Track + * each step in your prompt flow by viewing its trace in Amazon + * Bedrock.
An array of objects containing information about each field in the + * output.
+ */ + inline const Aws::VectorThe name of the node that yielded the output.
+ */ + inline const Aws::String& GetNodeName() const{ return m_nodeName; } + inline bool NodeNameHasBeenSet() const { return m_nodeNameHasBeenSet; } + inline void SetNodeName(const Aws::String& value) { m_nodeNameHasBeenSet = true; m_nodeName = value; } + inline void SetNodeName(Aws::String&& value) { m_nodeNameHasBeenSet = true; m_nodeName = std::move(value); } + inline void SetNodeName(const char* value) { m_nodeNameHasBeenSet = true; m_nodeName.assign(value); } + inline FlowTraceNodeOutputEvent& WithNodeName(const Aws::String& value) { SetNodeName(value); return *this;} + inline FlowTraceNodeOutputEvent& WithNodeName(Aws::String&& value) { SetNodeName(std::move(value)); return *this;} + inline FlowTraceNodeOutputEvent& WithNodeName(const char* value) { SetNodeName(value); return *this;} + ///@} + + ///@{ + /** + *The date and time that the trace was returned.
+ */ + inline const Aws::Utils::DateTime& GetTimestamp() const{ return m_timestamp; } + inline bool TimestampHasBeenSet() const { return m_timestampHasBeenSet; } + inline void SetTimestamp(const Aws::Utils::DateTime& value) { m_timestampHasBeenSet = true; m_timestamp = value; } + inline void SetTimestamp(Aws::Utils::DateTime&& value) { m_timestampHasBeenSet = true; m_timestamp = std::move(value); } + inline FlowTraceNodeOutputEvent& WithTimestamp(const Aws::Utils::DateTime& value) { SetTimestamp(value); return *this;} + inline FlowTraceNodeOutputEvent& WithTimestamp(Aws::Utils::DateTime&& value) { SetTimestamp(std::move(value)); return *this;} + ///@} + private: + + Aws::VectorContains information about a field in the output from a node. For more + * information, see Track + * each step in your prompt flow by viewing its trace in Amazon + * Bedrock.
The content of the node output.
+ */ + inline const FlowTraceNodeOutputContent& GetContent() const{ return m_content; } + inline bool ContentHasBeenSet() const { return m_contentHasBeenSet; } + inline void SetContent(const FlowTraceNodeOutputContent& value) { m_contentHasBeenSet = true; m_content = value; } + inline void SetContent(FlowTraceNodeOutputContent&& value) { m_contentHasBeenSet = true; m_content = std::move(value); } + inline FlowTraceNodeOutputField& WithContent(const FlowTraceNodeOutputContent& value) { SetContent(value); return *this;} + inline FlowTraceNodeOutputField& WithContent(FlowTraceNodeOutputContent&& value) { SetContent(std::move(value)); return *this;} + ///@} + + ///@{ + /** + *The name of the node output.
+ */ + inline const Aws::String& GetNodeOutputName() const{ return m_nodeOutputName; } + inline bool NodeOutputNameHasBeenSet() const { return m_nodeOutputNameHasBeenSet; } + inline void SetNodeOutputName(const Aws::String& value) { m_nodeOutputNameHasBeenSet = true; m_nodeOutputName = value; } + inline void SetNodeOutputName(Aws::String&& value) { m_nodeOutputNameHasBeenSet = true; m_nodeOutputName = std::move(value); } + inline void SetNodeOutputName(const char* value) { m_nodeOutputNameHasBeenSet = true; m_nodeOutputName.assign(value); } + inline FlowTraceNodeOutputField& WithNodeOutputName(const Aws::String& value) { SetNodeOutputName(value); return *this;} + inline FlowTraceNodeOutputField& WithNodeOutputName(Aws::String&& value) { SetNodeOutputName(std::move(value)); return *this;} + inline FlowTraceNodeOutputField& WithNodeOutputName(const char* value) { SetNodeOutputName(value); return *this;} + ///@} + private: + + FlowTraceNodeOutputContent m_content; + bool m_contentHasBeenSet = false; + + Aws::String m_nodeOutputName; + bool m_nodeOutputNameHasBeenSet = false; + }; + +} // namespace Model +} // namespace BedrockAgentRuntime +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/GenerationConfiguration.h b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/GenerationConfiguration.h index 01e7db7db0f..9a54c3a521e 100644 --- a/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/GenerationConfiguration.h +++ b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/GenerationConfiguration.h @@ -94,7 +94,10 @@ namespace Model ///@{ /** *Contains the template for the prompt that's sent to the model for response - * generation.
+ * generation. Generation prompts must include the <code>$search_results$</code>
+ * variable. For more information, see Use
+ * placeholder variables in the user guide.</p>
*/
inline const PromptTemplate& GetPromptTemplate() const{ return m_promptTemplate; }
inline bool PromptTemplateHasBeenSet() const { return m_promptTemplateHasBeenSet; }
diff --git a/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/InvokeFlowHandler.h b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/InvokeFlowHandler.h
index 13219a28ba0..e56e12206ee 100644
--- a/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/InvokeFlowHandler.h
+++ b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/InvokeFlowHandler.h
@@ -13,6 +13,7 @@
 #include

+    ///@{
+    /**
+     * <p>Specifies whether to return the trace for the flow or not. Traces track
+     * inputs and outputs for nodes in the flow. For more information, see Track
+     * each step in your prompt flow by viewing its trace in Amazon Bedrock.</p>
+     */
+    inline bool GetEnableTrace() const{ return m_enableTrace; }
+    inline bool EnableTraceHasBeenSet() const { return m_enableTraceHasBeenSet; }
+    inline void SetEnableTrace(bool value) { m_enableTraceHasBeenSet = true; m_enableTrace = value; }
+    inline InvokeFlowRequest& WithEnableTrace(bool value) { SetEnableTrace(value); return *this;}
+    ///@}
+
+    ///@{
+    /**
@@ -100,6 +113,9 @@ namespace Model ///@} private: + bool m_enableTrace; + bool m_enableTraceHasBeenSet = false; + Aws::String m_flowAliasIdentifier; bool m_flowAliasIdentifierHasBeenSet = false; diff --git a/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/OrchestrationConfiguration.h b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/OrchestrationConfiguration.h index 4153e06dc54..5a5af1bfe02 100644 --- a/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/OrchestrationConfiguration.h +++ b/generated/src/aws-cpp-sdk-bedrock-agent-runtime/include/aws/bedrock-agent-runtime/model/OrchestrationConfiguration.h @@ -78,8 +78,12 @@ namespace Model ///@{ /** - *Contains the template for the prompt that's sent to the model for response - * generation.
+ *Contains the template for the prompt that's sent to the model. Orchestration
+ * prompts must include the $conversation_history$
and
+ * $output_format_instructions$
variables. For more information, see
+ * Use
+ * placeholder variables in the user guide.
Turns language identification on or off for multiple languages.
+ *Turns language identification on or off for multiple languages.
+ *Calls to this API must include a LanguageCode
,
+ * IdentifyLanguage
, or IdentifyMultipleLanguages
+ * parameter. If you include more than one of those parameters, your transcription
+ * job fails.
+    ///@{
+    /**
+     * An object that contains server side encryption parameters to be used by media
+     * capture pipeline. The parameters can also be used by media concatenation
+     * pipeline taking media capture pipeline as a media source.
+     */
+    inline const SseAwsKeyManagementParams& GetSseAwsKeyManagementParams() const{ return m_sseAwsKeyManagementParams; }
+    inline bool SseAwsKeyManagementParamsHasBeenSet() const { return m_sseAwsKeyManagementParamsHasBeenSet; }
+    inline void SetSseAwsKeyManagementParams(const SseAwsKeyManagementParams& value) { m_sseAwsKeyManagementParamsHasBeenSet = true; m_sseAwsKeyManagementParams = value; }
+    inline void SetSseAwsKeyManagementParams(SseAwsKeyManagementParams&& value) { m_sseAwsKeyManagementParamsHasBeenSet = true; m_sseAwsKeyManagementParams = std::move(value); }
+    inline CreateMediaCapturePipelineRequest& WithSseAwsKeyManagementParams(const SseAwsKeyManagementParams& value) { SetSseAwsKeyManagementParams(value); return *this;}
+    inline CreateMediaCapturePipelineRequest& WithSseAwsKeyManagementParams(SseAwsKeyManagementParams&& value) { SetSseAwsKeyManagementParams(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * The Amazon Resource Name (ARN) of the sink role to be used with
+ * AwsKmsKeyId
in SseAwsKeyManagementParams
. Can only
+ * interact with S3Bucket
sink type. The role must belong to the
+ * caller’s account and be able to act on behalf of the caller during the API call.
+ * All minimum policy permissions requirements for the caller to perform
+ * sink-related actions are the same for SinkIamRoleArn
.
Additionally, the role must have permission to
+ * kms:GenerateDataKey
using KMS key supplied as
+ * AwsKmsKeyId
in SseAwsKeyManagementParams
. If media
+ * concatenation will be required later, the role must also have permission to
+ * kms:Decrypt
for the same KMS key.
The tag key-value pairs.
@@ -153,6 +192,12 @@ namespace Model
     ChimeSdkMeetingConfiguration m_chimeSdkMeetingConfiguration;
     bool m_chimeSdkMeetingConfigurationHasBeenSet = false;

+    SseAwsKeyManagementParams m_sseAwsKeyManagementParams;
+    bool m_sseAwsKeyManagementParamsHasBeenSet = false;
+
+    Aws::String m_sinkIamRoleArn;
+    bool m_sinkIamRoleArnHasBeenSet = false;
+
     Aws::Vector

+    ///@{
+    /**
+     * An object that contains server side encryption parameters to be used by media
+     * capture pipeline. The parameters can also be used by media concatenation
+     * pipeline taking media capture pipeline as a media source.
+     */
+    inline const SseAwsKeyManagementParams& GetSseAwsKeyManagementParams() const{ return m_sseAwsKeyManagementParams; }
+    inline bool SseAwsKeyManagementParamsHasBeenSet() const { return m_sseAwsKeyManagementParamsHasBeenSet; }
+    inline void SetSseAwsKeyManagementParams(const SseAwsKeyManagementParams& value) { m_sseAwsKeyManagementParamsHasBeenSet = true; m_sseAwsKeyManagementParams = value; }
+    inline void SetSseAwsKeyManagementParams(SseAwsKeyManagementParams&& value) { m_sseAwsKeyManagementParamsHasBeenSet = true; m_sseAwsKeyManagementParams = std::move(value); }
+    inline MediaCapturePipeline& WithSseAwsKeyManagementParams(const SseAwsKeyManagementParams& value) { SetSseAwsKeyManagementParams(value); return *this;}
+    inline MediaCapturePipeline& WithSseAwsKeyManagementParams(SseAwsKeyManagementParams&& value) { SetSseAwsKeyManagementParams(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * The Amazon Resource Name (ARN) of the sink role to be used with
+ * AwsKmsKeyId
in SseAwsKeyManagementParams
.
Contains server side encryption parameters to be used by media capture + * pipeline. The parameters can also be used by media concatenation pipeline taking + * media capture pipeline as a media source.
The KMS key you want to use to encrypt your media pipeline output. Decryption + * is required for concatenation pipeline. If using a key located in the current + * Amazon Web Services account, you can specify your KMS key in one of four + * ways:
Use the KMS key ID itself. For example,
+ * 1234abcd-12ab-34cd-56ef-1234567890ab
.
Use an
+ * alias for the KMS key ID. For example, alias/ExampleAlias
.
Use the Amazon Resource Name (ARN) for the KMS key ID. For
+ * example,
+ * arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab
.
Use the ARN for the KMS key alias. For example,
+ * arn:aws:kms:region:account-ID:alias/ExampleAlias
.
If using a key located in a different Amazon Web Services account than the + * current Amazon Web Services account, you can specify your KMS key in one of two + * ways:
Use the ARN for the KMS key ID. For example,
+ * arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab
.
Use the ARN for the KMS key alias. For example,
+ * arn:aws:kms:region:account-ID:alias/ExampleAlias
.
If you don't specify an encryption key, your output is encrypted with the + * default Amazon S3 key (SSE-S3).
Note that the role specified in the
+ * SinkIamRoleArn
request parameter must have permission to use the
+ * specified KMS key.
+    ///@{
+    /**
+     * Base64-encoded string of a UTF-8 encoded JSON, which contains the encryption
+     * context as non-secret key-value pair known as encryption context pairs, that
+     * provides an added layer of security for your data. For more information, see
+     * KMS encryption context and Asymmetric keys in KMS in the Key Management
+     * Service Developer Guide.
+     */
+    inline const Aws::String& GetAwsKmsEncryptionContext() const{ return m_awsKmsEncryptionContext; }
+    inline bool AwsKmsEncryptionContextHasBeenSet() const { return m_awsKmsEncryptionContextHasBeenSet; }
+    inline void SetAwsKmsEncryptionContext(const Aws::String& value) { m_awsKmsEncryptionContextHasBeenSet = true; m_awsKmsEncryptionContext = value; }
+    inline void SetAwsKmsEncryptionContext(Aws::String&& value) { m_awsKmsEncryptionContextHasBeenSet = true; m_awsKmsEncryptionContext = std::move(value); }
+    inline void SetAwsKmsEncryptionContext(const char* value) { m_awsKmsEncryptionContextHasBeenSet = true; m_awsKmsEncryptionContext.assign(value); }
+    inline SseAwsKeyManagementParams& WithAwsKmsEncryptionContext(const Aws::String& value) { SetAwsKmsEncryptionContext(value); return *this;}
+    inline SseAwsKeyManagementParams& WithAwsKmsEncryptionContext(Aws::String&& value) { SetAwsKmsEncryptionContext(std::move(value)); return *this;}
+    inline SseAwsKeyManagementParams& WithAwsKmsEncryptionContext(const char* value) { SetAwsKmsEncryptionContext(value); return *this;}
+    ///@}
+  private:
+
+    Aws::String m_awsKmsKeyId;
+    bool m_awsKmsKeyIdHasBeenSet = false;
+
+    Aws::String m_awsKmsEncryptionContext;
+    bool m_awsKmsEncryptionContextHasBeenSet = false;
+  };
+
+} // namespace Model
+} // namespace ChimeSDKMediaPipelines
+} // namespace Aws
diff --git a/generated/src/aws-cpp-sdk-chime-sdk-media-pipelines/source/model/CreateMediaCapturePipelineRequest.cpp b/generated/src/aws-cpp-sdk-chime-sdk-media-pipelines/source/model/CreateMediaCapturePipelineRequest.cpp
index 4a33f0f1a82..21cfc57d844 100644
--- a/generated/src/aws-cpp-sdk-chime-sdk-media-pipelines/source/model/CreateMediaCapturePipelineRequest.cpp
+++ b/generated/src/aws-cpp-sdk-chime-sdk-media-pipelines/source/model/CreateMediaCapturePipelineRequest.cpp
@@ -22,6 +22,8 @@ CreateMediaCapturePipelineRequest::CreateMediaCapturePipelineRequest() :
     m_clientRequestToken(Aws::Utils::UUID::PseudoRandomUUID()),
     m_clientRequestTokenHasBeenSet(true),
     m_chimeSdkMeetingConfigurationHasBeenSet(false),
+    m_sseAwsKeyManagementParamsHasBeenSet(false),
+    m_sinkIamRoleArnHasBeenSet(false),
     m_tagsHasBeenSet(false)
 {
 }
@@ -64,6 +66,18 @@ Aws::String CreateMediaCapturePipelineRequest::SerializePayload() const
   }

+  if(m_sseAwsKeyManagementParamsHasBeenSet)
+  {
+   payload.WithObject("SseAwsKeyManagementParams", m_sseAwsKeyManagementParams.Jsonize());
+
+  }
+
+  if(m_sinkIamRoleArnHasBeenSet)
+  {
+   payload.WithString("SinkIamRoleArn", m_sinkIamRoleArn);
+
+  }
+
   if(m_tagsHasBeenSet)
   {
    Aws::Utils::Array

Four types of control parameters are supported.
+ * AllowedRegions: List of Amazon Web Services Regions exempted from the + * control. Each string is expected to be an Amazon Web Services Region code. This + * parameter is mandatory for the OU Region deny control, + * CT.MULTISERVICE.PV.1.
Example:
+ * ["us-east-1","us-west-2"]
+ * ExemptedActions: List of Amazon Web Services IAM actions exempted from + * the control. Each string is expected to be an IAM action.
Example:
+ * ["logs:DescribeLogGroups","logs:StartQuery","logs:GetQueryResults"]
+ *
ExemptedPrincipalArns: List of Amazon Web Services
+ * IAM principal ARNs exempted from the control. Each string is expected to be an
+ * IAM principal that follows the pattern
+ * ^arn:(aws|aws-us-gov):(iam|sts)::.+:.+$
Example:
+ * ["arn:aws:iam::*:role/ReadOnly","arn:aws:sts::*:assumed-role/ReadOnly/ *"]
+ *
ExemptedResourceArns: List of resource ARNs exempted + * from the control. Each string is expected to be a resource ARN.
Example:
+ * ["arn:aws:s3:::my-bucket-name"]
The parameter name. This name is the parameter key
when you call
+ *
+ * EnableControl
or
+ * UpdateEnabledControl
.
A term that identifies the control's functional behavior. One of
- * Preventive
, Deteictive
, Proactive
Preventive
, Detective
, Proactive
*/
inline const ControlBehavior& GetBehavior() const{ return m_behavior; }
inline void SetBehavior(const ControlBehavior& value) { m_behavior = value; }
@@ -94,6 +97,34 @@ namespace Model
inline GetControlResult& WithRegionConfiguration(RegionConfiguration&& value) { SetRegionConfiguration(std::move(value)); return *this;}
///@}
+ ///@{
+ /**
+ * Returns information about the control, as an
+ * ImplementationDetails
object that shows the underlying
+ * implementation type for a control.
Returns an array of ControlParameter
objects that specify the
+ * parameters a control supports. An empty list is returned for controls that don’t
+ * support parameters.
An object that describes the implementation type for a control.
Our
+ * ImplementationDetails
Type
format has three required
+ * segments:
+ * SERVICE-PROVIDER::SERVICE-NAME::RESOURCE-NAME
For example, AWS::Config::ConfigRule
or
+ * AWS::SecurityHub::SecurityControl
resources have the format with
+ * three required segments.
Our ImplementationDetails
+ * Type
format has an optional fourth segment, which is present for
+ * applicable implementation types. The format is as follows:
+ * SERVICE-PROVIDER::SERVICE-NAME::RESOURCE-NAME::RESOURCE-TYPE-DESCRIPTION
+ *
For example,
+ * AWS::Organizations::Policy::SERVICE_CONTROL_POLICY
or
+ * AWS::CloudFormation::Type::HOOK
have the format with four
+ * segments.
Although the format is similar, the values for the
+ * Type
field do not match any Amazon Web Services CloudFormation
+ * values, and we do not use CloudFormation to implement these
+ * controls.
A string that describes a control's implementation type.
+     */
+    inline const Aws::String& GetType() const{ return m_type; }
+    inline bool TypeHasBeenSet() const { return m_typeHasBeenSet; }
+    inline void SetType(const Aws::String& value) { m_typeHasBeenSet = true; m_type = value; }
+    inline void SetType(Aws::String&& value) { m_typeHasBeenSet = true; m_type = std::move(value); }
+    inline void SetType(const char* value) { m_typeHasBeenSet = true; m_type.assign(value); }
+    inline ImplementationDetails& WithType(const Aws::String& value) { SetType(value); return *this;}
+    inline ImplementationDetails& WithType(Aws::String&& value) { SetType(std::move(value)); return *this;}
+    inline ImplementationDetails& WithType(const char* value) { SetType(value); return *this;}
+    ///@}
+  private:
+
+    Aws::String m_type;
+    bool m_typeHasBeenSet = false;
+  };
+
+} // namespace Model
+} // namespace ControlCatalog
+} // namespace Aws
diff --git a/generated/src/aws-cpp-sdk-controlcatalog/include/aws/controlcatalog/model/RegionConfiguration.h b/generated/src/aws-cpp-sdk-controlcatalog/include/aws/controlcatalog/model/RegionConfiguration.h
index e11e4d744b3..540625ea041 100644
--- a/generated/src/aws-cpp-sdk-controlcatalog/include/aws/controlcatalog/model/RegionConfiguration.h
+++ b/generated/src/aws-cpp-sdk-controlcatalog/include/aws/controlcatalog/model/RegionConfiguration.h
@@ -28,7 +28,9 @@ namespace Model
   /**
    * Returns information about the control, including the scope of the control, if
    * enabled, and the Regions in which the control currently is available for
-   * deployment.
If you are applying controls through an Amazon Web Services + * deployment. For more information about scope, see Global + * services.
If you are applying controls through an Amazon Web Services
* Control Tower landing zone environment, remember that the values returned in the
* RegionConfiguration
API operation are not related to the governed
* Regions in your landing zone. For example, if you are governing Regions
diff --git a/generated/src/aws-cpp-sdk-controlcatalog/source/model/ControlParameter.cpp b/generated/src/aws-cpp-sdk-controlcatalog/source/model/ControlParameter.cpp
new file mode 100644
index 00000000000..df39ad17c16
--- /dev/null
+++ b/generated/src/aws-cpp-sdk-controlcatalog/source/model/ControlParameter.cpp
@@ -0,0 +1,59 @@
+/**
+ * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ * SPDX-License-Identifier: Apache-2.0.
+ */
+
+#include
Ec2LaunchTemplateNotFound: We - * couldn't find the Amazon EC2 launch template for your managed node group. You - * may be able to recreate a launch template with the same settings to recover.
- *Ec2LaunchTemplateVersionMismatch: The Amazon EC2 launch - * template version for your managed node group does not match the version that - * Amazon EKS created. You may be able to revert to the version that Amazon EKS - * created to recover.
Ec2SecurityGroupDeletionFailure: - * We could not delete the remote access security group for your managed node - * group. Remove any dependencies from the security group.
+ * processing requests.
Ec2InstanceTypeDoesNotExist: One + * or more of the supplied Amazon EC2 instance types do not exist. Amazon EKS + * checked for the instance types that you provided in this Amazon Web Services + * Region, and one or more aren't available.
+ * Ec2LaunchTemplateNotFound: We couldn't find the Amazon EC2 launch + * template for your managed node group. You may be able to recreate a launch + * template with the same settings to recover.
+ * Ec2LaunchTemplateVersionMismatch: The Amazon EC2 launch template version + * for your managed node group does not match the version that Amazon EKS created. + * You may be able to revert to the version that Amazon EKS created to recover.
+ *Ec2SecurityGroupDeletionFailure: We could not delete the + * remote access security group for your managed node group. Remove any + * dependencies from the security group.
* Ec2SecurityGroupNotFound: We couldn't find the cluster security group for * the cluster. You must recreate your cluster.
 * Ec2SubnetInvalidConfiguration: One or more Amazon EC2 subnets specified
diff --git a/generated/src/aws-cpp-sdk-eks/include/aws/eks/model/NodegroupIssueCode.h b/generated/src/aws-cpp-sdk-eks/include/aws/eks/model/NodegroupIssueCode.h
index c2161ecf545..cae8064953f 100644
--- a/generated/src/aws-cpp-sdk-eks/include/aws/eks/model/NodegroupIssueCode.h
+++ b/generated/src/aws-cpp-sdk-eks/include/aws/eks/model/NodegroupIssueCode.h
@@ -50,7 +50,8 @@ namespace Model
     Unknown,
     AutoScalingGroupInstanceRefreshActive,
     KubernetesLabelInvalid,
-    Ec2LaunchTemplateVersionMaxLimitExceeded
+    Ec2LaunchTemplateVersionMaxLimitExceeded,
+    Ec2InstanceTypeDoesNotExist
   };

   namespace NodegroupIssueCodeMapper
diff --git a/generated/src/aws-cpp-sdk-eks/source/model/NodegroupIssueCode.cpp b/generated/src/aws-cpp-sdk-eks/source/model/NodegroupIssueCode.cpp
index 5d449446295..845cc7f36dc 100644
--- a/generated/src/aws-cpp-sdk-eks/source/model/NodegroupIssueCode.cpp
+++ b/generated/src/aws-cpp-sdk-eks/source/model/NodegroupIssueCode.cpp
@@ -55,6 +55,7 @@ namespace Aws
         static const int AutoScalingGroupInstanceRefreshActive_HASH = HashingUtils::HashString("AutoScalingGroupInstanceRefreshActive");
         static const int KubernetesLabelInvalid_HASH = HashingUtils::HashString("KubernetesLabelInvalid");
         static const int Ec2LaunchTemplateVersionMaxLimitExceeded_HASH = HashingUtils::HashString("Ec2LaunchTemplateVersionMaxLimitExceeded");
+        static const int Ec2InstanceTypeDoesNotExist_HASH = HashingUtils::HashString("Ec2InstanceTypeDoesNotExist");

         NodegroupIssueCode GetNodegroupIssueCodeForName(const Aws::String& name)
@@ -200,6 +201,10 @@ namespace Aws
           {
             return NodegroupIssueCode::Ec2LaunchTemplateVersionMaxLimitExceeded;
           }
+          else if (hashCode == Ec2InstanceTypeDoesNotExist_HASH)
+          {
+            return NodegroupIssueCode::Ec2InstanceTypeDoesNotExist;
+          }
           EnumParseOverflowContainer* overflowContainer = Aws::GetEnumOverflowContainer();
           if(overflowContainer)
           {
@@ -286,6 +291,8 @@ namespace Aws
             return "KubernetesLabelInvalid";
           case NodegroupIssueCode::Ec2LaunchTemplateVersionMaxLimitExceeded:
             return "Ec2LaunchTemplateVersionMaxLimitExceeded";
+          case NodegroupIssueCode::Ec2InstanceTypeDoesNotExist:
+            return "Ec2InstanceTypeDoesNotExist";
           default:
             EnumParseOverflowContainer* overflowContainer = Aws::GetEnumOverflowContainer();
             if(overflowContainer)
diff --git a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/FirehoseClient.h b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/FirehoseClient.h
index 24e5342e2f9..6ae0b218085 100644
--- a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/FirehoseClient.h
+++ b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/FirehoseClient.h
@@ -82,30 +82,29 @@ namespace Firehose
         virtual ~FirehoseClient();

         /**
Creates a Firehose delivery stream.
By default, you can create up to - * 50 delivery streams per Amazon Web Services Region.
This is an
- * asynchronous operation that immediately returns. The initial status of the
- * delivery stream is CREATING
. After the delivery stream is created,
- * its status is ACTIVE
and it now accepts data. If the delivery
- * stream creation fails, the status transitions to CREATING_FAILED
.
- * Attempts to send data to a delivery stream that is not in the
- * ACTIVE
state cause an exception. To check the state of a delivery
- * stream, use DescribeDeliveryStream.
If the status of a delivery
- * stream is CREATING_FAILED
, this status doesn't change, and you
- * can't invoke CreateDeliveryStream
again on it. However, you can
- * invoke the DeleteDeliveryStream operation to delete it.
A Firehose
- * delivery stream can be configured to receive records directly from providers
- * using PutRecord or PutRecordBatch, or it can be configured to use
- * an existing Kinesis stream as its source. To specify a Kinesis data stream as
- * input, set the DeliveryStreamType
parameter to
- * KinesisStreamAsSource
, and provide the Kinesis stream Amazon
- * Resource Name (ARN) and role ARN in the
+ *
Creates a Firehose stream.
By default, you can create up to 50 + * Firehose streams per Amazon Web Services Region.
This is an asynchronous
+ * operation that immediately returns. The initial status of the Firehose stream is
+ * CREATING
. After the Firehose stream is created, its status is
+ * ACTIVE
and it now accepts data. If the Firehose stream creation
+ * fails, the status transitions to CREATING_FAILED
. Attempts to send
+ * data to a delivery stream that is not in the ACTIVE
state cause an
+ * exception. To check the state of a Firehose stream, use
+ * DescribeDeliveryStream.
If the status of a Firehose stream is
+ * CREATING_FAILED
, this status doesn't change, and you can't invoke
+ * CreateDeliveryStream
again on it. However, you can invoke the
+ * DeleteDeliveryStream operation to delete it.
A Firehose stream can
+ * be configured to receive records directly from providers using PutRecord
+ * or PutRecordBatch, or it can be configured to use an existing Kinesis
+ * stream as its source. To specify a Kinesis data stream as input, set the
+ * DeliveryStreamType
parameter to KinesisStreamAsSource
,
+ * and provide the Kinesis stream Amazon Resource Name (ARN) and role ARN in the
* KinesisStreamSourceConfiguration
parameter.
To create a - * delivery stream with server-side encryption (SSE) enabled, include + * Firehose stream with server-side encryption (SSE) enabled, include * DeliveryStreamEncryptionConfigurationInput in your request. This is * optional. You can also invoke StartDeliveryStreamEncryption to turn on - * SSE for an existing delivery stream that doesn't have SSE enabled.
A - * delivery stream is configured with a single destination, such as Amazon Simple + * SSE for an existing Firehose stream that doesn't have SSE enabled.
A + * Firehose stream is configured with a single destination, such as Amazon Simple * Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, Amazon * OpenSearch Serverless, Splunk, and any custom HTTP endpoint or HTTP endpoints * owned by or supported by third-party service providers, including Datadog, @@ -168,19 +167,19 @@ namespace Firehose } /** - *
Deletes a delivery stream and its data.
You can delete a delivery + *
Deletes a Firehose stream and its data.
You can delete a Firehose
* stream only if it is in one of the following states: ACTIVE
,
* DELETING
, CREATING_FAILED
, or
- * DELETING_FAILED
. You can't delete a delivery stream that is in the
- * CREATING
state. To check the state of a delivery stream, use
+ * DELETING_FAILED
. You can't delete a Firehose stream that is in the
+ * CREATING
state. To check the state of a Firehose stream, use
* DescribeDeliveryStream.
DeleteDeliveryStream is an asynchronous
- * API. When an API request to DeleteDeliveryStream succeeds, the delivery stream
+ * API. When an API request to DeleteDeliveryStream succeeds, the Firehose stream
* is marked for deletion, and it goes into the DELETING
state.While
- * the delivery stream is in the DELETING
state, the service might
+ * the Firehose stream is in the DELETING
state, the service might
* continue to accept records, but it doesn't make any guarantees with respect to
* delivering the data. Therefore, as a best practice, first stop any applications
- * that are sending records before you delete a delivery stream.
Removal of
- * a delivery stream that is in the DELETING
state is a low priority
+ * that are sending records before you delete a Firehose stream.
Removal of
+ * a Firehose stream that is in the DELETING
state is a low priority
* operation for the service. A stream may remain in the DELETING
* state for several minutes. Therefore, as a best practice, applications should
* not wait for streams in the DELETING
state to be removed.
@@ -209,10 +208,10 @@ namespace Firehose
}
/**
- *
Describes the specified delivery stream and its status. For example, after
- * your delivery stream is created, call DescribeDeliveryStream
to see
- * whether the delivery stream is ACTIVE
and therefore ready for data
- * to be sent to it.
If the status of a delivery stream is + *
Describes the specified Firehose stream and its status. For example, after
+ * your Firehose stream is created, call DescribeDeliveryStream
to see
+ * whether the Firehose stream is ACTIVE
and therefore ready for data
+ * to be sent to it.
If the status of a Firehose stream is
* CREATING_FAILED
, this status doesn't change, and you can't invoke
* CreateDeliveryStream again on it. However, you can invoke the
* DeleteDeliveryStream operation to delete it. If the status is
@@ -244,15 +243,15 @@ namespace Firehose
}
/**
- *
Lists your delivery streams in alphabetical order of their names.
The
- * number of delivery streams might be too large to return using a single call to
- * ListDeliveryStreams
. You can limit the number of delivery streams
+ *
Lists your Firehose streams in alphabetical order of their names.
The
+ * number of Firehose streams might be too large to return using a single call to
+ * ListDeliveryStreams
. You can limit the number of Firehose streams
* returned, using the Limit
parameter. To determine whether there are
* more delivery streams to list, check the value of
- * HasMoreDeliveryStreams
in the output. If there are more delivery
+ * HasMoreDeliveryStreams
in the output. If there are more Firehose
* streams to list, you can request them by calling this operation again and
* setting the ExclusiveStartDeliveryStreamName
parameter to the name
- * of the last delivery stream returned in the last call.
Lists the tags for the specified delivery stream. This operation has a limit + *
Lists the tags for the specified Firehose stream. This operation has a limit * of five transactions per second per account.
Writes a single data record into an Amazon Firehose delivery stream. To write - * multiple data records into a delivery stream, use PutRecordBatch. - * Applications using these operations are referred to as producers.
By - * default, each delivery stream can take in up to 2,000 transactions per second, - * 5,000 records per second, or 5 MB per second. If you use PutRecord and + *
Writes a single data record into an Firehose stream. To write multiple data + * records into a Firehose stream, use PutRecordBatch. Applications using + * these operations are referred to as producers.
By default, each Firehose + * stream can take in up to 2,000 transactions per second, 5,000 records per + * second, or 5 MB per second. If you use PutRecord and * PutRecordBatch, the limits are an aggregate across these two operations - * for each delivery stream. For more information about limits and how to request + * for each Firehose stream. For more information about limits and how to request * an increase, see Amazon * Firehose Limits.
Firehose accumulates and publishes a particular * metric for a customer account in one minute intervals. It is possible that the - * bursts of incoming bytes/records ingested to a delivery stream last only for a + * bursts of incoming bytes/records ingested to a Firehose stream last only for a * few seconds. Due to this, the actual spikes in the traffic might not be fully * visible in the customer's 1 minute CloudWatch metrics.
You must specify - * the name of the delivery stream and the data record when using PutRecord. + * the name of the Firehose stream and the data record when using PutRecord. * The data record consists of a data blob that can be up to 1,000 KiB in size, and * any kind of data. For example, it can be a segment from a log file, geographic - * location data, website clickstream data, and so on.
Firehose buffers
- * records before delivering them to the destination. To disambiguate the data
- * blobs at the destination, a common solution is to use delimiters in the data,
- * such as a newline (\n
) or some other character unique within the
- * data. This allows the consumer application to parse individual data items when
- * reading the data from the destination.
The PutRecord
- * operation returns a RecordId
, which is a unique string assigned to
- * each record. Producer applications can use this ID for purposes such as
- * auditability and investigation.
If the PutRecord
operation
- * throws a ServiceUnavailableException
, the API is automatically
- * reinvoked (retried) 3 times. If the exception persists, it is possible that the
- * throughput limits have been exceeded for the delivery stream.
Re-invoking the Put API operations (for example, PutRecord and - * PutRecordBatch) can result in data duplicates. For larger data assets, allow for - * a longer time out before retrying Put API operations.
Data records sent - * to Firehose are stored for 24 hours from the time they are added to a delivery - * stream as it tries to send the records to the destination. If the destination is - * unreachable for more than 24 hours, the data is no longer available.
- *Don't concatenate two or more base64 strings to form the data - * fields of your records. Instead, concatenate the raw data, then perform base64 - * encoding.
For multi record + * de-aggregation, you can not put more than 500 records even if the data blob + * length is less than 1000 KiB. If you include more than 500 records, the request + * succeeds but the record de-aggregation doesn't work as expected and + * transformation lambda is invoked with the complete base64 encoded data blob + * instead of de-aggregated base64 decoded records.
Firehose buffers records
+ * before delivering them to the destination. To disambiguate the data blobs at the
+ * destination, a common solution is to use delimiters in the data, such as a
+ * newline (\n
) or some other character unique within the data. This
+ * allows the consumer application to parse individual data items when reading the
+ * data from the destination.
The PutRecord
operation returns a
+ * RecordId
, which is a unique string assigned to each record.
+ * Producer applications can use this ID for purposes such as auditability and
+ * investigation.
If the PutRecord
operation throws a
+ * ServiceUnavailableException
, the API is automatically reinvoked
+ * (retried) 3 times. If the exception persists, it is possible that the throughput
+ * limits have been exceeded for the Firehose stream.
Re-invoking the Put + * API operations (for example, PutRecord and PutRecordBatch) can result in data + * duplicates. For larger data assets, allow for a longer time out before retrying + * Put API operations.
Data records sent to Firehose are stored for 24 hours + * from the time they are added to a Firehose stream as it tries to send the + * records to the destination. If the destination is unreachable for more than 24 + * hours, the data is no longer available.
Don't concatenate two + * or more base64 strings to form the data fields of your records. Instead, + * concatenate the raw data, then perform base64 encoding.
+ *Writes multiple data records into a delivery stream in a single call, which + *
Writes multiple data records into a Firehose stream in a single call, which * can achieve higher throughput per producer than when writing single records. To - * write single data records into a delivery stream, use PutRecord. + * write single data records into a Firehose stream, use PutRecord. * Applications using these operations are referred to as producers.
*Firehose accumulates and publishes a particular metric for a customer account * in one minute intervals. It is possible that the bursts of incoming - * bytes/records ingested to a delivery stream last only for a few seconds. Due to + * bytes/records ingested to a Firehose stream last only for a few seconds. Due to * this, the actual spikes in the traffic might not be fully visible in the * customer's 1 minute CloudWatch metrics.
For information about service * quota, see .
Each PutRecordBatch request supports up to 500 * records. Each record in the request can be as large as 1,000 KB (before base64 * encoding), up to a limit of 4 MB for the entire request. These limits cannot be - * changed.
You must specify the name of the delivery stream and the data + * changed.
You must specify the name of the Firehose stream and the data * record when using PutRecord. The data record consists of a data blob that * can be up to 1,000 KB in size, and any kind of data. For example, it could be a * segment from a log file, geographic location data, website clickstream data, and - * so on.
Firehose buffers records before delivering them to the
- * destination. To disambiguate the data blobs at the destination, a common
- * solution is to use delimiters in the data, such as a newline (\n
)
- * or some other character unique within the data. This allows the consumer
- * application to parse individual data items when reading the data from the
- * destination.
The PutRecordBatch response includes a count of
- * failed records, FailedPutCount
, and an array of responses,
+ * so on.
For multi-record de-aggregation, you cannot put more than 500 + * records even if the data blob length is less than 1000 KiB. If you include more + * than 500 records, the request succeeds but the record de-aggregation doesn't + * work as expected and the transformation Lambda function is invoked with the complete base64 + * encoded data blob instead of de-aggregated base64 decoded records.</p>
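The limits stated above (at most 500 records per call, each record at most 1,000 KB before base64 encoding, at most 4 MB per request) can be checked on the producer side before a request is ever built. A minimal client-side sketch; the helper name is hypothetical and this is not part of the SDK, which performs no such pre-validation:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical pre-check mirroring the documented PutRecordBatch limits:
// at most 500 records per call, each record at most 1,000 KB before
// base64 encoding, and at most 4 MB for the entire request.
bool FitsPutRecordBatchLimits(const std::vector<std::string>& records) {
    constexpr std::size_t kMaxRecords = 500;
    constexpr std::size_t kMaxRecordBytes = 1000 * 1024;      // 1,000 KB
    constexpr std::size_t kMaxRequestBytes = 4 * 1024 * 1024; // 4 MB
    if (records.size() > kMaxRecords) return false;
    std::size_t total = 0;
    for (const auto& r : records) {
        if (r.size() > kMaxRecordBytes) return false; // single record too large
        total += r.size();
    }
    return total <= kMaxRequestBytes; // whole request must also fit
}
```

A batch that fails this check would otherwise be rejected (or mis-handled, in the de-aggregation case described above) only after the round trip to the service.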
+ *Firehose buffers records before delivering them to the destination. To
+ * disambiguate the data blobs at the destination, a common solution is to use
+ * delimiters in the data, such as a newline (\n
) or some other
+ * character unique within the data. This allows the consumer application to parse
+ * individual data items when reading the data from the destination.
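The delimiter convention recommended above can be sketched for both sides of the pipeline; the helper names are hypothetical and the newline choice is just the example the documentation itself uses:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Producer side: append a newline delimiter so records concatenated at
// the destination can be told apart again. Any character that never
// occurs inside the data would work equally well.
std::string FrameRecord(const std::string& data) { return data + "\n"; }

// Consumer side: split a delivered blob back into individual records.
std::vector<std::string> SplitRecords(const std::string& blob) {
    std::vector<std::string> out;
    std::istringstream in(blob);
    for (std::string line; std::getline(in, line);) out.push_back(line);
    return out;
}
```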
The
+ * PutRecordBatch response includes a count of failed records,
+ * FailedPutCount
, and an array of responses,
* RequestResponses
. Even if the PutRecordBatch call succeeds,
* the value of FailedPutCount
may be greater than 0, indicating that
* there are records for which the operation didn't succeed. Each entry in the
@@ -416,11 +425,11 @@ namespace Firehose
* handle any duplicates at the destination.
If PutRecordBatch throws
* ServiceUnavailableException
, the API is automatically reinvoked
* (retried) 3 times. If the exception persists, it is possible that the throughput
- * limits have been exceeded for the delivery stream.
Re-invoking the Put + * limits have been exceeded for the Firehose stream.
Re-invoking the Put * API operations (for example, PutRecord and PutRecordBatch) can result in data * duplicates. For larger data assets, allow for a longer time out before retrying * Put API operations.
Data records sent to Firehose are stored for 24 hours - * from the time they are added to a delivery stream as it attempts to send the + * from the time they are added to a Firehose stream as it attempts to send the * records to the destination. If the destination is unreachable for more than 24 * hours, the data is no longer available.
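The partial-failure handling described above (check FailedPutCount, then resubmit only the entries whose responses report an error) can be sketched independently of the SDK. The types and names below are hypothetical; sendBatch stands in for the real PutRecordBatch call and must return one response entry per submitted record:

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Minimal stand-in for one entry of the RequestResponses array:
// either the record was accepted or it carries an error.
struct BatchResponseEntry {
    bool succeeded;
};

// Resubmit only the records whose response entries failed. Note that,
// as the documentation warns, re-invoking the Put operations can still
// produce duplicates at the destination.
std::vector<std::string> RetryFailedRecords(
    std::vector<std::string> records,
    const std::function<std::vector<BatchResponseEntry>(
        const std::vector<std::string>&)>& sendBatch,
    int maxAttempts) {
    for (int attempt = 0; attempt < maxAttempts && !records.empty(); ++attempt) {
        std::vector<BatchResponseEntry> responses = sendBatch(records);
        std::vector<std::string> failed;
        for (std::size_t i = 0; i < records.size(); ++i) {
            if (!responses[i].succeeded) failed.push_back(records[i]);
        }
        records = std::move(failed); // failed.size() plays the FailedPutCount role
    }
    return records; // whatever is left still failed after maxAttempts
}
```

A production version would also back off between attempts, as the guidance above suggests for larger data assets.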
Don't concatenate two * or more base64 strings to form the data fields of your records. Instead, @@ -450,21 +459,21 @@ namespace Firehose } /** - *
Enables server-side encryption (SSE) for the delivery stream.
This + *
Enables server-side encryption (SSE) for the Firehose stream.
This
* operation is asynchronous. It returns immediately. When you invoke it, Firehose
* first sets the encryption status of the stream to ENABLING
, and
- * then to ENABLED
. The encryption status of a delivery stream is the
+ * then to ENABLED
. The encryption status of a Firehose stream is the
* Status
property in DeliveryStreamEncryptionConfiguration. If
* the operation fails, the encryption status changes to
* ENABLING_FAILED
. You can continue to read and write data to your
- * delivery stream while the encryption status is ENABLING
, but the
+ * Firehose stream while the encryption status is ENABLING
, but the
* data is not encrypted. It can take up to 5 seconds after the encryption status
- * changes to ENABLED
before all records written to the delivery
+ * changes to ENABLED
before all records written to the Firehose
* stream are encrypted. To find out whether a record or a batch of records was
* encrypted, check the response elements PutRecordOutput$Encrypted and
* PutRecordBatchOutput$Encrypted, respectively.
To check the - * encryption status of a delivery stream, use DescribeDeliveryStream.
- *Even if encryption is currently enabled for a delivery stream, you can still + * encryption status of a Firehose stream, use DescribeDeliveryStream.
+ *Even if encryption is currently enabled for a Firehose stream, you can still
* invoke this operation on it to change the ARN of the CMK or both its type and
* ARN. If you invoke this method to change the CMK, and the old CMK is of type
* CUSTOMER_MANAGED_CMK
, Firehose schedules the grant it had on the
@@ -474,21 +483,21 @@ namespace Firehose
* the KMS grant creation to be successful, the Firehose API operations
* StartDeliveryStreamEncryption
and CreateDeliveryStream
* should not be called with session credentials that are more than 6 hours
- * old.
If a delivery stream already has encryption enabled and then you + * old.
If a Firehose stream already has encryption enabled and then you
* invoke this operation to change the ARN of the CMK or both its type and ARN and
* you get ENABLING_FAILED
, this only means that the attempt to change
* the CMK failed. In this case, encryption remains enabled with the old CMK.
If the encryption status of your delivery stream is + *
If the encryption status of your Firehose stream is
* ENABLING_FAILED
, you can invoke this operation again with a valid
* CMK. The CMK must be enabled and the key policy mustn't explicitly deny the
* permission for Firehose to invoke KMS encrypt and decrypt operations.
You
- * can enable SSE for a delivery stream only if it's a delivery stream that uses
+ * can enable SSE for a Firehose stream only if it's a Firehose stream that uses
* DirectPut
as its source.
The
* StartDeliveryStreamEncryption
and
* StopDeliveryStreamEncryption
operations have a combined limit of 25
- * calls per delivery stream per 24 hours. For example, you reach the limit if you
+ * calls per Firehose stream per 24 hours. For example, you reach the limit if you
* call StartDeliveryStreamEncryption
13 times and
- * StopDeliveryStreamEncryption
12 times for the same delivery stream
+ * StopDeliveryStreamEncryption
12 times for the same Firehose stream
* in a 24-hour period.
Disables server-side encryption (SSE) for the delivery stream.
This + *
Disables server-side encryption (SSE) for the Firehose stream.
This
* operation is asynchronous. It returns immediately. When you invoke it, Firehose
* first sets the encryption status of the stream to DISABLING
, and
* then to DISABLED
. You can continue to read and write data to your
* stream while its status is DISABLING
. It can take up to 5 seconds
* after the encryption status changes to DISABLED
before all records
- * written to the delivery stream are no longer subject to encryption. To find out
+ * written to the Firehose stream are no longer subject to encryption. To find out
* whether a record or a batch of records was encrypted, check the response
* elements PutRecordOutput$Encrypted and
* PutRecordBatchOutput$Encrypted, respectively.
To check the - * encryption state of a delivery stream, use DescribeDeliveryStream.
+ * encryption state of a Firehose stream, use DescribeDeliveryStream. *If SSE is enabled using a customer managed CMK and then you invoke
* StopDeliveryStreamEncryption
, Firehose schedules the related KMS
* grant for retirement and then retires it after it ensures that it is finished
* delivering records to the destination.
The
* StartDeliveryStreamEncryption
and
* StopDeliveryStreamEncryption
operations have a combined limit of 25
- * calls per delivery stream per 24 hours. For example, you reach the limit if you
+ * calls per Firehose stream per 24 hours. For example, you reach the limit if you
* call StartDeliveryStreamEncryption
13 times and
- * StopDeliveryStreamEncryption
12 times for the same delivery stream
+ * StopDeliveryStreamEncryption
12 times for the same Firehose stream
* in a 24-hour period.
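The combined 25-call quota described above for StartDeliveryStreamEncryption and StopDeliveryStreamEncryption is enforced by the service; a caller that wants to stay under it proactively only needs a 24-hour sliding window. A client-side bookkeeping sketch, with a hypothetical class name and not something the SDK provides:

```cpp
#include <chrono>
#include <deque>

// Tracks combined Start/StopDeliveryStreamEncryption calls for one
// stream against the documented quota of 25 calls per 24 hours.
class SseCallQuota {
public:
    // Returns true and records the call if it fits in the window.
    bool TryRecordCall(std::chrono::steady_clock::time_point now) {
        const auto window = std::chrono::hours(24);
        while (!calls_.empty() && now - calls_.front() >= window)
            calls_.pop_front(); // drop calls older than 24 hours
        if (calls_.size() >= 25) return false;
        calls_.push_back(now);
        return true;
    }

private:
    std::deque<std::chrono::steady_clock::time_point> calls_;
};
```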
Adds or updates tags for the specified delivery stream. A tag is a key-value + *
Adds or updates tags for the specified Firehose stream. A tag is a key-value * pair that you can define and assign to Amazon Web Services resources. If you * specify a tag that already exists, the tag value is replaced with the value that * you specify in the request. Tags are metadata. For example, you can add friendly * names and descriptions or other types of information that can help you - * distinguish the delivery stream. For more information about tags, see Using * Cost Allocation Tags in the Amazon Web Services Billing and Cost - * Management User Guide.
Each delivery stream can have up to 50 tags. + * Management User Guide.
Each Firehose stream can have up to 50 tags. *
This operation has a limit of five transactions per second per account. *
Removes tags from the specified delivery stream. Removed tags are deleted, + *
Removes tags from the specified Firehose stream. Removed tags are deleted, * and you can't recover them after this operation successfully completes.
*If you specify a tag that doesn't exist, the operation ignores it.
*This operation has a limit of five transactions per second per account. @@ -623,13 +632,13 @@ namespace Firehose } /** - *
Updates the specified destination of the specified delivery stream.
+ *Updates the specified destination of the specified Firehose stream.
*Use this operation to change the destination type (for example, to replace * the Amazon S3 destination with Amazon Redshift) or change the parameters * associated with a destination (for example, to change the bucket name of the * Amazon S3 destination). The update might not occur immediately. The target - * delivery stream remains active while the configurations are updated, so data - * writes to the delivery stream can continue during this process. The updated + * Firehose stream remains active while the configurations are updated, so data + * writes to the Firehose stream can continue during this process. The updated * configurations are usually effective within a few minutes.
Switching * between Amazon OpenSearch Service and other services is not supported. For an * Amazon OpenSearch Service destination, you can only update to another Amazon diff --git a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/AmazonOpenSearchServerlessBufferingHints.h b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/AmazonOpenSearchServerlessBufferingHints.h index 7a2ca9587f9..55526e93088 100644 --- a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/AmazonOpenSearchServerlessBufferingHints.h +++ b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/AmazonOpenSearchServerlessBufferingHints.h @@ -53,7 +53,7 @@ namespace Model *
Buffer incoming data to the specified size, in MBs, before delivering it to * the destination. The default value is 5.
We recommend setting this * parameter to a value greater than the amount of data you typically ingest into - * the delivery stream in 10 seconds. For example, if you typically ingest data at + * the Firehose stream in 10 seconds. For example, if you typically ingest data at * 1 MB/sec, the value should be 10 MB or higher.
*/ inline int GetSizeInMBs() const{ return m_sizeInMBs; } diff --git a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/AmazonopensearchserviceBufferingHints.h b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/AmazonopensearchserviceBufferingHints.h index 6459fb69a97..58ce0a75bc9 100644 --- a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/AmazonopensearchserviceBufferingHints.h +++ b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/AmazonopensearchserviceBufferingHints.h @@ -52,7 +52,7 @@ namespace Model *Buffer incoming data to the specified size, in MBs, before delivering it to * the destination. The default value is 5.
We recommend setting this * parameter to a value greater than the amount of data you typically ingest into - * the delivery stream in 10 seconds. For example, if you typically ingest data at + * the Firehose stream in 10 seconds. For example, if you typically ingest data at * 1 MB/sec, the value should be 10 MB or higher.
*/ inline int GetSizeInMBs() const{ return m_sizeInMBs; } diff --git a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/AmazonopensearchserviceDestinationUpdate.h b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/AmazonopensearchserviceDestinationUpdate.h index ec786166bfb..a389ab741f9 100644 --- a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/AmazonopensearchserviceDestinationUpdate.h +++ b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/AmazonopensearchserviceDestinationUpdate.h @@ -111,9 +111,9 @@ namespace Model *The Amazon OpenSearch Service type name. For Elasticsearch 6.x, there can be * only one type per index. If you try to specify a new type for an existing index * that already has another type, Firehose returns an error during runtime.
- *If you upgrade Elasticsearch from 6.x to 7.x and don’t update your delivery + *
If you upgrade Elasticsearch from 6.x to 7.x and don’t update your Firehose * stream, Firehose still delivers data to Elasticsearch with the old index name - * and type name. If you want to update your delivery stream with a new index name, + * and type name. If you want to update your Firehose stream with a new index name, * provide an empty string for TypeName.
*/ inline const Aws::String& GetTypeName() const{ return m_typeName; } diff --git a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/BufferingHints.h b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/BufferingHints.h index 792cbe039ef..5797eef376e 100644 --- a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/BufferingHints.h +++ b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/BufferingHints.h @@ -47,7 +47,7 @@ namespace Model * specify a value for it, you must also specify a value for *IntervalInSeconds
, and vice versa. We recommend setting this * parameter to a value greater than the amount of data you typically ingest into - * the delivery stream in 10 seconds. For example, if you typically ingest data at + * the Firehose stream in 10 seconds. For example, if you typically ingest data at * 1 MiB/sec, the value should be 10 MiB or higher.
*/ inline int GetSizeInMBs() const{ return m_sizeInMBs; } diff --git a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/CatalogConfiguration.h b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/CatalogConfiguration.h index 7f6cc3cc57c..6caf511c807 100644 --- a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/CatalogConfiguration.h +++ b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/CatalogConfiguration.h @@ -25,8 +25,7 @@ namespace Model /** *Describes the containers where the destination Apache Iceberg Tables are - * persisted.
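The sizing guidance above reduces to a simple calculation: buffer at least ten seconds of typical ingest. A sketch with a hypothetical helper name; the 5 MiB floor is an assumption carried over from the default value stated in the buffering-hints documentation above:

```cpp
#include <algorithm>
#include <cmath>

// Recommended BufferingHints SizeInMBs per the guidance above: at least
// the data typically ingested in 10 seconds, never below the default of 5.
int RecommendedSizeInMBs(double typicalIngestMiBPerSec) {
    int tenSecondsOfData =
        static_cast<int>(std::ceil(typicalIngestMiBPerSec * 10.0));
    return std::max(tenSecondsOfData, 5);
}
```

For example, a producer ingesting 1 MiB/sec lands on the 10 MiB value the documentation cites.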
Amazon Data Firehose is in preview release and is subject to - * change.
Specifies the Glue catalog ARN indentifier of the destination Apache Iceberg + *
Specifies the Glue catalog ARN identifier of the destination Apache Iceberg
* Tables. You must specify the ARN in the format
- * arn:aws:glue:region:account-id:catalog
.
Amazon Data - * Firehose is in preview release and is subject to change.
+ *arn:aws:glue:region:account-id:catalog
.
*/
inline const Aws::String& GetCatalogARN() const{ return m_catalogARN; }
inline bool CatalogARNHasBeenSet() const { return m_catalogARNHasBeenSet; }
@@ -55,10 +53,28 @@ namespace Model
inline CatalogConfiguration& WithCatalogARN(Aws::String&& value) { SetCatalogARN(std::move(value)); return *this;}
inline CatalogConfiguration& WithCatalogARN(const char* value) { SetCatalogARN(value); return *this;}
///@}
+
+ ///@{
+ /**
+ *
Amazon Data Firehose is in preview release and is subject to + * change.
+ */ + inline const Aws::String& GetWarehouseLocation() const{ return m_warehouseLocation; } + inline bool WarehouseLocationHasBeenSet() const { return m_warehouseLocationHasBeenSet; } + inline void SetWarehouseLocation(const Aws::String& value) { m_warehouseLocationHasBeenSet = true; m_warehouseLocation = value; } + inline void SetWarehouseLocation(Aws::String&& value) { m_warehouseLocationHasBeenSet = true; m_warehouseLocation = std::move(value); } + inline void SetWarehouseLocation(const char* value) { m_warehouseLocationHasBeenSet = true; m_warehouseLocation.assign(value); } + inline CatalogConfiguration& WithWarehouseLocation(const Aws::String& value) { SetWarehouseLocation(value); return *this;} + inline CatalogConfiguration& WithWarehouseLocation(Aws::String&& value) { SetWarehouseLocation(std::move(value)); return *this;} + inline CatalogConfiguration& WithWarehouseLocation(const char* value) { SetWarehouseLocation(value); return *this;} + ///@} private: Aws::String m_catalogARN; bool m_catalogARNHasBeenSet = false; + + Aws::String m_warehouseLocation; + bool m_warehouseLocationHasBeenSet = false; }; } // namespace Model diff --git a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/CloudWatchLoggingOptions.h b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/CloudWatchLoggingOptions.h index 45f97a08067..f29c9c19bcf 100644 --- a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/CloudWatchLoggingOptions.h +++ b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/CloudWatchLoggingOptions.h @@ -24,7 +24,7 @@ namespace Model { /** - *Describes the Amazon CloudWatch logging options for your delivery + *
Describes the Amazon CloudWatch logging options for your Firehose * stream.
The name of the delivery stream. This name must be unique per Amazon Web - * Services account in the same Amazon Web Services Region. If the delivery streams - * are in different accounts or different Regions, you can have multiple delivery + *
The name of the Firehose stream. This name must be unique per Amazon Web + * Services account in the same Amazon Web Services Region. If the Firehose streams + * are in different accounts or different Regions, you can have multiple Firehose * streams with the same name.
*/ inline const Aws::String& GetDeliveryStreamName() const{ return m_deliveryStreamName; } @@ -68,10 +69,10 @@ namespace Model ///@{ /** - *The delivery stream type. This parameter can be one of the following + *
The Firehose stream type. This parameter can be one of the following * values:
DirectPut
: Provider applications access
- * the delivery stream directly.
- * KinesisStreamAsSource
: The delivery stream uses a Kinesis data
+ * the Firehose stream directly.
+ * KinesisStreamAsSource
: The Firehose stream uses a Kinesis data
* stream as a source.
When a Kinesis data stream is used as the source for the delivery stream, a + *
When a Kinesis data stream is used as the source for the Firehose stream, a * KinesisStreamSourceConfiguration containing the Kinesis data stream * Amazon Resource Name (ARN) and the role ARN for the source stream.
*/ @@ -185,19 +186,19 @@ namespace Model ///@{ /** - *A set of tags to assign to the delivery stream. A tag is a key-value pair + *
A set of tags to assign to the Firehose stream. A tag is a key-value pair * that you can define and assign to Amazon Web Services resources. Tags are * metadata. For example, you can add friendly names and descriptions or other - * types of information that can help you distinguish the delivery stream. For more + * types of information that can help you distinguish the Firehose stream. For more * information about tags, see Using * Cost Allocation Tags in the Amazon Web Services Billing and Cost Management - * User Guide.
You can specify up to 50 tags when creating a delivery + * User Guide.
You can specify up to 50 tags when creating a Firehose * stream.
If you specify tags in the CreateDeliveryStream
* action, Amazon Data Firehose performs an additional authorization on the
* firehose:TagDeliveryStream
action to verify if users have
* permissions to create tags. If you do not provide this permission, requests to
- * create new Firehose delivery streams with IAM resource tags will fail with an
+ * create new Firehose streams with IAM resource tags will fail with an
* AccessDeniedException
such as following.
* AccessDeniedException
User: arn:aws:sts::x:assumed-role/x/x is * not authorized to perform: firehose:TagDeliveryStream on resource: @@ -253,8 +254,7 @@ namespace Model ///@{ /** - *
Configure Apache Iceberg Tables destination.
Amazon Data Firehose is - * in preview release and is subject to change.
+ *Configure Apache Iceberg Tables destination.
*/ inline const IcebergDestinationConfiguration& GetIcebergDestinationConfiguration() const{ return m_icebergDestinationConfiguration; } inline bool IcebergDestinationConfigurationHasBeenSet() const { return m_icebergDestinationConfigurationHasBeenSet; } @@ -263,6 +263,19 @@ namespace Model inline CreateDeliveryStreamRequest& WithIcebergDestinationConfiguration(const IcebergDestinationConfiguration& value) { SetIcebergDestinationConfiguration(value); return *this;} inline CreateDeliveryStreamRequest& WithIcebergDestinationConfiguration(IcebergDestinationConfiguration&& value) { SetIcebergDestinationConfiguration(std::move(value)); return *this;} ///@} + + ///@{ + /** + *
Amazon Data Firehose is in preview release and is subject to + * change.
+ */ + inline const DatabaseSourceConfiguration& GetDatabaseSourceConfiguration() const{ return m_databaseSourceConfiguration; } + inline bool DatabaseSourceConfigurationHasBeenSet() const { return m_databaseSourceConfigurationHasBeenSet; } + inline void SetDatabaseSourceConfiguration(const DatabaseSourceConfiguration& value) { m_databaseSourceConfigurationHasBeenSet = true; m_databaseSourceConfiguration = value; } + inline void SetDatabaseSourceConfiguration(DatabaseSourceConfiguration&& value) { m_databaseSourceConfigurationHasBeenSet = true; m_databaseSourceConfiguration = std::move(value); } + inline CreateDeliveryStreamRequest& WithDatabaseSourceConfiguration(const DatabaseSourceConfiguration& value) { SetDatabaseSourceConfiguration(value); return *this;} + inline CreateDeliveryStreamRequest& WithDatabaseSourceConfiguration(DatabaseSourceConfiguration&& value) { SetDatabaseSourceConfiguration(std::move(value)); return *this;} + ///@} private: Aws::String m_deliveryStreamName; @@ -309,6 +322,9 @@ namespace Model IcebergDestinationConfiguration m_icebergDestinationConfiguration; bool m_icebergDestinationConfigurationHasBeenSet = false; + + DatabaseSourceConfiguration m_databaseSourceConfiguration; + bool m_databaseSourceConfigurationHasBeenSet = false; }; } // namespace Model diff --git a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/CreateDeliveryStreamResult.h b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/CreateDeliveryStreamResult.h index 6139320ff9a..04b87262df9 100644 --- a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/CreateDeliveryStreamResult.h +++ b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/CreateDeliveryStreamResult.h @@ -34,7 +34,7 @@ namespace Model ///@{ /** - *The ARN of the delivery stream.
+ *The ARN of the Firehose stream.
*/ inline const Aws::String& GetDeliveryStreamARN() const{ return m_deliveryStreamARN; } inline void SetDeliveryStreamARN(const Aws::String& value) { m_deliveryStreamARN = value; } diff --git a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/DatabaseColumnList.h b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/DatabaseColumnList.h new file mode 100644 index 00000000000..8a6dde0c9fb --- /dev/null +++ b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/DatabaseColumnList.h @@ -0,0 +1,84 @@ +/** + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: Apache-2.0. + */ + +#pragma once +#include
Amazon Data Firehose is in preview release and is subject to + * change.
Amazon Data Firehose is in preview release and is subject to + * change.
+ */ + inline const Aws::Vector
Amazon Data Firehose is in preview release and is subject to + * change.
+ */ + inline const Aws::Vector
Amazon Data Firehose is in preview release and is subject to + * change.
Amazon Data Firehose is in preview release and is subject to + * change.
+ */ + inline const Aws::Vector
Amazon Data Firehose is in preview release and is subject to + * change.
+ */ + inline const Aws::Vector
Amazon Data Firehose is in preview release and is subject to + * change.
Amazon Data Firehose is in preview release and is subject to + * change.
+ */ + inline const Aws::String& GetId() const{ return m_id; } + inline bool IdHasBeenSet() const { return m_idHasBeenSet; } + inline void SetId(const Aws::String& value) { m_idHasBeenSet = true; m_id = value; } + inline void SetId(Aws::String&& value) { m_idHasBeenSet = true; m_id = std::move(value); } + inline void SetId(const char* value) { m_idHasBeenSet = true; m_id.assign(value); } + inline DatabaseSnapshotInfo& WithId(const Aws::String& value) { SetId(value); return *this;} + inline DatabaseSnapshotInfo& WithId(Aws::String&& value) { SetId(std::move(value)); return *this;} + inline DatabaseSnapshotInfo& WithId(const char* value) { SetId(value); return *this;} + ///@} + + ///@{ + /** + *
Amazon Data Firehose is in preview release and is subject to + * change.
+ */ + inline const Aws::String& GetTable() const{ return m_table; } + inline bool TableHasBeenSet() const { return m_tableHasBeenSet; } + inline void SetTable(const Aws::String& value) { m_tableHasBeenSet = true; m_table = value; } + inline void SetTable(Aws::String&& value) { m_tableHasBeenSet = true; m_table = std::move(value); } + inline void SetTable(const char* value) { m_tableHasBeenSet = true; m_table.assign(value); } + inline DatabaseSnapshotInfo& WithTable(const Aws::String& value) { SetTable(value); return *this;} + inline DatabaseSnapshotInfo& WithTable(Aws::String&& value) { SetTable(std::move(value)); return *this;} + inline DatabaseSnapshotInfo& WithTable(const char* value) { SetTable(value); return *this;} + ///@} + + ///@{ + /** + *
Amazon Data Firehose is in preview release and is subject to + * change.
+ */ + inline const Aws::Utils::DateTime& GetRequestTimestamp() const{ return m_requestTimestamp; } + inline bool RequestTimestampHasBeenSet() const { return m_requestTimestampHasBeenSet; } + inline void SetRequestTimestamp(const Aws::Utils::DateTime& value) { m_requestTimestampHasBeenSet = true; m_requestTimestamp = value; } + inline void SetRequestTimestamp(Aws::Utils::DateTime&& value) { m_requestTimestampHasBeenSet = true; m_requestTimestamp = std::move(value); } + inline DatabaseSnapshotInfo& WithRequestTimestamp(const Aws::Utils::DateTime& value) { SetRequestTimestamp(value); return *this;} + inline DatabaseSnapshotInfo& WithRequestTimestamp(Aws::Utils::DateTime&& value) { SetRequestTimestamp(std::move(value)); return *this;} + ///@} + + ///@{ + /** + *
Amazon Data Firehose is in preview release and is subject to + * change.
+ */ + inline const SnapshotRequestedBy& GetRequestedBy() const{ return m_requestedBy; } + inline bool RequestedByHasBeenSet() const { return m_requestedByHasBeenSet; } + inline void SetRequestedBy(const SnapshotRequestedBy& value) { m_requestedByHasBeenSet = true; m_requestedBy = value; } + inline void SetRequestedBy(SnapshotRequestedBy&& value) { m_requestedByHasBeenSet = true; m_requestedBy = std::move(value); } + inline DatabaseSnapshotInfo& WithRequestedBy(const SnapshotRequestedBy& value) { SetRequestedBy(value); return *this;} + inline DatabaseSnapshotInfo& WithRequestedBy(SnapshotRequestedBy&& value) { SetRequestedBy(std::move(value)); return *this;} + ///@} + + ///@{ + /** + *
Amazon Data Firehose is in preview release and is subject to + * change.
+ */ + inline const SnapshotStatus& GetStatus() const{ return m_status; } + inline bool StatusHasBeenSet() const { return m_statusHasBeenSet; } + inline void SetStatus(const SnapshotStatus& value) { m_statusHasBeenSet = true; m_status = value; } + inline void SetStatus(SnapshotStatus&& value) { m_statusHasBeenSet = true; m_status = std::move(value); } + inline DatabaseSnapshotInfo& WithStatus(const SnapshotStatus& value) { SetStatus(value); return *this;} + inline DatabaseSnapshotInfo& WithStatus(SnapshotStatus&& value) { SetStatus(std::move(value)); return *this;} + ///@} + + ///@{ + + inline const FailureDescription& GetFailureDescription() const{ return m_failureDescription; } + inline bool FailureDescriptionHasBeenSet() const { return m_failureDescriptionHasBeenSet; } + inline void SetFailureDescription(const FailureDescription& value) { m_failureDescriptionHasBeenSet = true; m_failureDescription = value; } + inline void SetFailureDescription(FailureDescription&& value) { m_failureDescriptionHasBeenSet = true; m_failureDescription = std::move(value); } + inline DatabaseSnapshotInfo& WithFailureDescription(const FailureDescription& value) { SetFailureDescription(value); return *this;} + inline DatabaseSnapshotInfo& WithFailureDescription(FailureDescription&& value) { SetFailureDescription(std::move(value)); return *this;} + ///@} + private: + + Aws::String m_id; + bool m_idHasBeenSet = false; + + Aws::String m_table; + bool m_tableHasBeenSet = false; + + Aws::Utils::DateTime m_requestTimestamp; + bool m_requestTimestampHasBeenSet = false; + + SnapshotRequestedBy m_requestedBy; + bool m_requestedByHasBeenSet = false; + + SnapshotStatus m_status; + bool m_statusHasBeenSet = false; + + FailureDescription m_failureDescription; + bool m_failureDescriptionHasBeenSet = false; + }; + +} // namespace Model +} // namespace Firehose +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/DatabaseSourceAuthenticationConfiguration.h 
b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/DatabaseSourceAuthenticationConfiguration.h new file mode 100644 index 00000000000..08519375d32 --- /dev/null +++ b/generated/src/aws-cpp-sdk-firehose/include/aws/firehose/model/DatabaseSourceAuthenticationConfiguration.h @@ -0,0 +1,58 @@ +/** + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: Apache-2.0. + */ + +#pragma once +#include
Amazon Data Firehose is in preview release and is subject to + * change.
Amazon Data Firehose is in preview release and is subject to + * change.
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const DatabaseType& GetType() const{ return m_type; }
+    inline bool TypeHasBeenSet() const { return m_typeHasBeenSet; }
+    inline void SetType(const DatabaseType& value) { m_typeHasBeenSet = true; m_type = value; }
+    inline void SetType(DatabaseType&& value) { m_typeHasBeenSet = true; m_type = std::move(value); }
+    inline DatabaseSourceConfiguration& WithType(const DatabaseType& value) { SetType(value); return *this;}
+    inline DatabaseSourceConfiguration& WithType(DatabaseType&& value) { SetType(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const Aws::String& GetEndpoint() const{ return m_endpoint; }
+    inline bool EndpointHasBeenSet() const { return m_endpointHasBeenSet; }
+    inline void SetEndpoint(const Aws::String& value) { m_endpointHasBeenSet = true; m_endpoint = value; }
+    inline void SetEndpoint(Aws::String&& value) { m_endpointHasBeenSet = true; m_endpoint = std::move(value); }
+    inline void SetEndpoint(const char* value) { m_endpointHasBeenSet = true; m_endpoint.assign(value); }
+    inline DatabaseSourceConfiguration& WithEndpoint(const Aws::String& value) { SetEndpoint(value); return *this;}
+    inline DatabaseSourceConfiguration& WithEndpoint(Aws::String&& value) { SetEndpoint(std::move(value)); return *this;}
+    inline DatabaseSourceConfiguration& WithEndpoint(const char* value) { SetEndpoint(value); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline int GetPort() const{ return m_port; }
+    inline bool PortHasBeenSet() const { return m_portHasBeenSet; }
+    inline void SetPort(int value) { m_portHasBeenSet = true; m_port = value; }
+    inline DatabaseSourceConfiguration& WithPort(int value) { SetPort(value); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const SSLMode& GetSSLMode() const{ return m_sSLMode; }
+    inline bool SSLModeHasBeenSet() const { return m_sSLModeHasBeenSet; }
+    inline void SetSSLMode(const SSLMode& value) { m_sSLModeHasBeenSet = true; m_sSLMode = value; }
+    inline void SetSSLMode(SSLMode&& value) { m_sSLModeHasBeenSet = true; m_sSLMode = std::move(value); }
+    inline DatabaseSourceConfiguration& WithSSLMode(const SSLMode& value) { SetSSLMode(value); return *this;}
+    inline DatabaseSourceConfiguration& WithSSLMode(SSLMode&& value) { SetSSLMode(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const DatabaseList& GetDatabases() const{ return m_databases; }
+    inline bool DatabasesHasBeenSet() const { return m_databasesHasBeenSet; }
+    inline void SetDatabases(const DatabaseList& value) { m_databasesHasBeenSet = true; m_databases = value; }
+    inline void SetDatabases(DatabaseList&& value) { m_databasesHasBeenSet = true; m_databases = std::move(value); }
+    inline DatabaseSourceConfiguration& WithDatabases(const DatabaseList& value) { SetDatabases(value); return *this;}
+    inline DatabaseSourceConfiguration& WithDatabases(DatabaseList&& value) { SetDatabases(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const DatabaseTableList& GetTables() const{ return m_tables; }
+    inline bool TablesHasBeenSet() const { return m_tablesHasBeenSet; }
+    inline void SetTables(const DatabaseTableList& value) { m_tablesHasBeenSet = true; m_tables = value; }
+    inline void SetTables(DatabaseTableList&& value) { m_tablesHasBeenSet = true; m_tables = std::move(value); }
+    inline DatabaseSourceConfiguration& WithTables(const DatabaseTableList& value) { SetTables(value); return *this;}
+    inline DatabaseSourceConfiguration& WithTables(DatabaseTableList&& value) { SetTables(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const DatabaseColumnList& GetColumns() const{ return m_columns; }
+    inline bool ColumnsHasBeenSet() const { return m_columnsHasBeenSet; }
+    inline void SetColumns(const DatabaseColumnList& value) { m_columnsHasBeenSet = true; m_columns = value; }
+    inline void SetColumns(DatabaseColumnList&& value) { m_columnsHasBeenSet = true; m_columns = std::move(value); }
+    inline DatabaseSourceConfiguration& WithColumns(const DatabaseColumnList& value) { SetColumns(value); return *this;}
+    inline DatabaseSourceConfiguration& WithColumns(DatabaseColumnList&& value) { SetColumns(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const Aws::Vector
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const Aws::String& GetSnapshotWatermarkTable() const{ return m_snapshotWatermarkTable; }
+    inline bool SnapshotWatermarkTableHasBeenSet() const { return m_snapshotWatermarkTableHasBeenSet; }
+    inline void SetSnapshotWatermarkTable(const Aws::String& value) { m_snapshotWatermarkTableHasBeenSet = true; m_snapshotWatermarkTable = value; }
+    inline void SetSnapshotWatermarkTable(Aws::String&& value) { m_snapshotWatermarkTableHasBeenSet = true; m_snapshotWatermarkTable = std::move(value); }
+    inline void SetSnapshotWatermarkTable(const char* value) { m_snapshotWatermarkTableHasBeenSet = true; m_snapshotWatermarkTable.assign(value); }
+    inline DatabaseSourceConfiguration& WithSnapshotWatermarkTable(const Aws::String& value) { SetSnapshotWatermarkTable(value); return *this;}
+    inline DatabaseSourceConfiguration& WithSnapshotWatermarkTable(Aws::String&& value) { SetSnapshotWatermarkTable(std::move(value)); return *this;}
+    inline DatabaseSourceConfiguration& WithSnapshotWatermarkTable(const char* value) { SetSnapshotWatermarkTable(value); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const DatabaseSourceAuthenticationConfiguration& GetDatabaseSourceAuthenticationConfiguration() const{ return m_databaseSourceAuthenticationConfiguration; }
+    inline bool DatabaseSourceAuthenticationConfigurationHasBeenSet() const { return m_databaseSourceAuthenticationConfigurationHasBeenSet; }
+    inline void SetDatabaseSourceAuthenticationConfiguration(const DatabaseSourceAuthenticationConfiguration& value) { m_databaseSourceAuthenticationConfigurationHasBeenSet = true; m_databaseSourceAuthenticationConfiguration = value; }
+    inline void SetDatabaseSourceAuthenticationConfiguration(DatabaseSourceAuthenticationConfiguration&& value) { m_databaseSourceAuthenticationConfigurationHasBeenSet = true; m_databaseSourceAuthenticationConfiguration = std::move(value); }
+    inline DatabaseSourceConfiguration& WithDatabaseSourceAuthenticationConfiguration(const DatabaseSourceAuthenticationConfiguration& value) { SetDatabaseSourceAuthenticationConfiguration(value); return *this;}
+    inline DatabaseSourceConfiguration& WithDatabaseSourceAuthenticationConfiguration(DatabaseSourceAuthenticationConfiguration&& value) { SetDatabaseSourceAuthenticationConfiguration(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const DatabaseSourceVPCConfiguration& GetDatabaseSourceVPCConfiguration() const{ return m_databaseSourceVPCConfiguration; }
+    inline bool DatabaseSourceVPCConfigurationHasBeenSet() const { return m_databaseSourceVPCConfigurationHasBeenSet; }
+    inline void SetDatabaseSourceVPCConfiguration(const DatabaseSourceVPCConfiguration& value) { m_databaseSourceVPCConfigurationHasBeenSet = true; m_databaseSourceVPCConfiguration = value; }
+    inline void SetDatabaseSourceVPCConfiguration(DatabaseSourceVPCConfiguration&& value) { m_databaseSourceVPCConfigurationHasBeenSet = true; m_databaseSourceVPCConfiguration = std::move(value); }
+    inline DatabaseSourceConfiguration& WithDatabaseSourceVPCConfiguration(const DatabaseSourceVPCConfiguration& value) { SetDatabaseSourceVPCConfiguration(value); return *this;}
+    inline DatabaseSourceConfiguration& WithDatabaseSourceVPCConfiguration(DatabaseSourceVPCConfiguration&& value) { SetDatabaseSourceVPCConfiguration(std::move(value)); return *this;}
+    ///@}
+  private:
+
+    DatabaseType m_type;
+    bool m_typeHasBeenSet = false;
+
+    Aws::String m_endpoint;
+    bool m_endpointHasBeenSet = false;
+
+    int m_port;
+    bool m_portHasBeenSet = false;
+
+    SSLMode m_sSLMode;
+    bool m_sSLModeHasBeenSet = false;
+
+    DatabaseList m_databases;
+    bool m_databasesHasBeenSet = false;
+
+    DatabaseTableList m_tables;
+    bool m_tablesHasBeenSet = false;
+
+    DatabaseColumnList m_columns;
+    bool m_columnsHasBeenSet = false;
+
+    Aws::Vector
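Every accessor group in the generated class above follows the same SDK idiom: a stored value, a `...HasBeenSet` flag so that unset members can be omitted from the serialized request, and fluent `With*` overloads that return `*this` for chaining. A minimal standalone sketch of that pattern (a hypothetical `EndpointConfig` class using `std::string`, not part of the SDK or this diff) shows how the pieces fit together:

```cpp
#include <string>
#include <utility>

// Stand-in for a generated request model with two properties ("Endpoint"
// and "Port"). Setters flip a HasBeenSet flag; With* setters chain.
class EndpointConfig
{
public:
    inline const std::string& GetEndpoint() const { return m_endpoint; }
    inline bool EndpointHasBeenSet() const { return m_endpointHasBeenSet; }
    inline void SetEndpoint(const std::string& value) { m_endpointHasBeenSet = true; m_endpoint = value; }
    inline void SetEndpoint(std::string&& value) { m_endpointHasBeenSet = true; m_endpoint = std::move(value); }
    inline EndpointConfig& WithEndpoint(const std::string& value) { SetEndpoint(value); return *this; }

    inline int GetPort() const { return m_port; }
    inline bool PortHasBeenSet() const { return m_portHasBeenSet; }
    inline void SetPort(int value) { m_portHasBeenSet = true; m_port = value; }
    inline EndpointConfig& WithPort(int value) { SetPort(value); return *this; }

private:
    std::string m_endpoint;
    bool m_endpointHasBeenSet = false;

    int m_port = 0;
    bool m_portHasBeenSet = false;
};
```

Callers chain the fluent setters, e.g. `config.WithEndpoint("db.example.com").WithPort(5432);`, and the serializer consults the `...HasBeenSet()` flags to skip members the caller never assigned.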
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const DatabaseType& GetType() const{ return m_type; }
+    inline bool TypeHasBeenSet() const { return m_typeHasBeenSet; }
+    inline void SetType(const DatabaseType& value) { m_typeHasBeenSet = true; m_type = value; }
+    inline void SetType(DatabaseType&& value) { m_typeHasBeenSet = true; m_type = std::move(value); }
+    inline DatabaseSourceDescription& WithType(const DatabaseType& value) { SetType(value); return *this;}
+    inline DatabaseSourceDescription& WithType(DatabaseType&& value) { SetType(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const Aws::String& GetEndpoint() const{ return m_endpoint; }
+    inline bool EndpointHasBeenSet() const { return m_endpointHasBeenSet; }
+    inline void SetEndpoint(const Aws::String& value) { m_endpointHasBeenSet = true; m_endpoint = value; }
+    inline void SetEndpoint(Aws::String&& value) { m_endpointHasBeenSet = true; m_endpoint = std::move(value); }
+    inline void SetEndpoint(const char* value) { m_endpointHasBeenSet = true; m_endpoint.assign(value); }
+    inline DatabaseSourceDescription& WithEndpoint(const Aws::String& value) { SetEndpoint(value); return *this;}
+    inline DatabaseSourceDescription& WithEndpoint(Aws::String&& value) { SetEndpoint(std::move(value)); return *this;}
+    inline DatabaseSourceDescription& WithEndpoint(const char* value) { SetEndpoint(value); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline int GetPort() const{ return m_port; }
+    inline bool PortHasBeenSet() const { return m_portHasBeenSet; }
+    inline void SetPort(int value) { m_portHasBeenSet = true; m_port = value; }
+    inline DatabaseSourceDescription& WithPort(int value) { SetPort(value); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const SSLMode& GetSSLMode() const{ return m_sSLMode; }
+    inline bool SSLModeHasBeenSet() const { return m_sSLModeHasBeenSet; }
+    inline void SetSSLMode(const SSLMode& value) { m_sSLModeHasBeenSet = true; m_sSLMode = value; }
+    inline void SetSSLMode(SSLMode&& value) { m_sSLModeHasBeenSet = true; m_sSLMode = std::move(value); }
+    inline DatabaseSourceDescription& WithSSLMode(const SSLMode& value) { SetSSLMode(value); return *this;}
+    inline DatabaseSourceDescription& WithSSLMode(SSLMode&& value) { SetSSLMode(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const DatabaseList& GetDatabases() const{ return m_databases; }
+    inline bool DatabasesHasBeenSet() const { return m_databasesHasBeenSet; }
+    inline void SetDatabases(const DatabaseList& value) { m_databasesHasBeenSet = true; m_databases = value; }
+    inline void SetDatabases(DatabaseList&& value) { m_databasesHasBeenSet = true; m_databases = std::move(value); }
+    inline DatabaseSourceDescription& WithDatabases(const DatabaseList& value) { SetDatabases(value); return *this;}
+    inline DatabaseSourceDescription& WithDatabases(DatabaseList&& value) { SetDatabases(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const DatabaseTableList& GetTables() const{ return m_tables; }
+    inline bool TablesHasBeenSet() const { return m_tablesHasBeenSet; }
+    inline void SetTables(const DatabaseTableList& value) { m_tablesHasBeenSet = true; m_tables = value; }
+    inline void SetTables(DatabaseTableList&& value) { m_tablesHasBeenSet = true; m_tables = std::move(value); }
+    inline DatabaseSourceDescription& WithTables(const DatabaseTableList& value) { SetTables(value); return *this;}
+    inline DatabaseSourceDescription& WithTables(DatabaseTableList&& value) { SetTables(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const DatabaseColumnList& GetColumns() const{ return m_columns; }
+    inline bool ColumnsHasBeenSet() const { return m_columnsHasBeenSet; }
+    inline void SetColumns(const DatabaseColumnList& value) { m_columnsHasBeenSet = true; m_columns = value; }
+    inline void SetColumns(DatabaseColumnList&& value) { m_columnsHasBeenSet = true; m_columns = std::move(value); }
+    inline DatabaseSourceDescription& WithColumns(const DatabaseColumnList& value) { SetColumns(value); return *this;}
+    inline DatabaseSourceDescription& WithColumns(DatabaseColumnList&& value) { SetColumns(std::move(value)); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const Aws::Vector
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const Aws::String& GetSnapshotWatermarkTable() const{ return m_snapshotWatermarkTable; }
+    inline bool SnapshotWatermarkTableHasBeenSet() const { return m_snapshotWatermarkTableHasBeenSet; }
+    inline void SetSnapshotWatermarkTable(const Aws::String& value) { m_snapshotWatermarkTableHasBeenSet = true; m_snapshotWatermarkTable = value; }
+    inline void SetSnapshotWatermarkTable(Aws::String&& value) { m_snapshotWatermarkTableHasBeenSet = true; m_snapshotWatermarkTable = std::move(value); }
+    inline void SetSnapshotWatermarkTable(const char* value) { m_snapshotWatermarkTableHasBeenSet = true; m_snapshotWatermarkTable.assign(value); }
+    inline DatabaseSourceDescription& WithSnapshotWatermarkTable(const Aws::String& value) { SetSnapshotWatermarkTable(value); return *this;}
+    inline DatabaseSourceDescription& WithSnapshotWatermarkTable(Aws::String&& value) { SetSnapshotWatermarkTable(std::move(value)); return *this;}
+    inline DatabaseSourceDescription& WithSnapshotWatermarkTable(const char* value) { SetSnapshotWatermarkTable(value); return *this;}
+    ///@}
+
+    ///@{
+    /**
+     * Amazon Data Firehose is in preview release and is subject to
+     * change.
+     */
+    inline const Aws::Vector