---
copyright:
  years: 2024
lastupdated: "2024-10-08"
keywords: schematics agent planning, planning agent, agent planning, command-line, api, ui
subcollection: schematics
---
{{site.data.keyword.attribute-definition-list}}
# Planning agents
{: #plan-agent-overview}
{{site.data.keyword.bpshort}} Agent extends the ability of {{site.data.keyword.bpshort}} to work directly on your private network or in any isolated network zone. Agents put you in control of the network configuration and access that you give an agent to run workspace and action jobs. Agents are designed to require no inbound access from {{site.data.keyword.bpshort}} and no opening of inbound firewall or network access ports. All communication between the agent and {{site.data.keyword.bpshort}} is outbound from the agent and under your control.
{{site.data.keyword.bpshort}} Agent is a collection of microservices that run on a Kubernetes cluster in your account. The agent also uses an {{site.data.keyword.objectstorageshort}} bucket as an intermediate, temporary data store for the log files and state files that workspace or action jobs generate.
{: shortdesc}
Review and complete the following tasks to prepare your {{site.data.keyword.cloud}} environment to deploy a new agent.
Account and networks
: An agent gives {{site.data.keyword.bpshort}} the ability to run workspace and action jobs within a target account and the account's private network. Network policies must be configured so that the cluster that the agent is deployed on can communicate back to {{site.data.keyword.bpshort}}, to the {{site.data.keyword.cloud_notm}} APIs and services, and to resources such as a user's private Git or Vault instances. For more information, see the section on planning agent network access and configuration.
- Record information about the allowed network zones and the infrastructure that is accessible to the agent.
Cluster
: A {{site.data.keyword.bpshort}} Agent can be deployed on existing private or public {{site.data.keyword.containerlong_notm}} and {{site.data.keyword.redhat_openshift_notm}} {{site.data.keyword.containershort_notm}} clusters. You can use an existing cluster or provision a new cluster with the following minimum configuration.
- For {{site.data.keyword.containerlong_notm}} cluster versions 1.28 and later, you must update the network path so that images are pulled through a VPE gateway instead of a private service endpoint, or enhance the {{site.data.keyword.bpshort}} agent template.
- Minimum configuration: Three worker nodes with the b4x16 flavor. This configuration can run four workspace or action jobs in parallel. (A CLI sketch for creating such a cluster follows this list.)
- Record information about the cluster, such as the cluster ID, cluster resource group, and region, for later use.
To support agents on {{site.data.keyword.redhat_openshift_notm}} {{site.data.keyword.containershort_notm}} clusters, you can control egress traffic through security groups and network access control lists (ACLs), based on your requirements.
You need to define any security group rules and ACLs at the VPC level before deploying an agent on the cluster. For more information, see the [Terraform script to define security groups and ACLs on a VPC](https://github.com/Cloud-Schematics/schematics-agents/blob/main/templates/infrastructure/vpc/network_acl.tf){: external}.
{: note}
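For illustration, a minimal sketch of creating a VPC cluster that meets the minimum sizing with the {{site.data.keyword.cloud_notm}} CLI follows. The cluster name, VPC ID, subnet ID, and zone are placeholders, and the `bx2.4x16` flavor is an assumption that maps the b4x16 minimum onto a current VPC worker flavor; verify the flavors and versions that are available in your region.

```sh
# Sketch only: create a three-worker VPC cluster for the agent.
# Replace every <placeholder> with values from your account.
ibmcloud ks cluster create vpc-gen2 \
  --name my-agent-cluster \
  --vpc-id <vpc-id> \
  --subnet-id <subnet-id> \
  --zone us-south-1 \
  --flavor bx2.4x16 \
  --workers 3

# Record the cluster ID, resource group, and region for later use.
ibmcloud ks cluster get --cluster my-agent-cluster
```
{: pre}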
{{site.data.keyword.cos_full_notm}}
: The {{site.data.keyword.bpshort}} Agent uses an {{site.data.keyword.objectstorageshort}} bucket to store temporary data. The {{site.data.keyword.cos_full_notm}} instance must be in the same resource group as the cluster, and the bucket must be in the same region as the cluster.
- To deploy an agent, you must have the necessary privileges to create HMAC credentials for the {{site.data.keyword.objectstorageshort}} bucket and to store the credentials as a Kubernetes secret.
- The {{site.data.keyword.cos_full_notm}} instance and bucket must exist before the agent can be deployed successfully.
- Record information about the {{site.data.keyword.cos_full_notm}} resources, such as the COS instance name, COS bucket name, and bucket region, for later use. (A CLI sketch for these steps follows this list.)
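As a hedged illustration of these steps, the following sketch creates a {{site.data.keyword.cos_full_notm}} instance, a bucket in the cluster's region, and an HMAC credential, and then stores the credential as a Kubernetes secret. All names are placeholders, and the secret name and key names that the agent deployment expects are assumptions; check the agent template for the exact names.

```sh
# Sketch only: COS instance, regional bucket, and HMAC credential.
ibmcloud resource service-instance-create my-agent-cos cloud-object-storage standard global

# The bucket must be in the same region as the cluster.
ibmcloud cos bucket-create \
  --bucket my-agent-bucket \
  --region us-south \
  --ibm-service-instance-id <cos-instance-guid>

# Create an HMAC credential for the instance.
ibmcloud resource service-key-create my-agent-hmac Writer \
  --instance-name my-agent-cos \
  --parameters '{"HMAC": true}'

# Hypothetical: store the HMAC values as a Kubernetes secret.
# The secret name and key names are assumptions, not a documented contract.
kubectl create secret generic agent-cos-hmac \
  --from-literal=access_key_id=<ACCESS_KEY_ID> \
  --from-literal=secret_access_key=<SECRET_ACCESS_KEY>
```
{: pre}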
IAM access permission
: At a minimum, you must have access permissions for the Kubernetes service, the resource group, {{site.data.keyword.objectstorageshort}}, and the {{site.data.keyword.bpshort}} service to deploy an agent.
- To deploy an agent in another account by using a ServiceID or APIKey, ensure that the account administrator grants permission for all the services that are listed in the permissions to deploy an agent. (A hedged example of granting such policies follows this list.)
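As a hedged example, the following sketch grants a ServiceID access policies for the services named above. The ServiceID name, the role choices, and the resource group are assumptions; align them with the documented permissions to deploy an agent.

```sh
# Sketch only: grant a ServiceID access to the minimum services.
# Roles shown here are assumptions; adjust to your account's policies.
ibmcloud iam service-policy-create my-agent-serviceid --roles Manager --service-name schematics
ibmcloud iam service-policy-create my-agent-serviceid --roles Administrator --service-name containers-kubernetes
ibmcloud iam service-policy-create my-agent-serviceid --roles Writer --service-name cloud-object-storage
ibmcloud iam service-policy-create my-agent-serviceid --roles Viewer \
  --resource-type resource-group --resource <resource-group-id>
```
{: pre}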
{{site.data.keyword.cloud_notm}} CLI
: Use the latest version of the {{site.data.keyword.cloud_notm}} CLI and the {{site.data.keyword.bpshort}} CLI plug-in v1.12.12 or later to install an agent. For more information about plug-in installation, see installing the {{site.data.keyword.bpshort}} CLI plug-in.
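For example, you can install and verify the plug-in as follows; the `-f` flag simply forces an update if an older version is already installed.

```sh
# Install (or update) the Schematics CLI plug-in, then confirm the version.
ibmcloud plugin install schematics -f
ibmcloud plugin show schematics
```
{: pre}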
Terraform version support
: An agent supports workspaces that use Terraform v1.5 and v1.6. Workspaces with older versions of Terraform must be updated to one of the supported versions before an agent can run their jobs. For more information, see the deprecation schedule and user actions to upgrade.
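As a hedged sketch, you can inspect a workspace's Terraform version and update it from a JSON definition file; the exact payload fields, such as the version value in the workspace `type`, depend on your workspace definition, so treat this as an outline rather than the definitive procedure.

```sh
# Sketch only: check the workspace's Terraform version, then apply an
# updated definition that sets a supported version (for example, terraform_v1.5).
ibmcloud schematics workspace get --id <workspace-id>
ibmcloud schematics workspace update --id <workspace-id> --file updated-workspace.json
```
{: pre}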
You can deploy only one agent instance on a Kubernetes cluster. To deploy multiple agents in a single {{site.data.keyword.cloud_notm}} account, deploy them to different Kubernetes clusters. Each agent and cluster can serve a different network isolation zone in your cloud environment.
{: note}
An agent can be associated with, and run jobs for, one {{site.data.keyword.cloud_notm}} account and geographic region. Agents cannot be shared with other accounts or run jobs for multiple accounts. The following diagram represents the association of agents with a {{site.data.keyword.bpshort}} geographic region. Here, multiple agents with access to local private resources in remote locations are associated with different {{site.data.keyword.bpshort}} geographic instances.
{: caption="Agent association with {{site.data.keyword.bpshort}} instances" caption-side="bottom"}
This image is an artistic representation and does not reflect actual political or geographic boundaries.
{: note}
## Planning agent network access and configuration
{: #agentb1-network-config}
{{site.data.keyword.bpshort}} Agent enables workspace and action jobs to be run on your private network, with direct access to resources on your private network and in your data centers. The following diagram illustrates a possible agent deployment model in a cluster environment with multiple VPCs that are connected through a transit gateway.
{: caption="{{site.data.keyword.bpshort}} Agent connectivity" caption-side="bottom"}
To work with private resources, your private cloud environment must be configured to allow the cluster that runs your agent to access the APIs, services, and resources that workspace and action jobs need. Typically, Terraform configures services by using HTTPS over port 443, whereas Ansible uses SSH over port 22 to perform post-provisioning VSI configuration. These HTTPS and SSH network paths are illustrated in the diagram.
VPC Security Group or Access Control List policies must be configured to allow the agent cluster to access {{site.data.keyword.cloud_notm}} APIs by using HTTPS and any target VSIs by using SSH.
Access to data center resources can be configured by using Direct Link or a VPN connection.
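For illustration, a minimal sketch of adding the egress rules with the VPC CLI follows. The security group ID and the SSH remote CIDR are placeholders; restrict the remotes to the narrowest ranges that your jobs need.

```sh
# Sketch only: allow outbound HTTPS (443) and SSH (22) from the agent
# cluster's security group.
ibmcloud is security-group-rule-add <security-group-id> outbound tcp \
  --port-min 443 --port-max 443 --remote 0.0.0.0/0
ibmcloud is security-group-rule-add <security-group-id> outbound tcp \
  --port-min 22 --port-max 22 --remote <target-subnet-cidr>
```
{: pre}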
With agents, you are responsible for the network security policies of the Kubernetes cluster and for any VPC security group or access control list policies that apply to the running agent. These policies determine the ability of workspace and action jobs to access private cloud resources and the {{site.data.keyword.cloud_notm}} APIs for service provisioning and configuration.
## Agent capacity planning
{: #agent-capacity-planning}
Monitor the resource usage of the {{site.data.keyword.bpshort}} Agent pods and scale the worker nodes in the Kubernetes cluster based on the number of concurrent jobs. You can use the Kubernetes dashboard or kubectl commands to change the following settings; a CLI sketch follows this list.
- The number of concurrent Terraform and Ansible jobs.
- The number of Terraform and Ansible pods.
- The resource limits for the agent deployment.
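A minimal sketch of this monitoring and scaling loop, assuming a metrics server runs in the cluster; the namespace and worker-pool names are placeholders.

```sh
# Sketch only: inspect agent pod resource usage, then grow the worker
# pool if you need to run more concurrent jobs.
kubectl top pods -n <agent-namespace>
ibmcloud ks worker-pool resize --cluster <cluster-id> \
  --worker-pool <worker-pool-name> --size-per-zone 2
```
{: pre}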
## Next steps
{: #agent-plan-nextsteps}
The next step is to deploy an agent.