With ARC SaaS, we're introducing a pioneering SaaS factory model based on control plane microservices and IaC modules that promises to revolutionize your SaaS journey.
This is the SourceFuse Reference Architecture for implementing a sample EKS multi-tenant SaaS solution. The solution uses AWS CodePipeline to deploy all the control plane infrastructure components (networking, compute, database, monitoring & logging, and security) along with the control plane application using a Helm chart. It also sets up tenant CodeBuild projects, which are responsible for onboarding new silo and pooled tenants. Each tenant has its own infrastructure and application Helm chart, which are managed using GitOps tools such as ArgoCD and Argo Workflows. The solution also enforces strict IAM policies and Kubernetes authorization policies for tenants to prevent cross-namespace access.
For more details, go through the EKS SaaS architecture documentation.
- AWS Account
- Terraform CLI
- AWS CLI
⚠️ Please ensure you are logged into the AWS account as an IAM user with administrator privileges, not as the root user.
- If you don't have a registered domain in Route 53, register one. (If your domain is registered with a third-party registrar, create a hosted zone in Route 53 for it.)
- Generate a public certificate for the domain using AWS ACM. (Please include both the wildcard and the root domain as fully qualified domain names when generating the certificate; e.g., if the domain name is xyz.com, use both xyz.com and *.xyz.com in ACM.)
- An SES account should be set up in production mode and the domain should be verified. Generate SMTP credentials and store them in SSM Parameter Store as `SecureString` (using the parameter names `/{namespace}/ses_access_key` & `/{namespace}/ses_secret_access_key`, where `namespace` is the project name).
- Generate a GitHub token and store it in SSM Parameter Store as `SecureString` (using the parameter names `/github_user` & `/github_token`).
NOTE: If you are using an organization GitHub account, create a GitHub token with organization scope and grant it the permissions needed to create and manage repositories and to merge into them. Otherwise, create a GitHub token for your personal user. Update the `.tfvars` file of the `terraform/tenant-codebuilds` folder.
- Create a CodePipeline connection for GitHub with your GitHub account and repository.
- If you want to use Client VPN to access the OpenSearch dashboard, enable it using the variable defined in the `.tfvars` file of the `terraform/client-vpn` folder. [Follow the doc to connect with the VPN.]
- We are using PubNub and Vonage credentials for the application plane and have stored them in Parameter Store, so if you want to use the same application plane, create your own credentials and store them in Parameter Store. Please check the application Helm chart and the `values.yaml.template` file stored in the `files/tenant-samples` folder for the PubNub configuration.
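As a sketch of the credential setup described above, the SES SMTP credentials and GitHub token can be stored in SSM Parameter Store with the AWS CLI. The parameter names follow the convention given above; the namespace and all values shown are placeholders:

```shell
# Placeholder namespace and values -- replace with your own.
NAMESPACE="my-project"

# SES SMTP credentials (prerequisite: SES in production mode, domain verified)
aws ssm put-parameter --name "/${NAMESPACE}/ses_access_key" \
  --type SecureString --value "<ses-smtp-username>"
aws ssm put-parameter --name "/${NAMESPACE}/ses_secret_access_key" \
  --type SecureString --value "<ses-smtp-password>"

# GitHub credentials
aws ssm put-parameter --name "/github_user" \
  --type SecureString --value "<github-username>"
aws ssm put-parameter --name "/github_token" \
  --type SecureString --value "<github-personal-access-token>"
```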
- First, clone or fork the GitHub repository.
- Based on your requirements, update the `terraform.tfvars` file in all the Terraform folders.
- Update the variables `namespace`, `environment`, `region`, and `domain_name` in the `script/replace-variable.sh` file.
- Execute the script using the command `./scripts/replace-variable.sh`.
- Execute the script for pushing the decoupling orchestrator image to the ECR repository using the command `./scripts/push-orchestrator-image.sh`.
- Update the CodePipeline connection name (created in the prerequisites section), the GitHub repository name, and other required variables in the `terraform.tfvars` file of the `terraform/core-infra-pipeline` folder.
- Check whether the `AWSServiceRoleForAmazonOpenSearchService` role already exists in your AWS account; if so, set the `create_iam_service_linked_role` variable to `false` in the tfvars file of `terraform/opensearch`, otherwise set it to `true`.
- Update the ACM ARN (created in the prerequisites section) in the `terraform.tfvars` file of the `terraform/istio` folder.
- Go through all the variables declared in the tfvars files and update them according to your requirements.
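For illustration, a minimal `terraform.tfvars` fragment might look like the following. The variable names match those referenced above; all values are placeholders, and the exact set of variables per folder is defined in each module's own tfvars file:

```hcl
# Illustrative values only -- adjust per folder's terraform.tfvars.
namespace   = "arc-saas"
environment = "dev"
region      = "us-east-1"
domain_name = "xyz.com"

# terraform/opensearch: set to false if the
# AWSServiceRoleForAmazonOpenSearchService role already exists.
create_iam_service_linked_role = true
```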
Once the variables are updated, we will set up a Terraform CodePipeline that deploys all control plane infrastructure components along with the control plane Helm chart. There are multiple options to do that:
- Using GitHub Actions ::
NOTE: We are using self-hosted GitHub runners to execute workflow actions. Please follow this document to set up the runners.
- First, create an IAM role for GitHub workflow actions and update the role name and other required variables (such as environment) in the workflow YAML files defined under the `.github` directory.
- Add `AWS_ACCOUNT_ID` to the GitHub repository secrets.
- Execute the `apply-bootstrap.yaml` & `apply-pipeline.yaml` workflows by updating the GitHub events in these files. Currently these workflows are executed when a pull request is merged to the main branch, so change the invocation of these workflow files as needed.
- Push the code to your github repository.
NOTE: If you want to run the other workflows, which are Terraform plans, make sure to update the workflow files. The Terraform bootstrap is a one-time activity, so once the bootstrap workflow has executed, please disable it from running again.
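As an illustration of the trigger change described above, a workflow such as `apply-pipeline.yaml` could be switched from running on merges to `main` to a manual trigger. The role name and region below are placeholders, not values from this repository:

```yaml
# .github/workflows/apply-pipeline.yaml (trigger excerpt, illustrative)
on:
  workflow_dispatch:        # run manually instead of on merge to main
  # push:
  #   branches: [main]

permissions:
  id-token: write           # required for OIDC role assumption
  contents: read

jobs:
  apply:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/<github-actions-role>
          aws-region: us-east-1
```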
- Using Local ::
AWS CLI version 2 & Terraform CLI version 1.7 must be installed on your machine. If they are not installed, follow the documentation to install the AWS CLI & Terraform CLI.
- Configure your terminal with AWS credentials.
- Go to the `terraform/bootstrap` folder and run the following commands to deploy it: `terraform init`, `terraform plan`, and `terraform apply`.
- After that, go to `terraform/core-infra-pipeline` and update the bucket name, DynamoDB table name (created in the step above), and region in `config.hcl`.
- Push the code to your GitHub repository.
- Run the following commands to create the Terraform CodePipeline: `terraform init --backend-config=config.hcl`, `terraform plan`, and `terraform apply`.
NOTE: All Terraform module README files are present in their respective folders.
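For reference, the `config.hcl` backend configuration for `terraform/core-infra-pipeline` might look like the following, assuming the state bucket and lock table created by the bootstrap step (all values are placeholders):

```hcl
# terraform/core-infra-pipeline/config.hcl (illustrative)
bucket         = "<bootstrap-state-bucket-name>"
dynamodb_table = "<bootstrap-lock-table-name>"
region         = "us-east-1"
```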
Once the CodePipeline is created, monitor the pipeline; when it has executed successfully, create the following records in the Route 53 hosted zone of the domain, using the load balancer DNS address.
| Record Entry | Type | Description |
|---|---|---|
| {domain-name} | A | Control plane application URL |
| argocd.{domain-name} | CNAME | ArgoCD URL |
| argo-workflow.{domain-name} | CNAME | Argo Workflow URL |
| grafana.{domain-name} | CNAME | Grafana dashboard URL |
NOTE: All authentication passwords will be saved in SSM Parameter Store. In Grafana, please add the Athena, CloudWatch, and Prometheus data sources and import the dashboards using the JSON mentioned in the billing and observability folders.
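The records above can also be created with the AWS CLI; a sketch for the `argocd` CNAME follows. The hosted zone ID, domain, and load balancer DNS name are placeholders:

```shell
# Placeholder hosted zone ID, domain, and ALB DNS name -- replace with your own.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000000EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "argocd.xyz.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "<load-balancer-dns-name>"}]
      }
    }]
  }'
```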
After creating the records in Route 53, you can access the control plane application at the `{domain-name}` URL (e.g., if your domain name is xyz.com, the control plane will be accessible at xyz.com). Tenant onboarding can be done using the URL `{domain-name}/tenant/signup`. Once a tenant has been onboarded successfully, you can access the tenant application plane at `{tenant-key}.{domain-name}`.
This project is authored by:
- SourceFuse ARC Team
Distributed under the MIT License. See LICENSE for more information.