From 289c1c7daa8de6386c56d3a6c72afcda71e54557 Mon Sep 17 00:00:00 2001 From: Phantom-Intruder Date: Fri, 31 May 2024 17:05:26 +0530 Subject: [PATCH] Ingress --- Keda101/keda-lab.md | 4 +++- Keda101/keda-prometheus.md | 6 +++++- 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/Keda101/keda-lab.md b/Keda101/keda-lab.md index 957d68ea..4c6dc70e 100644 --- a/Keda101/keda-lab.md +++ b/Keda101/keda-lab.md @@ -251,4 +251,6 @@ With the above configuration, a new Keda job will start every time a message is ## Conclusion -This wraps up the lesson on KEDA. What we tried out was a simple demonstration of a MySQL scaler followed by a demonstration of using various authentication methods to connect and consume messages from AWS SQS. This is a good representation of what you can expect from other data sources. If you were considering using this with a different Kubernetes engine running on a different cloud provider, the concept would still work. Make sure you read through the authentication page, which contains different methods of authentication for different cloud providers. If you want to try out other scalers, make sure you check out the [official samples page](https://github.com/kedacore/samples). \ No newline at end of file +This wraps up the lesson on KEDA. We walked through a simple demonstration of a MySQL scaler, followed by a look at the various authentication methods you can use to connect to and consume messages from AWS SQS. This is a good representation of what you can expect from other data sources: if you were to use this with a different Kubernetes engine running on a different cloud provider, the same concepts would still apply. Make sure you read through the authentication page, which covers the authentication methods available for each cloud provider. Next up, we will look at how you can use KEDA alongside Prometheus and Linkerd to scale your pods based on the number of requests reaching your endpoints. 
+ +[Next: Scaling with KEDA and Prometheus](./keda-prometheus.md) \ No newline at end of file diff --git a/Keda101/keda-prometheus.md b/Keda101/keda-prometheus.md index 2a7e2f0d..b69faeb2 100644 --- a/Keda101/keda-prometheus.md +++ b/Keda101/keda-prometheus.md @@ -99,4 +99,8 @@ This should send 100 requests to your Nginx application. Once you have run this ## Considerations -Now that you have seen how KEDA and Prometheus work together with linkerd to scale your application workloads, you might have already noticed a few considerations you need to take into account. Both Linkerd and KEDA are well-used tools that are generally used in production workloads, so there is little reason to be worried about them breaking. While the same holds true for Prometheus, you might notice here that Prometheus runs in a single pod. For starters, this Prometheus instance is introduced by Linkerd but is not supposed to be an official Prometheus instance that is supposed to cater to your entire cluster. If you need something like that, you will have to set up Prometheus yourself and bring the linkerd metrics to your Prometheus instance. [This document](https://linkerd.io/2.15/tasks/external-prometheus/) should be able to help you with that. This small prometheus instance only runs one replica, but if you are going to be relying on it for your scaling in a production environment, it is advisable to extend it to run several replicas across multiple availability zones. It also helps to have alarms and alerts configured so that you know that the Prometheus instance is running fine. Next, while keda and karpenter are robust applications, if you are relying on them to keep your production workloads running, it's best to have them monitored too since they are moving parts in your infrastructure. 
\ No newline at end of file +Now that you have seen how KEDA and Prometheus work together with Linkerd to scale your application workloads, you might have already noticed a few considerations you need to take into account. Both Linkerd and KEDA are mature tools that are widely used in production workloads, so there is little reason to worry about them breaking. While the same holds true for Prometheus, you might notice that Prometheus runs in a single pod here. This Prometheus instance is installed by Linkerd for its own use; it is not meant to act as a cluster-wide Prometheus instance. If you need something like that, you will have to set up Prometheus yourself and bring the Linkerd metrics into your own instance. [This document](https://linkerd.io/2.15/tasks/external-prometheus/) should help you with that. Since this bundled Prometheus instance only runs one replica, if you are going to rely on it for your scaling in a production environment, it is advisable to extend it to run several replicas across multiple availability zones. It also helps to have alarms and alerts configured so that you know when the Prometheus instance stops working properly. Finally, while KEDA and Linkerd are robust applications, if you are relying on them to keep your production workloads running, it is best to monitor them too, since they are moving parts in your infrastructure. + +## Conclusion + +This brings us to the end of this section on using KEDA with Prometheus to scale your pods based on the number of requests reaching your endpoints. If you want to try out other scalers, make sure you check out the [official samples page](https://github.com/kedacore/samples). \ No newline at end of file
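
For context, the request-based scaling this patch points readers toward is configured through a KEDA `prometheus` trigger. Below is a minimal sketch of such a `ScaledObject`, assuming an `nginx` Deployment in the `default` namespace and the Prometheus instance bundled with Linkerd's viz extension; the resource names, query, and threshold are illustrative, not taken from the lab files:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: nginx-scaler              # hypothetical name for this sketch
  namespace: default
spec:
  scaleTargetRef:
    name: nginx                   # the Deployment KEDA should scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        # The Prometheus instance that ships with the linkerd-viz extension
        serverAddress: http://prometheus.linkerd-viz.svc.cluster.local:9090
        # Linkerd's proxies expose request counts as the request_total metric;
        # this query approximates requests per second hitting the Deployment
        query: sum(rate(request_total{deployment="nginx", namespace="default"}[1m]))
        # Scale out once the query result exceeds this value per replica
        threshold: "20"
```

With a trigger like this, KEDA polls the Prometheus query on an interval and adjusts the replica count so that the per-replica request rate stays near the threshold, which is what lets the lab scale on traffic rather than on CPU or memory.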