The KEDA Blog
Updates, tutorials, and more
KEDA is graduating to CNCF Graduated project 🎉
KEDA Maintainers
August 22, 2023
In 2019, KEDA embarked on a mission to make application autoscaling on Kubernetes dead-simple. Our aim was to make sure that every Kubernetes platform can use it to scale applications without having to worry about the underlying autoscaling infrastructure. As part of that mission, we wanted to build a vendor-neutral project that is open to everyone and integrates nicely with other tools. Because of that, the KEDA maintainers decided that the Cloud Native Computing Foundation (CNCF) was a natural fit, and KEDA was accepted as a sandbox project in 2020.
Securing autoscaling with the newly improved certificate management in KEDA 2.10
Jorge Turrado (SCRM Lidl International Hub)
May 16, 2023
Recently, we released KEDA v2.10, which introduced key improvements for managing certificates and securing your autoscaling:
- Encryption of all communication between KEDA components.
- Support for providing your own certificates for internal communications.
- Support for using custom certificate authorities (CAs).
With these improvements, we can dramatically improve the security between KEDA components, the Kubernetes API server, and scaler sources. Let’s take a closer look. Where do we come from?
Help shape the future of KEDA with our survey 📝
KEDA Maintainers
May 4, 2023
As maintainers, we are always eager to learn who is using KEDA (become a listed end-user!) and how they are using KEDA to scale their cloud-native workloads. Our job is to make sure that you are able to scale your workloads with as little friction as possible, with production-grade security and insight into what is going on. To be successful, we need to learn how large the KEDA deployments that end-users run are, what is causing frustration, and what we can improve.
Announcing KEDA 2.9 🎉
Jeff Hollan (Snowflake), Tom Kerkhove (Microsoft) and Zbynek Roubalik (Red Hat)
December 12, 2022
We recently completed our most recent release: 2.9.0 🎉! Here are some highlights:
- Newly published Deprecations and Breaking Change policy (docs)
- Introduced new CouchDB, Etcd & Loki scalers
- Introduced an off-the-shelf Grafana dashboard for application autoscaling
- Introduced improved operational metrics in Prometheus
- Introduced the capability to cache metric values for a scaler during the polling interval (experimental feature; see the sketch after this list)
- Performance improvements and architecture changes in how metrics are exposed to Kubernetes
- Azure Key Vault authentication provider now supports pod identities for authentication
- A ton of new features and fixes for some of our 50+ scalers
Potential breaking changes and deprecations include:
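As a sketch of the metric-caching feature: the trigger-level useCachedMetrics field (experimental in 2.9) lets KEDA serve the last polled value instead of querying the scaler source on every metrics request. The workload name, Prometheus address, and query below are hypothetical placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-processor            # hypothetical workload
spec:
  scaleTargetRef:
    name: orders-processor
  pollingInterval: 30               # seconds between scaler polls
  triggers:
    - type: prometheus
      useCachedMetrics: true        # serve cached values between polls (experimental)
      metadata:
        serverAddress: http://prometheus.monitoring:9090   # assumed in-cluster Prometheus
        query: sum(rate(http_requests_total[2m]))
        threshold: "100"
```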
HTTP add-on is looking for contributors by end of November
Tom Kerkhove (KEDA)
September 27, 2022
On Nov 25, 2020, we started the HTTP add-on based on @arschles' initial POC, which closed a big gap in KEDA’s story - HTTP autoscaling without a dependency on an external system, such as Prometheus. To this day, the autoscaling community has a very high demand for a solution in this area that autoscales and works in the same manner as the KEDA core. With the add-on, we want to cover all traffic patterns - ranging from ingress, to service meshes, to service-to-service communication - and make it super simple to autoscale (with scale-to-zero support).
Announcing KEDA 2.8 🎉
Jeff Hollan (KEDA), Tom Kerkhove (KEDA)
August 10, 2022
We recently completed our most recent release: 2.8.0 🎉! Here are some highlights:
- Introduction of new AWS DynamoDB Streams & NATS JetStream scalers
- Introduction of new Azure AD Workload Identity authentication provider
- Support for specifying minReplicaCount in ScaledJob
- Support to customize the HPA name
- Support for permission segregation when using Azure AD Pod / Workload Identity
- Additional features for various scalers such as AWS SQS, Azure Pipelines, CPU, GCP Stackdriver, Kafka, Memory, and Prometheus
Here are the new deprecation(s) as of this release:
How Zapier uses KEDA
Ratnadeep Debnath (Zapier)
March 10, 2022
RabbitMQ is at the heart of Zap processing at Zapier. We enqueue messages to RabbitMQ for each step in a Zap. These messages get consumed by our backend workers, which run on Kubernetes. To keep up with the varying task loads at Zapier, we need to scale our workers with our message backlog. For a long time, we scaled with CPU-based autoscaling using the Kubernetes-native Horizontal Pod Autoscaler (HPA), where more tasks led to more processing, increasing CPU usage and triggering our workers’ autoscaling.
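Scaling on the queue backlog directly is where KEDA's RabbitMQ scaler comes in. A minimal sketch of such a setup - the deployment, queue, and environment variable names are hypothetical placeholders, not Zapier's actual configuration:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: zap-worker-scaler            # hypothetical name
spec:
  scaleTargetRef:
    name: zap-worker                 # hypothetical worker Deployment
  triggers:
    - type: rabbitmq
      metadata:
        protocol: amqp
        queueName: zap-tasks         # hypothetical queue
        mode: QueueLength            # scale on the number of messages in the queue
        value: "100"                 # target backlog per replica
        hostFromEnv: RABBITMQ_HOST   # AMQP connection string from the worker's env
```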
Introducing PredictKube - an AI-based predictive autoscaler for KEDA made by Dysnix
Daniel Yavorovych (Dysnix), Yuriy Khoma (Dysnix), Zbynek Roubalik (KEDA), Tom Kerkhove (KEDA)
February 14, 2022
Dysnix has been working with high-traffic backend systems for a long time, and the demand for efficient scaling is something their team comes across every day. The engineers understood that manually dealing with traffic fluctuations and infrastructure preparation is inefficient, because you need to deploy more resources before the traffic increases, not at the moment the event happens. This approach is problematic for two reasons: first, it’s often too late to scale when traffic has already arrived, and second, resources will be overprovisioned and idle during the times that traffic isn’t present.
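PredictKube plugs into KEDA as a scaler of type predictkube, forecasting a Prometheus metric ahead of time. A rough sketch based on the scaler's documented metadata - the deployment, addresses, and query are hypothetical, so verify the field names against the current KEDA docs:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: frontend-predictive-scaler   # hypothetical
spec:
  scaleTargetRef:
    name: frontend                   # hypothetical Deployment
  triggers:
    - type: predictkube
      metadata:
        predictHorizon: "2h"         # how far ahead to forecast
        historyTimeWindow: "7d"      # history window used for the prediction
        prometheusAddress: http://prometheus.monitoring:9090   # assumed Prometheus
        query: sum(irate(http_requests_total{app="frontend"}[2m]))
        queryStep: "2m"
        threshold: "2000"
      authenticationRef:
        name: predictkube-auth       # TriggerAuthentication holding the API key
```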
How CAST AI uses KEDA for Kubernetes autoscaling
Žilvinas Urbonas (CAST AI), Annie Talvasto (CAST AI), and Tom Kerkhove (KEDA)
August 4, 2021
Kubernetes comes with several built-in autoscaling mechanisms - among them the Horizontal Pod Autoscaler (HPA). Scaling is essential for the producer-consumer workflow, a common use case in the IT world today. It’s especially useful for monthly reports and transactions with a huge load where teams need to spin up many workloads to process things faster and cheaper (for example, by using spot instances).
Announcing KEDA HTTP Add-on v0.1.0
Aaron Schlesinger and Tom Kerkhove
June 24, 2021
Over the past few months, we’ve been adding more and more scalers to KEDA, making it easier for users to scale on what they need. Today, we offer more than 30 scalers out of the box, supporting all major cloud providers and industry-standard tools such as Prometheus, and can scale any Kubernetes resource. But we are missing a major feature that many modern, distributed applications need - the ability to scale based on HTTP traffic.
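The add-on fills that gap with its own CRD. A minimal sketch of an HTTPScaledObject, using the add-on's early v1alpha1 shape with hypothetical app names (check the add-on docs for the exact fields in your version):

```yaml
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
  name: my-http-app            # hypothetical
spec:
  host: myapp.example.com      # requests for this host are counted and routed
  scaleTargetRef:
    deployment: my-http-app    # hypothetical Deployment to scale
    service: my-http-app       # Service fronting the Deployment
    port: 8080
  replicas:
    min: 0                     # scale to zero when idle
    max: 10
```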
Autoscaling Azure Pipelines agents with KEDA
Troy Denorme
May 27, 2021
With the addition of Azure Pipelines support in KEDA, it is now possible to autoscale your Azure Pipelines agents based on the agent pool queue length. Self-hosted Azure Pipelines agents are the perfect workload for this scaler. By autoscaling the agents you can create a scalable CI/CD environment. 💡 The number of concurrent pipelines you can run is limited by your parallel jobs. KEDA will autoscale to the maximum defined in the ScaledObject and does not limit itself to the parallel jobs count defined for the Azure DevOps organization.
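A minimal sketch of such a ScaledObject - the deployment and environment variable names are hypothetical, and the scaler's metadata fields should be verified against the current docs:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-pipelines-agents        # hypothetical
spec:
  scaleTargetRef:
    name: azdevops-agent              # hypothetical self-hosted agent Deployment
  minReplicaCount: 1
  maxReplicaCount: 5                  # keep at or below your parallel jobs limit
  triggers:
    - type: azure-pipelines
      metadata:
        poolID: "1"                              # agent pool to watch
        organizationURLFromEnv: AZP_URL          # e.g. https://dev.azure.com/<org>
        personalAccessTokenFromEnv: AZP_TOKEN    # PAT with read access to agent pools
```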
Why Alibaba Cloud uses KEDA for application autoscaling
Yan Xun, Andy Shi, and Tom Kerkhove
April 6, 2021
This blog post was initially posted on the CNCF blog and is co-authored by Yan Xun, Senior Engineer on the Alibaba Cloud EDAS team & Andy Shi, Developer Advocate from Alibaba Cloud. When scaling Kubernetes there are a few areas that come to mind, but if you are new to Kubernetes this can be a bit overwhelming. In this blog post, we will briefly explain the areas that need to be considered, how KEDA aims to make application auto-scaling simple, and why Alibaba Cloud’s Enterprise Distributed Application Service (EDAS) has fully standardized on KEDA.
Migrating our container images to GitHub Container Registry
KEDA Maintainers
March 26, 2021
We provide various ways to deploy KEDA in your cluster, including a Helm chart, Operator Hub, and raw YAML specifications. These deployment options all rely on the container images that we provide, which are available on Docker Hub, the industry standard for public container images. However, we have found that Docker Hub is no longer the best place for our container images and are migrating to GitHub Container Registry (Preview).
Announcing KEDA 2.0 - Taking app autoscaling to the next level
KEDA Maintainers
November 4, 2020
A year ago, we were excited to announce our 1.0 release with a core set of scalers, allowing the community to start autoscaling Kubernetes deployments. We were thrilled with the response and encouraged to see many users leveraging KEDA for event-driven and serverless scale within any Kubernetes cluster. With KEDA, any container can scale to zero and burst-scale based directly on event source metrics. While KEDA was initially started by Microsoft & Red Hat, we have always strived to be an open & vendor-neutral project in order to support everybody who wants to scale applications.
Give KEDA 2.0 (Beta) a test drive
KEDA Maintainers
September 11, 2020
Today, we are happy to share that our first beta version of KEDA 2.0 is available! 🎊

Highlights

With this release, we are shipping the majority of our planned features. Here are some highlights:

Making scaling more powerful
- Introduction of ScaledJob (docs)
- Introduction of Azure Log Analytics scaler (docs)
- Support for scaling Deployments, StatefulSets and/or any Custom Resources (docs)
- Support for scaling on standard resource metrics (CPU/Memory)
- Support for multiple triggers in a single ScaledObject (docs) - see the sketch after this list
- Support for scaling to the original replica count after deleting a ScaledObject (docs)
- Support for controlling the scaling behavior of the underlying HPA

Easier to operate KEDA
- Introduction of readiness and liveness probes
- Introduction of Prometheus metrics for the Metrics Server (docs)
- More information when querying KEDA resources with kubectl

Extensibility
- Introduction of External Push scaler (docs)
- Introduction of Metric API scaler (docs)
- A KEDA client-go library

For a full list of changes, we highly recommend going through our changelog!
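As a sketch of combining multiple triggers in one ScaledObject - the workload, Prometheus address, and query are hypothetical placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: checkout-scaler        # hypothetical
spec:
  scaleTargetRef:
    name: checkout             # hypothetical Deployment
  triggers:
    # Standard resource metric: target ~60% average CPU utilization
    - type: cpu
      metadata:
        type: Utilization
        value: "60"
    # Event-driven metric: also scale on a Prometheus query
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090   # assumed in-cluster Prometheus
        metricName: orders_rate                            # name exposed to the HPA
        query: sum(rate(orders_total[2m]))
        threshold: "50"
```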
Kubernetes Event-driven Autoscaling (KEDA) is now an official CNCF Sandbox project 🎉
KEDA Maintainers
March 31, 2020
Over the past year, we’ve been contributing to Kubernetes Event-Driven Autoscaling (KEDA), which makes application autoscaling on Kubernetes dead simple. If you have missed it, read about it in our “Exploring Kubernetes-based event-driven autoscaling (KEDA)” blog post. We started the KEDA project to address an essential missing feature in the Kubernetes autoscaling story: the ability to autoscale on arbitrary metrics. Before KEDA, users were only able to autoscale based on metrics such as memory and CPU usage.